
Get Started with ComputeBase

Everything you need to deploy and scale compute infrastructure in minutes.

Quick Start Guide

Step 1: Install the CLI

Install the ComputeBase CLI to manage your infrastructure from the command line.

bash
# macOS / Linux
curl -fsSL https://get.computebase.dev | sh

# Windows (PowerShell)
iwr https://get.computebase.dev/install.ps1 | iex

# Verify installation
computebase --version

Step 2: Authenticate

Get your API key from the dashboard and authenticate the CLI.

bash
# Login interactively
computebase login

# Or set API key directly
export COMPUTEBASE_API_KEY="cb_your_api_key_here"

Step 3: Create Your First Instance

Spin up a compute instance in seconds.

bash
# Create a CPU instance
computebase instances create \
  --type cpu-4 \
  --region us-east-1 \
  --image ubuntu-22.04 \
  --name my-first-instance

# Create a GPU instance
computebase instances create \
  --type gpu-a100 \
  --region us-west-2 \
  --image ubuntu-22.04-cuda \
  --name gpu-instance

Step 4: Connect to Your Instance

SSH into your instance or use the web console.

bash
# SSH directly
computebase ssh my-first-instance

# Or get connection details
computebase instances get my-first-instance

# Connect manually
ssh ubuntu@<instance-ip>

API Reference

Programmatic access to all ComputeBase features via REST API.

Authentication

All API requests require authentication using your API key in the Authorization header.

bash
curl -X GET https://api.computebase.dev/v1/instances \
  -H "Authorization: Bearer cb_your_api_key" \
  -H "Content-Type: application/json"

Create Instance

POST /v1/instances - Create a new compute instance

bash
curl -X POST https://api.computebase.dev/v1/instances \
  -H "Authorization: Bearer cb_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "gpu-a100",
    "region": "us-east-1",
    "image": "ubuntu-22.04-cuda",
    "name": "ml-training",
    "ssh_keys": ["ssh-rsa AAAAB3..."]
  }'

# Response
{
  "id": "i-7x9k2m",
  "name": "ml-training",
  "type": "gpu-a100",
  "status": "running",
  "ip": "192.168.100.42",
  "region": "us-east-1",
  "created_at": "2025-01-15T10:30:00Z"
}

List Instances

GET /v1/instances - List all your instances

bash
curl -X GET "https://api.computebase.dev/v1/instances?status=running" \
  -H "Authorization: Bearer cb_your_api_key"

# Response
{
  "instances": [
    {
      "id": "i-7x9k2m",
      "name": "ml-training",
      "type": "gpu-a100",
      "status": "running",
      "ip": "192.168.100.42",
      "region": "us-east-1"
    }
  ],
  "total": 1
}

Auto-Scaling

POST /v1/autoscaling - Configure auto-scaling rules

bash
curl -X POST https://api.computebase.dev/v1/autoscaling \
  -H "Authorization: Bearer cb_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "min_instances": 2,
    "max_instances": 50,
    "target_cpu": 70,
    "scale_up_threshold": 80,
    "scale_down_threshold": 30,
    "cooldown_period": 300
  }'

Official SDKs

Python

computebase

Official Python SDK with async support and type hints.

Installation:

pip install computebase

Node.js

@computebase/sdk

TypeScript-first SDK for Node.js and browser environments.

Installation:

npm install @computebase/sdk

Go

github.com/computebase/go-sdk

Idiomatic Go SDK with full API coverage.

Installation:

go get github.com/computebase/go-sdk

Ruby

computebase

Ruby gem for ComputeBase API integration.

Installation:

gem install computebase

Best Practices

Use SSH Keys for Authentication

Always use SSH key-based authentication instead of passwords. Add your public keys during instance creation for secure, passwordless access.
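For example, you might generate a key pair and attach the public key when creating an instance. The create-instance API above accepts an ssh_keys field; the --ssh-key CLI flag shown here is illustrative, so check computebase instances create --help for the exact option name.

bash
# Generate an Ed25519 key pair (use a passphrase instead of -N "" in practice)
ssh-keygen -t ed25519 -f ~/.ssh/computebase_key -N ""

# Attach the public key at creation time (--ssh-key flag name is illustrative)
computebase instances create \
  --type cpu-4 \
  --region us-east-1 \
  --image ubuntu-22.04 \
  --name secure-instance \
  --ssh-key "$(cat ~/.ssh/computebase_key.pub)"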

Tag Your Resources

Apply meaningful tags to instances for better organization and cost tracking. Use tags like 'environment:production', 'team:ml', 'project:app-backend'.
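A tagging workflow might look like the sketch below; the --tag flag name is an assumption, so verify the exact option against the CLI help output.

bash
# Tag an instance at creation time (--tag flag name is illustrative)
computebase instances create \
  --type cpu-4 \
  --region us-east-1 \
  --image ubuntu-22.04 \
  --name api-server \
  --tag environment:production \
  --tag team:ml \
  --tag project:app-backend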

Enable Auto-Scaling

Configure auto-scaling rules to handle traffic spikes automatically. Set appropriate thresholds to balance performance and cost.

Monitor Your Usage

Set up alerts for CPU, memory, and network usage. Use the built-in monitoring dashboard to track performance metrics in real-time.

Implement Graceful Shutdown

Handle SIGTERM signals in your applications to ensure clean shutdowns when instances are terminated by auto-scaling or manual actions.
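As a minimal sketch, a shell worker can trap SIGTERM, drain in-flight work, and exit cleanly. The demo below starts the worker in the background and terminates it the way the platform would during scale-down:

```shell
# A worker loop that traps SIGTERM and drains before exiting.
worker() {
  trap 'echo "SIGTERM received: draining in-flight work..."; exit 0' TERM
  echo "worker started"
  while true; do sleep 1; done
}

# Simulate a scale-down event: start the worker, then send SIGTERM.
worker &
pid=$!
sleep 1
kill -TERM "$pid"
wait "$pid"
echo "worker exit status: $?"
```

Because the trap calls exit 0, the supervisor (here, the parent shell) observes a clean shutdown rather than a signal death, which is what auto-scaling health checks expect.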

Use Regions Strategically

Deploy instances in regions closest to your users for minimal latency. Consider multi-region deployments for high availability.

Need Help?

Our team is here to help you get started. Join our community or reach out directly.