Runs

Ephemeral batch compute. Spawn fire-and-forget containers that run to completion — ML training, ETL, data processing at scale. Any language, any image.

Runs vs Tasks

Tasks are TypeScript handlers with durable execution (queues, retries, step.run, step.sleep) — ideal for API-triggered work like sending emails or processing webhooks.
Runs are ephemeral containers that run to exit — ideal for ML training, ETL pipelines, and batch workloads where you bring your own image (Python, Go, Rust, etc.).

Parallel Execution

Spawn N runs simultaneously with Promise.all() — fan-out workloads made simple

Run to Completion

Container runs until exit — no idle timeouts, no heartbeats needed

Shared Volume Mounts

Mount ReadWriteMany volumes for shared feature caches across parallel runs

Resource Control

Set CPU and memory per run for predictable cost and performance

Log Capture

Full stdout + stderr captured and stored on completion (1 MiB each)

Quick Start

train.ts
import { RunsClient, createConfig } from '@sylphx/sdk'

const config = createConfig({
  secretKey: process.env.SYLPHX_SECRET_KEY!,
  ref: process.env.SYLPHX_PROJECT_REF!,
})

// Single run — wait for completion
const result = await RunsClient.createAndWait(config, {
  image: 'registry.sylphx.com/myorg/trainer:abc123',
  command: ['python', 'train.py', '--fold', '0'],
  resources: { requests: { cpu: '4', memory: '8Gi' } },
  timeoutSeconds: 3600,
})

if (result.exitCode !== 0) {
  throw new Error(`Training failed: ${result.stderr}`)
}
console.log('Accuracy:', JSON.parse(result.stdout).accuracy)
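
The error-handling pattern above can be factored into a small helper. The RunResult shape below is an assumption inferred from the fields this page uses (exitCode, stdout, stderr), not a type exported by the SDK:

```typescript
// Assumed shape of a completed run's result, based on the fields used above.
interface RunResult {
  exitCode: number
  stdout: string
  stderr: string
}

// Parse a run's JSON stdout, surfacing stderr when the container failed.
function parseRunOutput<T>(result: RunResult): T {
  if (result.exitCode !== 0) {
    throw new Error(`Run failed (exit ${result.exitCode}): ${result.stderr}`)
  }
  return JSON.parse(result.stdout) as T
}
```

With this helper, the Quick Start's tail collapses to a single call such as `parseRunOutput<{ accuracy: number }>(result).accuracy`.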

Parallel Walk-Forward Training

Spawn multiple runs in parallel, each processing a different data fold:

train-parallel.ts
import { RunsClient, createConfig } from '@sylphx/sdk'

const config = createConfig({
  secretKey: process.env.SYLPHX_SECRET_KEY!,
  ref: process.env.SYLPHX_PROJECT_REF!,
})

const folds = [0, 1, 2, 3, 4, 5, 6, 7, 8]

// Spawn all runs in parallel
const handles = await Promise.all(
  folds.map((fold) =>
    RunsClient.create(config, {
      image: 'registry.sylphx.com/myorg/trainer:abc123def',
      command: ['python', 'train.py', '--fold', String(fold)],
      env: { FOLD_ID: String(fold), DATABASE_URL: process.env.DATABASE_URL! },
      resources: { requests: { cpu: '4', memory: '8Gi' } },
      // Shared feature cache volume (ReadWriteMany — concurrent access OK)
      volumeMounts: [{ volumeId: process.env.CACHE_VOLUME_ID!, mountPath: '/cache' }],
      timeoutSeconds: 7200,
    }),
  ),
)

// Wait for all to complete
const results = await Promise.all(handles.map((h) => h.wait()))
const failures = results.filter((r) => r.exitCode !== 0)
if (failures.length > 0) {
  console.error(`${failures.length}/${folds.length} folds failed`)
}
// Average over successful folds only
const successes = results.filter((r) => r.exitCode === 0)
const avgAccuracy =
  successes.map((r) => JSON.parse(r.stdout).accuracy).reduce((a, b) => a + b, 0) /
  successes.length
console.log('Average accuracy:', avgAccuracy)
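
Promise.all spawns every fold at once. When fanning out to many more runs, you may want to cap how many containers are in flight. A minimal concurrency limiter in plain TypeScript (not part of the SDK):

```typescript
// Run async tasks with at most `limit` in flight at once — useful when
// spawning hundreds of folds without starting every container simultaneously.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let next = 0
  async function worker() {
    // Each worker pulls the next unclaimed index; JS is single-threaded,
    // so the check-and-increment below is race-free.
    while (next < items.length) {
      const i = next++
      results[i] = await fn(items[i])
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker))
  return results
}
```

Replacing the `folds.map(...)` fan-out with `mapWithLimit(folds, 3, (fold) => RunsClient.create(config, { ... }))` would keep at most three runs starting concurrently.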

Configuration Reference

| Property | Type | Description |
| --- | --- | --- |
| image (required) | string | Container image. Must be in registry.sylphx.com/&lt;org&gt;/&lt;repo&gt;:&lt;sha&gt;. |
| command (required) | string[] | Command and arguments to run inside the container. |
| env | Record&lt;string, string&gt; | Environment variables injected into the container. |
| resources.requests.cpu | string | CPU request (e.g. "1", "2", "4"). Determines scheduling and billing. |
| resources.requests.memory | string | Memory request (e.g. "2Gi", "8Gi", "32Gi"). |
| volumeMounts | RunVolumeMount[] | Volumes to mount. Use ReadWriteMany volumes for cross-run sharing. |
| timeoutSeconds | number | Hard deadline. The run is killed if exceeded. Default: 3600 (1h), max: 86400 (24h). |
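
The image format can be checked client-side before creating a run. This sketch assumes the &lt;sha&gt; tag is a lowercase hex commit prefix, as in the examples on this page; the exact tag constraints are not stated here:

```typescript
// Validate that an image reference matches registry.sylphx.com/<org>/<repo>:<sha>.
// Assumption: org/repo are lowercase alphanumerics with dashes, and the tag
// is a 6-40 character lowercase hex prefix (e.g. 'abc123def').
const IMAGE_RE = /^registry\.sylphx\.com\/[a-z0-9-]+\/[a-z0-9-]+:[0-9a-f]{6,40}$/

function isValidImageRef(image: string): boolean {
  return IMAGE_RE.test(image)
}
```

Rejecting malformed references locally fails fast instead of waiting for the run to be scheduled and pulled.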

RunHandle API

| Property | Type | Description |
| --- | --- | --- |
| handle.id | string | Run ID (run_xxxxx). Store this to poll later. |
| handle.wait(options?) | Promise&lt;RunResult&gt; | Polls until a terminal state. Resolves to a RunResult with exitCode, stdout, and stderr. |
| handle.status | RunStatus | Status snapshot taken at creation time: pending \| running \| succeeded \| failed \| cancelled \| timeout |
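
The polling loop behind a wait-until-terminal call can be sketched like this, with the status fetcher injected; the real handle.wait() internals may differ (backoff, timeouts, result retrieval are omitted):

```typescript
type RunStatus = 'pending' | 'running' | 'succeeded' | 'failed' | 'cancelled' | 'timeout'

// Statuses from which a run can never transition again.
const TERMINAL: RunStatus[] = ['succeeded', 'failed', 'cancelled', 'timeout']

// Poll until the run reaches a terminal state, sleeping between checks.
async function pollUntilTerminal(
  fetchStatus: () => Promise<RunStatus>,
  intervalMs = 2000,
): Promise<RunStatus> {
  for (;;) {
    const status = await fetchStatus()
    if (TERMINAL.includes(status)) return status
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}
```

Injecting fetchStatus keeps the loop testable and makes it reusable for any run ID you stored earlier.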

Pricing

Billed per vCPU-second of actual runtime. Runs that exit early are billed for actual duration only.

| Tier | Availability | Rate |
| --- | --- | --- |
| Free tier | All plans | 100K vCPU-seconds per month |
| Beyond free tier | Usage-based | $0.00004 per vCPU-second ($0.144/vCPU-hour) |

Cost example

9 parallel runs × 4 vCPUs × 600s each = 21,600 vCPU-seconds ≈ $0.86 total. Free tier covers ~7 hours of 4-vCPU compute per month.
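
The arithmetic above can be wrapped in a small estimator. The free-tier netting below (monthly usage minus the 100K-vCPU-second allowance) is an assumption about how billing is applied, not a statement of the actual invoice logic:

```typescript
const FREE_VCPU_SECONDS = 100_000
const RATE_PER_VCPU_SECOND = 0.00004

// Estimate cost for a given monthly vCPU-second total:
// `raw` ignores the free tier; `afterFreeTier` nets out the allowance.
function estimateCost(vcpuSeconds: number): { raw: number; afterFreeTier: number } {
  const billable = Math.max(0, vcpuSeconds - FREE_VCPU_SECONDS)
  return {
    raw: vcpuSeconds * RATE_PER_VCPU_SECOND,
    afterFreeTier: billable * RATE_PER_VCPU_SECOND,
  }
}
```

For the example above, `estimateCost(9 * 4 * 600)` gives a raw cost of about $0.86; since 21,600 vCPU-seconds sits under the 100K free allowance, the netted cost for that batch alone would be $0.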