Object Storage

S3-compatible object storage for files, images, and documents. Declare buckets in catalog-info.yaml — credentials and SDKs are provisioned automatically.

How It Works

  1. Builder reads spec.storage from your catalog-info.yaml
  2. For each entry, it provisions a MinIO bucket and generates credentials via HashiCorp Vault
  3. A Vault Agent sidecar writes S3 credentials to /vault/secrets/storage inside your pod and auto-rotates them
  4. The @insureco/storage SDK reads from the Vault file before each operation — picks up rotated credentials without a restart
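The rotation pickup in step 4 can be sketched as follows. This is a minimal illustration, not the SDK's actual implementation, and it assumes the Vault Agent template renders the secrets file as `KEY=VALUE` lines — the real file format may differ:

```typescript
// Sketch: re-read Vault-rendered credentials before each operation.
// Assumes /vault/secrets/storage contains KEY=VALUE lines (an assumption;
// the @insureco/storage SDK may use a different format internally).
import { readFileSync } from "fs";

interface S3Credentials {
  accessKeyId: string;
  secretAccessKey: string;
}

export function parseVaultSecrets(contents: string): S3Credentials {
  const vars = new Map<string, string>();
  for (const line of contents.split("\n")) {
    const eq = line.indexOf("=");
    if (eq > 0) vars.set(line.slice(0, eq).trim(), line.slice(eq + 1).trim());
  }
  const accessKeyId = vars.get("S3_ACCESS_KEY_ID");
  const secretAccessKey = vars.get("S3_SECRET_ACCESS_KEY");
  if (!accessKeyId || !secretAccessKey) {
    throw new Error("missing S3 credentials in Vault secrets file");
  }
  return { accessKeyId, secretAccessKey };
}

// Reading the file fresh on every call is what makes rotation transparent:
// when Vault rewrites the file, the next operation simply sees new keys.
export function loadCredentials(path = "/vault/secrets/storage"): S3Credentials {
  return parseVaultSecrets(readFileSync(path, "utf8"));
}
```

Because credentials are loaded per operation rather than cached at startup, a rotated key never requires a pod restart.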

Zero config: no bucket creation, no credential management, no rotation code needed.

YAML Configuration

# catalog-info.yaml
spec:
  storage:
    - name: default
      tier: s3-sm

Bucket names follow the pattern {service}-{env}-{name}. For example: my-api-prod-default.
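The naming convention can be expressed as a small helper (hypothetical, not part of the SDK; the lowercase/length check reflects standard S3 bucket-name rules):

```typescript
// Hypothetical helper mirroring the {service}-{env}-{name} convention.
export function bucketName(service: string, env: string, name: string): string {
  const bucket = `${service}-${env}-${name}`.toLowerCase();
  // S3 bucket names must be 3-63 chars of lowercase letters, digits, and hyphens
  if (!/^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/.test(bucket)) {
    throw new Error(`invalid bucket name: ${bucket}`);
  }
  return bucket;
}
```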

Multiple Buckets

spec:
  storage:
    - name: uploads
      tier: s3-md
    - name: exports
      tier: s3-sm

Each bucket gets its own env var. The default bucket uses S3_BUCKET; named buckets use S3_{NAME}_BUCKET, where {NAME} is the bucket name uppercased (e.g., S3_UPLOADS_BUCKET).
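The env var naming rule can be sketched like this (a hypothetical helper; the hyphen-to-underscore mapping is an assumption for bucket names containing hyphens):

```typescript
// Sketch of the env var convention: "default" maps to S3_BUCKET,
// any other name maps to S3_{NAME}_BUCKET with the name uppercased.
// Replacing hyphens with underscores is an assumption, since env var
// names cannot contain hyphens.
export function bucketEnvVar(name: string): string {
  if (name === "default") return "S3_BUCKET";
  return `S3_${name.toUpperCase().replace(/-/g, "_")}_BUCKET`;
}
```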

Storage Tiers

  Tier     Capacity   Gas/Month   USD/Month
  s3-sm    1 GB       200         $2
  s3-md    5 GB       800         $8
  s3-lg    25 GB      3,000       $30
  s3-xl    100 GB     10,000      $100

You can upgrade a bucket's tier later by changing the tier value and redeploying. Data is preserved across tier changes.
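The tier figures above translate into a simple lookup. The helper below is hypothetical (not part of any InsureCo tooling) but shows how to pick the smallest tier that fits an expected data volume:

```typescript
// Tier figures taken from the Storage Tiers table above.
interface Tier {
  name: string;
  capacityGb: number;
  gasPerMonth: number;
  usdPerMonth: number;
}

const TIERS: Tier[] = [
  { name: "s3-sm", capacityGb: 1,   gasPerMonth: 200,   usdPerMonth: 2 },
  { name: "s3-md", capacityGb: 5,   gasPerMonth: 800,   usdPerMonth: 8 },
  { name: "s3-lg", capacityGb: 25,  gasPerMonth: 3000,  usdPerMonth: 30 },
  { name: "s3-xl", capacityGb: 100, gasPerMonth: 10000, usdPerMonth: 100 },
];

// Hypothetical helper: smallest tier whose capacity covers the need.
export function smallestTierFor(gbNeeded: number): Tier {
  const tier = TIERS.find((t) => t.capacityGb >= gbNeeded);
  if (!tier) throw new Error(`no tier holds ${gbNeeded} GB; max is 100 GB (s3-xl)`);
  return tier;
}
```

For example, 3 GB of expected data lands on s3-md; since tiers can be upgraded in place, it is reasonable to start small.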

Credential Variables

  Variable               Description
  S3_HOST                MinIO server host
  S3_PORT                MinIO server port (9000)
  S3_ACCESS_KEY_ID       Dynamic access key (auto-rotated by Vault)
  S3_SECRET_ACCESS_KEY   Dynamic secret key (auto-rotated by Vault)
  S3_BUCKET              Default bucket name
  S3_{NAME}_BUCKET       Named bucket (e.g., S3_UPLOADS_BUCKET)

Using the SDK

npm install @insureco/storage

import { StorageClient } from '@insureco/storage'

// Auto-reads credentials from /vault/secrets/storage
const storage = StorageClient.fromEnv()

// Upload a file
const url = await storage.upload({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  body: pdfBuffer,
  contentType: 'application/pdf',
})

// Generate a presigned download URL (1-hour TTL)
const downloadUrl = await storage.presign({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
  expiresIn: 3600,
})

// Delete a file
await storage.delete({
  bucket: process.env.S3_BUCKET,
  key: 'documents/invoice-001.pdf',
})

Local Development

For local dev, set env vars directly (the Vault sidecar is only available in-cluster):

# .env.local
S3_HOST=localhost
S3_PORT=9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=my-api-dev-default

Run MinIO locally with Docker:

docker run -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

MinIO does not create buckets automatically — create the bucket from .env.local (my-api-dev-default) via the MinIO console at http://localhost:9001 before running your service.

Key Facts

  • Buckets are created on first deploy and persist across subsequent deploys
  • Credentials are Vault-managed and auto-rotated — no manual rotation needed
  • The @insureco/storage SDK handles reading the Vault credentials file for you; with the raw AWS or MinIO SDK you must read and refresh those credentials yourself
  • Storage gas is charged monthly based on tier, not per-operation

Last updated: February 28, 2026