Backup & Restore

Learn how to protect your Solo Kit database with comprehensive backup strategies, automated recovery procedures, and disaster preparedness. This guide covers cloud-native backups, point-in-time recovery, and data protection best practices.

💾 Backup Overview

Solo Kit Backup Strategy

Solo Kit implements multi-layered data protection with:

  • Cloud-native backups: Automated provider backups (Neon, Supabase, Railway)
  • Application-level exports: Custom backup scripts for full control
  • Point-in-time recovery: Restore to any specific moment
  • Cross-region replication: Geographic disaster protection
  • Migration-safe backups: Schema-aware backup and restore

Backup Architecture

🏗️ Application Data
    ↓ (automated)
☁️  Cloud Provider Backups (Neon, Supabase, Railway)
    ↓ (manual/scripted)
📁 Application Exports (pg_dump, custom scripts)
    ↓ (disaster recovery)
🌍 Cross-region Replication

Three-tier protection:

  1. Real-time: Database transactions and WAL (Write-Ahead Logging)
  2. Automated: Cloud provider continuous backups
  3. Manual: Custom exports and disaster recovery procedures

☁️ Cloud Provider Backups

Neon Database Backups

Automatic backup features:

  • Continuous backups: Point-in-time recovery for the last 7 days (free tier)
  • Branch-based backups: Create database branches for safe testing
  • Automated retention: Configurable backup retention policies
# Neon CLI backup operations
npm install -g neonctl

# Create a backup branch
neonctl branches create --parent main --name backup-$(date +%Y%m%d)

# List all branches (including backups)
neonctl branches list

# Create point-in-time backup
neonctl branches create \
  --parent main \
  --name backup-$(date +%Y%m%d-%H%M) \
  --timestamp "2024-01-15T10:30:00Z"

Neon backup strategy:

// Automated Neon backup script
// Assumes NEON_API_KEY and NEON_PROJECT_ID are set in the environment
const projectId = process.env.NEON_PROJECT_ID;

export async function createNeonBackup(backupName?: string) {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const branchName = backupName || `backup-${timestamp}`;
  
  try {
    // Create backup branch using Neon API
    const response = await fetch(`https://console.neon.tech/api/v2/projects/${projectId}/branches`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.NEON_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        name: branchName,
        parent_id: 'main',
        timestamp: new Date().toISOString(),
      }),
    });
    
    if (!response.ok) {
      throw new Error(`Neon backup request failed: ${response.status} ${response.statusText}`);
    }

    const backup = await response.json();
    console.log(`✅ Neon backup created: ${backup.branch.name}`);
    return backup;
  } catch (error) {
    console.error('❌ Neon backup failed:', error);
    throw error;
  }
}
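
Backup branches accumulate over time, so it helps to prune old ones. A cleanup sketch, assuming the Neon API's branch list and delete endpoints and the same environment variables as above:

// Prune old Neon backup branches (sketch; assumes the Neon API v2
// GET /branches and DELETE /branches/{id} endpoints)
export async function pruneNeonBackups(maxAgeDays = 30): Promise<void> {
  const baseUrl = `https://console.neon.tech/api/v2/projects/${process.env.NEON_PROJECT_ID}/branches`;
  const headers = { Authorization: `Bearer ${process.env.NEON_API_KEY}` };

  const { branches } = await (await fetch(baseUrl, { headers })).json();
  const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;

  for (const branch of branches) {
    // Only remove branches created by the backup script, never the main branch
    if (!branch.name.startsWith('backup-')) continue;
    if (new Date(branch.created_at).getTime() >= cutoff) continue;

    await fetch(`${baseUrl}/${branch.id}`, { method: 'DELETE', headers });
    console.log(`🗑️  Deleted old backup branch: ${branch.name}`);
  }
}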

Supabase Backups

Built-in backup features:

  • Daily backups: Automatic daily backups retained for 7 days (paid plans)
  • Point-in-time recovery: Available on paid plans
  • Database snapshots: Manual snapshot creation
# Supabase CLI backup operations
# (global npm installs of the CLI are not supported; install it as a dev dependency)
npm install supabase --save-dev

# Create database dump
npx supabase db dump --file backup-$(date +%Y%m%d).sql

# Restore by applying the dump with psql against your database URL
psql "$DATABASE_URL" -f backup-$(date +%Y%m%d).sql

Railway Backups

Railway backup features:

  • Automatic snapshots: Daily automated backups
  • Manual snapshots: On-demand backup creation
  • Cross-deployment restoration: Restore to different environments
# Railway CLI backup operations
npm install -g @railway/cli

# Create manual snapshot
railway database snapshot create

# List available snapshots  
railway database snapshot list

# Restore from snapshot
railway database restore --snapshot-id <snapshot-id>

🛠️ Application-Level Backups

PostgreSQL pg_dump Backups

Complete database backup with pg_dump:

#!/bin/bash
# scripts/backup-database.sh

# Configuration
BACKUP_DIR="./backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="solo_kit_backup_${TIMESTAMP}.sql"
BACKUP_PATH="${BACKUP_DIR}/${BACKUP_FILE}"

# Ensure backup directory exists
mkdir -p "${BACKUP_DIR}"

# Create database backup
echo "🗃️  Creating database backup..."
pg_dump "${DATABASE_URL}" \
  --verbose \
  --clean \
  --if-exists \
  --create \
  --format=plain \
  --file="${BACKUP_PATH}"

if [ $? -eq 0 ]; then
  echo "✅ Backup created successfully: ${BACKUP_PATH}"
  
  # Compress backup
  gzip "${BACKUP_PATH}"
  echo "✅ Backup compressed: ${BACKUP_PATH}.gz"
  
  # Show backup size
  BACKUP_SIZE=$(du -h "${BACKUP_PATH}.gz" | cut -f1)
  echo "📊 Backup size: ${BACKUP_SIZE}"
else
  echo "❌ Backup failed"
  exit 1
fi

Custom backup script with metadata:

// scripts/backup-full.ts
import { db } from '@packages/database';
import { sql } from 'drizzle-orm';
import { execSync } from 'child_process';
import * as fs from 'fs/promises';
import * as path from 'path';

interface BackupMetadata {
  timestamp: string;
  version: string;
  tables: string[];
  recordCounts: Record<string, number>;
  schemaVersion: string;
  environment: string;
}

export async function createFullBackup(): Promise<string> {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const backupDir = path.join(process.cwd(), 'backups', timestamp);
  
  try {
    // Ensure backup directory exists
    await fs.mkdir(backupDir, { recursive: true });
    
    // Create database backup
    console.log('🗃️  Creating database dump...');
    const dumpFile = path.join(backupDir, 'database.sql');
    execSync(`pg_dump "${process.env.DATABASE_URL}" > "${dumpFile}"`);
    
    // Gather metadata
    console.log('📊 Gathering metadata...');
    const metadata = await gatherBackupMetadata();
    await fs.writeFile(
      path.join(backupDir, 'metadata.json'),
      JSON.stringify(metadata, null, 2)
    );
    
    // Create schema backup
    console.log('📋 Backing up schema...');
    const schemaFile = path.join(backupDir, 'schema.sql');
    execSync(`pg_dump "${process.env.DATABASE_URL}" --schema-only > "${schemaFile}"`);
    
    // Create data-only backup
    console.log('📄 Backing up data...');
    const dataFile = path.join(backupDir, 'data.sql');
    execSync(`pg_dump "${process.env.DATABASE_URL}" --data-only > "${dataFile}"`);
    
    // Compress entire backup
    console.log('🗜️  Compressing backup...');
    execSync(`tar -czf "${backupDir}.tar.gz" -C "${path.dirname(backupDir)}" "${path.basename(backupDir)}"`);
    
    // Clean up uncompressed directory
    execSync(`rm -rf "${backupDir}"`);
    
    console.log(`✅ Full backup created: ${backupDir}.tar.gz`);
    return `${backupDir}.tar.gz`;
    
  } catch (error) {
    console.error('❌ Full backup failed:', error);
    throw error;
  }
}

async function gatherBackupMetadata(): Promise<BackupMetadata> {
  const database = await db;
  if (!database) {
    throw new Error('Database not available');
  }
  
  // Get table list and record counts
  const tablesResult = await database.execute(sql`
    SELECT table_name 
    FROM information_schema.tables 
    WHERE table_schema = 'public'
    ORDER BY table_name
  `);
  
  const tables = tablesResult.map((row: any) => row.table_name);
  const recordCounts: Record<string, number> = {};
  
  // Get record count for each table
  for (const table of tables) {
    try {
      const countResult = await database.execute(
        sql`SELECT COUNT(*) as count FROM ${sql.identifier(table)}`
      );
      recordCounts[table] = parseInt(countResult[0]?.count || '0');
    } catch (error) {
      recordCounts[table] = -1; // Error getting count
    }
  }
  
  return {
    timestamp: new Date().toISOString(),
    version: process.env.npm_package_version || 'unknown',
    tables,
    recordCounts,
    schemaVersion: 'drizzle-schema-v1', // Update as schema evolves
    environment: process.env.NODE_ENV || 'unknown',
  };
}

Incremental Backups

Track changes for efficient incremental backups:

// Incremental backup based on timestamps
// Note: pg_dump cannot filter individual rows, so changed records are
// exported via Drizzle queries instead.
import { db, users, transactions } from '@packages/database';
import { gte } from 'drizzle-orm';
import * as fs from 'fs/promises';

export async function createIncrementalBackup(since: Date): Promise<string> {
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const backupPath = `./backups/incremental_${timestamp}.json`;
  
  try {
    const database = await db;
    if (!database) {
      throw new Error('Database not available');
    }
    
    // Export only records changed since the last backup
    const changedUsers = await database
      .select()
      .from(users)
      .where(gte(users.updatedAt, since));
      
    const changedTransactions = await database
      .select()
      .from(transactions)
      .where(gte(transactions.createdAt, since));
    
    // Write the changed records as a JSON snapshot. For very large tables,
    // prefer a server-side export, e.g. psql's \copy with a filtered SELECT.
    await fs.writeFile(
      backupPath,
      JSON.stringify({ since: since.toISOString(), changedUsers, changedTransactions }, null, 2)
    );
    
    console.log(`✅ Incremental backup created: ${backupPath}`);
    return backupPath;
    
  } catch (error) {
    console.error('❌ Incremental backup failed:', error);
    throw error;
  }
}
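
To chain incremental backups, the timestamp of the last successful run needs to be persisted somewhere. A small sketch that stores it in a marker file (the .last-backup file is an assumption, not part of Solo Kit):

// Run an incremental backup since the last recorded run (sketch;
// assumes createIncrementalBackup from above is in scope)
import * as fs from 'fs/promises';

const MARKER_FILE = './backups/.last-backup';

export async function runIncrementalBackup(): Promise<void> {
  // Fall back to 24 hours ago if no previous run has been recorded
  const since = await fs
    .readFile(MARKER_FILE, 'utf8')
    .then((contents) => new Date(contents.trim()))
    .catch(() => new Date(Date.now() - 24 * 60 * 60 * 1000));

  await createIncrementalBackup(since);

  // Record this run so the next backup only captures newer changes
  await fs.writeFile(MARKER_FILE, new Date().toISOString());
}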

🔄 Restore Procedures

Basic Database Restore

Restore from pg_dump backup:

#!/bin/bash
# scripts/restore-database.sh

BACKUP_FILE=$1

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup_file>"
  echo "Example: $0 ./backups/solo_kit_backup_20240115_143000.sql.gz"
  exit 1
fi

# Confirmation prompt
echo "⚠️  WARNING: This will replace all data in the database."
echo "Database: ${DATABASE_URL}"
echo "Backup file: ${BACKUP_FILE}"
read -p "Are you sure? (yes/no): " confirmation

if [ "$confirmation" != "yes" ]; then
  echo "❌ Restore cancelled"
  exit 0
fi

# Decompress if needed
if [[ "$BACKUP_FILE" == *.gz ]]; then
  echo "📦 Decompressing backup..."
  TEMP_FILE=$(mktemp)
  gunzip -c "$BACKUP_FILE" > "$TEMP_FILE"
  BACKUP_FILE="$TEMP_FILE"
fi

# Restore database
echo "🔄 Restoring database..."
psql "${DATABASE_URL}" < "${BACKUP_FILE}"

if [ $? -eq 0 ]; then
  echo "✅ Database restored successfully"
else
  echo "❌ Database restore failed"
  exit 1
fi

# Clean up temp file
if [ -n "$TEMP_FILE" ]; then
  rm "$TEMP_FILE"
fi

Point-in-Time Recovery

Restore to specific timestamp:

// Point-in-time recovery implementation
export async function restoreToTimestamp(
  backupPath: string, 
  targetTimestamp: Date
): Promise<void> {
  try {
    console.log(`🕐 Restoring to timestamp: ${targetTimestamp.toISOString()}`);
    
    // 1. Restore base backup
    console.log('📦 Restoring base backup...');
    execSync(`psql "${process.env.DATABASE_URL}" < "${backupPath}"`);
    
    // 2. Apply WAL files up to target timestamp (if available)
    console.log('📝 Applying transaction logs...');
    await applyWALToTimestamp(targetTimestamp);
    
    // 3. Verify restoration
    console.log('✅ Verifying restoration...');
    const database = await db;
    if (database) {
      // Check that data exists and is consistent
      const userCount = await database.select({ count: count() }).from(users);
      console.log(`📊 Restored ${userCount[0].count} users`);
    }
    
    console.log('✅ Point-in-time recovery completed');
    
  } catch (error) {
    console.error('❌ Point-in-time recovery failed:', error);
    throw error;
  }
}

async function applyWALToTimestamp(targetTimestamp: Date): Promise<void> {
  // Implementation depends on your backup strategy
  // For cloud providers, this is handled automatically
  // For self-hosted, you'd replay WAL files
  
  console.log('ℹ️  WAL replay handled by cloud provider');
}

Migration-Safe Restore

Restore with schema version compatibility:

export async function restoreMigrationSafe(
  backupPath: string,
  targetVersion?: string
): Promise<void> {
  try {
    // Read backup metadata
    const metadataPath = backupPath.replace('.sql', '.metadata.json');
    const metadata = JSON.parse(await fs.readFile(metadataPath, 'utf8'));
    
    console.log(`📋 Backup schema version: ${metadata.schemaVersion}`);
    console.log(`🎯 Target version: ${targetVersion || 'current'}`);
    
    // Check schema compatibility
    const currentSchemaVersion = getCurrentSchemaVersion();
    if (metadata.schemaVersion !== currentSchemaVersion) {
      console.warn('⚠️  Schema version mismatch - migrations may be needed');
      
      // Optionally run migrations after restore
      const runMigrations = await promptForConfirmation(
        'Run migrations after restore?'
      );
      
      if (runMigrations) {
        await restoreWithMigrations(backupPath, metadata);
        return;
      }
    }
    
    // Standard restore
    await restoreStandard(backupPath);
    
  } catch (error) {
    console.error('❌ Migration-safe restore failed:', error);
    throw error;
  }
}

async function restoreWithMigrations(
  backupPath: string, 
  metadata: BackupMetadata
): Promise<void> {
  console.log('🔄 Restoring with migrations...');
  
  // 1. Restore to temporary database
  const tempDbName = `temp_restore_${Date.now()}`;
  await createTemporaryDatabase(tempDbName);
  
  try {
    // 2. Restore backup to temp database
    await restoreToDatabase(backupPath, tempDbName);
    
    // 3. Run migrations on temp database
    await runMigrationsOnDatabase(tempDbName);
    
    // 4. Copy data from temp to main database
    await copyDataBetweenDatabases(tempDbName, 'main');
    
    console.log('✅ Migration-safe restore completed');
    
  } finally {
    // Clean up temporary database
    await dropTemporaryDatabase(tempDbName);
  }
}
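
The temporary-database helpers referenced above (createTemporaryDatabase, restoreToDatabase, dropTemporaryDatabase) are also used by the recovery test later in this guide but are not shown elsewhere. A minimal sketch using psql, assuming the role in DATABASE_URL is allowed to create and drop databases:

// Temporary database helpers (sketch): assumes psql is on PATH and the
// role in DATABASE_URL may CREATE/DROP databases
import { execSync } from 'child_process';

export async function createTemporaryDatabase(name: string): Promise<void> {
  execSync(`psql "${process.env.DATABASE_URL}" -c 'CREATE DATABASE "${name}"'`);
}

export async function dropTemporaryDatabase(name: string): Promise<void> {
  execSync(`psql "${process.env.DATABASE_URL}" -c 'DROP DATABASE IF EXISTS "${name}"'`);
}

export async function restoreToDatabase(backupPath: string, dbName: string): Promise<void> {
  // Point the connection string at the temporary database before applying the dump
  const url = new URL(process.env.DATABASE_URL!);
  url.pathname = `/${dbName}`;
  execSync(`psql "${url.toString()}" -f "${backupPath}"`);
}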

🔄 Automated Backup Strategies

Scheduled Backups

Cron-based automated backups:

#!/bin/bash
# scripts/backup-script.sh (run from cron; a GitHub Actions alternative follows)

# Daily backup at 2 AM UTC
# 0 2 * * * /path/to/backup-script.sh

BACKUP_RETENTION_DAYS=30

# Create backup (assumes a small wrapper, scripts/run-backup.ts, that calls
# createFullBackup() and prints only the resulting archive path)
BACKUP_FILE=$(pnpm tsx scripts/run-backup.ts)

# Upload to cloud storage (AWS S3 example)
aws s3 cp "${BACKUP_FILE}" "s3://your-backup-bucket/$(date +%Y/%m/%d)/"

# Clean up local backups older than retention period
find ./backups -name "*.tar.gz" -mtime +${BACKUP_RETENTION_DAYS} -delete

echo "✅ Automated backup completed: ${BACKUP_FILE}"

GitHub Actions automated backup:

# .github/workflows/database-backup.yml
name: Database Backup

on:
  schedule:
    # Daily at 2 AM UTC
    - cron: '0 2 * * *'
  workflow_dispatch: # Manual trigger

jobs:
  backup:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup pnpm
        uses: pnpm/action-setup@v4 # reads the pnpm version from "packageManager" in package.json
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          
      - name: Install dependencies
        run: pnpm install
        
      - name: Create database backup
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
        run: |
          pnpm tsx scripts/backup-full.ts
          
      - name: Upload backup to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp backups/ s3://your-backup-bucket/ --recursive --exclude "*" --include "*.tar.gz"

Backup Monitoring

Monitor backup health and success:

// Monitor backup operations
export async function monitorBackupHealth(): Promise<BackupHealthReport> {
  try {
    const backupDir = './backups';
    const files = await fs.readdir(backupDir);
    
    const backupFiles = files
      .filter(f => f.endsWith('.tar.gz') || f.endsWith('.sql'))
      .sort()
      .reverse();
    
    const latestBackup = backupFiles[0];
    const latestBackupPath = path.join(backupDir, latestBackup);
    const stats = await fs.stat(latestBackupPath);
    
    const hoursOld = (Date.now() - stats.mtime.getTime()) / (1000 * 60 * 60);
    
    const report: BackupHealthReport = {
      status: hoursOld < 25 ? 'healthy' : 'stale', // Allow 1 hour buffer
      latestBackup,
      latestBackupAge: hoursOld,
      backupCount: backupFiles.length,
      latestBackupSize: stats.size,
      recommendations: [],
    };
    
    if (hoursOld > 25) {
      report.recommendations.push('Latest backup is more than 25 hours old');
    }
    
    if (backupFiles.length < 7) {
      report.recommendations.push('Less than 7 backup files available');
    }
    
    return report;
    
  } catch (error) {
    return {
      status: 'error',
      error: error.message,
      recommendations: ['Fix backup monitoring system'],
    };
  }
}

interface BackupHealthReport {
  status: 'healthy' | 'stale' | 'error';
  latestBackup?: string;
  latestBackupAge?: number;
  backupCount?: number;
  latestBackupSize?: number;
  recommendations: string[];
  error?: string;
}
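
monitorBackupHealth is most useful when something acts on the report. A small sketch of a scheduled health check that exits non-zero when backups go stale (the import path is a placeholder):

// scripts/check-backup-health.ts (sketch; the import path is a placeholder)
import { monitorBackupHealth } from './backup-monitoring';

async function main() {
  const report = await monitorBackupHealth();
  console.log(JSON.stringify(report, null, 2));

  if (report.status !== 'healthy') {
    // Wire your alerting here (Slack webhook, email, PagerDuty, ...)
    console.error(`🚨 Backup health check failed: ${report.status}`);
    process.exit(1); // non-zero exit makes cron/CI surface the problem
  }
}

main();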

🌍 Disaster Recovery

Multi-Region Strategy

Cross-region backup replication:

// Disaster recovery with cross-region backups
import { exec } from 'child_process';
import { promisify } from 'util';

const execAsync = promisify(exec);

export async function setupDisasterRecovery(): Promise<void> {
  const regions = ['us-east-1', 'eu-west-1', 'ap-southeast-1'];
  
  try {
    // Create backups in multiple regions
    for (const region of regions) {
      console.log(`🌍 Creating backup in ${region}...`);
      
      await createRegionalBackup(region);
      await verifyRegionalBackup(region);
    }
    
    console.log('✅ Multi-region disaster recovery setup complete');
    
  } catch (error) {
    console.error('❌ Disaster recovery setup failed:', error);
    throw error;
  }
}

async function createRegionalBackup(region: string): Promise<void> {
  // Implementation depends on your cloud provider
  // Example for AWS S3
  const backupFile = await createFullBackup();
  
  await execAsync(`
    aws s3 cp "${backupFile}" \
    "s3://disaster-recovery-${region}/$(date +%Y/%m/%d)/" \
    --region ${region}
  `);
}

Recovery Testing

Regularly test backup restoration:

// Automated backup restoration testing
export async function testBackupRestoration(): Promise<TestResult> {
  const testDbName = `test_restore_${Date.now()}`;
  
  try {
    console.log('🧪 Starting backup restoration test...');
    
    // 1. Create test database
    await createTemporaryDatabase(testDbName);
    
    // 2. Find latest backup
    const latestBackup = await findLatestBackup();
    
    // 3. Restore backup to test database
    await restoreToDatabase(latestBackup, testDbName);
    
    // 4. Verify data integrity
    const verificationResult = await verifyDataIntegrity(testDbName);
    
    // 5. Test basic queries
    const queryTestResult = await testBasicQueries(testDbName);
    
    const result: TestResult = {
      success: verificationResult.success && queryTestResult.success,
      backupFile: latestBackup,
      dataIntegrity: verificationResult,
      queryTests: queryTestResult,
      timestamp: new Date().toISOString(),
    };
    
    console.log('✅ Backup restoration test completed');
    return result;
    
  } catch (error) {
    console.error('❌ Backup restoration test failed:', error);
    return {
      success: false,
      error: error.message,
      timestamp: new Date().toISOString(),
    };
  } finally {
    // Clean up test database
    await dropTemporaryDatabase(testDbName);
  }
}

interface TestResult {
  success: boolean;
  backupFile?: string;
  dataIntegrity?: VerificationResult;
  queryTests?: QueryTestResult;
  error?: string;
  timestamp: string;
}
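
A thin wrapper makes the restoration test runnable from cron or CI and turns an unsuccessful result into a failed job (sketch; the import path is a placeholder):

// scripts/test-backup-restoration.ts (sketch; the import path is a placeholder)
import { testBackupRestoration } from './backup-recovery-tests';

testBackupRestoration()
  .then((result) => {
    console.log(JSON.stringify(result, null, 2));
    process.exit(result.success ? 0 : 1);
  })
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });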

🎯 Best Practices

1. Backup Strategy

Follow the 3-2-1 rule (see the sketch below):

  • 3 copies of your data (original + 2 backups)
  • 2 different media types (cloud + local, or cloud + different cloud)
  • 1 offsite backup (different geographic location)
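
A sketch of the 3-2-1 rule applied to the full-backup archive, assuming two S3 buckets in different regions in addition to the local copy (bucket names and regions are placeholders):

// Replicate one backup archive to two offsite destinations (sketch)
import { execSync } from 'child_process';

export function replicateBackup(backupFile: string): void {
  const datePrefix = new Date().toISOString().slice(0, 10).replace(/-/g, '/');

  // Copy 1: primary offsite bucket
  execSync(`aws s3 cp "${backupFile}" "s3://solo-kit-backups-us/${datePrefix}/" --region us-east-1`);

  // Copy 2: second bucket in a different region for geographic separation
  execSync(`aws s3 cp "${backupFile}" "s3://solo-kit-backups-eu/${datePrefix}/" --region eu-west-1`);

  // The third copy is the local archive created by createFullBackup()
  console.log(`✅ Backup replicated to two offsite locations: ${backupFile}`);
}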

2. Testing & Verification

Regularly test your backups:

# Weekly backup restoration test
0 3 * * 0 /path/to/test-backup-restoration.sh

# Monthly full disaster recovery test
0 4 1 * * /path/to/full-disaster-recovery-test.sh

3. Documentation

Document your recovery procedures (RTO/RPO example below):

  • Recovery Time Objective (RTO): How quickly you can restore
  • Recovery Point Objective (RPO): How much data loss is acceptable
  • Step-by-step recovery procedures: Clear instructions for team members
  • Contact information: Who to call during disasters
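
Keeping the objectives next to the backup code makes them easy to assert from monitoring. A sketch (the numbers are illustrative, not recommendations):

// Recovery objectives kept next to the backup code so monitoring can assert them (sketch)
export const RECOVERY_OBJECTIVES = {
  rtoMinutes: 60,       // Recovery Time Objective: restore within 1 hour
  rpoMinutes: 24 * 60,  // Recovery Point Objective: lose at most 24 hours of data
} as const;

// Example check: the newest backup must be younger than the RPO
export function isWithinRPO(latestBackupAgeHours: number): boolean {
  return latestBackupAgeHours * 60 <= RECOVERY_OBJECTIVES.rpoMinutes;
}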

4. Security

Secure your backups (encryption sketch below):

  • Encrypt backups at rest and in transit
  • Access control - limit who can access backups
  • Audit logging - track backup access and operations
  • Regular testing - verify backup integrity and security
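
For encryption at rest, the archive can be encrypted before it leaves the machine. A sketch using symmetric GPG encryption (assumes gpg is installed and BACKUP_PASSPHRASE is set; in production prefer a passphrase file or key-based encryption):

// Encrypt a backup archive with GPG before uploading (sketch)
import { execSync } from 'child_process';

export function encryptBackup(backupFile: string): string {
  const encryptedFile = `${backupFile}.gpg`;

  // Symmetric AES-256 encryption; the passphrase comes from the environment.
  // Note: passing the passphrase on the command line can leak via `ps`;
  // use --passphrase-file or public-key encryption for production setups.
  execSync(
    `gpg --batch --yes --pinentry-mode loopback --symmetric --cipher-algo AES256 ` +
      `--passphrase "${process.env.BACKUP_PASSPHRASE}" ` +
      `--output "${encryptedFile}" "${backupFile}"`
  );

  console.log(`🔒 Encrypted backup: ${encryptedFile}`);
  return encryptedFile;
}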

📊 Backup Metrics

Key metrics to monitor (tracking sketch below):

  • Backup frequency: Daily, weekly, monthly
  • Backup success rate: Percentage of successful backups
  • Backup size trends: Growing data and storage requirements
  • Recovery time: How long restoration takes
  • Recovery success rate: Percentage of successful recoveries
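
A lightweight way to capture these metrics is to append one record per backup run and aggregate later. A sketch (the metrics file location is an assumption):

// Append one metrics record per backup run (sketch)
import * as fs from 'fs/promises';

interface BackupRunMetric {
  startedAt: string;   // ISO timestamp
  durationMs: number;  // how long the backup took
  sizeBytes: number;   // compressed archive size
  success: boolean;
}

export async function recordBackupRun(metric: BackupRunMetric): Promise<void> {
  // JSON Lines file: one record per line, easy to aggregate for
  // success rate, duration, and size trends
  await fs.appendFile('./backups/metrics.jsonl', JSON.stringify(metric) + '\n');
}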

🎯 Next Steps

Strengthen your data protection with these guides:

  1. Monitoring - Monitor backup and recovery health
  2. Security - Secure backup practices
  3. Advanced - Advanced backup strategies
  4. Performance - Optimize backup performance

Comprehensive backup and disaster recovery procedures ensure your Solo Kit application can recover from data loss and maintain business continuity.
