Backup & Restore

Learn how to protect your Solo Kit database with comprehensive backup strategies, automated recovery procedures, and disaster preparedness. This guide covers Convex's built-in backups, export procedures, and data protection best practices.

Backup Overview

Solo Kit Backup Strategy

Solo Kit on Convex implements multi-layered data protection:

  • Automatic Convex backups: Built-in continuous backup and recovery
  • Point-in-time recovery: Restore to any specific moment
  • Export capabilities: Custom data exports for additional safety
  • Cross-environment sync: Easy data movement between environments

Backup Architecture

Application Data
    ↓ (automatic)
Convex Automatic Backups (continuous)
    ↓ (manual/scripted)
Custom Data Exports (JSON, CSV)
    ↓ (disaster recovery)
External Backup Storage

Three-tier protection:

  1. Real-time: Convex's automatic continuous backups
  2. Snapshot: Point-in-time recovery capabilities
  3. Export: Custom export scripts for external storage

Convex Built-in Backups

Automatic Backup Features

Convex provides automatic backup and recovery:

  • Continuous backups: All data changes are automatically backed up
  • Point-in-time recovery: Available on Pro and Enterprise plans
  • Zero configuration: No setup required for basic backups
  • Instant recovery: Quick restoration from backups

Accessing Backups

Via Convex Dashboard:

  1. Run npx convex dashboard to open the Convex dashboard
  2. Navigate to your project settings
  3. Access backup and recovery options
  4. Select restore point if needed

Point-in-Time Recovery

For Pro/Enterprise plans:

# Point-in-time recovery is managed from the Convex dashboard
# (or via Convex support, depending on your plan)
npx convex dashboard

# Manual snapshot export is also available from the CLI
# (flags may vary by Convex CLI version)
npx convex export --path snapshot.zip

Custom Data Exports

Export Functions

Create export functions for custom backup needs:

// convex/exports.ts
import { v } from 'convex/values';
import { query } from './_generated/server';

// Export all users
export const exportUsers = query({
  handler: async (ctx) => {
    const users = await ctx.db.query('users').collect();

    return users.map((user) => ({
      id: user._id,
      email: user.email,
      name: user.name,
      role: user.role,
      createdAt: new Date(user.createdAt).toISOString(),
      updatedAt: new Date(user.updatedAt).toISOString(),
    }));
  },
});

// Export with pagination for large datasets
export const exportUsersPaginated = query({
  args: {
    cursor: v.optional(v.id('users')),
    limit: v.optional(v.number()),
  },
  handler: async (ctx, args) => {
    const limit = args.limit ?? 1000;

    // Avoid shadowing the imported `query` helper
    let usersQuery = ctx.db.query('users').order('asc');

    if (args.cursor) {
      const cursorDoc = await ctx.db.get(args.cursor);
      if (cursorDoc) {
        // Resume after the cursor document (results are ordered by _creationTime)
        usersQuery = usersQuery.filter((q) =>
          q.gt(q.field('_creationTime'), cursorDoc._creationTime)
        );
      }
    }

    // Fetch one extra record to detect whether another page exists
    const users = await usersQuery.take(limit + 1);
    const hasMore = users.length > limit;
    const batch = hasMore ? users.slice(0, -1) : users;

    return {
      data: batch.map((user) => ({
        id: user._id,
        email: user.email,
        name: user.name,
        role: user.role,
        createdAt: user.createdAt,
      })),
      nextCursor: hasMore ? batch[batch.length - 1]._id : null,
      hasMore,
    };
  },
});

// Export all data from a table
export const exportTable = query({
  args: { tableName: v.string() },
  handler: async (ctx, args) => {
    // Note: This requires knowing your table names
    const tables: Record<string, any> = {
      users: ctx.db.query('users'),
      sessions: ctx.db.query('sessions'),
      subscriptions: ctx.db.query('subscriptions'),
    };

    const tableQuery = tables[args.tableName];
    if (!tableQuery) {
      throw new Error(`Unknown table: ${args.tableName}`);
    }

    return await tableQuery.collect();
  },
});
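exportUsersPaginated above returns a cursor for the next page; callers are expected to loop until hasMore is false. A small driver for that loop might look like the following sketch, where the fetchPage callback stands in for your actual client.query(api.exports.exportUsersPaginated, ...) call and the names are illustrative:

```typescript
// Drains a cursor-paginated export by repeatedly calling fetchPage until
// the server reports no more data. fetchPage wraps whatever transport you
// use (e.g. a ConvexHttpClient query).
type Page<T> = { data: T[]; nextCursor: string | null; hasMore: boolean };

async function drainExport<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.nextCursor;
    if (!page.hasMore) break;
  } while (cursor !== null);
  return all;
}
```

For very large tables, stream each page straight to disk instead of accumulating everything in memory.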

Full Database Export Script

// scripts/backup-database.ts
import { ConvexHttpClient } from 'convex/browser';
import { api } from '../convex/_generated/api';
import * as fs from 'fs/promises';
import * as path from 'path';

// Every table listed here must also be registered in exportTable's map in convex/exports.ts
const TABLES_TO_EXPORT = ['users', 'userPreferences', 'sessions', 'subscriptions', 'transactions'];

interface BackupMetadata {
  timestamp: string;
  tables: string[];
  recordCounts: Record<string, number>;
  version: string;
}

async function createFullBackup() {
  const convexUrl = process.env.NEXT_PUBLIC_CONVEX_URL;
  if (!convexUrl) {
    throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
  }
  const client = new ConvexHttpClient(convexUrl);

  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const backupDir = path.join(process.cwd(), 'backups', timestamp);

  await fs.mkdir(backupDir, { recursive: true });

  console.log('Starting full database backup...');

  const metadata: BackupMetadata = {
    timestamp: new Date().toISOString(),
    tables: TABLES_TO_EXPORT,
    recordCounts: {},
    version: process.env.npm_package_version || 'unknown',
  };

  for (const tableName of TABLES_TO_EXPORT) {
    console.log(`Exporting ${tableName}...`);

    try {
      const data = await client.query(api.exports.exportTable, { tableName });

      const filePath = path.join(backupDir, `${tableName}.json`);
      await fs.writeFile(filePath, JSON.stringify(data, null, 2));

      metadata.recordCounts[tableName] = data.length;
      console.log(`  Exported ${data.length} records`);
    } catch (error) {
      console.error(`  Error exporting ${tableName}:`, error);
      metadata.recordCounts[tableName] = -1;
    }
  }

  // Save metadata
  const metadataPath = path.join(backupDir, 'metadata.json');
  await fs.writeFile(metadataPath, JSON.stringify(metadata, null, 2));

  // Create a compressed archive (requires tar on PATH)
  const archivePath = `${backupDir}.tar.gz`;
  const { execSync } = await import('child_process');
  execSync(
    `tar -czf "${archivePath}" -C "${path.dirname(backupDir)}" "${path.basename(backupDir)}"`
  );

  // Clean up uncompressed directory
  await fs.rm(backupDir, { recursive: true });

  console.log(`Backup completed: ${archivePath}`);
  return archivePath;
}

// Run backup
createFullBackup().catch(console.error);
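The script derives its backup directory name by stripping characters that are unsafe in filenames from an ISO timestamp. Factored out as a helper (timestampSlug is a name introduced here, not part of Solo Kit), the transform keeps slugs sorting in the same order as the timestamps they encode:

```typescript
// Converts an ISO-8601 timestamp into a filesystem-safe slug, matching the
// replace(/[:.]/g, '-') used in the backup script. Because only ':' and '.'
// are replaced, slugs still sort chronologically, so `ls backups/` lists
// them oldest-first.
function timestampSlug(date: Date): string {
  return date.toISOString().replace(/[:.]/g, '-');
}
```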

Data Import/Restore

Import Functions

// convex/imports.ts
import { v } from 'convex/values';
import { internalMutation } from './_generated/server';

// Import users from backup
export const importUsers = internalMutation({
  args: {
    users: v.array(
      v.object({
        email: v.string(),
        name: v.string(),
        role: v.union(v.literal('admin'), v.literal('user')),
        createdAt: v.optional(v.number()),
        updatedAt: v.optional(v.number()),
      })
    ),
    skipExisting: v.optional(v.boolean()),
  },
  handler: async (ctx, args) => {
    const results = {
      imported: 0,
      skipped: 0,
      errors: 0,
    };

    for (const userData of args.users) {
      try {
        // Check if user exists
        const existing = await ctx.db
          .query('users')
          .withIndex('by_email', (q) => q.eq('email', userData.email))
          .first();

        if (existing) {
          if (args.skipExisting) {
            results.skipped++;
            continue;
          }
          // Update existing — omit createdAt so a missing value in the
          // backup does not unset the field (patching with undefined
          // removes a field in Convex)
          const { createdAt, ...updates } = userData;
          await ctx.db.patch(existing._id, {
            ...updates,
            updatedAt: Date.now(),
          });
        } else {
          // Insert new
          await ctx.db.insert('users', {
            ...userData,
            createdAt: userData.createdAt ?? Date.now(),
            updatedAt: userData.updatedAt ?? Date.now(),
          });
        }

        results.imported++;
      } catch (error) {
        console.error(`Error importing user ${userData.email}:`, error);
        results.errors++;
      }
    }

    return results;
  },
});

// Bulk import with validation
export const bulkImport = internalMutation({
  args: {
    tableName: v.string(),
    records: v.array(v.any()),
  },
  handler: async (ctx, args) => {
    const results = {
      success: 0,
      failed: 0,
    };

    for (const record of args.records) {
      try {
        // Remove _id if present (will be auto-generated)
        const { _id, _creationTime, ...data } = record;

        await ctx.db.insert(args.tableName as any, {
          ...data,
          createdAt: data.createdAt ?? Date.now(),
          updatedAt: data.updatedAt ?? Date.now(),
        });

        results.success++;
      } catch (error) {
        results.failed++;
      }
    }

    return results;
  },
});
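bulkImport above discards Convex's system fields before inserting, since _id and _creationTime are regenerated by the destination deployment. If you prefer to clean records on the client side before sending them, the same cleanup can be a small pure function (stripSystemFields is an illustrative name, not a Convex API):

```typescript
// Removes Convex-managed system fields so a record exported from one
// deployment can be inserted into another, where _id and _creationTime
// are assigned automatically.
function stripSystemFields<T extends Record<string, unknown>>(
  record: T
): Omit<T, '_id' | '_creationTime'> {
  const { _id, _creationTime, ...rest } = record;
  return rest;
}
```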

Restore Script

// scripts/restore-database.ts
import { ConvexHttpClient } from 'convex/browser';
import { api } from '../convex/_generated/api';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as readline from 'readline';

async function restoreFromBackup(backupPath: string) {
  const convexUrl = process.env.NEXT_PUBLIC_CONVEX_URL;
  if (!convexUrl) {
    throw new Error('NEXT_PUBLIC_CONVEX_URL is not set');
  }
  const client = new ConvexHttpClient(convexUrl);

  // Confirm restore
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  const confirmation = await new Promise<string>((resolve) => {
    rl.question('WARNING: This will modify your database. Type "yes" to confirm: ', resolve);
  });
  rl.close();

  if (confirmation !== 'yes') {
    console.log('Restore cancelled');
    return;
  }

  console.log('Starting database restore...');

  // Extract backup if compressed
  let extractedPath = backupPath;
  if (backupPath.endsWith('.tar.gz')) {
    const { execSync } = await import('child_process');
    const extractDir = path.dirname(backupPath);
    execSync(`tar -xzf "${backupPath}" -C "${extractDir}"`);
    extractedPath = backupPath.replace('.tar.gz', '');
  }

  // Read metadata
  const metadataPath = path.join(extractedPath, 'metadata.json');
  const metadata = JSON.parse(await fs.readFile(metadataPath, 'utf8'));

  console.log(`Backup from: ${metadata.timestamp}`);
  console.log(`Tables: ${metadata.tables.join(', ')}`);

  // Restore each table
  for (const tableName of metadata.tables) {
    const filePath = path.join(extractedPath, `${tableName}.json`);

    try {
      const data = JSON.parse(await fs.readFile(filePath, 'utf8'));
      console.log(`Restoring ${tableName} (${data.length} records)...`);

      // Note: ConvexHttpClient can only call public functions. To use this
      // script, expose bulkImport as a regular `mutation` with an auth
      // guard, or invoke the import via `npx convex run` instead.
      const result = await client.mutation(api.imports.bulkImport, {
        tableName,
        records: data,
      });

      console.log(`  Success: ${result.success}, Failed: ${result.failed}`);
    } catch (error) {
      console.error(`  Error restoring ${tableName}:`, error);
    }
  }

  console.log('Restore completed');
}

// Usage: npx tsx scripts/restore-database.ts ./backups/2024-01-15T10-30-00-000Z.tar.gz
const backupPath = process.argv[2];
if (!backupPath) {
  console.log('Usage: npx tsx scripts/restore-database.ts <backup-path>');
  process.exit(1);
}

restoreFromBackup(backupPath).catch(console.error);

Automated Backup Strategies

Scheduled Exports

Using Convex cron jobs:

// convex/crons.ts
import { cronJobs } from 'convex/server';
import { internal } from './_generated/api';

const crons = cronJobs();

// Daily backup at 2:00 AM UTC
crons.daily('daily-backup', { hourUTC: 2, minuteUTC: 0 }, internal.backups.createDailyBackup);

// Weekly full export on Sunday at 3:00 AM UTC
crons.weekly(
  'weekly-export',
  { dayOfWeek: 'sunday', hourUTC: 3, minuteUTC: 0 },
  internal.backups.createWeeklyExport
);

export default crons;

// convex/backups.ts
import { internalMutation, internalAction } from './_generated/server';

export const createDailyBackup = internalMutation({
  handler: async (ctx) => {
    // Record backup metadata
    await ctx.db.insert('backupLogs', {
      type: 'daily',
      status: 'completed',
      timestamp: Date.now(),
      tables: ['users', 'sessions', 'subscriptions'],
    });

    console.log('Daily backup recorded');
  },
});

export const createWeeklyExport = internalAction({
  handler: async (ctx) => {
    // For external storage, use an action to call external APIs
    // This could upload to S3, Google Cloud Storage, etc.

    console.log('Weekly export completed');
  },
});

GitHub Actions Backup

# .github/workflows/database-backup.yml
name: Database Backup

on:
  schedule:
    # Daily at 2 AM UTC
    - cron: '0 2 * * *'
  workflow_dispatch: # Manual trigger

jobs:
  backup:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Setup pnpm
        uses: pnpm/action-setup@v4

      - name: Install dependencies
        run: pnpm install

      - name: Create database backup
        env:
          NEXT_PUBLIC_CONVEX_URL: ${{ secrets.CONVEX_URL }}
        run: npx tsx scripts/backup-database.ts

      - name: Upload backup to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp backups/ s3://your-backup-bucket/$(date +%Y/%m/%d)/ --recursive --exclude "*" --include "*.tar.gz"

      - name: Clean old local backups
        run: find ./backups -name "*.tar.gz" -mtime +7 -delete
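The workflow's final step deletes local archives older than seven days with find. The same retention policy expressed in TypeScript, usable from a Node cleanup script, might look like this (the BackupFile shape and function name are assumptions of this sketch):

```typescript
// Returns the subset of backup files older than maxAgeDays — the ones a
// cleanup job should delete. The current time is passed in explicitly so
// the function stays pure and easy to test.
interface BackupFile {
  path: string;
  createdAt: number; // epoch milliseconds
}

function backupsToPrune(files: BackupFile[], maxAgeDays: number, now: number): BackupFile[] {
  const cutoff = now - maxAgeDays * 24 * 60 * 60 * 1000;
  return files.filter((f) => f.createdAt < cutoff);
}
```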

Backup Monitoring

Backup Health Checks

// convex/backups.ts (add alongside the cron handlers defined earlier)
import { query } from './_generated/server';

export const getBackupStatus = query({
  handler: async (ctx) => {
    // Get recent backup logs
    const backupLogs = await ctx.db.query('backupLogs').order('desc').take(10);

    const latestBackup = backupLogs[0];
    const hoursOld = latestBackup
      ? (Date.now() - latestBackup.timestamp) / (1000 * 60 * 60)
      : Infinity;

    return {
      status: hoursOld < 25 ? 'healthy' : 'stale',
      latestBackup: latestBackup
        ? {
            timestamp: new Date(latestBackup.timestamp).toISOString(),
            type: latestBackup.type,
            status: latestBackup.status,
          }
        : null,
      hoursOld: Math.round(hoursOld),
      recentBackups: backupLogs.map((log) => ({
        timestamp: new Date(log.timestamp).toISOString(),
        type: log.type,
        status: log.status,
      })),
    };
  },
});

Disaster Recovery

Recovery Procedures

Step-by-step recovery process:

  1. Assess the situation

    # Check current database status
    npx convex dashboard
  2. Identify backup to restore

    # List available backups
    ls -la ./backups/
  3. Restore from backup

    # Run restore script
    npx tsx scripts/restore-database.ts ./backups/latest.tar.gz
  4. Verify restoration

    # Check data integrity
    npx convex run health:check

Recovery Testing

// convex/recovery.ts
import { internalMutation } from './_generated/server';

export const verifyDataIntegrity = internalMutation({
  handler: async (ctx) => {
    const checks = {
      users: false,
      sessions: false,
      subscriptions: false,
    };

    try {
      await ctx.db.query('users').take(1);
      checks.users = true;

      await ctx.db.query('sessions').take(1);
      checks.sessions = true;

      await ctx.db.query('subscriptions').take(1);
      checks.subscriptions = true;
    } catch (error) {
      console.error('Integrity check failed:', error);
    }

    const allPassed = Object.values(checks).every(Boolean);

    return {
      status: allPassed ? 'passed' : 'failed',
      checks,
    };
  },
});

Best Practices

1. Backup Strategy

Follow the 3-2-1 rule:

  • 3 copies of your data (original + 2 backups)
  • 2 different storage types (Convex + external)
  • 1 offsite backup (different geographic location)
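The 3-2-1 rule can be checked mechanically against an inventory of your backup copies. A minimal sketch, where the BackupCopy shape is an assumption of this example rather than a Solo Kit type:

```typescript
// Verifies the 3-2-1 rule for a list of backup copies: at least 3 copies,
// on at least 2 distinct storage types, with at least 1 held offsite.
interface BackupCopy {
  storageType: string; // e.g. 'convex', 's3', 'local-disk'
  offsite: boolean;
}

function satisfies321(copies: BackupCopy[]): boolean {
  const storageTypes = new Set(copies.map((c) => c.storageType));
  const hasOffsite = copies.some((c) => c.offsite);
  return copies.length >= 3 && storageTypes.size >= 2 && hasOffsite;
}
```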

2. Testing & Verification

Regularly test your backups:

# Monthly backup restoration test
0 4 1 * * npx tsx scripts/test-restore.ts

3. Documentation

Document your recovery procedures:

  • Recovery Time Objective (RTO): How quickly you can restore
  • Recovery Point Objective (RPO): How much data loss is acceptable
  • Step-by-step procedures: Clear instructions for team members
  • Contact information: Who to call during disasters
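RTO and RPO targets can be evaluated directly against your backup schedule: with periodic backups every N hours, up to N hours of writes can be lost in the worst case. A toy check under those assumptions (all names here are illustrative):

```typescript
// Checks a backup plan against stated recovery objectives. The worst-case
// data loss for periodic backups equals the backup interval (RPO); the
// measured restore duration bounds the achievable RTO.
interface RecoveryObjectives {
  rpoHours: number; // maximum acceptable data loss
  rtoHours: number; // maximum acceptable downtime
}

interface BackupPlan {
  backupIntervalHours: number;
  measuredRestoreHours: number;
}

function meetsObjectives(plan: BackupPlan, target: RecoveryObjectives): boolean {
  return (
    plan.backupIntervalHours <= target.rpoHours &&
    plan.measuredRestoreHours <= target.rtoHours
  );
}
```

Run a timed restore drill periodically so measuredRestoreHours reflects reality rather than a guess.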

4. Security

Secure your backups:

  • Encrypt backups at rest and in transit
  • Access control - limit who can access backups
  • Audit logging - track backup access and operations
  • Regular testing - verify backup integrity and security

Backup Metrics

Key metrics to monitor:

  • Backup frequency: Daily, weekly, monthly
  • Backup success rate: Percentage of successful backups
  • Backup size trends: Growing data and storage requirements
  • Recovery time: How long restoration takes
  • Recovery success rate: Percentage of successful recoveries
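Several of these metrics fall out of the backupLogs table directly. For example, success rate over recent runs might be computed like this (the log shape mirrors the createDailyBackup inserts shown earlier, with a hypothetical 'failed' status added):

```typescript
// Computes the backup success rate (0–1) from log entries like those
// written by createDailyBackup, assuming status is 'completed' or 'failed'.
interface BackupLogEntry {
  status: 'completed' | 'failed';
  timestamp: number;
}

function backupSuccessRate(logs: BackupLogEntry[]): number {
  if (logs.length === 0) return 0;
  const succeeded = logs.filter((l) => l.status === 'completed').length;
  return succeeded / logs.length;
}
```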

Next Steps

Strengthen your data protection with these guides:

  1. Monitoring - Monitor backup and recovery health
  2. Security - Secure backup practices
  3. Advanced - Advanced backup strategies
  4. Performance - Optimize backup performance

A comprehensive backup and disaster recovery plan lets your Solo Kit application recover from data loss and maintain business continuity.
