# Backup and Disaster Recovery
This guide covers backup strategies and restore procedures for all Made Open data stores.
## Overview
| Data Store | Role | Critical? | Rebuildable? |
|---|---|---|---|
| PostgreSQL (Supabase) | Source of truth for all entities, audit log, credentials | Yes | No |
| NATS JetStream | Event stream history | Yes (for replay) | Streams recreated by hub, but message history is lost |
| Meilisearch | Full-text search indexes | No | Yes, rebuilt from PostgreSQL by SearchService |
| Redis | Cache layer (LRU + TTL) | No | Yes, rebuilt from PostgreSQL on cache miss |
**Priority:** Always back up PostgreSQL first. It is the single source of truth. The `audit_log` table is append-only and must never lose data.
## PostgreSQL Backup
### Supabase Cloud (Production)
Supabase Cloud handles backups automatically:
| Plan | Retention | PITR |
|---|---|---|
| Free | None | No |
| Pro | 7 days | Yes |
| Enterprise | 30+ days | Yes |
Point-in-time recovery (PITR) lets you restore to any second within the retention window. Enable it in the Supabase dashboard under Settings > Database > Backups.
Manual backup via the Supabase CLI:

```sh
# Link your project first (one-time)
supabase link --project-ref <your-project-ref>

# Dump the entire database
supabase db dump -f backup.sql

# Dump data only (no schema)
supabase db dump --data-only -f backup-data.sql
```
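To restore a CLI dump into a hosted project, `psql` can apply it over the project's connection string. `DATABASE_URL` here is a placeholder, not something the CLI sets for you -- copy the connection string from your Supabase dashboard:

```sh
# Apply a SQL dump to a remote database over its connection string.
# DATABASE_URL is a placeholder, e.g.:
#   postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres
psql "$DATABASE_URL" -f backup.sql
```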
### Local / Self-Hosted
Using `pg_dump` (recommended):

```sh
# Full dump (schema + data)
pg_dump -h localhost -p 54322 -U postgres postgres > backup.sql

# Restore into a clean database
psql -h localhost -p 54322 -U postgres postgres < backup.sql
```
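For larger databases, a sketch using `pg_dump`'s custom format may be preferable: the dump is compressed and `pg_restore` can load it in parallel. The flags shown are standard `pg_dump`/`pg_restore` options; the job count is illustrative:

```sh
# Compressed custom-format dump (supports selective and parallel restore)
pg_dump -h localhost -p 54322 -U postgres -Fc -f backup.dump postgres

# Parallel restore with 4 jobs; --clean drops existing objects first
pg_restore -h localhost -p 54322 -U postgres -d postgres \
  --clean --if-exists -j 4 backup.dump
```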
Docker volume backup (raw files):

```sh
docker run --rm \
  -v made-open_supabase_db_data:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/postgres-data.tar.gz /data
```
Docker volume restore:

```sh
docker compose down
docker run --rm \
  -v made-open_supabase_db_data:/data \
  -v "$(pwd)":/backup \
  alpine sh -c "rm -rf /data/* && tar xzf /backup/postgres-data.tar.gz -C /"
docker compose up -d
```
## NATS JetStream Backup

JetStream persists event data in the `nats_data` Docker volume. The hub's `EventBus.ensureStreams()` recreates stream configurations on startup (idempotent), but message history is lost without a volume backup.
Volume backup:

```sh
docker run --rm \
  -v made-open_nats_data:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/nats-data.tar.gz /data
```
Per-stream export (requires the `nats` CLI):

```sh
# Export a single stream
nats stream backup EVENTS /path/to/backup/events

# Restore a stream
nats stream restore EVENTS /path/to/backup/events
```
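Because the guide stresses testing restores, a quick sanity check after `nats stream restore` is to compare the stream's message count and sequence range against the source. Both subcommands are part of the standard `nats` CLI:

```sh
# Inspect stream state: message count, byte size, first/last sequence
nats stream info EVENTS

# List all streams to confirm everything was recreated
nats stream ls
```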
Volume restore:

```sh
docker compose down
docker run --rm \
  -v made-open_nats_data:/data \
  -v "$(pwd)":/backup \
  alpine sh -c "rm -rf /data/* && tar xzf /backup/nats-data.tar.gz -C /"
docker compose up -d
```
## Meilisearch Backup
Meilisearch indexes can be fully rebuilt from PostgreSQL. The SearchService re-indexes automatically on startup if indexes are empty. Explicit backups are optional but useful for faster recovery.
Create a snapshot (binary, fast restore):

```sh
curl -X POST http://localhost:7700/snapshots \
  -H "Authorization: Bearer ${MEILI_MASTER_KEY}"
```
Snapshots are stored inside the Meilisearch data directory (the `meili_data` Docker volume).
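To restore from a snapshot, Meilisearch is started with its `--import-snapshot` flag. The snapshot path below (`data.ms.snapshot` is Meilisearch's default name), the mount point, and the image tag are assumptions -- adjust them to match your deployment:

```sh
# One-time start from a snapshot (drop the flag on subsequent starts)
docker run --rm \
  -v made-open_meili_data:/meili_data \
  getmeili/meilisearch:latest \
  meilisearch --import-snapshot /meili_data/snapshots/data.ms.snapshot
```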
Create a dump (portable, version-independent):

```sh
curl -X POST http://localhost:7700/dumps \
  -H "Authorization: Bearer ${MEILI_MASTER_KEY}"
```
Volume backup:

```sh
docker run --rm \
  -v made-open_meili_data:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/meili-data.tar.gz /data
```
Alternative: skip backup entirely. Stop Meilisearch, delete the volume, restart, and let the hub rebuild indexes from PostgreSQL.
## Redis Backup

Redis is configured with AOF persistence (`redis-server --appendonly yes`). It serves as a cache layer only; all data originates in PostgreSQL and will be repopulated on cache miss. Backup is optional.
Volume backup:

```sh
docker run --rm \
  -v made-open_redis_data:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/redis-data.tar.gz /data
```
Alternative: skip backup entirely. Delete the volume and restart. The CacheService will repopulate from PostgreSQL on demand.
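If you do back up the Redis volume, it can help to compact the AOF first so the archive is smaller and replays faster. Both commands are standard `redis-cli`:

```sh
# Rewrite the append-only file into its most compact form
redis-cli BGREWRITEAOF

# Confirm AOF persistence is enabled before trusting the volume copy
redis-cli CONFIG GET appendonly
```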
## Full Platform Restore
Follow these steps to restore the entire platform from backups.
### Step-by-step

1. Stop all services:

   ```sh
   docker compose down
   ```

2. Restore PostgreSQL (choose one method):

   ```sh
   # Method A: pg_dump restore (requires a running PostgreSQL instance)
   supabase start
   psql -h localhost -p 54322 -U postgres postgres < backup.sql

   # Method B: Docker volume restore
   docker run --rm \
     -v made-open_supabase_db_data:/data \
     -v "$(pwd)":/backup \
     alpine sh -c "rm -rf /data/* && tar xzf /backup/postgres-data.tar.gz -C /"
   ```

3. Restore the NATS volume (optional -- streams are recreated automatically, but history is lost without this):

   ```sh
   docker run --rm \
     -v made-open_nats_data:/data \
     -v "$(pwd)":/backup \
     alpine sh -c "rm -rf /data/* && tar xzf /backup/nats-data.tar.gz -C /"
   ```

4. Start infrastructure:

   ```sh
   docker compose up -d
   ```

5. Start the hub:

   ```sh
   pnpm hub
   ```

   The hub will:
   - Recreate NATS JetStream streams (idempotent)
   - Rebuild Meilisearch indexes if they are empty
   - Reconnect to PostgreSQL and resume normal operation

6. Verify the platform is healthy:

   ```sh
   curl http://localhost:4101/health/detailed
   ```

   All services should report `"status": "ok"`.
## Automated Backup Script
A daily cron job that backs up PostgreSQL and NATS volumes with 30-day retention:
```bash
#!/bin/bash
# backup.sh - run daily via cron: 0 2 * * * /opt/made-open/backup.sh
set -euo pipefail

BACKUP_DIR="/backups/made-open/$(date +%Y-%m-%d)"
mkdir -p "$BACKUP_DIR"

echo "[$(date)] Starting Made Open backup..."

# PostgreSQL (source of truth)
pg_dump -h localhost -p 54322 -U postgres postgres > "$BACKUP_DIR/postgres.sql"
echo "  PostgreSQL dumped."

# NATS JetStream volume (event history)
docker run --rm \
  -v made-open_nats_data:/data \
  -v "$BACKUP_DIR":/backup \
  alpine tar czf /backup/nats-data.tar.gz /data
echo "  NATS volume backed up."

# Cleanup backups older than 30 days
find /backups/made-open -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;

echo "[$(date)] Backup complete: $BACKUP_DIR"
```
Install the cron job:

```sh
chmod +x /opt/made-open/backup.sh

# Note: `crontab -` replaces the current user's entire crontab;
# merge manually if other entries already exist.
echo "0 2 * * * /opt/made-open/backup.sh >> /var/log/made-open-backup.log 2>&1" | crontab -
```
## Key Points
- PostgreSQL is the source of truth -- always prioritize its backup and test restores regularly.
- The `audit_log` table is append-only (no `UPDATE`, no `DELETE`) and must never lose data.
- Meilisearch and Redis are rebuildable from PostgreSQL -- explicit backups save recovery time but are not required.
- NATS streams are recreated by the hub on startup, but historical event messages are lost without a volume backup.
- For production, use Supabase Cloud's built-in PITR (Pro plan) instead of manual `pg_dump`.
- Store backups off-site (S3, GCS, or another region) to protect against host-level failures.
- Test your restores periodically -- a backup you have never restored from is not a backup.