Deployment
This guide covers deploying Made Open to production. The platform is composed of four independently deployable components (hub, web, NATS JetStream, and Meilisearch) plus a managed Supabase project; all must be running for full functionality.
Overview
| Component | Technology | Port |
|---|---|---|
| Hub | Fastify (Node.js) | 4101 |
| Web | Next.js | 4100 |
| NATS JetStream | NATS server | 4222 |
| Meilisearch | Meilisearch | 7700 |
| Supabase | Managed (Supabase Cloud) | — |
For production, use Supabase Cloud for the database, auth, and storage layer. Self-hosting Supabase is possible but adds significant operational overhead and is not covered here.
Prerequisites Checklist
Before deploying, ensure you have:
- A Supabase project created at supabase.com
- The Supabase CLI installed and authenticated (supabase login)
- A NATS server with JetStream enabled (self-hosted or Synadia Cloud)
- A Meilisearch instance (self-hosted or Meilisearch Cloud)
- Docker installed on your deployment host (for hub and web)
- All environment variables from the reference table below
Step 1: Supabase Setup
Link your local project to the Supabase Cloud project and push all migrations:
supabase link --project-ref <your-project-ref>
supabase db push
supabase db push applies all migrations in supabase/migrations/ to the cloud database.
After pushing migrations, retrieve your production keys from the Supabase dashboard under Settings > API:
- SUPABASE_URL — the project URL (e.g. https://abcdefgh.supabase.co)
- SUPABASE_SERVICE_ROLE_KEY — the service role key (bypasses RLS — keep this secret)
- SUPABASE_ANON_KEY — the anon key (safe for client-side use)
- DATABASE_URL — the connection string from Settings > Database
Supabase Vault is used for user-provided credentials (Twilio keys, MS Graph tokens). No additional setup is required — the Plugin Manager handles Vault reads automatically. Vault is enabled by default on all Supabase projects.
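Collected into your production .env, the four values above might look like this. All values are placeholders — copy the real ones from the dashboard, and note that the exact DATABASE_URL host varies by project:

```shell
# Supabase production settings (placeholders — copy real values from the dashboard)
SUPABASE_URL=https://abcdefgh.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJ...   # bypasses RLS — never ship to clients
SUPABASE_ANON_KEY=eyJ...           # safe for client-side use
DATABASE_URL=postgresql://postgres:<password>@db.abcdefgh.supabase.co:5432/postgres
```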
Step 2: NATS JetStream
The hub's EventBus creates all required JetStream streams on startup (idempotent — safe to run multiple times). You only need a running NATS server with JetStream enabled.
Option A: Docker (self-hosted)
docker run -d \
--name nats \
-p 4222:4222 \
-p 8222:8222 \
-v nats_data:/data \
nats:2.10-alpine -js -m 8222
Option B: docker compose (from this repo)
The included docker-compose.yml runs NATS with JetStream and persistence:
# Core infrastructure only (no hub or web containers)
docker compose up -d nats meilisearch
Option C: Synadia Cloud
Synadia Cloud provides managed NATS JetStream. Create an account, provision a cluster, and use the provided NATS_URL (which will include credentials).
Set NATS_URL in your .env to point at whichever NATS instance you choose.
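For example, in .env — the second line is illustrative only; use the exact URL (and credential format) your provider gives you:

```shell
# Option A/B — local or self-hosted NATS:
NATS_URL=nats://localhost:4222

# Option C — managed cluster with credentials embedded in the URL (illustrative):
# NATS_URL=nats://<user>:<password>@nats.example.com:4222
```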
Step 3: Meilisearch
Option A: Docker (self-hosted)
docker run -d \
--name meilisearch \
-p 7700:7700 \
-e MEILI_MASTER_KEY=your-strong-master-key \
-v meili_data:/meili_data \
getmeili/meilisearch:v1.12
Option B: Meilisearch Cloud
Meilisearch Cloud provides managed Meilisearch. Create a project, copy the host URL and master key into MEILI_URL and MEILI_MASTER_KEY.
Use a strong, randomly generated MEILI_MASTER_KEY in production (32+ characters).
Step 4: Environment Variables
Full reference for all variables. Set these on your deployment host, in a Docker --env-file, in the Vercel dashboard, or in your Railway/Fly.io secrets.
Core Infrastructure
| Variable | Required | Default | Description |
|---|---|---|---|
| NATS_URL | Required | nats://localhost:4222 | NATS JetStream connection URL |
| MEILI_URL | Required | http://localhost:7700 | Meilisearch base URL |
| MEILI_MASTER_KEY | Required | masterKey | Meilisearch master key — use a strong random string in production |
Supabase
| Variable | Required | Default | Description |
|---|---|---|---|
| SUPABASE_URL | Required | http://localhost:54321 | Supabase project URL |
| SUPABASE_SERVICE_ROLE_KEY | Required | — | Service role key — bypasses RLS, keep secret |
| SUPABASE_ANON_KEY | Required | — | Anon key — safe for client-side use |
| DATABASE_URL | Required | postgresql://postgres:postgres@localhost:54322/postgres | Direct PostgreSQL connection string |
Authentication
| Variable | Required | Default | Description |
|---|---|---|---|
| JWT_SECRET | Required | change-me-in-production-use-at-least-32-chars | Secret used to sign JWTs — must be 32+ chars in production |
Security
| Variable | Required | Default | Description |
|---|---|---|---|
| CORS_ORIGINS | Required | http://localhost:4100 | Comma-separated list of allowed CORS origins |
| NODE_ENV | Required | development | Set to production for production deployments |
AI and LLM
| Variable | Required | Default | Description |
|---|---|---|---|
| OPENROUTER_API_KEY | Optional | — | OpenRouter API key for LLM routing. Required to use the AI query endpoint. |
| OLLAMA_URL | Optional | http://localhost:11434 | Ollama base URL for local LLM inference |
Billing
| Variable | Required | Default | Description |
|---|---|---|---|
| STRIPE_SECRET_KEY | Optional | — | Stripe secret key for billing features |
| STRIPE_WEBHOOK_SECRET | Optional | — | Stripe webhook signing secret |
Hub Server
| Variable | Required | Default | Description |
|---|---|---|---|
| PORT | Optional | 4101 | Hub HTTP server port |
| HUB_URL | Optional | http://localhost:4101 | Public URL of the hub (used in webhook URLs and redirects) |
Federation
| Variable | Required | Default | Description |
|---|---|---|---|
| FEDERATION_USERNAME | Optional | hub | ActivityPub actor username for federation |
Web App (Next.js public vars)
| Variable | Required | Default | Description |
|---|---|---|---|
| NEXT_PUBLIC_SUPABASE_URL | Required | http://localhost:54321 | Supabase URL exposed to the browser |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Required | — | Supabase anon key exposed to the browser |
Step 5: Deploy the Hub
Docker
# Build from the monorepo root
docker build -t made-open-hub -f apps/hub/Dockerfile .
# Run with environment file
docker run -d \
--name made-open-hub \
-p 4101:4101 \
--env-file .env \
made-open-hub
The hub exposes a health check endpoint at GET /health. Configure your load balancer or container orchestrator to use it:
curl https://your-hub.example.com/health
# {"status":"ok","version":"0.0.1"}
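If you run the hub under plain Docker with no orchestrator, the same endpoint can drive the container's health status. A sketch — it assumes wget is present in the image (swap in curl if not), and intervals/retries are illustrative:

```shell
# Run the hub with a Docker-level health check against GET /health
docker run -d \
  --name made-open-hub \
  -p 4101:4101 \
  --env-file .env \
  --health-cmd "wget -qO- http://localhost:4101/health || exit 1" \
  --health-interval 30s \
  --health-retries 3 \
  made-open-hub
```

`docker inspect --format '{{.State.Health.Status}}' made-open-hub` then reports healthy/unhealthy.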
Railway
# Install Railway CLI
npm install -g @railway/cli
# Deploy from monorepo root
railway login
railway link
railway up --dockerfile apps/hub/Dockerfile
# Set environment variables
railway variables set NATS_URL=nats://... SUPABASE_URL=https://... # etc.
Fly.io
# Install flyctl
# See https://fly.io/docs/hands-on/install-flyctl/
flyctl launch --dockerfile apps/hub/Dockerfile --name made-open-hub
flyctl secrets set NATS_URL=nats://... SUPABASE_URL=https://... # etc.
flyctl deploy
Step 6: Deploy the Web App
Vercel (recommended)
# Install Vercel CLI
npm install -g vercel
cd apps/web
vercel --prod
Set environment variables in the Vercel dashboard under Settings > Environment Variables:
- NEXT_PUBLIC_SUPABASE_URL
- NEXT_PUBLIC_SUPABASE_ANON_KEY
The web app only needs the two NEXT_PUBLIC_* variables at build time. All backend communication goes through the hub.
Docker
# Build from the monorepo root
docker build -t made-open-web -f apps/web/Dockerfile .
# Run with environment file
# Note: the web Dockerfile exposes Next.js on port 3000 internally.
# Map host port 4100 to container port 3000 to match the local dev convention.
docker run -d \
--name made-open-web \
-p 4100:3000 \
--env-file .env.web \
made-open-web
Create a separate .env.web file containing only the NEXT_PUBLIC_* variables.
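For example (placeholder values — use your real project URL and anon key):

```shell
# .env.web — public variables only; never put the service role key here
NEXT_PUBLIC_SUPABASE_URL=https://abcdefgh.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...
```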
Step 7: Verify Deployment
Run these checks after deploying to confirm everything is healthy:
# Basic health check
curl https://your-hub.example.com/health
# Detailed health (includes all service statuses)
curl https://your-hub.example.com/health/detailed
# API version
curl https://your-hub.example.com/api/version
# OpenAPI spec (confirms routes are registered)
curl https://your-hub.example.com/api/openapi.json | jq '.info'
In the browser:
- Swagger UI — https://your-hub.example.com/api/docs
- Web app — https://your-web.example.com
Monitoring
| Endpoint | Description |
|---|---|
| GET /health | Simple liveness probe — returns {"status":"ok"} |
| GET /health/detailed | Detailed health including status of each service (NATS, Meilisearch, Supabase, job queue) |
| GET /metrics | Prometheus-compatible metrics scrape endpoint |
| GET /api/version | Hub version and build metadata |
The error tracking dashboard is available at https://your-web.example.com/admin/errors (admin accounts only).
For structured log aggregation, the hub uses pino and outputs JSON logs to stdout. Route stdout to your log aggregator (Datadog, Loki, CloudWatch, etc.).
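When debugging by hand rather than through an aggregator, you can render the JSON stream human-readably. This assumes Node/npx on the host; pino-pretty is pino's standard companion CLI:

```shell
# Follow the hub container's JSON logs and pretty-print them locally
docker logs -f made-open-hub 2>&1 | npx pino-pretty
```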
Security Checklist
Before going live:
- JWT_SECRET is a strong random string (32+ chars) — not the default value
- MEILI_MASTER_KEY is a strong random string — not masterKey or dev-master-key
- SUPABASE_SERVICE_ROLE_KEY is stored as a secret and never exposed to clients
- NEXT_PUBLIC_SUPABASE_ANON_KEY is the anon key, not the service role key
- CORS_ORIGINS is set to your production web app URL only
- NODE_ENV is set to production
- NATS is not exposed publicly (should be internal network only)
- Meilisearch is not exposed publicly without authentication
- Rate limiting is active (enabled by default in the hub)
- Supabase RLS policies are in place (all tables have RLS enabled by default via migrations)
- Audit log is enabled (it is — the Policy Service writes to audit_log by default)
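One way to generate the strong secrets the checklist asks for (assumes openssl is on your PATH):

```shell
# 64 hex characters each — comfortably past the 32-character minimum
JWT_SECRET="$(openssl rand -hex 32)"
MEILI_MASTER_KEY="$(openssl rand -hex 32)"
echo "JWT_SECRET=${JWT_SECRET}"
echo "MEILI_MASTER_KEY=${MEILI_MASTER_KEY}"
```

Paste the printed lines into your .env or your platform's secrets manager.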
Kubernetes (Helm)
A Helm chart ships at charts/made-open/.
Install
helm install made-open charts/made-open \
--namespace made-open \
--create-namespace \
--set hub.image.digest=sha256:<current hub digest> \
--set web.image.digest=sha256:<current web digest> \
--set secrets.databaseUrl="postgresql://..." \
--set secrets.jwtSecret="$(openssl rand -hex 32)"
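The same install can be driven from a values file instead of --set flags. The file below is illustrative, mirroring the keys used above — check charts/made-open/values.yaml for the authoritative schema:

```yaml
# values.prod.yaml — illustrative; keys mirror the --set flags above
hub:
  image:
    digest: sha256:<current hub digest>
web:
  image:
    digest: sha256:<current web digest>
secrets:
  databaseUrl: "postgresql://..."
  jwtSecret: "<generate with: openssl rand -hex 32>"
```

Apply it with helm install made-open charts/made-open --namespace made-open --create-namespace -f values.prod.yaml.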
Updates
Set UPDATE_RUNNER=kubernetes in the hub ConfigMap, set HELM_RELEASE_NAME and HELM_NAMESPACE, and run updates through /updates/apply just like any other deployment. helm upgrade --atomic handles app-side rollback; database snapshots are the operator's responsibility — take a pg_dump via kubectl exec before confirming.
Bare-source (no Docker)
For users running directly from a git clone:
git clone https://github.com/drdropout/made-open.git
cd made-open
pnpm install --frozen-lockfile
pnpm build
# Start the hub under your process manager of choice (systemd example below).
systemd unit
[Unit]
Description=Made Open Hub
After=network.target
[Service]
Type=simple
User=made-open
WorkingDirectory=/srv/made-open
EnvironmentFile=/srv/made-open/.env
ExecStart=/usr/bin/pnpm --filter @made-open/hub start
Restart=on-failure
[Install]
WantedBy=multi-user.target
Updates
Set these in your .env:
UPDATE_RUNNER=bare-source
UPDATE_WORKDIR=/srv/made-open
UPDATE_RESTART_COMMAND=systemctl restart made-open-hub
The runner will git fetch && git checkout <tag> && pnpm install && pnpm build && pnpm --filter @made-open/hub migrate, then invoke your restart command. A pg_dump snapshot is taken before anything else, and .made-open-update-state.json records the prior tag so rollback works even if the hub crashes mid-update.
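Run by hand, that sequence is roughly the following sketch. It is illustrative only — the real runner also records the prior tag in the state file, and error handling is elided:

```shell
# Illustrative manual equivalent of the bare-source update runner
set -euo pipefail
cd /srv/made-open

# Database snapshot first, so rollback is possible
pg_dump "$DATABASE_URL" > "pre-update-$(date +%Y%m%d%H%M%S).sql"

git fetch --tags
git checkout <tag>
pnpm install --frozen-lockfile
pnpm build
pnpm --filter @made-open/hub migrate

# Invoke the configured restart command
systemctl restart made-open-hub
```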
Further Reading
- Getting Started — Local development setup
- System Overview — Full service architecture
- Event-Driven Spine — NATS JetStream and job queue configuration
- Credential Wallet — How Supabase Vault handles user credentials