# Self-Hosting

Deploy Flaggr on your own infrastructure with Docker or Kubernetes.

Last updated March 15, 2026
Flaggr can be self-hosted on your own infrastructure using Docker, Cloud Run, or Kubernetes.
## Docker

### Quick Start
```shell
docker run -p 3000:3000 \
  -e FIREBASE_PROJECT_ID=your-project-id \
  -e FIREBASE_CLIENT_EMAIL=your-service-account@project.iam.gserviceaccount.com \
  -e FIREBASE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n" \
  -e NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key \
  -e NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com \
  -e NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id \
  ghcr.io/flaggr/flaggr:latest
```

### Docker Compose
```yaml
version: '3.8'

services:
  flaggr:
    image: ghcr.io/flaggr/flaggr:latest
    ports:
      - "3000:3000"
    environment:
      # Firebase Authentication (required)
      FIREBASE_PROJECT_ID: ${FIREBASE_PROJECT_ID}
      FIREBASE_CLIENT_EMAIL: ${FIREBASE_CLIENT_EMAIL}
      FIREBASE_PRIVATE_KEY: ${FIREBASE_PRIVATE_KEY}
      NEXT_PUBLIC_FIREBASE_API_KEY: ${NEXT_PUBLIC_FIREBASE_API_KEY}
      NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN: ${NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN}
      NEXT_PUBLIC_FIREBASE_PROJECT_ID: ${NEXT_PUBLIC_FIREBASE_PROJECT_ID}
      # Redis for caching and rate limiting (recommended)
      UPSTASH_REDIS_REST_URL: ${UPSTASH_REDIS_REST_URL:-}
      UPSTASH_REDIS_REST_TOKEN: ${UPSTASH_REDIS_REST_TOKEN:-}
      # Email notifications (optional)
      RESEND_API_KEY: ${RESEND_API_KEY:-}
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

### Build from Source
```shell
git clone https://github.com/flaggr/flaggr.git
cd flaggr

# Build the Docker image
docker build -t flaggr:local .

# Run locally
docker run -p 3000:3000 flaggr:local
```

## Kubernetes
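The Deployment, Service, and Secret in this section land in whatever namespace your current kubectl context points at. If you prefer a dedicated namespace, a minimal sketch (the name `flaggr` is an assumption, not something the project mandates):

```yaml
# Optional: a dedicated namespace for the resources below. If you use it,
# add "namespace: flaggr" to each manifest's metadata and pass
# "-n flaggr" to the kubectl create secret command.
apiVersion: v1
kind: Namespace
metadata:
  name: flaggr
```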
### Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaggr
  labels:
    app: flaggr
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flaggr
  template:
    metadata:
      labels:
        app: flaggr
    spec:
      containers:
        - name: flaggr
          image: ghcr.io/flaggr/flaggr:latest
          ports:
            - containerPort: 3000
          envFrom:
            - secretRef:
                name: flaggr-secrets
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: flaggr
spec:
  selector:
    app: flaggr
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
```

### Secrets
```shell
kubectl create secret generic flaggr-secrets \
  --from-literal=FIREBASE_PROJECT_ID=your-project-id \
  --from-literal=FIREBASE_CLIENT_EMAIL=your-service-account@project.iam.gserviceaccount.com \
  --from-literal=FIREBASE_PRIVATE_KEY="$(jq -r '.private_key' service-account-key.json)" \
  --from-literal=NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key \
  --from-literal=NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com \
  --from-literal=NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
```

## Google Cloud Run
```shell
gcloud run deploy flaggr \
  --image ghcr.io/flaggr/flaggr:latest \
  --port 3000 \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars "FIREBASE_PROJECT_ID=your-project-id" \
  --set-secrets "FIREBASE_PRIVATE_KEY=flaggr-firebase-key:latest"
```

## Environment Variables
| Variable | Required | Description |
|---|---|---|
| `FIREBASE_PROJECT_ID` | Yes | Firebase project ID |
| `FIREBASE_CLIENT_EMAIL` | Yes | Service account email |
| `FIREBASE_PRIVATE_KEY` | Yes | Service account private key |
| `NEXT_PUBLIC_FIREBASE_API_KEY` | Yes | Firebase client API key |
| `NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN` | Yes | Firebase auth domain |
| `NEXT_PUBLIC_FIREBASE_PROJECT_ID` | Yes | Firebase project ID (client) |
| `UPSTASH_REDIS_REST_URL` | Recommended | Redis REST URL for caching/rate limiting |
| `UPSTASH_REDIS_REST_TOKEN` | Recommended | Redis auth token |
| `RESEND_API_KEY` | Optional | Email notifications |
| `RESEND_FROM_EMAIL` | Optional | From address for emails |
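For Docker Compose, these variables are conventionally collected in a `.env` file next to `docker-compose.yml`. A sketch with placeholder values only:

```shell
# .env — placeholder values; never commit real credentials
FIREBASE_PROJECT_ID=your-project-id
FIREBASE_CLIENT_EMAIL=your-service-account@project.iam.gserviceaccount.com
FIREBASE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
UPSTASH_REDIS_REST_URL=https://your-instance.upstash.io
UPSTASH_REDIS_REST_TOKEN=your-token
RESEND_API_KEY=your-resend-key
```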
> **Warning:** Always trim environment variables when passing them via deployment configs. Trailing whitespace in URLs causes cryptic connection failures.
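As an illustration, a hypothetical helper (not part of Flaggr's code) that strips stray whitespace before a value is used:

```typescript
// Hypothetical helper, not part of Flaggr's API: read an env var and
// strip surrounding whitespace before using it.
function cleanEnv(name: string): string | undefined {
  const raw = process.env[name];
  return raw?.trim();
}

// A trailing newline (common when a secret is piped in from a file)
// would otherwise end up inside the URL and break every request.
process.env.UPSTASH_REDIS_REST_URL = "https://example.upstash.io\n";
console.log(cleanEnv("UPSTASH_REDIS_REST_URL")); // "https://example.upstash.io"
```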
## Health Check

All deployments should monitor the health endpoint:

```shell
curl https://your-flaggr-instance/api/health
# Returns: { "status": "ok", "commit": "abc123", "timestamp": "..." }
```

## Performance Tuning
- Replicas: Start with 2 replicas for high availability. Each instance handles ~1000 evaluations/second.
- Redis: Strongly recommended for production. Without Redis, each serverless instance has its own in-memory cache (no sharing).
- Memory: 256MB minimum, 512MB recommended for high-volume services.
- Cold starts: Next.js serverless functions have a ~200ms cold start. Set `--min-instances=1` on Cloud Run to eliminate this.
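To make the Redis point concrete, here is a minimal sketch (an assumed shape, not Flaggr's actual implementation) of the kind of per-instance TTL cache each replica falls back to without Redis; nothing in it is visible to any other replica:

```typescript
// Hypothetical per-instance TTL cache, illustrating the "no sharing"
// caveat above — each replica holds its own Map, so a flag evaluation
// cached here is invisible to every other instance.
type Entry = { value: unknown; expiresAt: number };

class LocalFlagCache {
  private store = new Map<string, Entry>();
  constructor(private ttlMs: number) {}

  set(key: string, value: unknown): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key: string): unknown {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= Date.now()) {
      this.store.delete(key); // missing or expired
      return undefined;
    }
    return entry.value;
  }
}

const cache = new LocalFlagCache(30_000); // 30-second TTL (arbitrary choice)
cache.set("new-checkout", { enabled: true });
console.log(cache.get("new-checkout")); // cached in THIS replica only
```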