Edge & Serverless

Deploy flag evaluation at the edge with zero-dependency providers for Cloudflare Workers, Vercel Edge, and Deno.
Flaggr provides specialized providers optimized for edge runtimes and serverless environments where cold start time and bundle size matter.
Edge Provider Options
| Provider | Dependencies | Cold Start | Real-time | Best For |
|---|---|---|---|---|
| StandaloneEvaluator | Zero | Instant | No | Cloudflare Workers |
| InProcessProvider | Minimal | Fast | Polling | Vercel Edge |
| FetchOnceProvider | Minimal | Fast | No | Lambda, short-lived |
| EdgeHybridProvider | Minimal | Fast | Background sync | Edge with fallback |
| EdgeFileProvider | Zero | Instant | No | Static configs |
Standalone Evaluator
Zero-dependency evaluator that works in any JavaScript runtime. No Node.js APIs required.
```typescript
import { StandaloneEvaluator } from '@flaggr/client'

const evaluator = new StandaloneEvaluator()

// Fetch flags once
const response = await fetch('https://flaggr.dev/api/flags?serviceId=edge-app', {
  headers: { Authorization: 'Bearer flg_your_token' },
})
const { flags } = await response.json()

// Evaluate locally (microseconds)
const result = evaluator.evaluate(flags['checkout-v2'], {
  targetingKey: 'user-123',
  plan: 'enterprise',
})
```

StandaloneEvaluator has zero dependencies and adds minimal code to your bundle. It's the best choice when bundle size is critical.
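For intuition on why local evaluation runs in microseconds: a percentage rollout reduces to hashing the targeting key into a bucket, with no I/O at all. A minimal, self-contained sketch of that idea (the hash choice and function names are illustrative, not part of the @flaggr/client API):

```typescript
// Illustrative only: deterministic bucketing as a standalone evaluator might do it.
// FNV-1a hash of "flagKey:targetingKey", mapped onto [0, 100).
function bucketFor(flagKey: string, targetingKey: string): number {
  let hash = 0x811c9dc5
  for (const ch of `${flagKey}:${targetingKey}`) {
    hash ^= ch.charCodeAt(0)
    hash = Math.imul(hash, 0x01000193)
  }
  return (hash >>> 0) % 100
}

// A 25% rollout enables the flag for users whose bucket falls below 25.
function inRollout(flagKey: string, targetingKey: string, percent: number): boolean {
  return bucketFor(flagKey, targetingKey) < percent
}
```

Because the same key always lands in the same bucket, a user's experience stays stable across requests without any shared state between edge instances.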
Cloudflare Workers Example
```typescript
export default {
  async fetch(request: Request, env: Env) {
    const evaluator = new StandaloneEvaluator()

    // Fetch flags from KV or API
    const flagsJson = await env.FLAGS_KV.get('flags', 'json')
    const flags = flagsJson || (await fetchFlags(env.FLAGGR_TOKEN))

    const userId = getUserId(request)
    const showNewUI = evaluator.evaluate(flags['new-ui'], {
      targetingKey: userId,
    })

    return showNewUI.value
      ? renderNewUI(request)
      : renderClassicUI(request)
  },
}
```

In-Process Provider
Downloads the full flag configuration and evaluates locally. Supports background polling for updates.
```typescript
import { InProcessProvider } from '@flaggr/client'

const provider = new InProcessProvider({
  apiUrl: 'https://flaggr.dev',
  serviceId: 'edge-worker',
  apiToken: 'flg_your_token',
  pollIntervalMs: 30000, // Refresh every 30s
})

await provider.initialize()

// All evaluations are local — no network calls
const result = provider.resolveBooleanEvaluation('checkout-v2', false, context)
```

After initialize(), all evaluations are in-process. The only network calls are periodic background polls to refresh the configuration.
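The polling behavior can be pictured as a small loop that swaps in a fresh configuration on each tick and keeps serving the last good copy if a refresh fails. A hypothetical sketch of that loop (FlagPoller is illustrative, not the actual provider internals or a @flaggr/client export):

```typescript
// Hypothetical sketch of a background polling loop; not the real provider internals.
class FlagPoller {
  private timer: ReturnType<typeof setInterval> | null = null
  private flags: Record<string, unknown> = {}

  constructor(
    private fetchFlags: () => Promise<Record<string, unknown>>,
    private intervalMs: number,
  ) {}

  async start(): Promise<void> {
    this.flags = await this.fetchFlags() // initial fetch blocks startup once
    this.timer = setInterval(async () => {
      try {
        this.flags = await this.fetchFlags() // refresh in the background
      } catch {
        // Refresh failed: keep serving the stale config rather than failing reads
      }
    }, this.intervalMs)
  }

  stop(): void {
    if (this.timer) clearInterval(this.timer)
  }

  get(key: string): unknown {
    return this.flags[key] // reads never touch the network
  }
}
```

The key property is that reads are decoupled from refreshes: a slow or failing flag API degrades freshness, never request latency.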
FetchOnce Provider
Fetches the flag configuration once on startup and never refreshes. Ideal for short-lived processes like Lambda functions.
```typescript
import { FetchOnceProvider } from '@flaggr/client'

const provider = new FetchOnceProvider({
  apiUrl: 'https://flaggr.dev',
  serviceId: 'lambda-function',
  apiToken: 'flg_your_token',
})

await provider.initialize()
// Flags are cached for the lifetime of this process
```

Edge Hybrid Provider
Combines local evaluation with background synchronization. Falls back to stale configuration if the API is unreachable.
```typescript
import { EdgeHybridProvider } from '@flaggr/client'

const provider = new EdgeHybridProvider({
  apiUrl: 'https://flaggr.dev',
  serviceId: 'edge-app',
  apiToken: 'flg_your_token',
  localFirst: true,
  syncIntervalMs: 60000,
})
```

Edge File Provider
Loads flags from a JSON file bundled with your deployment. No network calls at all.
```typescript
import { EdgeFileProvider } from '@flaggr/client'
import flagConfig from './flags.json'

const provider = new EdgeFileProvider({
  data: flagConfig,
})
```

Vercel Edge Functions
```typescript
import { InProcessProvider } from '@flaggr/client'

export const runtime = 'edge'

let provider: InProcessProvider | null = null

async function getProvider() {
  if (!provider) {
    provider = new InProcessProvider({
      apiUrl: process.env.FLAGGR_URL!,
      serviceId: process.env.FLAGGR_SERVICE_ID!,
      apiToken: process.env.FLAGGR_TOKEN!,
    })
    await provider.initialize()
  }
  return provider
}

export async function GET(request: Request) {
  const p = await getProvider()
  const showBanner = p.resolveBooleanEvaluation('show-banner', false, {
    targetingKey: 'anonymous',
  })
  return Response.json({ showBanner: showBanner.value })
}
```

In serverless environments, initialize the provider outside the handler function so it persists across warm invocations. The first cold start pays the initialization cost; subsequent requests are instant.
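One subtlety with a lazy getProvider pattern: two concurrent cold-start requests can both observe a null provider and initialize twice. Caching the in-flight promise rather than the finished instance avoids the race. A sketch under that assumption (the once helper is illustrative, not a Flaggr API):

```typescript
// Illustrative helper: memoize an async initializer so concurrent callers
// share one in-flight promise instead of racing to initialize twice.
function once<T>(init: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null
  return () => (cached ??= init())
}

// Usage sketch:
//   const getProvider = once(async () => { /* construct + initialize provider */ })
//   const p = await getProvider() // safe to call from many handlers at once
```

This is a general pattern for any expensive async setup in warm-reused serverless runtimes, not something specific to flag providers.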
Performance
| Scenario | Latency |
|---|---|
| In-process evaluation | Under 0.1ms |
| Cold start + first fetch | 50-200ms |
| Warm invocation | Under 0.1ms |
| Background poll (no changes) | 10-50ms |
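The sub-0.1ms figure for in-process evaluation is easy to sanity-check, since local evaluation is just rule-matching over in-memory data. A rough micro-benchmark sketch (evaluateStub is a placeholder pure function standing in for the real evaluator):

```typescript
// Rough micro-benchmark sketch: time a pure in-memory evaluation stand-in.
function evaluateStub(flag: { enabled: boolean }, ctx: { targetingKey: string }): boolean {
  return flag.enabled && ctx.targetingKey.length > 0
}

const flag = { enabled: true }
const iterations = 100_000
const start = performance.now()
for (let i = 0; i < iterations; i++) {
  evaluateStub(flag, { targetingKey: `user-${i}` })
}
const perCallMs = (performance.now() - start) / iterations
console.log(`~${perCallMs.toFixed(6)} ms per call`)
```

Absolute numbers vary by runtime and flag complexity; the point is that no network hop appears anywhere in the hot path.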
Related
- Provider Architecture — Full provider comparison
- Caching Strategy — Cache tiers and TTLs
- Performance — Latency benchmarks