Advanced Evaluation
Deep dive into consistent hashing, percentage rollouts, variant distribution, and evaluation internals.
This guide covers the internals of Flaggr's evaluation engine — how consistent hashing works, how variants are distributed, and how to design robust rollout strategies.
Evaluation Pipeline
Every flag evaluation follows this sequence:
1. Disabled Check: If the flag is disabled, return defaultValue with reason DISABLED. No further evaluation occurs.
2. Targeting Rules: Rules are evaluated top to bottom; the first rule whose conditions all match wins. Returns the rule's value or named variant with reason TARGETING_MATCH.
3. Rollout Check: If a matching rule has a rolloutPercentage, the user's hash determines inclusion. If excluded, evaluation continues to the next rule.
4. Variant Selection: If the flag has weighted variants, consistent hashing selects one. Returns with reason VARIANT.
5. Default Value: If nothing matched, return defaultValue with reason DEFAULT.
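The sequence can be summarized in code. The sketch below is illustrative only; the type shapes and helpers (matchesAll, inRollout, selectVariant) are assumptions, not Flaggr's actual internals:

```ts
// Illustrative sketch of the evaluation pipeline; not Flaggr's real internals.
type Condition = { property: string; operator: string; value: unknown }
type Rule = { id: string; conditions: Condition[]; rolloutPercentage?: number; value?: unknown; variant?: string }
type Variant = { name: string; value: unknown; weight: number }
type Flag = { key: string; enabled: boolean; defaultValue: unknown; targeting?: Rule[]; variants?: Variant[] }
type Context = { targetingKey?: string; [property: string]: unknown }

// Assumed helpers, matching the steps described above.
declare function matchesAll(conditions: Condition[], ctx: Context): boolean
declare function inRollout(flagKey: string, ctx: Context, percentage: number): boolean
declare function selectVariant(flagKey: string, targetingKey: string | undefined, variants: Variant[]): Variant | null

function evaluate(flag: Flag, ctx: Context): { value: unknown; reason: string } {
  // 1. Disabled check
  if (!flag.enabled) return { value: flag.defaultValue, reason: 'DISABLED' }

  // 2 + 3. Targeting rules, with an optional rollout check per rule
  for (const rule of flag.targeting ?? []) {
    if (!matchesAll(rule.conditions, ctx)) continue
    if (rule.rolloutPercentage !== undefined && !inRollout(flag.key, ctx, rule.rolloutPercentage)) {
      continue // excluded by the rollout hash: evaluation moves to the next rule
    }
    if (rule.variant) {
      // A rule may name a variant instead of carrying a value.
      const forced = (flag.variants ?? []).find(v => v.name === rule.variant)
      if (forced) return { value: forced.value, reason: 'TARGETING_MATCH' }
    }
    return { value: rule.value, reason: 'TARGETING_MATCH' }
  }

  // 4. Weighted variant selection
  if (flag.variants?.length) {
    const variant = selectVariant(flag.key, ctx.targetingKey, flag.variants)
    if (variant) return { value: variant.value, reason: 'VARIANT' }
  }

  // 5. Default value
  return { value: flag.defaultValue, reason: 'DEFAULT' }
}
```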
Consistent Hashing
Flaggr uses a modified djb2 hash algorithm to ensure deterministic evaluation. The same user always gets the same result for the same flag.
How It Works
hash_input = flagKey + targetingKey
hash = djb2(hash_input) // 32-bit integer
bucket = abs(hash) % 100 // 0-99
included = bucket < rolloutPercentage
The hash function:
function hashString(str: string): number {
  let hash = 0
  for (let i = 0; i < str.length; i++) {
    const char = str.charCodeAt(i)
    hash = ((hash << 5) - hash) + char
    hash = hash & hash // Convert to 32-bit integer
  }
  return Math.abs(hash)
}

Properties
- Deterministic: Same input always produces the same output
- Uniform: Buckets 0-99 are evenly distributed across users
- Stable: Changing the rollout percentage from 10% to 20% includes the original 10% plus 10% more — no users are removed
- Independent: Each flag has its own hash space (different flags, different assignments)
Without targetingKey in the evaluation context, consistent hashing falls back to random assignment. Users may see different values on each request.
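Putting the pieces together, here is a minimal sketch of the bucketing step, reusing hashString from above. isIncluded is an illustrative helper, not a Flaggr API; the random fallback mirrors the note above:

```ts
// Illustrative helper, not part of the Flaggr SDK.
function isIncluded(flagKey: string, rolloutPercentage: number, targetingKey?: string): boolean {
  if (targetingKey === undefined) {
    // No targetingKey: assignment degrades to random and can change per request.
    return Math.random() * 100 < rolloutPercentage
  }
  const bucket = hashString(flagKey + targetingKey) % 100 // stable 0-99 per flag+user
  return bucket < rolloutPercentage
}

isIncluded('checkout-v2', 25, 'user-123') // same result on every call
```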
Percentage Rollouts
Basic Rollout
A 25% rollout to all users:
{
  "targeting": [
    {
      "id": "gradual",
      "conditions": [],
      "rolloutPercentage": 25,
      "value": true
    }
  ]
}

Targeted Rollout
25% of enterprise users only:
{
  "targeting": [
    {
      "id": "enterprise-canary",
      "conditions": [
        { "property": "plan", "operator": "equals", "value": "enterprise" }
      ],
      "rolloutPercentage": 25,
      "value": true
    }
  ]
}

Progressive Rollout Strategy
1. Internal Dogfood (1%): Deploy to internal team members first. Use a targeting rule matching @yourcompany.com emails (see the sketch after this list).
2. Canary (5%): Expand to a small percentage of real users. Monitor error rates and latency.
3. Early Access (25%): Increase the rollout percentage. Check business metrics and user feedback.
4. General Availability (100%): Remove the rollout percentage to serve all users. Clean up the feature flag once stable.
Variant Distribution
For A/B testing and multi-way experiments, use variants with weights:
{
  "variants": [
    { "name": "control", "value": "classic-checkout", "weight": 50 },
    { "name": "treatment-a", "value": "new-checkout", "weight": 30 },
    { "name": "treatment-b", "value": "express-checkout", "weight": 20 }
  ]
}

Selection Algorithm
hash = djb2(flagKey + targetingKey) % 100 // e.g., 73
Variant ranges:
control: 0-49 (50%)
treatment-a: 50-79 (30%)
treatment-b: 80-99 (20%)
73 falls in treatment-a → user gets "new-checkout"
Weights are cumulative. The algorithm iterates through variants, accumulating weights until the hash bucket is covered.
Variant weights should sum to approximately 100. If they sum to less, some users fall through to the default value. If they sum to more, the last variant's effective weight is reduced.
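In code, the cumulative scan looks roughly like the sketch below, again reusing hashString from above; the function shape is an assumption, not Flaggr's actual implementation:

```ts
type Variant = { name: string; value: unknown; weight: number }

// Illustrative cumulative-weight selection; not Flaggr's real internals.
function selectVariant(flagKey: string, targetingKey: string, variants: Variant[]): Variant | null {
  const bucket = hashString(flagKey + targetingKey) % 100 // 0-99
  let cumulative = 0
  for (const variant of variants) {
    cumulative += variant.weight
    if (bucket < cumulative) return variant // bucket falls inside this variant's range
  }
  return null // weights summed to less than 100: caller falls back to defaultValue
}
```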
Combining Variants with Targeting
Force specific users into specific variants:
{
  "targeting": [
    {
      "id": "force-enterprise-control",
      "conditions": [
        { "property": "plan", "operator": "equals", "value": "enterprise" }
      ],
      "variant": "control"
    }
  ],
  "variants": [
    { "name": "control", "value": "classic", "weight": 50 },
    { "name": "treatment", "value": "new", "weight": 50 }
  ]
}

Enterprise users always see control. Everyone else is split 50/50.
Evaluation Reasons
Every evaluation returns a reason field explaining how the value was determined:
| Reason | Description | Typical Cause |
|---|---|---|
| DISABLED | Flag is disabled | enabled: false |
| TARGETING_MATCH | A targeting rule matched | Conditions satisfied |
| VARIANT | Selected via variant weights | Hash-based assignment |
| DEFAULT | No match, default value used | No rules matched |
| NOT_FOUND | Flag doesn't exist | Typo in flag key, wrong service |
| ERROR | Evaluation error | Invalid context, storage failure |
| CACHED | Served from cache | Cache TTL still valid |
| STATIC | Static/hardcoded value | File provider or bootstrap |
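Reasons are useful for alerting on misconfiguration. The snippet below is a hypothetical usage sketch: the client.evaluate call and the result shape are assumptions, not a documented Flaggr API.

```ts
// Hypothetical client; the method name and result shape are assumptions.
declare const client: {
  evaluate(key: string, ctx: { targetingKey: string }): Promise<{ value: unknown; reason: string }>
}

const result = await client.evaluate('checkout-v2', { targetingKey: 'user-123' })

if (result.reason === 'NOT_FOUND') {
  console.warn('Flag not found: check the flag key and the target service')
} else if (result.reason === 'ERROR') {
  console.warn('Evaluation failed: the default value was served')
}
```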
Debug Information
The evaluation API returns timing data in the _debug field:
{
  "flagKey": "checkout-v2",
  "value": true,
  "reason": "TARGETING_MATCH",
  "_debug": {
    "timings": {
      "rateLimit": 2,
      "validation": 1,
      "cacheGet": 0.5,
      "evaluate": 3,
      "total": 12
    },
    "cacheHit": false,
    "totalMs": 12
  }
}

Use this to diagnose slow evaluations. Typical healthy values:
- Cache hit: total under 5ms
- Cache miss: total under 50ms
Type Safety
Flaggr enforces type matching between flags and evaluations:
| Flag Type | Allowed Default Values | Example |
|---|---|---|
| boolean | true, false | useBooleanFlag('dark-mode', false) |
| string | Any string | useStringFlag('theme', 'light') |
| number | Any number | useNumberFlag('max-items', 10) |
| object | JSON object | useObjectFlag('config', {}) |
If the evaluated value doesn't match the expected type, the SDK returns the default value with reason ERROR.
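For example, assuming max-items was accidentally reconfigured as a string flag (a hypothetical scenario; the hook name comes from the table above, and the SDK import is omitted):

```ts
// Hypothetical mismatch: 'max-items' now stores a string, but a number is requested.
// The SDK returns the default (10) with reason ERROR instead of throwing.
const maxItems = useNumberFlag('max-items', 10)
```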
Related
- Targeting Rules — Condition operators and rule structure
- Provider Architecture — Local vs remote evaluation
- REST API Reference — Evaluation endpoint details