Products / Experiments
A/B tests and self-optimizing bandits — built into your flags.
For product teams and engineers running continuous experiments, SignaKit experiments are rule types within a feature flag — not a separate product. Run a fixed A/B split or let the bandit auto-optimize toward your conversion goal. No additional SDK, no add-on pricing.
Two experiment types. One SDK call.
Choose fixed A/B splits for statistical rigor, or multi-armed bandits for continuous optimization. Both use sk.decideAll() — no integration change needed.
A/B Test
Split traffic between variations at a fixed ratio for the duration of the test. Best when you need rigorous statistical inference before making a permanent decision.
Best for
- Testing pricing, copy, or UX changes
- When statistical significance matters more than speed
- Multi-variant tests (A/B/C/n)
- When the experiment runs longer than a few days
Multi-Armed Bandit
Uses Thompson Sampling to dynamically shift traffic toward better-performing variations as data accumulates. Minimizes regret: fewer users are exposed to losing variations.
Best for
- Limited-time campaigns where every conversion counts
- Continuous optimization with no fixed end date
- When you want results faster than a fixed A/B test
- CTA copy, hero variants, pricing page layouts
A/B test vs. multi-armed bandit
| Aspect | A/B Test | Multi-Armed Bandit |
|---|---|---|
| Traffic split | Fixed (e.g. 50/50) | Dynamic — shifts toward winner |
| Goal | Prove statistical significance | Maximize conversions during test |
| Duration | Runs until significance reached | Continuous optimization |
| Regret | Higher — equal traffic to loser | Lower — traffic shifts faster |
| Best for | Learning which variation is better | Optimizing while learning |
| SDK change needed | No | No — weights in existing config |
How SignaKit experiments work
Experiments live inside feature flags
There is no separate experiment tool. A/B tests and multi-armed bandits are rule types within a flag. You use the same sk.decideAll() call for both; the SDK handles the traffic allocation transparently.
Event-based conversion tracking
Define any user action as a conversion metric in the dashboard. When a user fires that event, SignaKit attributes it back to the variation they were exposed to — accurate even across sessions.
Experiment snapshots
Results are computed as experiment snapshots: hourly on Growth and Enterprise, every 3 hours on Starter, daily on Free. Each snapshot shows p-value, uplift, confidence interval, and a verdict.
Multiple variations
Run A/B/C/n tests with as many variations as your experiment requires. Traffic is distributed across all variations using the same MurmurHash3 bucketing that guarantees consistent user assignment.
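The consistent-assignment idea can be sketched as follows. SignaKit's SDK uses MurmurHash3; the sketch below substitutes a simple FNV-1a hash purely to stay self-contained, and every name in it is illustrative rather than SignaKit's actual implementation. The key property is that the same visitor and flag always hash to the same point, so the user lands in the same variation on every call.

```typescript
// Deterministic bucketing sketch. FNV-1a stands in for MurmurHash3 here;
// the real SDK's hash differs, but the bucketing logic is the same idea.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // 32-bit FNV prime multiply
  }
  return hash >>> 0;
}

// Map a visitor into one of the variations, proportional to weights
// (weights sum to 1, one entry per variation — A/B/C/n all work).
function bucket(visitorId: string, flagKey: string, weights: number[]): number {
  const point = (fnv1a(`${flagKey}:${visitorId}`) % 10000) / 10000; // [0, 1)
  let cumulative = 0;
  for (let i = 0; i < weights.length; i++) {
    cumulative += weights[i];
    if (point < cumulative) return i;
  }
  return weights.length - 1; // guard against floating-point rounding
}
```

Because the hash input is `flag:visitor`, adding a fourth variation only changes the weights array, not the call site.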
Thompson Sampling for MAB
Multi-armed bandits use Thompson Sampling (Beta distribution). The bandit-runner Lambda recalculates allocations hourly and writes new weights into the SDK config — no SDK changes required.
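The hourly recalculation can be sketched as a Monte-Carlo Thompson Sampling step. This is not the bandit-runner Lambda's code; it assumes a Beta(1 + conversions, 1 + failures) posterior per variation and derives weights from how often each variation wins a sampled draw. The Gamma sampler uses the Marsaglia–Tsang method so the sketch runs with no dependencies.

```typescript
// Sample Gamma(shape >= 1, scale 1) via Marsaglia–Tsang.
function sampleGamma(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    // Standard normal via Box–Muller.
    const n = Math.sqrt(-2 * Math.log(Math.random())) *
              Math.cos(2 * Math.PI * Math.random());
    const v = Math.pow(1 + c * n, 3);
    if (v <= 0) continue;
    const u = Math.random();
    if (Math.log(u) < 0.5 * n * n + d - d * v + d * Math.log(v)) return d * v;
  }
}

// Beta(a, b) as a ratio of Gamma draws.
function sampleBeta(alpha: number, beta: number): number {
  const x = sampleGamma(alpha);
  const y = sampleGamma(beta);
  return x / (x + y);
}

// Estimate new traffic weights: each variation's weight is the fraction
// of draws in which its sampled conversion rate was the highest.
function thompsonWeights(
  stats: { exposures: number; conversions: number }[],
  draws = 10_000,
): number[] {
  const wins = stats.map(() => 0);
  for (let i = 0; i < draws; i++) {
    const samples = stats.map((s) =>
      sampleBeta(1 + s.conversions, 1 + (s.exposures - s.conversions)),
    );
    wins[samples.indexOf(Math.max(...samples))]++;
  }
  return wins.map((w) => w / draws);
}
```

A clearly winning variation accumulates weight quickly, which is exactly the regret-minimizing behavior described above; uncertain variations keep receiving exploratory traffic.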
Exploration phase guard
MAB flags respect a minimum exploration period (24 hours by default) and a minimum exposure threshold (100 exposures per variation) before shifting traffic. This prevents premature exploitation based on small samples.
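The guard logic amounts to a simple conjunction. The threshold values below mirror the documented defaults (24 hours, 100 exposures per variation); the function shape itself is an assumption for illustration, not SignaKit internals.

```typescript
interface VariationStats {
  exposures: number;
}

// Only allow the bandit to shift traffic once BOTH conditions hold:
// the experiment is old enough AND every variation has enough exposures.
function canExploit(
  experimentStartMs: number,
  nowMs: number,
  variations: VariationStats[],
  minAgeMs = 24 * 60 * 60 * 1000, // 24-hour minimum exploration period
  minExposures = 100,             // per-variation exposure floor
): boolean {
  const oldEnough = nowMs - experimentStartMs >= minAgeMs;
  const enoughData = variations.every((v) => v.exposures >= minExposures);
  return oldEnough && enoughData;
}
```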
MAB optimization uses Thompson Sampling (W.R. Thompson, 1933) — Beta distribution, recalculated hourly
Common questions
Is A/B testing a paid add-on?
No. A/B testing and multi-armed bandits are included on every plan — including the free tier. There is no paid experimentation module.
How does the multi-armed bandit know which variation is winning?
The bandit-runner Lambda queries conversion events from your project, computes new Thompson Sampling allocations for each variation from their Beta posteriors, and writes the updated weights into the flag config on S3. CloudFront delivers the new config to all SDKs within minutes.
Do I need to change my SDK integration to use MAB?
No. MAB weights are written into the same variationAllocation shape that A/B tests use. Your existing sk.decideAll() call works identically for both rule types — the SDK handles everything automatically.
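Why no integration change is needed can be illustrated with a sketch of the shared shape. The name variationAllocation comes from the FAQ above, but its exact fields are assumptions; the point is that a fixed A/B split and bandit-written weights look identical to the SDK, so one selection routine serves both.

```typescript
interface VariationAllocation {
  variation: string;
  weight: number; // fraction of traffic; weights sum to 1
}

// A fixed A/B split and MAB-updated weights share the same shape:
const abTest: VariationAllocation[] = [
  { variation: "control", weight: 0.5 },
  { variation: "treatment", weight: 0.5 },
];
const mabUpdated: VariationAllocation[] = [
  { variation: "control", weight: 0.18 },
  { variation: "treatment", weight: 0.82 }, // bandit shifted traffic here
];

// Shared selection logic: walk cumulative weights with a point in [0, 1)
// (the real SDK derives the point from hashing the visitor ID).
function select(allocs: VariationAllocation[], point: number): string {
  let cumulative = 0;
  for (const a of allocs) {
    cumulative += a.weight;
    if (point < cumulative) return a.variation;
  }
  return allocs[allocs.length - 1].variation;
}
```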
How does conversion attribution work?
When a user is bucketed into a variation, the SDK records a $exposure event. When that user later fires a conversion event (e.g. purchase_completed), SignaKit joins the two events on visitor_id to attribute the conversion to the correct variation — even if days pass between exposure and conversion.
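The attribution join can be sketched as follows. Only visitor_id and $exposure come from the description above; the event shapes and first-exposure-wins rule are illustrative assumptions.

```typescript
interface ExposureEvent { visitor_id: string; variation: string; ts: number }
interface ConversionEvent { visitor_id: string; ts: number }

// Attribute each conversion to the variation its visitor was exposed to,
// counting only conversions that happen at or after the exposure —
// which is why attribution survives multi-day gaps between the events.
function attribute(
  exposures: ExposureEvent[],
  conversions: ConversionEvent[],
): Map<string, number> {
  const seen = new Map<string, ExposureEvent>();
  for (const e of exposures) {
    const prev = seen.get(e.visitor_id);
    if (!prev || e.ts < prev.ts) seen.set(e.visitor_id, e); // first exposure wins
  }
  const counts = new Map<string, number>();
  for (const c of conversions) {
    const exp = seen.get(c.visitor_id);
    if (exp && c.ts >= exp.ts) {
      counts.set(exp.variation, (counts.get(exp.variation) ?? 0) + 1);
    }
  }
  return counts;
}
```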
What statistical test does SignaKit use for A/B tests?
Results use a two-proportion z-test to compute p-value and a confidence interval for the uplift. The dashboard shows a plain-language verdict (statistically significant win/loss/inconclusive) alongside the raw numbers.
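The test itself is standard and can be sketched directly; SignaKit's exact computation may differ in details (one- vs. two-sided, CI method), so treat this as a reference implementation of the technique, not of the product.

```typescript
// Standard normal CDF via the Abramowitz–Stegun polynomial approximation
// (accurate to roughly 1e-7).
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = Math.exp((-z * z) / 2) / Math.sqrt(2 * Math.PI);
  const p = d * t * (0.31938153 + t * (-0.356563782 +
    t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
  return z >= 0 ? 1 - p : p;
}

// Two-proportion z-test: is variation B's conversion rate different from A's?
function twoProportionZTest(
  convA: number, nA: number,
  convB: number, nB: number,
): { z: number; pValue: number; uplift: number } {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  return {
    z,
    pValue: 2 * (1 - normalCdf(Math.abs(z))), // two-sided
    uplift: (pB - pA) / pA,                   // relative uplift of B over A
  };
}
```

For example, 50/1000 vs. 80/1000 conversions gives a 60% relative uplift with p well under 0.05, which the dashboard would label a statistically significant win.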
Can I run an experiment on only a subset of users?
Yes. Each experiment rule has two stages: audience targeting (who is eligible) and traffic allocation (what percentage of eligible users enter the experiment). You can target beta users and run a 50/50 split only within that audience.
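The two stages compose as a simple gate, sketched below. Field names are assumptions, and a trivial hash stands in for the SDK's real visitor hashing just to keep the sketch runnable.

```typescript
interface User { visitorId: string; tags: string[] }

// Stage 1: audience targeting — is the user eligible at all?
function inAudience(user: User, requiredTag: string): boolean {
  return user.tags.includes(requiredTag);
}

// Stage 2: traffic allocation — does this eligible user enter the
// experiment? (Stand-in hash; the real SDK hashes visitorId properly.)
function inTraffic(user: User, percentage: number): boolean {
  let h = 0;
  for (const ch of user.visitorId) h = (h * 31 + ch.charCodeAt(0)) % 10000;
  return h / 10000 < percentage / 100;
}

// A user enters only if they pass BOTH stages, so a 50/50 split inside a
// beta audience touches no one outside that audience.
function entersExperiment(user: User, requiredTag: string, pct: number): boolean {
  return inAudience(user, requiredTag) && inTraffic(user, pct);
}
```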
Start experimenting today.
A/B tests and multi-armed bandits included on every plan. No add-on, no per-seat charge.