The Experiments page lets you run controlled A/B tests comparing multiple funnel variants simultaneously. Test different headlines, layouts, pricing, or complete funnel flows to discover what converts best for your audience.
Scientific optimization: FunnelFox automatically handles traffic splitting, statistical significance calculations, and performance tracking so you can focus on creating variants and analyzing results.

Why Experiments Matter

Guessing what works is expensive. Experiments (A/B testing) remove the guesswork by letting real user behavior tell you what converts better.

Data-Driven Decisions

Replace opinions with facts. See exactly which variant performs better with statistical confidence before making changes permanent.

Continuous Improvement

Small improvements compound. A 5% better conversion rate each month compounds to roughly an 80% improvement over a year.
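
The arithmetic behind that figure, as a one-line check:

```python
# A 5% relative lift, compounded monthly for twelve months
print(f"{1.05 ** 12 - 1:.0%}")  # 80%
```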

Risk Mitigation

Test radical changes safely. If a new approach fails, only a portion of traffic sees it, limiting potential losses.

Learn Your Audience

Discover what resonates. Tests reveal preferences you couldn’t predict, helping you understand your customers better.

How FunnelFox Experiments Work

The Experiment Process

1. Create variants: Build different versions of your funnel with the changes you want to test.
2. Set up the experiment: Choose the control and test variants, then configure the traffic split.
3. Collect data: FunnelFox automatically distributes traffic and tracks performance (a sketch of how weighted assignment typically works follows this list).
4. Analyze results: Monitor metrics with statistical significance calculations.
5. Pick a winner: Once confident, end the experiment and route all traffic to the winning variant.
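
FunnelFox handles the traffic splitting for you; under the hood, systems like this typically use sticky, weighted assignment so that a returning visitor always sees the same variant. A minimal Python sketch of the idea (illustrative only, not FunnelFox's actual implementation):

```python
# Sticky, weighted variant assignment: hash the visitor ID into a
# uniform bucket in [0, 1), then walk the cumulative traffic shares.
import hashlib

def assign_variant(visitor_id: str, variants: dict[str, float]) -> str:
    """variants maps funnel name -> traffic share; shares sum to 1.0."""
    digest = hashlib.sha256(visitor_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    cumulative = 0.0
    for name, share in variants.items():
        cumulative += share
        if bucket < cumulative:
            return name
    return name  # guard against floating-point rounding at the top end

# The same visitor always lands in the same variant:
print(assign_variant("visitor-123", {"control": 0.5, "variant-a": 0.5}))
```

Because the bucket is derived from the visitor ID rather than a fresh random draw, the experience stays consistent across visits without storing any per-visitor state.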

Key Concepts

Control Funnel

Your baseline funnel — the current version you’re trying to beat. This is what you measure improvements against.

Variant Funnels

Alternative versions with changes you’re testing. Could be:
  • Different headlines or copy
  • Alternate layouts or designs
  • Modified pricing or offers
  • Completely different flows
You can test up to 4 funnels simultaneously (1 control + 3 variants), though most tests compare just 2 versions for faster results.

Viewing Experiments

Experiments List

The main experiments page shows all active and completed experiments:
Experiments list view

Understanding the Columns

Name
string
Your experiment identifier. Use descriptive names like “Checkout-PriceTest” or “Homepage-HeadlineVariant” for easy tracking.
Hosted at
url
The URL where the experiment runs. Visitors to this URL are automatically enrolled in the experiment and randomly assigned to variants.
Actions
buttons
Quick actions for each experiment:
  • Settings: Configure experiment parameters
  • Analytics: View detailed performance metrics
  • Toggle: Enable/disable the experiment
Enabled
toggle
Experiment status:
  • On: Actively splitting traffic and collecting data
  • Off: Paused but not ended (can be resumed)

Creating an Experiment

Setting Up Your First Experiment

Create new experiment
1. Prepare your funnels

Before creating an experiment:
  • Build your variant funnels with the changes you want to test
  • Publish all funnels (both control and variants)
  • Test each funnel individually to ensure it works
Unpublished funnels cannot be used in experiments. Always publish your variants before setting up the experiment.

2. Name your experiment

Choose a clear, descriptive name that explains what you’re testing:
  • ✅ “Checkout-FreeTrial-vs-Paid”
  • ✅ “Homepage-LongForm-vs-Short”
  • ❌ “Test1” or “New experiment”

3. Set the experiment URL

The Alias field determines where your experiment lives:
  • Enter a URL path like “special-offer” or “quiz”
  • This creates: your-project.fnlfx.com/special-offer
  • All variants share this single URL
The experiment URL can differ from your funnel URLs, so you can run experiments without changing existing campaign links.

4. Select the control funnel

Choose your baseline funnel—what you’re currently using or testing against. This should be your proven performer or current default.

5. Add variant funnels

Click Add funnel variant to include test versions:
  • Add up to 3 variants (4 total including the control)
  • Each variant should test a specific hypothesis

6. Configure the traffic split

Adjust the traffic distribution slider:
  • Equal split: Reaches statistical significance fastest, because equal groups maximize statistical power for a fixed amount of traffic
  • Custom: Set specific percentages per variant
The split is shown as percentages below the slider.

7. Create the experiment

Click Create to save your experiment configuration.
Critical: After creating the experiment, you MUST republish all funnels (control and variants). The experiment won’t function until every funnel is republished.

8. Republish all funnels

After creating the experiment:
  1. Go to each funnel used in the experiment
  2. Click Publish on every funnel (even if it’s already published)
  3. This links the funnels to the experiment system
The experiment begins collecting data only after all funnels are republished. This step is required every time you create or modify an experiment.

Analyzing Results

Experiment Dashboard

Click on any experiment to view detailed performance metrics:
Experiment analytics view

Key Metrics Explained

Users
number
Total unique visitors who entered each variant. This is your sample size—more users mean more reliable results.
Conversions
number
Number of users who completed the desired action (purchase, signup, etc.). The definition depends on your funnel goals.
Conversion Rate
percentage
Percentage of users who converted. The primary metric for most tests. Shows with color coding:
  • 🟢 Green: Performing better than control
  • 🔴 Red: Performing worse than control
  • ⚪ Gray: No significant difference yet
ARPU
currency
Average Revenue Per User. Critical for revenue optimization. Sometimes a lower conversion rate with a higher ARPU is better: for example, a variant converting 4% of users at a $30 average order ($1.20 ARPU) out-earns one converting 5% at $20 ($1.00 ARPU).
CR Confidence Level
percentage
Statistical confidence that the conversion rate difference is real:
  • < 95%: Too early to call (need more data)
  • 95-98%: Likely winner (can make decision)
  • > 98%: Clear winner (very confident)
CR Confidence Interval
range
Range where the true conversion rate likely falls. Narrower intervals mean more precise measurements. Example: “5.6% - 6.3%” means we’re confident the true rate is within this range.
Observed Power
percentage
Statistical power of your test. Higher power means better ability to detect real differences:
  • < 80%: May miss real improvements
  • > 80%: Good ability to detect differences
  • > 95%: Excellent test sensitivity
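
If you want to verify the numbers yourself, all three statistics above can be approximated with standard normal-approximation formulas. A minimal sketch (the exact formulas FunnelFox uses aren’t documented here, and the input numbers are made up):

```python
# Two-variant CR statistics via normal approximations:
# per-variant Wald confidence intervals, a two-sided two-proportion
# z-test for the confidence level, and observed (post hoc) power.
from math import sqrt
from scipy.stats import norm

def cr_stats(conv_a, users_a, conv_b, users_b, alpha=0.05):
    p_a, p_b = conv_a / users_a, conv_b / users_b
    z_crit = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05

    # CR Confidence Interval for each variant
    def interval(p, n):
        half = z_crit * sqrt(p * (1 - p) / n)
        return (p - half, p + half)

    # CR Confidence Level: how sure are we the difference is real?
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    confidence = 1 - 2 * norm.sf(abs(p_b - p_a) / se_pool)

    # Observed Power: ability to detect the measured difference
    se = sqrt(p_a * (1 - p_a) / users_a + p_b * (1 - p_b) / users_b)
    power = norm.sf(z_crit - abs(p_b - p_a) / se)

    return {"ci_a": interval(p_a, users_a), "ci_b": interval(p_b, users_b),
            "confidence": confidence, "power": power}

# Made-up numbers: 5.6% vs 6.6% conversion on 5,000 users per variant
print(cr_stats(conv_a=280, users_a=5000, conv_b=330, users_b=5000))
```

With these inputs the confidence comes out around 96%, which the thresholds above would read as a likely winner.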

Metrics Over Time

The graph shows performance trends throughout the experiment:
  • Time selector: View data by day, week, or month
  • Metric selector: Switch between different KPIs
  • Variant lines: Compare performance visually
  • Confidence bands: See variability in measurements
Early results often fluctuate wildly. Wait at least a week, and until confidence levels stabilize, before making decisions.

Managing Experiments

Experiment Settings

Access settings to modify your running experiment:
Experiment settings page

What You Can Change

During the experiment:
  • Experiment name and description
  • Traffic split percentages
  • Pause/resume the experiment
  • Add or remove variants (see warning below)
Cannot change:
  • The experiment URL (would break tracking)
  • Funnel content (must republish funnels separately)
Changing variants resets analytics: If you add or remove variants while an experiment is running, all analytics data will be reset and the experiment starts fresh. Previous data is archived and can be viewed using the “View previous version” button, but it won’t be combined with new data. Only modify variants if absolutely necessary.

Ending an Experiment

When you’re ready to pick a winner:
1. Verify statistical significance

Ensure you have:
  • At least a 95% confidence level
  • A sufficient sample size (at least 100 conversions per variant)
  • Consistent results over time (not just one lucky day)
A quick way to sanity-check these criteria is sketched below.
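
A hypothetical helper that mirrors those criteria (the threshold values come from the guidance above; the function itself is illustrative, not part of FunnelFox):

```python
# Hypothetical pre-finish gate mirroring the criteria above:
# >= 95% confidence and >= 100 conversions in every variant.
def ready_to_finish(confidence: float, conversions: list[int]) -> bool:
    return confidence >= 0.95 and all(c >= 100 for c in conversions)

print(ready_to_finish(0.97, [150, 142]))  # True
print(ready_to_finish(0.97, [150, 80]))   # False: one variant is under 100
```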
2. Click Finish Experiment

On the settings page, click the Finish Experiment button.

3. Select the winner

Choose which variant should receive all future traffic. The winning funnel automatically takes over the experiment URL.
Ending is permanent: Once you finish an experiment and select a winner, you cannot restart it. The winning funnel takes over the experiment URL.

Deleting Experiments

To remove an experiment completely:
  1. Navigate to experiment settings
  2. Click Delete Experiment
  3. Confirm deletion (this is irreversible)
Deleting an experiment removes all its data permanently. Consider exporting results first if you need them for future reference.

Common Experiment Ideas

High-Impact Experiments to Try

Pricing & Offers
  • Free trial vs paid trial vs no trial
  • Monthly vs annual pricing emphasis
  • Discount amounts and presentation
  • Urgency and scarcity messaging
Headlines & Copy
  • Benefit-focused vs feature-focused
  • Long-form vs short-form copy
  • Different value propositions
  • Social proof placement
Funnel Structure
  • Single-step vs multi-step checkout
  • Form fields required vs optional
  • Upsell timing and presentation
  • Exit intent offers
Visual Design
  • Button colors and sizes
  • Image vs video content
  • Layout and information hierarchy
  • Mobile-specific optimizations

Analytics Integration

FunnelFox sends experiment data to your analytics platforms:

Automatic Tracking

Every visitor gets tagged with:
  • ff-experiment: The experiment name
  • ff-funnel: The funnel variant the visitor was assigned to

Analyzing in External Tools

To analyze results in your analytics platform:
  1. Filter by experiment: Use ff-experiment property
  2. Filter by variation: Use ff-funnel property
  3. Compare metrics: Analyze beyond basic conversion
  4. Segment further: Combine with other user properties
This integration lets you analyze experiment impact on metrics FunnelFox doesn’t track, like long-term retention or lifetime value.
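
As an illustration, if your platform can export events to CSV, steps 1-3 might look like this in pandas. Only the ff-experiment and ff-funnel properties come from FunnelFox; the user_id, converted, and revenue columns (and the experiment name) are hypothetical:

```python
# Per-variant metrics for one experiment from an exported event table.
import pandas as pd

events = pd.read_csv("events.csv")

# 1. Filter by experiment
exp = events[events["ff-experiment"] == "Checkout-PriceTest"]

# 2-3. Group by variant and compare metrics beyond basic conversion
summary = exp.groupby("ff-funnel").agg(
    users=("user_id", "nunique"),
    conversions=("converted", "sum"),
    revenue=("revenue", "sum"),
)
summary["conversion_rate"] = summary["conversions"] / summary["users"]
summary["arpu"] = summary["revenue"] / summary["users"]
print(summary)
```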

Frequently Asked Questions

Troubleshooting

Experiment Not Getting Traffic

If your experiment shows no visitors:
  1. Did you republish all funnels? This is the most common issue
  2. Verify the experiment is enabled (toggle is on)
  3. Check the experiment URL is correct
  4. Ensure you’re sending traffic to the experiment URL
  5. Confirm all funnels in the experiment are published

Slow Statistical Significance

If confidence levels aren’t improving:
  1. You may need more traffic (be patient)
  2. The difference might be very small
  3. Check for technical issues affecting one variant
  4. Consider ending if no difference after 10,000+ visitors

Inconsistent Results

If metrics fluctuate wildly:
  1. Wait for more data to smooth variations
  2. Check for external factors (campaigns, seasonality)
  3. Verify tracking is working correctly
  4. Look for technical issues in specific variants

Need Help?