The Simple 5 Step Framework to A/B Split Testing Like a Scientist
Steal this A/B split testing framework to make incremental improvements in results.
You know when you’re in a meeting and someone throws out a BIG idea?
Then it’s met with a “let’s split test that.”
98% of the time it goes nowhere and it’s forgotten about.
It’s the exact point where big ideas die.
Well... I’ve turned this common experience on its head.
This is how I turn an ideas graveyard into massive learnings.
Your ‘why’
Remember in year 8 chemistry when you finally got to use the Bunsen burner? (aka the coolest part of high school)
Well, we’re going to use the framework Mr. White taught us for running experiments.
First we need our hypothesis.
This is an idea (or gut feeling) we’re going to test.
To form one, we first need to find our why.
A lot of the time I find my why when looking through Google Analytics.
I’ll be looking in the Landing Page report and think
“Oh, the conversion rate on /lp/bundle/ is 12% higher than the homepage - what more can we test from here?”
What we’re trying to prove
Now we know why we’re testing - we can lay down how we’re going to test.
But first - getting back to the hypothesis.
We’re coming up with an “if I do X, then I predict Y will happen” statement.
A few I’ve used in the past:
Average order value will increase if the user gets a free gift at $100.
The USP angle will convert at a higher rate than the eco-friendly angle.
Sending paid traffic to the bundle offer will have a higher conversion rate than the product page.
Cause and effect is the name of the game.
Now we build
Running the experiment is going to be different every time.
So, we need to decide what we’re going to be testing.
Depending on what your hypothesis is you could be testing:
Minimum order amounts
Short vs long form copy
Landing page layout
Creative angles
Free shipping
Headlines
Free gift
Offer
I’ll leave completing the test up to you.
Just don’t break the 3 golden rules.
Control variables won’t change for the entire experiment.
Wait for statistically significant data.
Test more than once.
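Golden rule number two - waiting for statistically significant data - can be checked with a standard two-proportion z-test. This is a minimal sketch, not the author's own tooling; the visitor and conversion numbers are made up for illustration:

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-tailed p-value
    return z, p_value

# Hypothetical traffic: 120/2000 conversions on control, 156/2000 on variant
z, p = conversion_z_test(120, 2000, 156, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # call it significant only if p < 0.05
```

If p is above your threshold (0.05 is the common default), keep the test running - calling a winner early is how "learnings" turn out to be noise.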
Analyse twice, report once
So… you now have enough data to see how right your hypothesis was?
Let’s keep it simple, eh?
Remember what you were testing for: what are the direct metrics that would show this?
Going back to my original examples you could answer these questions:
Did the average order value increase in the test variable where a free gift was offered? Did the order value increase enough to justify the offer or did it just chew into margins?
Did the USP angle convert at a higher rate than the eco-friendly angle? Was there a difference in order size between the creatives, and by how much?
Did the bundle offer convert at a higher rate? Did sending traffic to a higher-cost SKU/bundle impact conversion rate?
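The free-gift question above is just arithmetic once you have the order values. A minimal sketch - the order values and the $12 gift cost are invented numbers, not real results:

```python
# Hypothetical order values ($) from each variant of the free-gift test
control_orders = [42, 55, 61, 38, 70]
gift_orders    = [95, 110, 102, 88, 120]   # free gift offered at $100 spend

def aov(orders):
    """Average order value."""
    return sum(orders) / len(orders)

gift_cost = 12  # assumed per-order cost of the free gift
lift = aov(gift_orders) - aov(control_orders)
net = lift - gift_cost
print(f"AOV lift: ${lift:.2f}; net of gift cost: ${net:.2f}")
```

If `net` is negative, the gift lifted AOV but still chewed into margins - the hypothesis can be "right" and the offer still not worth running.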
Test & Learn
I keep track of ALL my experiments in a simple Google Sheet.
Nothing fancy, just a few columns - like this.
(Of course I can’t share my real one with my dozens and dozens of experiments in it).
Keeping this over time lets me collect learnings of winners and losers.
I recommend you do the same.
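If a spreadsheet feels too manual, the same log is easy to keep as a CSV. The column names and the example row here are my guess at what such a journal might track, not the author's actual sheet:

```python
import csv

# Assumed columns for an experiment journal
FIELDS = ["date", "hypothesis", "metric", "control", "variant", "winner", "learning"]

rows = [
    {"date": "2024-03-01",
     "hypothesis": "Free gift at $100 lifts AOV",
     "metric": "AOV",
     "control": "82.40",
     "variant": "96.10",
     "winner": "variant",
     "learning": "Gift threshold nudged carts past $100"},
]

with open("experiment_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

One row per experiment, winners and losers both - the losers are usually where the cheapest learnings live.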
How about you?
Do you run your experiments with a similar method?
Reply to this email with the vertical you’re in and I can probably pull a learning from my journal to share with you!