3x Tests = +$38k in Annual Revenue
We tested one email 3 times and jacked annual revenue by $38k
Most brands "set and forget" their email flows, bleeding revenue and wasting ad spend. My Email Flow CRO plugs those leaks, drives up revenue, and sharpens your brand. Want to see how much extra you could be adding to your bottom line? Get a free audit today.
As a merchant, it's not often you get to look back on a series of tests run in your email account and see how things have progressed, along with the proven increase in revenue.
…but that's probably because you don't have me piloting the account.
Today we're diving into one of my favourite clients: Dr. Woof
Founded by Dr. Ron See, a veterinarian by education, who runs the business like his life depends on it. (And we just so happen to share the same birthday. 🎂) Today, we're looking at the first email in Dr. Woof's Browse Abandonment Flow.
TEST #1: DESIGN CHANGE
When taking over an account, I rarely like to change the copy first. Generally, I'll upgrade the Look & Feel and use the same copy, at least to start.
While I can look back and cringe at this design now, back then it was an improvement on the original. Same copy, but we worked some very popular scrubs colours into the hero image. We also created a clear distinction between the dynamic section showing what the user had viewed and the sections that followed.
Also, BIGGER BUTTONS.
TEST #1: THE RESULTS
Placed Order Rate ⬆️
AOV ⬆️
Revenue Per Recip ⬆️ (Which is really the golden metric for most automation testing.)
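If it helps to see how these three metrics hang together, here's a minimal sketch. Revenue per recipient is just placed order rate times AOV (assuming placed order rate is measured as orders per recipient). The numbers below are made up for illustration, not Dr. Woof's actual data:

```python
# Revenue Per Recipient (RPR) ties the other two metrics together:
#   RPR = Placed Order Rate x AOV
# Illustrative numbers only -- not the actual test data.

def revenue_per_recipient(placed_order_rate: float, aov: float) -> float:
    """Expected revenue from each recipient of the email."""
    return placed_order_rate * aov

control = revenue_per_recipient(placed_order_rate=0.012, aov=85.0)
variant = revenue_per_recipient(placed_order_rate=0.014, aov=88.0)
lift = (variant - control) / control

print(f"Control RPR: ${control:.2f}")   # $1.02
print(f"Variant RPR: ${variant:.2f}")   # $1.23
print(f"Lift: {lift:.0%}")              # 21%
```

This is also why RPR is the golden metric: a variant can win on order rate but lose on AOV (or vice versa), and RPR settles the argument in dollars.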
If you take into account the number of recips during the test and extrapolate that out across 365 days, you wind up with a trajectory difference that looks like this:
Blue = what the Control was going to bring in, cumulatively, month-over-month. Pink = what the Variant stacked on top of that result. 💪
Caveat: These are extrapolations. Not 100% accurate, obviously. The recips during the testing time frame might be higher or lower than the median. This can impact the trajectory projections. (Or… "trajections"…)
Bottom line, use it as a guide and a bit of inspiration, but don't buy your next Lambo based upon these projections. ✌️
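For the curious, the back-of-the-napkin math behind those trajectory lines looks roughly like this. Every input here (RPR, recipient counts, test length) is a hypothetical placeholder, not Dr. Woof's numbers:

```python
# Rough sketch of the 365-day extrapolation described above.
# All inputs are hypothetical placeholders -- swap in your own test data.

def annualize(rpr: float, test_recips: int, test_days: int) -> float:
    """Project annual revenue, assuming the test period is representative."""
    daily_recips = test_recips / test_days
    return rpr * daily_recips * 365

control_annual = annualize(rpr=1.02, test_recips=6_000, test_days=30)
variant_annual = annualize(rpr=1.23, test_recips=6_000, test_days=30)

print(f"Control trajectory: ${control_annual:,.0f}/yr")   # $74,460/yr
print(f"Variant trajectory: ${variant_annual:,.0f}/yr")   # $89,790/yr
print(f"Projected lift: ${variant_annual - control_annual:,.0f}/yr")  # $15,330/yr
```

Which is exactly why the caveat above matters: a quiet or busy testing window skews the daily recipient count, and the whole projection with it.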
TEST #2: COPY & DESIGN CHANGE
Okay! What's the first thing you notice about this? CONTROL WINS.
Sometimes, we take an L and that's cool. It's about the process and trying to work out why something did or didn't go our way. Here's why I thought the Variant would win:
Slicker design. Hero was much more in alignment with the look and feel of winning campaigns we were sending out.
Eyes-forward model. Usually works to have the model looking straight down the barrel at the reader.
Here's why I don't believe it paid off:
I don't think the copy change in the hero worked. On the Control we were asking the reader's opinion. People love to be asked their opinions, even if it's by an email.
(BTW, why do YOU think the Variant didn't win? 🤔 Seriously, though. I'm very much a student and would dig your feedback.)
Second reason on the copy: it was kind of patronizing. "You've almost made a good choice"? Considering the audience is educated medical professionals, maybe not the best tone to take with them.
Smaller buttons.
Could have been the footer, but I don't think so. Additionally, we wanted to change the footer to be a bit more modern, and this was a universal change across the account.
What do we do when we fail?
We observe the test, and as soon as it looks like it's not going our way, we end it and try something else. The single biggest issue I see when auditing accounts is either:
No tests running
Tests that have been running for-🤬-ever. Whether winners or losers, you're getting a fraction of the revenue you could be getting with tests that run too long.
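Here's one hedged sketch of what that "don't let tests run forever" guardrail could look like. The thresholds and the RPR-only focus are my illustrative assumptions, not a proper statistical stopping rule; a real workflow would layer significance testing on top of a sanity check like this:

```python
# Toy guardrail against tests that run "for-ever".
# Thresholds are arbitrary examples, not a statistical stopping rule;
# a real workflow would add significance testing on top.

def review_test(control_rpr: float, variant_rpr: float,
                days_running: int, max_days: int = 30,
                min_lead: float = 0.10) -> str:
    """Call a winner, kill a loser, or keep waiting."""
    if days_running >= max_days:
        return "end it: the test has run long enough -- pick a side, start the next one"
    lead = (variant_rpr - control_rpr) / control_rpr
    if lead <= -min_lead:
        return "end it: the variant is clearly losing -- revert and try something else"
    if lead >= min_lead:
        return "end it: the variant is clearly winning -- roll it out"
    return "wait: no clear signal yet"

print(review_test(control_rpr=1.02, variant_rpr=0.85, days_running=12))
# -> end it: the variant is clearly losing -- revert and try something else
```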
TEST #3: REVERT COPY + CHANGE DESIGN
Winner, winner, chicken dinner. 🎉 VARIANT WINS! 🏆 Which is exactly what we want.
We've taken what was in the first test and improved it. Then we took the improvement and improved upon that. Incremental improvements. 💪
At the end of the day, I think the design was just better. We did have that model looking down the barrel, and we once again asked the reader for their opinion. Button size was reduced and button copy changed, but it didn't seem to matter. We got the W.
TEST #3: THE RESULTS
Placed Order Rate ⬆️
AOV ⬆️
Revenue Per Recip ⬆️
Over 365 days, we look like this:
The last successful test took this email's projected annual revenue to about $52k. This test ended with a total of $64k. 🎉
What's next? Another test, of course. Either on this email or another email in the flow. (But rarely, if ever, two at the same time. That's just bad practice, because you create lots of variations of the customer journey. Just not ideal.)
Bottom line, when a test ends, you start another based upon your learnings.
CONCLUSION
Now, here's the deal: The projections look sexy, but all we're doing is taking a snapshot of the testing time period and saying:
…If the rest of the year looked exactly like the testing period, what would the performance of this email be?
Is it going to be the same? Nah. But…
Will it be similar? Probably, yes.
Have we run a test whereby the Variant beats the Control? Yes.
Have we moved forward with the winning Variant? Yes.
Have we done that on repeat? Yes.
Can we assume that with every winning test, we'll improve performance of that email? Yes.
Will this improve overall account efficiency? Yes.
Will this improve the return on your advertising costs because we're more efficient at extracting dollars from the leads the ad channels send in? Yes.
Will our automated email UI/UX improve as a result of this testing? Yes.
Will that UI/UX alignment with our brand improve customer experience? Yes.
It's just like site CRO. In fact, I call this process Email Flow CRO. If you're willing to invest in site CRO, you should also be willing to invest in the automated email backbone of your business. Between your site and your email account, this is the very bottom of your funnel, and most merchants never seriously look here.