A while back, before I became an employee of WhatCounts, the folks in the Baltimore office were kind enough to do a sponsored A/B split test of one of my newsletters for my podcast, Marketing Over Coffee. I had been doing a hack job of slapping together a newsletter using the most primitive HTML possible, and the WhatCounts design team cringed at it and offered the help of one of our Creative Services team members. Here’s a peek at the old vs. the new:
[Image: previously used template]
Which template do you think will deliver higher performance?
I maintained, however, that my audience wouldn’t really care how it looked as long as the content was valuable. The creative team posited that the nicer design would deliver higher click-through performance.
Let The Testing Begin!
The A/B split test launched on September 22nd to 40% of the MoC email list: 20% received the old template, while the other 20% received the new template. Subject line, From Name, email content, and date/time of send were all completely identical, so the only variable under test was the template design itself. We allowed the A/B test to run for a sufficient amount of time, then gathered the email metrics to determine the winning email that would be sent to the remaining 60% of the list.
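The 20/20/60 split described above can be sketched in a few lines. This is a minimal illustration, not WhatCounts' actual tooling; the subscriber addresses and the `ab_split` helper are made up for the example.

```python
import random

def ab_split(recipients, test_fraction=0.4, seed=42):
    """Shuffle the list, divide test_fraction evenly between A and B,
    and hold back the remainder for whichever template wins."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    half = int(len(shuffled) * test_fraction / 2)
    return {
        "A": shuffled[:half],               # old template
        "B": shuffled[half:2 * half],       # new template
        "holdout": shuffled[2 * half:],     # receives the winner later
    }

groups = ab_split([f"user{i}@example.com" for i in range(1000)])
print(len(groups["A"]), len(groups["B"]), len(groups["holdout"]))  # 200 200 600
```

Shuffling before slicing is what keeps the two test cells comparable; a simple alphabetical split could smuggle in a bias (e.g. by sign-up date or domain).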
After letting the A/B test run for two hours, the new template had a slight edge over the old one. The old template had an open rate of 9.8% and a click-through rate of 0.9%; the new template had an open rate of 11.6% and a click-through rate of 2.3%. Satisfied with these results, we sent the “winning” new template to the remainder of the list.
We continued to follow the results of the A/B test into the next day, and then something interesting happened…
The Winner is…
The results were almost identical. The old template finished strong with a 25.7% open rate and a 4.2% click-through rate, whereas the new template received a 27.0% open rate and a 4.7% click-through rate. The new template’s results were still slightly higher, but statistically, it was a tie. In addition, we tracked downloads of an OPML file on the MoC web site. We found that unique clicks from each email to the OPML file were almost identical from the two templates, while there were more repeat clicks to the file from the old template than the new template.
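The "statistical tie" verdict can be checked with a standard two-proportion z-test on the click-through rates. The post does not report the list size, so the per-arm count of 2,000 below is an assumption for illustration, with clicks back-solved from the reported 4.2% and 4.7% rates.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 2000                                       # assumed recipients per arm
z = two_proportion_z(round(0.042 * n), n,      # old template clicks
                     round(0.047 * n), n)      # new template clicks
print(abs(z) < 1.96)  # True: fail to reject H0 at the 5% level, i.e. a tie
```

At this assumed sample size the 0.5-point gap in click-through rate falls well inside normal sampling noise, which matches the call of a statistical tie; a much larger list would be needed before a difference that small became significant.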
What a colossal waste of time, you may be thinking. Not so! What we learned from this experiment was that industry best practices do not necessarily work for all audiences. To learn what works for your email recipients, you must test rigorously even if you think you’re certain about the outcome. Not all audiences will respond to the same type of subject line, call-to-action, email design, landing page, etc. Test different types with your own email recipients, and use what works best for your email campaigns.
We walked away from the test knowing that our audience valued the content more than the format. A nicer format certainly conveys a sense of emotion, and perhaps connection to a brand, but if you need to distribute valuable information to your audience, a very simple design may be all you need.
The most important takeaway from this test, for everyone including the WhatCounts team, is that you can’t assume your audience will behave in a predictable fashion. Assuming it will is classic confirmation bias, and it’s something that can get you into a world of trouble (and reduced performance) as an email marketer. Skip testing at your peril.
Christopher S. Penn
Director of Inbound Marketing, WhatCounts