This week, we’re testing a new template for our weekly email newsletter, The GameChanger. Unlike most A/B split tests, though, we’re doing it slightly differently. Instead of the usual 20/80 split – where you test on 20% of your list and then send the winner to the remaining 80% – we’re sending the old and new templates to our list in an even 50/50 test with no “winner”.
Why test this way? To thoroughly assess the effectiveness of the template, we want to use the largest possible sample size, which in this case is our entire list. Here’s why that matters. Most A/B split tests run for only a few hours: people typically configure them for anywhere from 4-12 hours, then send the winner to the rest of the list.
However, when we look at our Google Analytics data for when conversions (valuable activities that drive the business, rather than just open/click rates) occur, it’s not a sharp Pareto curve that says all the juice happens right away, which would justify a fast 20/80 A/B test. Instead, what we see in our data is that a good chunk of activity happens on day 1, tapers off some on day 2, days 3-4 are empty, and then on day 5 we see conversions bump up again.
Why? We send on Thursdays. Some folks open on Fridays. Almost no one who converts reads over the weekend. Then on Monday morning, as people clean out their inboxes to start the week, conversions pick up again. Imagine if we ran an A/B split test that used only Thursday’s data. That would effectively be a form of selection bias: the people who open on Monday, and their behavior, could differ significantly from the self-selecting pool of folks who race to open the email on Thursday. And we know from our conversion data that Monday readers are valuable.
Thus, to get maximum insight into our new template, we’re sending an even 50/50 split and then looking at the data over a week’s time.
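If you want to run the same kind of even split yourself, one simple approach (a sketch, not necessarily how any particular email platform does it) is to assign each subscriber to a variant by hashing their ID rather than picking randomly, so the same person always lands in the same group:

```python
import hashlib

def assign_variant(subscriber_id: str) -> str:
    """Deterministically assign a subscriber to template A or B.

    Hashing the ID (instead of using random.choice) means the same
    subscriber always gets the same variant, so the split is
    reproducible across re-sends and later analysis.
    """
    digest = hashlib.sha256(subscriber_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# With a hypothetical list of 10,000 addresses, roughly half
# should land in each group.
sample = [f"user{i}@example.com" for i in range(10_000)]
counts = {"A": 0, "B": 0}
for s in sample:
    counts[assign_variant(s)] += 1
```

The subscriber IDs here are made up for illustration; in practice you’d feed in whatever unique identifier your list system exposes.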
Look at your sources of data, like Google Analytics, to see whether your audience differs based on when they read their email (and convert). If you see a cluster of people opening and converting over a period that exceeds your normal 20/80 A/B split window, consider using a 50/50 split and see if your results vary.
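Checking for that cluster can be as simple as bucketing conversions by days elapsed since the send. Here’s a minimal sketch using hypothetical exported dates (the numbers below are invented to mirror the Thursday/Friday/Monday pattern described above, not our actual data):

```python
from collections import Counter
from datetime import date

def conversions_by_day(send_date: date, conversion_dates: list[date]) -> Counter:
    """Bucket conversion events by number of days elapsed since the send."""
    return Counter((d - send_date).days for d in conversion_dates)

# Hypothetical export: a Thursday send, with conversions trickling in.
send = date(2011, 6, 2)  # a Thursday
conversions = (
    [date(2011, 6, 2)] * 40   # day 0: Thursday
    + [date(2011, 6, 3)] * 25 # day 1: Friday
    + [date(2011, 6, 6)] * 15 # day 4: Monday bump
)
by_day = conversions_by_day(send, conversions)
```

If the resulting histogram shows meaningful activity well past the first 12 hours, a fast 20/80 test is throwing away exactly the readers you care about.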
Christopher S. Penn
Director of Inbound Marketing, WhatCounts