The desire to test experiments and outcomes has been part of science since science began, but it has entered marketing only in the last few decades, as marketing has become more measurable. One of the biggest recent crossovers from science to marketing is multivariate testing, also called Taguchi testing after Genichi Taguchi, the postwar Japanese engineer and statistician.
Prior to multivariate testing, marketers typically did very little testing beyond a standard A/B split, where two versions of a marketing piece were tested side by side and the better-performing of the two was used. The trouble with A/B split testing is that a marketing piece's performance often depends on multiple variables at once, and a simple A/B split can account for only one of them.
In other cases, marketers tested a series of different variables but were unable to measure how those variables influenced each other. They could test any one variable discretely but couldn't see the big picture: how all the pieces fit together.
Take beer marketing as an example and consider two creative ideas. Say you market one beer with the usual mountain-stream and snowy imagery, and the other with a woman in a swimsuit. While the swimwear imagery may initially attract more attention, it may also alienate part of the audience. This is a simple A/B split test. So far, so good.
Next, add in variables like pricing and suddenly you have multiple variations of your marketing, and you may see unexpected results. You may find that imagery of mountain streams paired with a higher price point sells better than imagery of swimwear at that same price point. Or you may find the converse to be true. As you add more variables like imagery, language, formatting, and pricing, the interactions among variables grow increasingly complex, and marketers' ability to predict which combination will be effective decreases proportionally. Once you go beyond a single variable (such as subject line), you also go past the limits of what a simple A/B split test can reveal.
How does this apply to email marketing? Advanced email service providers allow you to do far more than just a simple A/B split test with your email marketing campaigns. Here’s an example of how you might conduct a Taguchi multivariate test.
First, determine which variables you'll test. Broadly, the subject line and From: line determine a significant portion of your email campaign's open rate, while the content of the message determines the action rate (clickthrough, all the way to purchase). Make a list of what you'll be testing, from a spreadsheet containing multiple subject lines to a series of different email messages that vary the content.
Remember that every variable you introduce adds complexity and requires more segments of your audience to test. For example, an array with 3 different pieces of content and 3 different subject lines requires 9 total segments, one for each combination of content and subject.
Add a third variable to the same spreadsheet, such as 3 different From: addresses, and the array grows to 27 segments (3 x 3 x 3).
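The combinatorics above can be sketched in a few lines of Python; the content, subject, and From: labels here are placeholders, not values from any real campaign:

```python
from itertools import product

# Hypothetical test candidates; a real campaign would use actual copy.
contents = ["content A", "content B", "content C"]
subjects = ["subject 1", "subject 2", "subject 3"]
from_addresses = ["from X", "from Y", "from Z"]

# Two variables: 3 x 3 = 9 segments.
two_way = list(product(contents, subjects))
print(len(two_way))  # 9

# Three variables: 3 x 3 x 3 = 27 segments.
three_way = list(product(contents, subjects, from_addresses))
print(len(three_way))  # 27
```

Each tuple in the resulting list is one segment's combination of variables, which is exactly what the testing spreadsheet enumerates.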
Ultimately, set up as many subjects and messages as are practical, and then load them into your email marketing software. Generally speaking, if you're just getting started, begin with two variables and two candidates for each, such as two subject lines and two pieces of content (a 4-segment test). Once you're comfortable with testing and have a large enough list, you can expand to more variables and more candidates per variable.
If you use a system like the WhatCounts Professional Edition or Publicaster Edition, enter all of your different subjects and messages. You'll then set up the testing window: what percentage of your list you want to test (anywhere from 5% to 50%), the winning criterion (what matters more to you, opens or clicks?), and how long the test should run. Once the testing period is over, the platform will send the winning message to the rest of your list automatically.
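The mechanics the platform handles for you can be sketched as follows. This is a simplified illustration, not WhatCounts code; the function name, 10% test fraction, and 9-way split are example assumptions:

```python
import random

def split_for_test(addresses, num_segments, test_fraction=0.10):
    """Reserve test_fraction of the list for the test, split it evenly
    into num_segments groups, and return (segments, holdout).
    The holdout later receives the winning message."""
    addresses = addresses[:]            # avoid mutating the caller's list
    random.shuffle(addresses)           # randomize so segments are comparable
    test_size = int(len(addresses) * test_fraction)
    test_pool, holdout = addresses[:test_size], addresses[test_size:]
    segments = [test_pool[i::num_segments] for i in range(num_segments)]
    return segments, holdout

# 90,000-address list, 9-way test on 10%: nine segments of 1,000 each.
mailing_list = [f"user{i}@example.com" for i in range(90_000)]
segments, holdout = split_for_test(mailing_list, 9)
print(len(segments), len(segments[0]), len(holdout))  # 9 1000 81000
```

After the test window closes, the platform compares each segment's opens or clicks (whichever criterion you chose) and mails the winning combination to the holdout.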
We strongly recommend that, where possible, each test segment contain at least 1,000 email addresses, large enough to yield statistically meaningful results. If your list isn't large enough to support a 9-way test with 9,000 addresses, scale back the test conditions until every segment meets the 1,000-addresses-per-segment threshold.
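As a back-of-the-envelope check on how far to scale back, the arithmetic works out as below. The helper function is my own sketch, assuming the 1,000-addresses-per-segment guideline and the same number of candidates for each variable:

```python
def max_candidates_per_variable(list_size, num_variables, min_per_segment=1000):
    """Largest number of candidates per variable (same count for every
    variable) that still leaves min_per_segment addresses in each segment
    of a full factorial test."""
    n = 1
    while (n + 1) ** num_variables * min_per_segment <= list_size:
        n += 1
    return n

# A 9,000-address test pool supports a 3 x 3 test (9 segments of 1,000)...
print(max_candidates_per_variable(9_000, 2))   # 3
# ...but only a 2 x 2 x 2 test once you add a third variable.
print(max_candidates_per_variable(9_000, 3))   # 2
```

Note that list_size here is the portion of the list devoted to testing, not the whole list, since the remainder is held back for the winning message.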
What makes this approach significant is that it removes much of the guesswork from your email marketing and accounts for marketing variables influencing each other. By running large multivariate tests, you judge all of the different factors that make up your email marketing messages against the final outcome you specify, and let the software automatically choose which combination of variables works best with your audience.
One final note on multivariate testing: it's not a one-time deal. Your audience will respond differently to every message you send! Sometimes the time of year determines which test message is most effective; other times, your list may have changed as subscribers come and go. You may get radically different results from the same set of variables, so raise a red flag immediately if someone in your organization says, "We don't need to test anymore; we know what the audience wants." They're almost certainly wrong. Send every major campaign using multivariate testing and you'll squeeze as much ROI as possible out of your email marketing.
Christopher S. Penn
Director of Inbound Marketing, WhatCounts