Insider’s Corner with Peter Rosenwald: Testing Isn’t An Option – It’s The Only Option
You probably know that testing your direct marketing campaigns is important, but do you know how or what to test?
If not, don’t worry! In this issue, our industry insider offers up an easy-to-understand overview to help you get started.
Questions about segmentation? Read Peter’s previous post here.
No matter how successful we are as marketers, we may find ourselves wondering, “Could our results be even better?” Luckily for us, there’s a surefire method for evaluating the success of any campaign. That method is (drumroll, please…) testing!
No doubt, that’s why direct and data-driven marketing guru Denny Hatch, in his invaluable Ultimate 85-Point Marketeer’s Checklist, quotes Malcolm Decker’s unequivocal dictum:
“In direct marketing, two rules and two rules only exist.
Rule #1: Test everything.
Rule #2: See Rule #1.”
Denny also wisely recommends that you “Always have tests in the works,” and he backs this up by suggesting marketers allocate 20% of their marketing budget to tests. That may sound like a lot, but it’s the kind of advice you can’t afford to ignore.
The Big Picture
So you know that you should be testing, but do you know how to test or what to test? Before diving into the details, it is essential to understand the big picture.
These are the overarching terms to understand:
- The Constants: These are the things you can’t change. For instance, your product or service, its marketability and market conditions are constants over which you will likely have little to no control.
- The Variables: These are the things you can change. For instance, your offers, price points, media channels and creative are variables that can be changed and the changes can impact your results.
For all intents and purposes, constants can’t be tested because they can’t be changed. Your variables are the items that can be changed in order to produce and measure varying results. Under these broad umbrella rules, there are many different types of testing that can occur.
Here are some of the most common types of testing:
- A/B Testing: A/B testing is arguably the most common type of testing, and it simply refers to testing two versions of the same variable to see which one performs the best. A/B testing can be used to test any variable, but common subjects for A/B tests include offers, copy, design, email subject lines and calls to action.
- Audience Data Dry Testing: This is the process of testing a data set, at no cost, to determine the viability of the list for acquisition marketing. It’s beneficial for marketers because they can select the potential consumers that are identified as the most likely to engage prior to launching a campaign. Dry testing can mitigate costs, reduce overall spend and improve campaign performance, which ultimately increases ROI. Infocore routinely utilizes and negotiates this type of testing for their clients at no additional cost.
- Multi-Channel Testing: It’s also possible to test offers which combine different media types and have multiple steps (for example, a television ad that drives in-bound telephone calls or an email campaign with a direct mail follow-up). The benefit of this type of testing is that you can test the individual types of media as well as their combined effects.
- Frequency Testing: Frequency testing is a little like Goldilocks finding the porridge that’s just right. If you don’t contact your customers frequently enough, you may miss out on sales. If you contact them too often, you may annoy consumers and drive them away. Testing to find the frequency sweet spot can avoid dangerously diminishing returns that hurt both short- and long-term results.
- Day-of-the-Week/Time-of-Day Testing: Marketers have discovered that some days of the week (or month) or times of the day produce better results than others. While I wouldn’t recommend this method be high on the testing priority list, it can sometimes yield meaningful data.
- Creative and Design Testing: This is such a big and important area that it deserves its own “solus” treatment. Suffice it to say that once your offer, pricing, media and timing have all been optimized, how you tell your story and compel your audience visually becomes paramount.
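To make the A/B idea concrete, here is a minimal sketch of the standard two-proportion z-test often used to judge whether version B’s lift over version A is statistically meaningful. The mailing quantities and response counts below are hypothetical, chosen purely for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on response rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p

# Hypothetical mailing: version A vs. version B of the same offer
p_a, p_b, z, p = two_proportion_z(conv_a=120, n_a=5000, conv_b=165, n_b=5000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.4f}")
```

With these made-up numbers, B’s 3.3% response beats A’s 2.4% with a p-value well under the conventional 0.05 threshold, so the lift is unlikely to be chance. The same arithmetic applies whether the variable tested is an offer, a subject line or a call to action.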
The Results Are In
When it comes to analyzing the results of all this testing, I’m apt to paraphrase one of the great lines in the brilliant British playwright Tom Stoppard’s Jumpers: “Democracy is not in the voting. It’s in the counting.” Test results should not be judged by raw numbers but rather by how these numbers are interpreted and how they stack up against expectations. In other words, we need to make sure that we have a solid standard metric for measurement.
For example, if we are measuring whether one price or offer is superior to another, we need to measure not only which generates the MOST responses but also which delivers the best FINAL economic result, profit or contribution. Sad to say, some great front-end responses can often seed disastrous back-end, bottom-line results. Better to catch them early.
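A quick sketch shows how a winning front end can lose on the back end. The prices, response rates and per-order contributions below are hypothetical:

```python
# Hypothetical comparison of two test offers: the cheaper price wins on
# front-end response but loses on back-end contribution per thousand mailed.
offers = {
    "A ($29 offer)": {"response_rate": 0.020, "contribution_per_order": 9.00},
    "B ($39 offer)": {"response_rate": 0.015, "contribution_per_order": 16.00},
}

for name, o in offers.items():
    orders_per_m = o["response_rate"] * 1000          # orders per 1,000 pieces
    contrib_per_m = orders_per_m * o["contribution_per_order"]
    print(f"{name}: {orders_per_m:.0f} orders/M, ${contrib_per_m:.2f} contribution/M")
```

In this made-up case, offer A pulls 20 orders per thousand against B’s 15, yet B delivers $240 of contribution per thousand against A’s $180. Judged on response alone, the wrong offer rolls out.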
(The biggest disaster in my long career was popping champagne corks too early after the initial test of a new negative-option music club. Because negative-option businesses send products to interested consumers before they pay, it appeared on the front end that we were wildly successful. However, we didn’t pay sufficient attention to whether or not the happy new members would actually pay for their music choices. They didn’t. And we got killed! Had we followed good testing practice and carefully examined front- and back-end results before rolling out, the disaster would probably have been averted.)
A Subscription By Any Other Name
Recently, I was at a seminar and asked the participants if they had any subscriptions. Very few raised their hands. Then I asked how many had streaming video services and nearly all the hands went up.
This was clearly borne out by the results of an A/B email test which offered the A group a “subscription” and the B group a “free trial” of a no-commitment newsletter. After a month or so, a fully paid and committed “subscription” was offered to responders from both groups.
As you no doubt guessed, the B group’s response was much higher than the A group’s. (The word “subscription” obviously carried the odor of an unwanted commitment even though in this case there was none.) And even though the subsequent conversion to paid subscriptions of the A group takers was somewhat higher in percentage terms than the B group takers, the revenue generation, return on marketing investment (ROMI) from the B group was higher.
It is also important to remember that the size of the expected response from each test group must be large enough to be statistically valid. The smaller the sample, the greater the likelihood of inaccurate data.
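A rough answer to “how large is large enough?” comes from the standard sample-size approximation for comparing two response rates. The sketch below uses the usual z-values for roughly 95% confidence and 80% power; the 2.0% baseline and 2.5% target are hypothetical:

```python
import math

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate pieces needed per test cell to detect a difference
    between response rates p1 and p2 (~95% confidence, ~80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detect a lift from a 2.0% to a 2.5% response rate
n = sample_size_per_group(0.020, 0.025)
print(f"~{n:,} pieces per test cell")
```

Note how quickly the requirement grows as the difference you want to detect shrinks: spotting a half-point lift at these response levels already calls for cells in the low five figures.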
Following an accepted hierarchy in a testing program is almost always beneficial and should tell you if any essential element of the promotion is not pulling its weight so you can make necessary adjustments.
The Bottom Line
With the complexities of testing, it is important to not lose sight of the following key strategic objectives:
- Increase the profit-per-thousand for the same lifetime assumption by generating a higher percentage response and a lower promotional cost per subscriber.
- Increase the Lifetime Value (LTV) by changing the construct of the offer (e.g., a 30-month subscription rather than an open-ended one, in exchange for a discount on the normal price per month).
- Increase the rate of response by offering a significant incentive with the key metric being the larger pool of subscribers, even with a higher incentive cost.
The bottom line is always how much response has to increase over the current results to justify the different variable cost elements. And if, up front, that increase doesn’t seem realistically achievable, perhaps it is better not to include the variable at all.
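That break-even question can be worked out before any test mails. The sketch below, with hypothetical numbers, asks how much response must rise to pay for an added incentive without shrinking contribution per thousand pieces:

```python
# Hypothetical break-even: an incentive adds $0.10 per piece mailed.
# How much must response rise to keep contribution per thousand unchanged?
current_response = 0.020          # 2.0% response today
contribution_per_order = 12.00    # $ contribution per order
added_cost_per_piece = 0.10       # cost of the incentive per piece

current_contrib_m = current_response * 1000 * contribution_per_order  # $/M
breakeven_response = (current_contrib_m + added_cost_per_piece * 1000) / (
    contribution_per_order * 1000
)
lift_needed = breakeven_response / current_response - 1
print(f"Break-even response: {breakeven_response:.2%} ({lift_needed:.0%} lift)")
```

Here a dime of incentive per piece demands roughly a 42% lift in response just to break even. If that kind of lift looks fanciful, the test slot is better spent elsewhere.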
Surprisingly, marketing managers often judge the result of tests by including the so-called “first costs” – the creative or other non-repeating costs. This can badly distort results. Tests, to realize their real value, must be judged not on what they actually cost but on what a full-scale rollout of the promotion, with the first costs amortized over the whole initiative, would actually cost. Many economically valid programs have been abandoned as a result of this error.
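The distortion is easy to quantify. In this hypothetical sketch, the same one-time creative bill is charged first against the small test quantity, then amortized across a full rollout:

```python
# Hypothetical test: $15,000 of one-time creative ("first costs") charged
# entirely to a 10,000-piece test vs. amortized over a 500,000-piece rollout.
first_costs = 15_000.0
variable_cost_per_piece = 0.60
test_qty, rollout_qty = 10_000, 500_000

cost_test = variable_cost_per_piece + first_costs / test_qty
cost_rollout = variable_cost_per_piece + first_costs / rollout_qty
print(f"Cost/piece judged on the test alone:    ${cost_test:.2f}")
print(f"Cost/piece with first costs amortized:  ${cost_rollout:.2f}")
```

Judged on the test alone, each piece appears to cost $2.10; amortized over the rollout, the true figure is $0.63. A program that clears the second hurdle can easily look like a loser against the first.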
We all know that testing is like peeling an onion (tears and all) but the final dish is almost always worth it. While there may be many more layers left to peel, this simplified overview should serve as an invitation to learn more and to make a regular testing program an integral part of your marketing effort.