There are formal methods and tools you can use to optimize marketing spend, including software from SAS, IBM and HP (among others). The usefulness of these methods, however, depends on basic disciplines that are missing from many Marketing organizations.
In this post I’d like to propose some informal rules for marketing optimization. These do not exclude using formal methods as well — think of them as organizing principles to put in place before you go shopping for optimization software.
(1) Ignore your agency’s “metrics”.
You use agencies to implement your Marketing campaigns, and they will be more than happy to provide analyses showing how much value you're getting from the agency. Ignore these. Asking your agency to measure the results of the campaigns it implements is like asking Bernie Madoff to serve as the custodian for your investments.
Every agency analyst understands that the role of analytics is to make the account team look good. This influences the analytic work product in a number of ways, from use of bogus and irrelevant metrics to cherry-picking the numbers.
Digital media agencies are very good at execution, and they should play a role in developing strategy. But if you are serious about getting the most from your Marketing effort, you should have your own people measure campaign results, or engage an independent analytics firm to perform this task for you.
(2) Use market testing to measure every campaign.
Market testing isn’t the gold standard for campaign measurement; it’s the only standard. The principle is straightforward: you assign marketing treatments to prospects at random, including a control group who receive no treatment. You then measure subsequent buying behavior among members of the treatment and control groups; the campaign impact is the difference between the two.
The beauty of test marketing is that you do not need a hard link between impressions and revenue at the point of sale, nor do you need to control for other impressions or market noise. If treatments and controls are assigned at random, any differences in buying behavior are attributable to effects of the campaign.
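The mechanics are simple enough to sketch in a few lines. Everything below is illustrative, not from any real campaign: prospects are split at random into treatment and holdout groups, and the campaign's impact is just the difference in subsequent buy rates between the two.

```python
import random

def assign_groups(prospects, holdout_rate=0.1, seed=42):
    """Randomly split prospects into a treatment group and a
    no-treatment control (holdout) group."""
    rng = random.Random(seed)
    treatment, control = [], []
    for p in prospects:
        (control if rng.random() < holdout_rate else treatment).append(p)
    return treatment, control

def incremental_lift(treated_buy_rate, control_buy_rate):
    """Campaign impact = difference in buy rates between groups.
    Random assignment is what makes this difference attributable
    to the campaign rather than to market noise."""
    return treated_buy_rate - control_buy_rate

# Hypothetical readout: 3.0% of treated prospects bought vs 2.2% of controls.
lift = incremental_lift(0.030, 0.022)
```

In practice you would also run a significance test on the two proportions before declaring a winner, but the core readout really is this one subtraction.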
Testing takes more effort to design and implement, which is one reason your agency will object to it. The other reason is that rigorous testing often shows that brilliant creative concepts have no impact on sales. Agency strategists tend to see themselves as advocates for creative "branding"; they oppose metrics that expose them as gasbags. That is precisely why you should insist on testing.
(3) Kill campaigns that do not cover media costs.
Duh, you think. Should be obvious, right? Think again.
A couple of years ago, I reviewed the digital media campaigns for a big retailer we shall call Big Brand Stores. Big Brand ran forty-two digital campaigns per fiscal year; stunningly, exactly one campaign — a remarketing campaign — showed incremental revenue sufficient to cover media costs. (This analysis made no attempt to consider other costs, including creative development, site-side development, program management or, for that matter, cost of goods sold.)
There is a technical term for campaigns that do not cover media costs. They’re called “losers”.
The client’s creative and media strategists had a number of excuses for running these campaigns, such as:
- “We’re investing in building the brand.”
- “We’re driving traffic into the stores.”
- “Our revenue attribution is faulty.”
Building a brand is a worthy project; you do it by delivering great products and services over time, not by spamming everyone you can find.
It’s possible that some of the shoppers rummaging through the marked-down sweaters in your bargain basement saw your banner ad this morning. Possible, but not likely; it’s more likely they’re there because they know exactly when you mark down sweaters every season.
Complaints about revenue attribution usually center on the “last click” versus “full-funnel” debate, a tiresome argument you can avoid by insisting on measurement through market testing.
If you can’t measure the gain, don’t do the campaign.
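The kill rule reduces to a one-line comparison. Here is a sketch with hypothetical campaign names and figures loosely echoing the Big Brand story; none of these numbers come from the actual review:

```python
def keep_campaign(incremental_revenue, media_cost):
    """Rule 3: keep a campaign only if its tested incremental
    revenue at least covers its media cost. (This ignores creative
    development, program management, and cost of goods sold, which
    only makes the bar more forgiving.)"""
    return incremental_revenue >= media_cost

# Hypothetical portfolio: (name, tested incremental revenue, media cost).
campaigns = [
    ("remarketing",    120_000, 40_000),
    ("april_concept",   15_000, 60_000),
    ("celebrity_spot",   8_000, 90_000),
]
survivors = [name for name, rev, cost in campaigns if keep_campaign(rev, cost)]
```

Run against a real portfolio, the `survivors` list is often startlingly short, which is exactly the point.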
(4) Stop doing one-off campaigns.
Out of Big Brand’s forty-two campaigns, thirty-nine were one-offs: campaigns run once and never again. A few of these made strategic sense: store openings, special competitive situations and so forth. The majority were simply “concepts”, based on the premise that the client needed to “do something” in April.
The problem with one-off campaigns is that you learn little or nothing from them. The insight you get from market testing enables you to tune and improve one campaign with a well-defined value proposition targeted to a particular audience. You get the most value from that insight when you repeat the campaign. Marketing organizations stuck in the one-off trap never build the knowledge and insight needed to compete effectively. They spend much, but learn nothing.
Allocate no more than ten percent of your Marketing spend to one-off campaigns. Hold this as a reserve for special situations — an unexpected competitive threat, product recall or natural disaster. Direct the rest of your budget toward ongoing programs defined by strategy. For more on that, read the next section.
(5) Drive campaign concepts from strategy.
Instead of spending your time working with the agency to decide which celebrity endorsement to tout in April, develop ongoing programs that address key strategic business problems. For example, among a certain segment of consumers, awareness and trial of your product may be low; for a credit card portfolio, share of revolving balances may be lagging competing cards among certain key segments.
The exact challenge depends on your business situation; what matters is that you choose initiatives that (a) can be influenced through a sustained program of marketing communications and (b) will make a material impact on your business.
Note that “getting lots of clicks in April” satisfies the former but not the latter.
This principle assumes that you have a strategic segmentation in place, because segmentation is to Marketing what maneuver is to warfare. You simply cannot expect to succeed by attempting to appeal to all consumers in the same way. Your choice of initiatives should also demonstrate some awareness of the customer lifecycle; for example, you don’t address current customers in the same way that you address prospective customers or former customers.
When doing this, keep the second and third principles in mind: a campaign concept is only a concept until it is tested. A particular execution may fail market testing, but if you have chosen your initiatives well you will try again using a different approach. Keep in mind that you learn as much from failed market tests as from successful market tests.