How to Optimize Your Marketing Spend

There are formal methods and tools you can use to optimize marketing spend, including software from SAS, IBM and HP (among others).  The usefulness of these methods, however, depends on basic disciplines that are missing from many Marketing organizations.

In this post I’d like to propose some informal rules for marketing optimization.  These do not exclude using formal methods as well — think of them as organizing principles to put in place before you go shopping for optimization software.

(1) Ignore your agency’s “metrics”.

You use agencies to implement your Marketing campaigns, and they will be more than happy to provide analysis showing how much value you’re getting from the agency.  Ignore it.  Asking your agency to measure the results of the campaigns they implement is like asking Bernie Madoff to serve as the custodian for your investments.

Every agency analyst understands that the role of analytics is to make the account team look good.   This influences the analytic work product in a number of ways, from use of bogus and irrelevant metrics to cherry-picking the numbers.

Digital media agencies are very good at execution, and they should play a role in developing strategy.  But if you are serious about getting the most from your Marketing effort, you should have your own people measure campaign results, or engage an independent analytics firm to perform this task for you.

(2) Use market testing to measure every campaign.

Market testing isn’t the gold standard for campaign measurement; it’s the only standard.  The principle is straightforward: you assign marketing treatments to prospects at random, including a control group who receive no treatment.  You then measure subsequent buying behavior among members of the treatment and control groups; the campaign impact is the difference between the two.

The beauty of test marketing is that you do not need a hard link between impressions and revenue at the point of sale, nor do you need to control for other impressions or market noise.  If treatments and controls are assigned at random, any differences in buying behavior are attributable to effects of the campaign.
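
To make the arithmetic concrete, here is a minimal sketch in R of a test-versus-control read.  It assumes one row per prospect with a random group assignment and a purchase flag; the column names and simulated response rates are illustrative, not from any real campaign.

```r
# Minimal test-vs-control sketch; group assignments and response rates are simulated.
set.seed(1)
campaign <- data.frame(
  group     = rep(c("treatment", "control"), each = 50000),
  purchased = c(rbinom(50000, 1, 0.031),          # prospects who received the campaign
                rbinom(50000, 1, 0.028))          # randomly held-out control group
)

rates <- tapply(campaign$purchased, campaign$group, mean)
lift  <- rates["treatment"] - rates["control"]    # incremental response attributable to the campaign

# Two-sample test of proportions: is the lift distinguishable from noise?
buyers <- tapply(campaign$purchased, campaign$group, sum)
counts <- tapply(campaign$purchased, campaign$group, length)
prop.test(buyers[c("treatment", "control")], counts[c("treatment", "control")])
```

Multiply the lift by the number of treated prospects and the average order value, and you have the incremental revenue figure that the third rule below depends on.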

Testing takes more effort to design and implement, which is one reason your agency will object to it.  The other reason is that rigorous testing often shows that brilliant creative concepts have no impact on sales.  Agency strategists tend to see themselves as advocates for creative “branding”; they oppose metrics that expose them as gasbags.  That is precisely why you should insist on it.

(3) Kill campaigns that do not cover media costs.

Duh, you think.  Should be obvious, right?  Think again.

A couple of years ago, I reviewed the digital media campaigns for a big retailer we shall call Big Brand Stores.  Big Brand ran forty-two digital campaigns per fiscal year; stunningly, exactly one campaign — a remarketing campaign — showed incremental revenue sufficient to cover media costs.  (This analysis made no attempt to consider other costs, including creative development, site-side development, program management or, for that matter, cost of goods sold.)

There is a technical term for campaigns that do not cover media costs.  They’re called “losers”.
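
The screen itself is trivial once you have a trustworthy incremental revenue number from testing.  A minimal sketch, with made-up campaign names and figures:

```r
# Flag campaigns whose incremental revenue (measured against a control group)
# does not cover media costs.  All names and numbers below are illustrative.
campaigns <- data.frame(
  campaign            = c("Remarketing", "Spring Banner", "April Concept"),
  incremental_revenue = c(420000, 110000, 95000),
  media_cost          = c(300000, 250000, 400000)
)

campaigns$media_roi <- campaigns$incremental_revenue / campaigns$media_cost
campaigns$verdict   <- ifelse(campaigns$media_roi >= 1, "keep", "loser")

campaigns[order(-campaigns$media_roi), ]   # losers sink to the bottom of the list
```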

The client’s creative and media strategists had a number of excuses for running these campaigns, such as:

  • “We’re investing in building the brand.”
  • “We’re driving traffic into the stores.”
  • “Our revenue attribution is faulty.”

Building a brand is a worthy project; you do it by delivering great products and services over time, not by spamming everyone you can find.

It’s possible that some of the shoppers rummaging through the marked-down sweaters in your bargain basement saw your banner ad this morning.  Possible, but not likely; it’s more likely they’re there because they know exactly when you mark down sweaters every season.

Complaints about revenue attribution usually center on the “last click” versus “full-funnel” debate, a tiresome argument you can avoid by insisting on measurement through market testing.

If you can’t measure the gain, don’t do the campaign.

(4) Stop doing one-off campaigns.

Out of Big Brand’s forty-two campaigns, thirty-nine were one-offs: campaigns run once and never again.  A few of these made strategic sense: store openings, special competitive situations and so forth.  The majority were simply “concepts”, based on the premise that the client needed to “do something” in April.

The problem with one-off campaigns is that you learn little or nothing from them.  The insight you get from market testing enables you to tune and improve one campaign with a well-defined value proposition targeted to a particular audience.  You get the most value from that insight when you repeat the campaign.  Marketing organizations stuck in the one-off trap never build the knowledge and insight needed to compete effectively.  They spend much, but learn nothing.

Allocate no more than ten percent of your Marketing spend to one-off campaigns.  Hold this as a reserve for special situations — an unexpected competitive threat, product recall or natural disaster.  Direct the rest of your budget toward ongoing programs defined by strategy.  For more on that, read the next section.

(5) Drive campaign concepts from strategy.

Instead of spending your time working with the agency to decide which celebrity endorsement to tout in April, develop ongoing programs that address key strategic business problems.  For example, among a certain segment of consumers, awareness and trial of your product may be low; in a credit card portfolio, share of revolving balances may lag behind competing cards among key segments.

The exact challenge depends on your business situation; what matters is that you choose initiatives that (a) can be influenced through a sustained program of marketing communications and (b) will have a material impact on your business.

Note that “getting lots of clicks in April” satisfies the former but not the latter.

This principle assumes that you have a strategic segmentation in place, because segmentation is to Marketing what maneuver is to warfare.  You simply cannot expect to succeed by attempting to appeal to all consumers in the same way.  Your choice of initiatives should also demonstrate some awareness of the customer lifecycle; for example, you don’t address current customers in the same way that you address prospective customers or former customers.

When doing this, keep the second and third principles in mind: a campaign concept is only a concept until it is tested.  A particular execution may fail market testing, but if you have chosen your initiatives well you will try again using a different approach.  Keep in mind that you learn as much from failed market tests as from successful market tests.

2013 Rexer Data Miner Survey

Rexer Analytics published its 2013 Data Miner Survey just before the Holidays, and it’s an excellent read.

As always when working with survey research, one should use some caution in interpreting the results; it’s very difficult to build a representative sample of analysts and data miners.  While it is easy to find fault with Rexer’s sample — which vendors who are unhappy with some of the findings will likely try to do — there is no better survey of working analysts available today.

Key findings:

  • Customer Analytics is the most frequently cited application for analytics:
    • Understanding customers
    • Improving customer experience
    • Customer acquisition, upsell and cross-sell
  • Respondents recognize growing data volumes, but the size of their analytic data sets is stable
    • In other words, one should not confuse managing Big Data with analyzing Big Data
  • R is the most widely used analytic software
    • 70% of respondents say they use R
    • 24% say R is their primary tool, more than any other software
  • Text mining is mainstream; 70% of respondents say they mine text now or plan to start
  • Time to deployment remains an issue; respondents report deployment cycles ranging from weeks to a year or more

One of the most interesting pieces of analysis in the survey is a clustering based on the importance ratings of tool selection criteria.  Rexer’s analysis reveals two principal dimensions in the data, one labeled as “Cost” and the other labeled as “Ease of Use and Interface Quality”.  The largest cluster, which includes respondents who rated everything important, should be discounted as an artifact of questionnaire design; it reflects a phenomenon known as the “wrist effect”, where respondents simply check all of the boxes on one end of the scale.   Of the remaining respondents:

  • Respondents who value the ability to write one’s own code generally do not value ease of use, and vice versa.  These respondents are most likely to cite SAS or R as their primary software
    • Among these users, those who cite the importance of cost are much more likely to cite R as their primary tool
    • Those who place a lower value on cost tend to value the quality of the user interface
  • Respondents who value ease of use and the quality of the user interface are more likely to be new to analytics
    • These respondents are most likely to cite Statistica, Rapid Miner and IBM SPSS Modeler as their primary tool
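
For readers who want to try this kind of analysis on their own survey data, here is a rough sketch of one way to do it: reduce the importance ratings to their principal dimensions, then cluster respondents in the reduced space.  The ratings matrix below is simulated; Rexer’s actual items, scales and method are not reproduced here.

```r
# Cluster respondents on tool-selection importance ratings (simulated data).
set.seed(42)
items   <- c("cost", "ease_of_use", "interface_quality",
             "own_code", "algorithms", "vendor_support")
ratings <- matrix(sample(1:5, 200 * length(items), replace = TRUE),
                  nrow = 200, dimnames = list(NULL, items))

pca      <- prcomp(ratings, scale. = TRUE)       # principal dimensions of the ratings
clusters <- kmeans(pca$x[, 1:2], centers = 4)    # segment respondents in the reduced space

round(pca$rotation[, 1:2], 2)   # how each criterion loads on the first two dimensions
table(clusters$cluster)         # cluster sizes
```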

For more information about the survey and to get a copy, go here.

Analytic Applications (Part Three): Operational Analytics

This is the third post in a series on analytic applications organized by how analytic work product is used within the enterprise.

  • The first post, linked here, covers Strategic Analytics (defined as analytics for the C-Suite)
  • The second post, linked here, covers Managerial Analytics (defined as analytics to measure and optimize the performance of value-delivering units such as programs, products, stores or factories).

This post covers Operational Analytics, defined as analytics that improve the efficiency or effectiveness of a business process.  The distinction between Managerial and Operational analytics can be subtle, and generally boils down to the level of aggregation and frequency of the analysis.  For example, the CMO is interested in understanding the performance and ROI of all Marketing programs, but is unlikely to be interested in the operational details of any one program.  The manager of that program, however, may be intensely interested in its operational details, but have no interest in the performance of other programs.

Differences in level of aggregation and frequency lead to qualitative differences in the types of analytics that are pertinent.  A CMO’s interest in Marketing programs is typically at the level of “keep or kill”: continue funding the program if it’s effective, kill it if it is not.  This kind of problem is well suited to dashboard-style BI combined with solid revenue attribution, activity-based costing and ROI metrics.  The Program Manager, on the other hand, is intensely interested in a range of metrics that shed light not simply on how well the program is performing, but on why it is performing as it is and how to improve it.  Moreover, the Program Manager in this example will be deeply involved in operational decisions such as selecting the target audience, determining which offers to assign, handling response exceptions and managing delivery to schedule and budget.  This is the realm of Operational Analytics.

While any BI package can handle different levels of aggregation and cadence, the problem is made more challenging by the very diverse nature of operational detail across business processes.   A social media marketing program relies on data sources and operational systems that are entirely different from those used by web media or email marketing programs; pre-approved and non-pre-approved credit card acquisition programs do not use the same systems to assign credit lines; some or all of these processes may be outsourced.  Few enterprises have successfully rationalized all of their operational data into a single enterprise data store (nor is it likely they will ever do so).  As a result, it is very rare that a common BI system comprehensively supports both Managerial and Operational analytic needs.  More typically, one system supports Managerial Analytics (for one or more disciplines), while diverse systems and ad hoc analysis support Operational Analytics.

At this level, questions tend to be domain-specific and analysts highly specialized in that domain.  Hence, an analyst who is an expert in search engine optimization will typically not be seen as qualified to perform credit risk analysis.  This has little to do with the analytic methods used, which tend to be similar across business disciplines, and more to do with the language and lingo of the discipline as well as domain-specific technology and regulatory issues.  A biostatistician must understand common health care data formats and HIPAA regulations; a consumer credit risk analyst must understand FICO scores, FISERV formats and FCRA.  In both cases, the analyst must have or develop a deep understanding of the organization’s business processes, because this is essential to recognizing opportunities for improvement and prioritizing analytic projects.

While there is a plethora of ways that analytics improve business processes, most applications fall into one of three categories:

(1) Applied decision systems supporting business processes such as customer-requested line increases or credit card transaction authorizations.  These applications improve the business process by applying consistent, data-driven rules designed to balance risks and rewards.  Analytics embedded in such systems help the organization optimize the tradeoff between “loose” and “tight” criteria, and ensure that decision criteria reflect actual experience.  An analytics-driven decisioning system is faster and more consistent than one based on human decisions, and can take more information into account than a human decision-maker.
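
As a sketch of what “embedded” means in practice, consider a line-increase decision driven by a fitted risk model.  Everything here is hypothetical: the model, the cutoff and the utilization rule are illustrative assumptions, not any issuer’s actual policy.

```r
# Hypothetical applied decision rule for customer-requested line increases.
# risk_model is assumed to be a fitted classifier, e.g. a logistic regression
# built with glm(default ~ ., family = binomial, data = history).
decide_line_increase <- function(requests, risk_model, cutoff = 0.05) {
  p_default <- predict(risk_model, newdata = requests, type = "response")
  ifelse(p_default <= cutoff & requests$utilization < 0.9, "approve", "decline")
}
```

Tightening or loosening the cutoff is how the organization manages the tradeoff between “loose” and “tight” criteria; refitting the model on recent outcomes keeps the criteria aligned with actual experience.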

(2) Targeting and routing systems (such as a text-processing system that reads incoming email and routes it to a customer service specialist).  While applied decision systems in the first category tend to recommend categorical yes/no, approve/decline decisions in a stream of transactions, a targeting system selects from a larger pool of candidates, and may make qualitative decisions among a large number of alternate routings.   The business benefit from this kind of system is improved productivity and reduced processing time as, for example, the organization no longer requires a team to read every email and route it to the appropriate specialist.  Applied analytics make these systems possible.
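
A routing system reduces, in the end, to “score the item against each queue and send it to the best one.”  The sketch below assumes an upstream text model has already produced per-queue scores; the queue names and confidence floor are made up.

```r
# Hypothetical routing step: send each email to the highest-scoring queue,
# falling back to manual review when the model is not confident enough.
route_email <- function(queue_scores, floor = 0.6) {
  best <- names(queue_scores)[which.max(queue_scores)]
  if (max(queue_scores) >= floor) best else "manual_review"
}

route_email(c(billing = 0.82, returns = 0.11, technical = 0.07))   # -> "billing"
```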

(3) Operational forecasting (such as a system that uses projected store traffic to determine staffing levels).   These systems enable the organization to operate more efficiently through better alignment of operations to customer demand.  Again, applied analytics make such systems possible; while it is theoretically possible to build such a system without an analytic forecasting component, it is inconceivable that any management would risk the serious customer service issues that would result without one.  Unlike the first two applications, forecasting systems often work with aggregate data rather than atomic data.
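
Here is a minimal sketch of forecast-driven staffing, assuming a short history of daily store traffic.  The exponential-smoothing model and the one-associate-per-40-visits service ratio are illustrative assumptions, not a recommendation.

```r
# Project next week's store traffic and convert it to a staffing plan.
traffic <- ts(c(900, 950, 1100, 1050, 980, 1200, 1250,
                990, 1020, 1150, 1080, 1010, 1230, 1290))   # daily visits, two weeks

fit        <- HoltWinters(traffic, gamma = FALSE)   # trend-only exponential smoothing
projection <- predict(fit, n.ahead = 7)             # projected visits for the next 7 days

ceiling(projection / 40)   # assumed ratio: one associate per 40 projected visits
```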

For analytic reporting, the ability to flexibly ingest data from operational data sources (internal and external) is critical, as is the ability to publish reports into a broad-based reporting and BI presentation system.

Deployability is the key requirement for predictive analytics; the analyst must be able to publish a predictive model as a PMML (Predictive Model Markup Language) document or as executable code in a choice of programming languages.
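
For example, in R the pmml package (assuming it is installed and supports the model type) will serialize many common model objects into a PMML document that a scoring engine or in-database scorer can consume.  The response model and training data named below are hypothetical.

```r
# Export a (hypothetical) logistic regression response model as PMML.
library(pmml)   # CRAN package that converts fitted models to PMML

response_model <- glm(purchased ~ recency + frequency + monetary,
                      family = binomial, data = training_set)   # training_set assumed to exist

XML::saveXML(pmml(response_model), file = "response_model.pmml")
```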

In the next post, I will cover the most powerful and disruptive form of analytics, what I call Customer-Enabling Analytics: analytics that differentiate your products and services and deliver value to the customer.

Analytic Applications (Part Two): Managerial Analytics

This is the second in a four-part taxonomy of analytics based on how the analytic work product is used.  In the first post of this series, I covered Strategic Analytics, or analytics that support the C-suite.  In this post, I will cover Managerial Analytics: analytics that support middle management, including functional and regional line managers.

At this level, questions and issues are functionally focused:

  • What is the best way to manage our cash?
  • Is product XYZ performing according to expectations?
  • How effective are our marketing programs?
  • Where can we find the best opportunities for new retail outlets?

There are differences in nomenclature across functions, as well as distinct opportunities for specialized analytics (retail store location analysis, marketing mix analysis, new product forecasting), but managerial questions and issues tend to fall into three categories:

  • Measuring the results of existing entities (products, programs, stores, factories)
  • Optimizing the performance of existing entities
  • Planning and developing new entities

Measuring existing entities with reports, dashboards, drill-everywhere (etc.) is the sweet spot for enterprise business intelligence systems.  Such systems are highly effective when the data is timely and credible, reports are easy to use and the system reflects a meaningful assessment framework.  This means that metrics (activity, revenue, costs, profits) reflect the goals of the business function and are standardized to enable comparison across entities.

Given the state of BI technology, analysis teams within functions (Marketing, Underwriting, Store Operations, etc.) spend a surprisingly large amount of time preparing routine reports for managers.  (For example, an insurance client asked my firm to assess the actual work performed by a group of more than one hundred SAS users.  The client was astonished to learn that 80% of the SAS usage could be done in Cognos, which the client also owned.)

In some cases, this is simply due to a lack of investment by the organization in the necessary tools and enablers, a problem that is easily fixed.  More often than not, though, the root cause is the absence of consensus within the function about what is to be measured and how performance should be compared across entities.   In organizations that lack measurement discipline, assessment is a free-for-all in which individual program and product managers seek out customized reports that show their program or product to best advantage; in this environment, every program or product is a winner and analytics lose credibility with management.  There is no technical “fix” for this problem; it takes leadership from management to set out clear goals for the organization and build consensus around an assessment framework.

Functional analysts often complain that they spend so much time preparing routine reports that they have little or no time to perform analytics that optimize the performance of existing entities.  Optimization technology is not new, but tends to be used more pervasively in Operational Analytics (which I will discuss in the next post in this series).   Functionally focused optimization tools for management decisions have been available for well over a decade, but adoption is limited for several reasons:

  • First, an organization stuck in the “ad hoc” trap described in the previous paragraph will never build the kind of history needed to optimize anything.
  • Second, managers at this level tend to be overly optimistic about the value of their own judgment in business decisions, and resist efforts to replace intuitive judgment with systematic and metrics-based optimization.
  • Finally, in areas such as Marketing Mix decisions, constrained optimization necessarily means choosing one entity over another for resources; this is inherently a leadership decision, so unless functional leadership understands and buys into the optimization approach it will not be used.

Analytics for planning and developing new entities (such as programs, products or stores) usually require information from outside of the organization, and may also require skills not present in existing staff.  For both reasons, analytics for this purpose are often outsourced to providers with access to pertinent skills and data.  For analysts inside the organization, technical requirements look a lot like those for Strategic Analytics: the ability to rapidly ingest data from any source combined with a flexible and agile programming environment and functional support for a wide range of generic analytic problems.

In the next post in this series, I’ll cover Operational Analytics, defined as analytics whose purpose is to improve the efficiency or effectiveness of a business process.

Analytic Applications (Part One)

Conversations about analytics tend to get muddled because the word describes everything from a simple SQL query to climate forecasting.  There are several different ways to classify analytic methods, but in this post I propose a taxonomy of analytics based on how the results are used.

Before we can define enterprise best practices for analytics, we need to understand how they add value to the organization.  One should not lump all analytics together because, as I will show, the generic analytic applications have fundamentally different requirements for people, processes and tooling.

There are four generic analytic applications:

  • Strategic Analytics
  • Managerial Analytics
  • Operational Analytics
  • Customer-Enabling Analytics

In today’s post, I’ll address Strategic Analytics; the rest I’ll cover in subsequent posts.

Strategic Analytics directly address the needs of the C-suite.  This includes answering non-repeatable questions, performing root-cause analysis and supporting make-or-break decisions (among other things).   Some examples:

  • “How will Hurricane Sandy impact our branch banks?”
  • “Why does our top-selling SUV turn over so often?”
  • “How will a merger with XYZ Co. impact our business?”

Strategic issues are inherently not repeatable and fall outside of existing policy; otherwise the issue would be delegated.   Issues are often tinged with a sense of urgency, and a need for maximum credibility; when a strategic decision must be taken, time is of the essence, and the numbers must add up.   Answers to strategic questions frequently require data that is not readily accessible and may be outside of the organization.

Conventional business intelligence systems do not address the needs of Strategic Analytics, due to the ad hoc and sui generis nature of the questions and supporting data requirements.   This does not mean that such systems add no value to the organization; in practice, the enterprise BI system may be the first place an analyst will go to seek an answer.  But no matter how good the enterprise BI system is, it will never be sufficiently complete to provide all of the answers needed by the C-suite.

The analyst is key to the success of Strategic Analytics.  This type of work tends to attract the best and most capable analysts, who are able to work rapidly and accurately under pressure.  Backgrounds tend to be eclectic: an insurance company I’ve worked with, for example, has a strategic analysis team that includes an anthropologist, an economist, an epidemiologist and a graduate of the local community college who worked her way up in the Claims Department.

Successful strategic analysts develop domain, business and organizational expertise that lends credibility to their work.  Above all, the strategic analyst takes a skeptical approach to the data, and demonstrates the necessary drive and initiative to get answers.  This often means doing hard stuff, such as working with programming tools and granular data to get to the bottom of a problem.

More often than not, the most important contribution of the IT organization to Strategic Analytics is to stay out of the way.  Conventional IT production standards are a bug, not a feature, in this kind of work, where the sandbox environment is the production environment.  Smart IT organizations recognize this, and allow the strategic analysts some latitude in how they organize and manage data.   Dumb IT organizations try to force the strategic analysis team into a “Production” framework.  This simply inhibits agility, and encourages top executives to outsource strategic issues to outside consultants.

Analytic tooling tends to reflect the diverse backgrounds of the analysts, and can be all over the map.  Strategic analysts use SAS, R, Stata, Statsoft, or whatever else gets the work done, and drop the results into PowerPoint.  One of the best strategy analysts I’ve ever worked with used nothing other than SQL and Excel.  Since strategic analysis teams tend to be small, there is little value in demanding use of a single tool set; moreover, most strategic analysts want to use the best tool for the job, and prefer niche tools that are optimized for a single problem.

The most important common requirement is the capability to rapidly ingest and organize data from any source and in any format.  For many organizations, this has historically meant using SAS.  (A surprisingly large number of analytic teams use SAS to ingest and organize the data, but perform the actual analysis with other tools.)    Growing data volumes, however, pose a performance challenge for the conventional SAS architecture, so analytic teams increasingly look to data warehouse appliances like IBM Netezza, to Hadoop, or to a combination of the two.

In the next post, I’ll cover Managerial Analytics, which includes analytics designed to monitor and optimize the performance of programs and products.

Recent Books on Analytics

For your Christmas gift list,  here is a brief roundup of four recently published books on analytics.

Business Intelligence in Plain Language by Jeremy Kolb (Kindle Edition only) is a straightforward and readable summary of conventional wisdom about Business Intelligence.  Unlike many guides to BI, this book devotes some time and attention to data mining.  As an overview, however, Mr. Kolb devotes too little attention to the most commonly used techniques in predictive analytics, and too much attention to more exotic methods.  There is nothing wrong with this per se, but given the author’s conventional approach to implementation it seems eccentric.  At $6.99, though, even an imperfect book is a pretty good value.

Tom Davenport’s original Harvard Business Review article Competing on Analytics is one of the ten most-read articles in HBR’s history; Google Trends shows a spike in search activity for the term “analytics” concurrent with its publication, and steady growth in interest since then.  Mr. Davenport’s latest book, Enterprise Analytics: Optimize Performance, Process, and Decisions Through Big Data, is a collection of essays by Mr. Davenport and members of the International Institute of Analytics, a commercial research organization funded in part by SAS.   (Not coincidentally, SAS is the most frequently mentioned analytics vendor in the book.)  Mr. Davenport defines enterprise analytics in the negative, e.g. not “sequestered into several small pockets of an organization — market research, or actuarial or quality management”.    Ironically, though, the best essays in this book are about narrowly focused applications, while the worst essay, The Return on Investments in Analytics, is little more than a capital budgeting primer for first-year MBA students, with the word “analytics” inserted.  This book would benefit from a sharper definition of enterprise analytics, a clearer case for the value of “unsequestering” analytics from departmental silos, and more guidance on exactly how to make that happen.

Jean-Paul Isson and Jesse Harriott have hit a home run with Win with Advanced Business Analytics: Creating Business Value from Your Data, an excellent survey of the world of Business Analytics.   This book combines an overview of traditional topics in business analytics (with a practical “what works/what does not work” perspective) with timely chapters on emerging areas such as social media analytics, mobile analytics and the analysis of unstructured data.  A valuable contribution to the business library.

The “analytical leaders” featured in Wayne Eckerson’s  Secrets of Analytical Leaders: Insights from Information Insiders — Eric Colson, Dan Ingle, Tim Leonard, Amy O’Connor, Ken Rudin, Darren Taylor and Kurt Thearling — are executives who have actually done this stuff, which distinguishes them from many of those who write and speak about analytics.  The practical focus of this book is apparent from its organization — departing from the conventional wisdom of how to talk about analytics, Eckerson focuses on how to get an analytics initiative rolling, and keep it rolling.  Thus, we read about how to get executive support for an analytics program, how to gain momentum, how to hire, train and develop analysts, and so forth.  Instead of writing about “enterprise analytics” from a top-down perspective, Eckerson writes about how to deploy analytics in an enterprise — which is the real problem that executives need to solve.

What Business Practices Enable Agile Analytics?

Part four in a four-part series.

We’ve mentioned some of the technical innovations that support an Agile approach to analytics; there are also business practices to consider.   Some practices from Agile software development apply as well to analytics as to any other project, including the need for a sustainable development pace; close collaboration; face-to-face conversation; motivated and trustworthy contributors; and continuous attention to technical excellence.  Additional practices pertinent to analytics include:

  • Commitment to open standards architecture
  • Rigorous selection of the right tool for the task
  • Close collaboration between analysts and IT
  • Focus on solving the client’s business problem

More often than not, customers with serious cycle time issues are locked into closed single-vendor architecture.  Lacking an open architecture to interface with data at the front end and back end of the analytics workflow, these organizations are forced into treating the analytics tool as a data management tool and decision engine; this is comparable to using a toothbrush to paint your house.  Server-based analytic software packages are very good at analytics, but perform poorly as databases and decision engines.

Agile analysts take a flexible, “best-in-class” approach to solving the problem at hand.  No single vendor offers “best-in-class” tools for every analytic method and algorithm.  Some vendors, like KXEN, offer unique algorithms that are unavailable from other vendors; others, like Salford Systems, have specialized experience and intellectual property that enables them to offer a richer feature set for certain data mining methods.  In an Agile analytics environment, analysts freely choose among commercial, open source and homegrown software, using a mashup of tools as needed.

While it may seem like a platitude to call for collaboration between an organization’s analytics community and the IT organization, we frequently see customers who have developed complex processes for analytics that either duplicate existing IT processes, or perform tasks that can be done more efficiently by IT. Analysts should spend their time doing analysis, not data movement, management, enhancement, cleansing or scoring; but surveyed analysts typically report that they spend much of their time performing these tasks.  In some cases, this is because IT has failed to provide the needed support; in other cases, the analytics team insists on controlling the process.   Regardless of the root cause, IT and analytics leadership alike need to recognize the need for collaboration, and an appropriate division of labor.

Focusing the analytics effort on the client’s business problem is essential for the practice of Agile analytics.  Organizations frequently get stuck on issues that are difficult to resolve because the parties are focused on different goals; in the analytics world, this takes the form of debates over tools, methods and procedures.  Analysts should bear in mind that clients are not interested in winning prizes for the “best” model, and they don’t care about the analyst’s advanced degrees.   Business requires speed, agility and clarity, and analysts who can’t deliver on these expectations will not survive.

What Is Driving Interest in Agile Analytics?

Part three in a four-part series.

A combination of market forces and technical innovation drive interest in Agile methods for analytics:

  • Clients require more timely and actionable analytics
  • Data warehouses have reduced latency in the data used by predictive models
  • Innovation directly impacts the analytic workflow itself

Business requirements for analytics are changing rapidly, and clients demand predictive analytics that can support decisions today.  For example, consider direct marketing:  ten years ago, firms relied mostly on direct mail and outbound telemarketing; marketing campaigns were served by batch-oriented systems, and analytic cycle times were measured in months or even years.  Today, firms have shifted that marketing spend to email, web media and social media, where cycle times are measured in days, hours or even minutes.  The analytics required to support these channels are entirely different, and must operate at a digital cadence.

Organizations have also substantially reduced the latency built into data warehouses.  Ten years ago, analysts frequently worked with monthly snapshot data, delivered a week or more into the following month.  While this is still the case for some organizations, data warehouses with daily, inter-day and real-time updates are increasingly common.  A predictive model score is as timely as the data it consumes; as firms drive latency from data warehousing processes, analytical processes are exposed as cumbersome and slow.

Numerous innovations in analytics create the potential to reduce cycle time:

  • In-database analytics eliminate the most time-consuming tasks: data marshalling and model scoring
  • Tighter database integration by vendors such as SAS and SPSS enables users to achieve hundred-fold runtime improvements for front-end processing
  • Enhancements to the PMML standard make it possible for firms to integrate a wide variety of end-user analytic tools with high-performance data warehouses

All of these factors taken together add up to radical reductions in time to deployment for predictive models.  Organizations used to take a year or more to build and deploy models; a major credit card issuer I worked with in the 1990s needed two years to upgrade its behavior scorecards.  Today, IBM Netezza customers who practice Agile methods can reduce this cycle to a day or less.