Enterprises should no longer be asking whether they have the data, APT's VP of client services Rupert Naylor said in a telephone interview with TechRepublic, but how best to analyze it and make better sales and marketing decisions. Applied Predictive Technologies (APT) is a cloud-based predictive analytics software company headquartered in Washington, D.C.
I spoke with Naylor about the June 2014 report "Decisive action: How businesses make decisions and how they could do it better," which was commissioned by APT and written by The Economist Intelligence Unit (EIU). The report examines business leaders' approaches to decision making in the era of big data.
APT Founder and Chairman Jim Manzi wrote in the report foreword that "it's no longer enough to use intuition — which is ultimately rooted in one's prior experience — as a basis for making decisions" in a rapidly changing business environment that is full of data.
While 59% of respondents in the report said they are "data-driven," 68% said they would be trusted to make a decision not supported by data. Close to three-quarters said they trust their intuition, and 57% said they would reanalyze the data when a result contradicted their gut feeling. In addition, 45% of respondents who said they are growing faster than their competition also use predictive analytics in their decision-making process.
TechRepublic: What is the big takeaway message from this report for top-level executives and decision makers?
Rupert Naylor: I think the big takeaway is that you should no longer be debating whether you have the data or not. You should be thinking about how to use it and analyze it.
If you look at the companies that are growing faster than their peers [in the report], those are the companies that are analyzing their data and running business experiments, running tests to drive their decisions. 45% of these fast-growing companies were using tests to make decisions, and only 10% of those growing more slowly than their competition were using that approach.
It is clear that we have an analytic tool with big data to drive better decision making, and the companies that are more advanced on that path are really driving superior performance. That to me was one of the major takeaways.
TechRepublic: What has your experience with APT taught you about decision making in organizations? Did the report change or confirm your thinking?
Rupert Naylor: I have been at APT for about a year now, and it's clear that some companies still struggle with how to make decisions. What we often see is that there will be a big decision on the table, and each of a company's functional teams will be using data to underpin their point of view on the initiative. Often it will be marketing that has a particular program it wants to run, whether a promotional campaign or something similar. Meanwhile, finance will be acting as gatekeeper, and the two sides disagree.
There you will see that in some organizations it is politics that come into play in making a big decision. In the course of a day, you or I make hundreds of small decisions that are informed simply by experience. But when it comes to a decision that is more complex, it is increasingly clear that companies want that decision to be underpinned by business data, by some sort of data-based fact.
Because I was working in consulting before APT, I have actually seen a shift over the past few years from "where do we get the data to support this?" to "are we reading the data correctly?" and "are we looking at this the right way?"
Companies still struggle with making decisions. But they benefit from what you might call having a single version of the truth, the underlying data and an agreed methodology. The debate is moving from "where is the data?" to "what is the right way to answer this with data?"
TechRepublic: In the report, 57% of respondents said that if the data conflicts with their gut feeling they would choose to reanalyze the data. The report paints this as a gap — what is driving this gap in your view?
Rupert Naylor: At a high level, what you have to understand is that the human element should not be taken out of the equation. A lot of analytics solutions are automated. But ultimately, if you are a human in an organization, you've got to be able to say this is what has happened, and this is why we think it has happened. And part of that is just checking the data.
A few weeks ago we were working with a client who was trying to analyze the impact of putting a product flyer inside a newspaper. They had run these tests in that area before, and got the general message that, yes, running these flyers is beneficial, and that certain products work best at certain times of the year.
They were running another test just to tease out a little more detail, and it showed no effect at all from having this flyer. We took a look, and everything seemed okay. They had not done anything crazy, it was all very sensible. And we said, are you sure the test actually ran? They went to their media agency, and they discovered that the media agency had forgotten to run the test.
There are several elements here. One is that the data is giving you a counterintuitive answer. But the reason is not that the data is wrong; it is human error in the system. That element of gut feel is, I guess, part of becoming confident in the answer, and in that instance it did make our client more confident.
I think the issue you were raising in your initial question relates to Jim Manzi's quote [in the foreword of the report], that your intuition is rooted in your historical experience. But the business world and the environment we're living in are moving so quickly that you have to make decisions about things that are not in your past experience. If you make them based on gut feel, and allow that to overrule what the data and information are telling you, you will end up making the wrong decision.
TechRepublic: What are the traditional barriers to decision making, and how can big data change them?
Rupert Naylor: One typical barrier in decision making is not actually deciding what you are basing your decision on, and what you are actually looking for. The second is a lack of trust in how the data is being read.
For example, say you are redesigning the aisles in a retail store. What would happen is that people would come up with some way of comparing that store with stores not being tested, a comparison that wasn't necessarily statistically significant. Because there is not a whole lot of trust in that, people would then become quite anecdotal. The test store closest to the head office would be used as the example of whether or not the plan was working.
You can imagine this happening anecdotally: "I have been to that store on my way home and I don't see that it's getting any more people buying the laundry detergent," or whatever it may be. And that is because people couldn't separate the signal from the noise. You have probably heard of that expression — there is a book called The Signal and the Noise.
Having a lot of data allows you to capture consumer behavior and environmental characteristics. By being able to understand consumer behavior much more closely, you are able to tease out what is happening because of some initiative, what's happening because they redesigned the stores, or what's happening because of some other external factor, such as the weather on a particular day, or a road closure, or people in an area having a lot of children.
These are the factors that can skew results, unless you have a properly set up control group on which to base a test that you are running. Now that you have the data, you are able to come up with much cleaner reads. And that is shifting people from "is this working or not?" to actually being able to see what is happening, and not spending their time debating whether the data is right, but spending their time interpreting the results and then planning business actions on that basis.
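The test-versus-control read Naylor describes can be sketched with a simple significance check. The figures and the Welch's t statistic below are illustrative only — hypothetical weekly sales numbers, not APT's actual methodology:

```python
# A minimal sketch of a test-vs-control read using Welch's t statistic.
import statistics

def welch_t(test, control):
    """Welch's t statistic for two independent samples of unequal variance."""
    m1, m2 = statistics.mean(test), statistics.mean(control)
    v1, v2 = statistics.variance(test), statistics.variance(control)
    n1, n2 = len(test), len(control)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

# Hypothetical weekly sales in redesigned (test) vs. untouched (control) stores.
test_stores    = [102, 110, 98, 115, 107, 111, 104, 109]
control_stores = [ 95,  99, 97, 101,  96, 100,  94,  98]

t = welch_t(test_stores, control_stores)
# A |t| well above ~2 suggests the lift is unlikely to be noise alone.
print(f"t = {t:.2f}")
```

With a matched control group in place, a read like this replaces the anecdotal "I drove past the store" judgment with a number that separates signal from noise.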
TechRepublic: Only 19% of respondents feel that decision makers are held accountable. That's a pretty low figure. What are your insights into that result?
Rupert Naylor: It was surprising to me as well. By running tests, however, you are able to break big decisions into smaller pieces, so the risk attached to any single decision is smaller.
If you want to do a major retail store project, it is going to cost you millions and millions of dollars. But if you do it in just 20 stores, and it turns out to be a disaster, it is better to know that for having done it in just 20 stores at a cost of $1 million, rather than doing it in 1,000 stores at 50 times the cost. That is one slightly self-serving example of how those decisions become less risky.
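The downside math in Naylor's example is worth making explicit. Using the figures from the interview (20 pilot stores, $1 million, versus 1,000 stores at 50 times the cost — a per-store cost is inferred for illustration):

```python
# Pilot-vs-rollout exposure, using the figures from the interview.
PILOT_STORES = 20
TOTAL_STORES = 1_000
COST_PER_STORE = 50_000  # inferred: $1M pilot / 20 stores

pilot_cost = PILOT_STORES * COST_PER_STORE      # $1,000,000
rollout_cost = TOTAL_STORES * COST_PER_STORE    # $50,000,000

# If the initiative flops, the pilot caps the loss at 2% of the full exposure.
print(f"worst case avoided: ${rollout_cost - pilot_cost:,}")
```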
But I am with you — it is certainly surprising. The tools are there now to analyze the effects of initiatives and decisions. I guess that comes back to political culture — how accountability is measured, and what people are measured on.
Disclosure: Brian Taylor will do client work for AtTask.
Brian Taylor is a contributing writer for TechRepublic. He covers the tech trends, solutions, risks, and research that IT leaders need to know about, from startups to the enterprise. Technology is creating a new world, and he loves to report on it.