The Science of Success Podcast

The Power of Experiments: How To Drive Innovation and Opportunity During Times of Uncertainty with Stefan Thomke

In this episode we share the power of the experimental mindset. How can you use experiments to make better decisions and improve your life? What makes for good experiments? We share all this and more with our guest Stefan Thomke. 

Stefan Thomke is a professor at Harvard Business School. He has worked with global firms on product, process, and technology development; organizational design and change; and strategy. He is a widely published author with articles in leading journals and is also the author of the new book EXPERIMENTATION WORKS: The Surprising Power of Business Experiments. He is the recipient of numerous awards, including Harvard awards for innovative teaching!

  • Innovation is about dealing with uncertainty. 

  • Business is fundamentally about making decisions under conditions of uncertainty. 

  • The good news is that uncertainty creates opportunity. 

  • When we’re dealing with uncertainty we usually rely on experience and intuition. 

  • When you’re analyzing a lot of things in your business, you often see a lot of correlation but not much actual causation. 

  • What is innovation? 

    • Novelty + Value

  • You can innovate across the whole spectrum of your business

    • Products, services, customer experience, technology, process, business model innovations

  • Most people think that innovation needs to be a huge, disruptive breakthrough - but most innovation is incremental, and incremental innovation can often create a huge impact over time. 

  • Small changes can have a massive impact on performance 

  • A big change is usually the result of the sum of many small changes

  • Even successful business people have about a 10% success rate when they conduct their experiments. You’re much more likely to get it wrong than to get it right. 

  • It’s desirable to have a fairly low success rate in your experiments; if your success rate is too high, you’re not pushing innovation hard enough. 

  • There’s a difference between a mistake and a failure. Mistakes are failures to execute operationally. Failures, however, are different: they are at the heart of how innovation works. 

  • Failure results from asking a question and testing a hypothesis. 

  • What is an experiment? (Especially in the context of your business)

    • In a perfect experiment, the tester separates an independent variable (the presumed cause) from the dependent variable (the observed effect) while holding everything else constant. 

    • The key is to only change ONE thing and then see what the result is. 

    • This is hard to do in business. 

    • The best way to account for the constant change in business is to randomize: over a big enough data set, randomly assign subjects to the conditions of an A/B test. 

    • Randomization helps equalize the distribution of all causes except for the cause being tested.

  • An observational study is an experiment without any controls. 
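
The randomization idea in the notes above can be sketched in a few lines of code. This is a toy illustration, not anything from the book: the bonus scenario, the effect size, and the `revenue` function are all made-up assumptions for the example.

```python
import random

def run_ab_test(subjects, measure, seed=42):
    """Randomly split subjects into control and treatment groups, then
    compare the average outcome (the dependent variable) between them.
    Randomization spreads every other cause (weather, health, mood)
    evenly across both groups on average."""
    rng = random.Random(seed)
    control, treatment = [], []
    for s in subjects:
        # A coin flip decides the group assignment.
        (treatment if rng.random() < 0.5 else control).append(s)
    control_avg = sum(measure(s, treated=False) for s in control) / len(control)
    treatment_avg = sum(measure(s, treated=True) for s in treatment) / len(treatment)
    return control_avg, treatment_avg

# Hypothetical example: a sales bonus whose true effect is a lift of 10,
# on top of noisy person-specific baseline revenue.
def revenue(salesperson_id, treated):
    baseline = 100 + random.Random(salesperson_id).gauss(0, 5)
    return baseline + (10 if treated else 0)

control_avg, treatment_avg = run_ab_test(range(1000), revenue)
lift = treatment_avg - control_avg  # estimate of the bonus effect, near 10
```

By contrast, comparing one month without a bonus to a later month with one skips the random assignment step entirely; that is the observational study described above, where other causes are free to pollute the comparison.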

  • How do you build an experimentation capability in your business?

    • You need an infrastructure.

    • You need the tools. 

    • Even in a brick and mortar environment there are tools you can use. 

    • There are lots of third party tools that are available now for running experiments. 

    • The tools are the easiest part. The harder part is to develop a culture of experimentation. 

    • What’s the right organizational design? 

  • You need to create a culture and norms that make experimentation a part of your culture and your business. 

  • Cultural pillars of experimentation

    • Curiosity.

      • You need a curious environment. You need a lot of hypotheses to test. 

    • Data trumps opinion (most of the time). 

      • This is really difficult. We happily accept results that are supported by our intuition, but we have a hard time accepting results that go against our intuition. 

    • Democratize experimentation. 

      • Empower people to run experiments without getting permission every single time. 

    • Ethics

    • Embrace a different leadership model

  • What are the leadership changes necessary to embrace experimentation in your business?

    • Leaders need to acknowledge that they are sometimes part of the problem. 

  • “HIPPOs” can be very dangerous. 

    • Highest Paid Person’s Opinion

  • The leader has to set a GRAND CHALLENGE that can be broken into TESTABLE HYPOTHESES that aim toward the goal.

  • How do you scale this methodology down to smaller businesses?

    • Adopt A/B Testing

    • Leverage the tools available, they can be very inexpensive

  • How do you overcome low sample size? 

    • Bigger changes need smaller sample sizes. 

    • Small changes need bigger sample sizes. 
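
That trade-off between effect size and sample size can be made concrete with the standard two-proportion sample-size approximation. The numbers below are illustrative assumptions, and the default z-values correspond to the conventional 5% significance level and 80% power.

```python
import math

def subjects_per_group(baseline_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group to detect `lift` on top of
    `baseline_rate` in an A/B test on conversion rates, using the
    two-proportion normal approximation (5% significance and 80% power
    with the default z-values)."""
    p1, p2 = baseline_rate, baseline_rate + lift
    p_bar = (p1 + p2) / 2  # pooled rate under the null hypothesis
    n = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / lift ** 2
    return math.ceil(n)

# A big change (5-point lift on a 10% baseline) needs a few hundred
# subjects per group; a small change (0.5-point lift) needs tens of
# thousands. Bigger changes need smaller samples, and vice versa.
n_big_change = subjects_per_group(0.10, 0.05)
n_small_change = subjects_per_group(0.10, 0.005)
```

This is why small businesses can still test bold changes with modest traffic, while fine-grained optimizations are realistic only at large scale.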

  • What do you use experiments for at a smaller organization?

    • Small Optimizations?

    • You can also run exploration experiments, where you explore a direction. You may not establish causality, but you can get a sense of direction and then pursue smaller experiments that get closer to causality.

  • Experimentation is the engine of innovation. 

  • Homework: Acknowledge that experimentation matters. Then adopt a disciplined framework and start thinking about the basics of experiment design. Just get started, don’t worry too much about scale at the beginning.

Thank you so much for listening!

Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).

The Science of Success is brought to you by MetPro, a world-renowned concierge nutrition, fitness, and lifestyle coaching company. Using metabolic profiling, MetPro’s team of experts analyzes your metabolism and provides an individualized approach to attaining your goals.

Science of Success listeners receive a complimentary Metabolic Profiling assessment and a 30-minute consultation with a MetPro expert. To claim this offer head to metpro.co/success

MetPro’s team of experts will guide you through personalized nutrition and fitness strategies and educate you about how your body responds to macro and micro-adjustments to your fitness, nutrition, and daily routine.

Starting your own business is an incredible feat. 
It's a labor of love which makes getting through the late nights, early mornings, and occasional all-nighter so worth it. It's no secret that business owners are incredibly busy!

So why not make things easier?

FreshBooks invoicing and accounting software is simple, intuitive, and keeps you incredibly organized. Create and send professional invoices in thirty seconds. FreshBooks helps you get paid two times faster with automated online payments.

Go to FreshBooks.com/science today and enter THE SCIENCE OF SUCCESS in the how did you find us section and get started today!!

Episode Transcript

[00:00:04.4] ANNOUNCER: Welcome to The Science of Success. Introducing your host, Matt Bodnar.

[0:00:11.8] MB: Welcome to the Science of Success; the number one evidence-based growth podcast on the Internet with more than five million downloads and listeners in over a hundred countries.

In this episode, we share the power of the experimental mindset. How can you use experiments to make better decisions and improve your life? What makes for a good experiment? We share all of this and much more with our guest, Stefan Thomke.

In our previous episode, we shared how to memorize a deck of cards in less than 60 seconds, how to remember anything, and hacks from one of the world's leading memory experts, our previous guest, Nelson Dellis.

Are you a fan of the show and have you been enjoying the content that we put together for you? If you have, I would love it if you signed up for our e-mail list. We have some amazing content on there, along with a really great free course that we put a ton of time into called How To Create Time for What Matters Most In Your Life. If that sounds exciting and interesting and you want a bunch of other free goodies and giveaways along with that, just go to successpodcast.com. You can sign up right on the homepage. That’s successpodcast.com. Or if you’re on your phone right now, all you have to do is text the word smarter, that’s S-M-A-R-T-E-R to the number 44-222.

Now, for our interview with Stefan.

[0:01:35.0] MB: Stefan Thomke is a professor at Harvard Business School. He has worked with global firms on product, process and technology development, organizational design and change and strategy. He is a widely published author with articles in leading journals and is also author of the new book Experimentation Works: The Surprising Power of Business Experiments. He is the recipient of numerous awards, including awards from Harvard for innovative teaching and much more. Stefan, welcome to the Science of Success.

[0:02:03.9] ST: Well, great to be here. Thanks, Matt.

[0:02:05.7] MB: Well, we're so excited to have you on the show today. There's so many insights. Experimentation has always been something that I thought is so important and I'm really excited to bring you on here and dig into it. I want to start out with something that a lot of business leaders and business people today, when they're making decisions, what are the current tools that they're using to inform those decisions and why might those not be necessarily the best approach?

[0:02:34.9] ST: Well, when you're thinking about decisions, depending on what decision you make, there are various tools available. If you're making financial decisions, for example, you're calculating net present value and things like that. There's a whole arsenal of tools. The big issue, and this is what my book is really about, is innovation, because innovation is fundamentally about uncertainty. This is really about decision-making under uncertainty.

Usually in organizations, we're all about driving out uncertainty. In fact, a lot of the traditional tools are about eliminating, or minimizing uncertainty. In innovation, uncertainty actually creates opportunity. I always tell folks that in innovation, uncertainty is your friend, uncertainty and variability is your friend, because it creates opportunity for someone else to move into that space.

Now why is uncertainty so difficult, then? Why is decision-making under uncertainty so difficult? Well, it helps to be a little bit more precise here about what uncertainty really means. When it comes to innovation, you face different kinds of uncertainties in a company every single day. First there is R&D uncertainty. That is, when you're trying to create something new, and it could be a product, or a service, or a customer experience, does it actually work as intended?

Then we have scale-up uncertainty. We make something work, but can we scale it up? Can we make it at large volume, high quality, reasonable cost and so forth? Then we have customer experience uncertainty. For customer-facing innovations, do we really know that the customers want what we are creating? Are they willing to pay for it? Lots of questions.

Then finally, there is what I call business uncertainty. If you're running a business, you need to make an investment decision. Again, the tools that we typically use for these kinds of things are net present value, internal rate of return and these kinds of things. The reality of course is that when you're dealing with innovation and uncertainty, often you are the one who is actually creating the market. You're creating the segment. How do you put a net present value on something that doesn't exist yet?

How do we deal with this, Matt? Well, we rely on experience. Experience can really get in the way for a whole myriad of reasons. Then some of the listeners may say, “Well, but now we live in a world of big data and analytics, and we can use all that to make decision-making better.” Here, we run into another set of problems. That is, if something is really novel, by definition there is less data, because if there was a lot of data around, that means someone has already done it before; it wouldn't be very novel. Then of course, context matters; something that works in one context doesn't work in another context.

Then third, I think, and this is a big problem that I'm happy to go more deeply into, when you're running analysis on a lot of data, you get correlations. Correlation means that one variable changes along with another variable; they co-vary. You don't really get information about causation. Of course, we're really interested in causation. We want to know that if I take an action, I get a certain outcome.

You can see where the challenges come in with traditional decision-making and of course, that's where the experiment comes in, because the experiment allows us to address some of these dilemmas, and a well-designed controlled experiment will actually tell me something about causality.

[0:06:22.2] MB: Yeah, that's a great insight into what uncertainty is and how we start to think about making decisions under conditions of uncertainty. That topic especially has been one that we've really strived hard to answer on the podcast. I'm curious, coming back to innovation, before we even dig into experimentation which is a huge component of this. Tell me about innovation and what is the actual definition of innovation and what is the difference between that and things like invention and what people often perceive innovation being.

[0:06:55.9] ST: Well, when we think about innovation, we really think about two things; first element on this is of course, novelty. That's what usually comes to mind right away, but then there's also value. It's novelty, plus value. That makes it very different than the word ‘invention’. The word ‘invention’ is usually associated with patents. For those of you who have your name on a patent, know that there's no value requirement in a patent. Just has to be new and non-obvious and never published before.

Invention is an input to innovation, but it's not quite the same thing. In fact, I've seen companies that have lots of patents that created no value for anybody. Now the outputs of innovation could be many things; could be products, could be services, it could be new customer experiences. Then of course, it could be processes. I've seen companies that are really great at process innovation. It could be new technologies. Then there's perhaps one of the most difficult things for companies to do: business model innovation. How do you create a new business model while you're trying to make money with an existing business model?

Now when we think about innovation, we also think about different degrees. Often, Matt, when people talk about innovation, they often think about disruption, or breakthroughs and these kinds of things. Well, most innovation in the world is incremental. I think it's actually perfectly okay, because incremental innovation is more predictable. Incremental innovation is something that everybody can do. If I told all your listeners, “From tomorrow morning on, you're going to be a disruptive innovator,” most of us wouldn't know what to do. Like, “Do I come to work late? Do I dress differently? I mean, what do I actually do?”

Then of course, incremental innovation in the digital age is a little different from how we traditionally think about incremental innovation. In the past, Matt, we associated incremental innovation with incremental changes in performance. In the digital world, that's no longer true. In fact, incremental small changes can have a massive impact on performance, because if you are digital, you can scale things instantly, possibly to hundreds of millions of people.

Even a 2% or 3% or 4% change that's considered to be small can actually have tens or even hundreds of millions of dollars in revenue impact. What is innovation? Well, it's all of this, all of what I just described. To do it, you really need different models of approaching it.

[0:09:42.2] MB: That's a great point. You have a really good story about Bing and a very small change that they made there that led to a huge impact, because – I'd like to hear that, because it's so important to understand that almost the power of compound interest, these little changes can accrue and create huge results.

[0:10:02.3] ST: Absolutely, Matt. A big change is usually the result of the sum of many small changes. The Microsoft example, or the Bing example, is a fascinating one. There was a Microsoft employee who was working on the search engine, of course, and had an idea about changing the way ad headlines were displayed. He thought that by taking some of the subtext in the headline and moving it up, making the headline longer, he could actually have an impact on user engagement.

The employee showed this to a manager and the manager looked at it and wasn't really sure whether this would lead to anything, because you can imagine that when you're adding more to a headline, maybe users will not read the headline because it's too long. In any case, the manager basically didn't pick up on it and the idea just lingered. It wasn't a complex idea; it would only take a few days to actually make the changes.

It lingered, and then after six months or so, the engineer I think got a little impatient and decided just to go ahead with it, I assume without management permission, and just launched this thing. Within hours, an alarm goes off. Now Bing, or Microsoft, has lots of KPIs that they monitor automatically. When something unusual happens, different kinds of alarms go off; this was an alarm called a too-good-to-be-true alarm. Something really strange happened when he launched this thing.

Immediately when the alarm goes off, an investigation begins. Usually when you get a too-good-to-be-true alarm, there's some coding error, except they couldn't find one. They ran it again and the result replicated. Now what's even more amazing is that that change, which by the way only took a few days of time, led to an astonishing 12% increase in revenue. This was more than 100 million dollars in just that one year alone and of course, more than 100 million dollars in subsequent years.

Now what made the difference here? Well, the difference is the ability of an employee to actually launch the experiment and find out, because if the employee never launched the experiment, they would have never known. It's all about opportunity cost. It's an amazing story, where a small change led to a massive impact on revenue. In fact, it turns out that people at Microsoft told me that this was the biggest, most significant change or experiment that they ran in the history of Bing.

[0:12:47.3] MB: It's amazing. It reminds me of some of the research that has been done around creativity, which comes to a similar conclusion, which is that it's really, really hard especially in uncertain conditions for even the most experienced managers, or the people with a lot of previous success, or expertise to actually predict in the future what will succeed and what will fail. If you look at some of the creativity science, compositions from Beethoven and Bach and patents and all kinds of stuff, and even the most eminent creators really had very little ability to predict whether or not their next output would be a smashing success, or a total failure. That to me is very similar to what you're saying about business results and the importance of having a systematic approach to pursuing innovation.

[0:13:37.9] ST: That's absolutely right, Matt. I saw it in my research. In fact, I even got some data from companies on this. It turns out, and this was pretty consistent across the different companies running a lot of these experiments, that they all get it wrong about eight to nine out of 10 times. 80% to 90% of the time when they launch an experiment and they have a hypothesis, they observe either a null result or a negative result, meaning the effect is in the opposite direction of what they expected.

You can imagine, I mean, it's daunting, right? You're running these experiments and you know ahead of time that you're much more likely to get it wrong than to get it right in predicting what customers or consumers will do. It's just a normal way of doing things. When you're dealing with such “high failure rates,” what is the best approach to get this resolved? How do I adjudicate these kinds of things? Maybe the solution, and this is again what I'm advocating, is just to run a lot of experiments.

That is, if you're running, say, a thousand experiments a year and you only get a 10% hit rate, you're still getting a hundred experiments that work, and one of those experiments could be like the Bing experiment. You're also getting laser precision: you launch an experiment and, if it's well-designed and controlled, it will actually tell you which actions cause which outcomes. This is extremely powerful.

[0:15:25.7] MB: You touched on this a little bit, but to me it's really important to understand the success rate of experiments and the reality that even some of the top experiment-driven companies in the world, people like Amazon, etc., are still batting way less than 50%. Something like 10%, 20% success rate is a great success rate for running experiments in your business.

[0:15:52.0] ST: Absolutely. In fact, if the success rate were too high, I'd honestly be a little concerned, because maybe then they're not trying hard enough. Maybe they're being too conservative about what they're trying. Maybe they're already testing things that they already know. In fact, I think it's even desirable to have a fairly low success rate.

By the way, success is a loaded word in this context. Success and failure, and what does failure mean? I know failure itself is not necessarily a positive word. I'm always very careful about what I mean by failure. I draw a distinction between what I call a failure and a mistake. A mistake to me is something that creates absolutely no value. There's no learning going on. For example, operational execution: you'll find Amazon building yet another distribution center. That to me is operational execution. There's really no question that I'm trying to answer there.

Of course, I want to minimize these kinds of things. I want to minimize mistakes. Failures are something different. Failures are at the heart of how the innovation process works. Usually, a failure is preceded by a question. When I've got a question or even a hypothesis and I'll run something and I get a failure, that then allows me to refine my hypothesis, or even refine my question and run another experiment and another experiment, another experiment. They all build on each other and there's learning going on each time it happens.

What you want to do as an organization is create an organization where failure is okay, failure is encouraged, but where mistakes are discouraged or minimized. That of course is very difficult if you're running a large number of experiments and you're operating in an environment where failing 80% to 90% of the time is just the way things work every single day. It's normal.

I think whenever I run into people who operate in these kinds of environments, they quite honestly don't think that much about these failures. It's just normal, because you see so many every single day.

[0:18:03.4] MB: That's a great point in understanding that distinction between a mistake and a failure is a critical piece of the mindset of experimentation. I want to come back to the broader concept of using experiments within business. Let's talk about and I'm curious to hear from you what are some of the best practices, the strategies, because it's easy to say, “Oh, yeah. I should be doing more experiments.” How do we actually start to really integrate those into our business? How do we really start to think about actually bringing experimentation into the workflow and the resource allocation and the processes of an organization?

[0:18:44.3] ST: That's a great question, Matt. I think it may be helpful to take a step back and ask ourselves first what an experiment is.

[0:18:51.4] MB: Yeah, that'd be great actually.

[0:18:53.4] ST: Yes, because usually when people speak about experiments in casual English, I think they mean very different things. Often when I say I experiment, I mean I'm trying something. Sometimes what I see in companies is that an experiment becomes an experiment after the fact: they've tried something and it didn't work, and therefore they now call it an experiment, even though it wasn't really an experiment at the outset. There are different kinds of experiments that companies can run. When I talk about experiments, I mean disciplined or rigorous experiments in the spirit of the scientific method.

Let me give you the pure definition first of what an ideal experiment is and of course, sometimes we have to relax some of these conditions, because sometimes the environments don't allow us to do these kinds of experiments. Here's what we're trying to accomplish in an experiment; in a perfect experiment, we have someone who's testing, a tester. In this perfect experiment, the tester will actually separate what we call an independent variable, that is the presumed cause, the thing that we're trying to change. For example, say a bonus that we want to give to the sales force.

That gets separated from the dependent variable, and the dependent variable for us is the observed effect. That, for example, would be the revenue that that salesperson generates, while holding all other potential causes constant. That would be the ideal, right? You're only changing one thing and then you're observing some variable at the end, and you don't have to worry about any of the other possible causes changing while I'm doing the experiment and affecting the experiment.

Now of course, that's an ideal experiment and maybe in a scientific laboratory, sometimes you can create these conditions where it can hold everything else constant. In a business, you can't really do that. There's a lot of things that are changing all the time. That's fine, because we can deal with that. The way we actually deal with a lot of things changing all the time is we randomize.

Going back to the example with the salesperson, what we want to do is the revenue that a salesperson generates could be affected by many things. It could be by maybe whether the person was sick on a particular day. It could be affected by the weather in certain environments. It could be affected by many, many different things of course, but we're only interested in one thing and that is the bonus that we're giving to that salesperson.

Again, the way we deal with this in experiments is we randomize; that is, we take basically two groups, or multiple groups if there are multiple levels of experiments, and then we randomly assign subjects to these two conditions. One is no bonus and one is bonus. Now why do we randomize? The reason for randomization is really clever. We're taking all the other possible causes that could affect the revenue of that salesperson and we equally distribute them across all the different salespeople that we're testing.

By, for example, flipping a coin. What we're doing this way is making sure that no particular salesperson group is biased in a particular way, which would then pollute the result. I think, Matt, maybe you're getting a sense of where I'm heading. There's a lot of thought that needs to go into the design of these kinds of things to make sure that they work.

Now intuitively, if you had this issue, the way people would typically approach it, and let's pick the salesperson problem again, is this: we would basically pick a period, say a month, and let the salesperson work for that month with no bonus. Then we do another period for a month where we take the same salesperson and give them a bonus. Then we compare the two periods. That would be the wrong way to do it, because during those two periods there could be a lot of other factors at work; the weather could be very different, the salesperson could feel very different. There are lots of different things going on. Maybe there are health issues. Lots of different things. We don't want to do that. We call that an observational study, because there is no control.

The reason we run both conditions together at the same time, splitting subjects into a no-bonus condition and a bonus condition, is that we can then compare and contrast. We have a control that allows us, again, to disentangle the one variable that we're interested in from all the other variables. That's just to give a sense of what a really good experiment looks like. There are many other variables that I talk about in the book that we ought to think through when we're actually designing the experiment. Some of them may not be totally obvious, but if you don't do that, the integrity of the results that come back may not be very good. Then the problem is you get a lot of noise, and you still don't know what decision to make because of the high-noise conditions. Yeah, hopefully that's helpful, Matt.

[0:24:21.2] MB: Yeah, that's really helpful and shines a lot of light on what needs to go into an experiment. I like the clarification of what differentiates an experiment from an observational study and those two distinctions as well.

[0:24:35.4] ST: By the way, Matt, there's a lot of research out there. There was actually a very famous, highly cited paper in the medical community, where someone did a meta-study. They compared medical studies, where you would imagine the rigor is much, much higher than what we typically do in management. They compared observational studies with controlled studies. It turned out, when they actually did the comparison, that most observational studies don't replicate. That is, you cannot reproduce the result that you observed in that one observational study.

It turns out that controlled studies are more likely to be replicated than not. That tells you something about the importance of making that distinction when you're trying to run experiments in which you try to identify cause and effect.

[0:25:27.3] MB: Very interesting.

[0:25:31.5] AF: What’s up, everybody? This podcast is brought to you by MetPro, a world-renowned concierge nutrition, fitness and lifestyle coaching company. We at the Science of Success love MetPro, because they use science to help you achieve your goals. Using metabolic profiling, MetPro’s team of experts analyzes your metabolism and provides an individualized, custom-to-you approach to attaining your goals.

Now MetPro is backed by data and it’s driven by science. MetPro’s team of industry-leading experts are challenging generalized health guidance by teaching people how to optimally manage their weight and achieve their associated goals.

As a leader, you understand it’s not just about the number of hours in the day, it's about productivity. The same goes for health and wellness. It's not fundamentally about what you eat, or how you train, although those are very important pieces. What MetPro is focused on is time management, working smarter and establishing a game plan specific to your goals and lifestyle needs.

You know what? Check it out. We did an interview recently with Angelo Poli, who is the Founder and CEO of MetPro. It should be last week's episode or the week before when this airs. Go check that out for more long-form deep dive conversation into not only what MetPro is, but the science behind it and the science of accountability nutrition and how you can achieve your goals today.

Right now, Science of Success listeners are receiving a complimentary metabolic profiling assessment and a 30-minute consultation with a MetPro expert. Claim this offer today. Head to www.metpro.co/success. That’s M-E-T-P-R-O.co/success to get your complimentary metabolic profiling assessment and a 30-minute consultation. You're going to get so much value out of that 30-minute consultation. Believe me, having done it for months, I know the power. Head on over to metpro.co/success today.

[0:27:33.0] MB: I want to come back to the second part of what I asked you before we delved into this really necessary definition of what an experiment is. Coming back to this idea, how do you think about the strategies, the best practices, etc., for actually implementing experimentation in your business? I've long thought that experimentation is really important, but often struggled with exactly how to make it part of what we're actually executing from a day-to-day perspective in our business.

[0:28:05.1] ST: The question is really how do you build an experimentation capability in your business? Building a capability involves a number of different things. There are different factors and I'll just give you some examples without going through all of them. The book is quite detailed about these things.

First of all, of course, you need an infrastructure and you need the tools. You don't want people to reinvent these tools every single time. The leading companies that do this at large scale in an online business, whether you're looking at an Amazon, a Booking, a Microsoft, or a Netflix, all have a fairly advanced infrastructure. Even in brick-and-mortar environments, there are tools available that you can use.

Now the good news is there are third-party tools, so you don't have to build the same kind of infrastructure that these companies had to build when they got started and the tools were not around. The tools are important. But it turns out, and this is often surprising, that the tools may be the easier part, because you know what to do if you put enough money into it and hire enough people.

I think the harder part is to build a culture for experimentation, to make sure that the behaviors and the norms actually facilitate experiments, rather than inhibit them. That can be tricky, especially when you're trying to grow and scale, when you want to do more than just run 5 or 10 of those a year, when you suddenly want to run a hundred, or 500, or even a thousand, or more than that. That's when the culture really gets in the way.

There's a number of different elements that I identify, Matt, that are important when you're thinking about an experimentation culture. In fact, when you reach the end point and really create an experimentation culture, I call this an experimentation organization. Let me give you five quick examples. The first is what I call cultivate curiosity. If you want to experiment, you need curious people, because they need to ask a lot of questions and come up with a lot of hypotheses; a big experimentation apparatus needs a lot of hypotheses to feed it. Unless you have a curious environment where people see failures not as costly mistakes, but as opportunities for learning, you're not going to get there.

The second thing I think that's really important is to create an environment where data trumps opinions most of the time. This is really difficult, because we often are driven by opinions, sometimes the boss's opinions. That's not going to work in an environment like this. Human nature is a big obstacle here.

We tend to happily accept what we call good results, the kinds of results that go with our intuition, or that confirm our biases. When we see something that we consider to be bad, that goes against our assumptions, we will then thoroughly investigate it and even challenge it. You need to create an environment where the data is essentially king. That doesn't mean, by the way, that every decision has to be made exactly according to what the experiment says. There are other reasons why you may not want to do that. On average and most of the time, the data has to trump opinions.

The third one is what I call democratizing experiments. That means you have to empower people to run experiments without getting permission every single time, because if they have to get permission every single time, you're not going to get scale. That requires, again, an environment that's totally transparent, where people can also stop any experiment that they want, but it's completely democratized.

The fourth one is ethics. When you run experiments, you've got to be ethically sensitive. Sometimes it's very difficult to answer that question, to figure out what is ethical and what is unethical; sometimes it's quite clear cut. If you're running unethical experiments, I can tell you, it's not going to be good for business in the long run, and there are many examples out there where companies ran experiments that maybe they didn't consider unethical, but users were not happy about them and it really backfired.

Then finally, a fifth one (there are more out there, but I just want to give you five examples) is that you have to embrace a different leadership model. That is, the role that leaders have to play in an experimentation culture is actually quite different from what they traditionally do. If it turns out that a lot of decisions are adjudicated by experiments, you have to ask yourself, what in fact is the role of a senior leader in an environment like this?

[0:33:00.6] MB: I'd be curious to dig into that a little bit more. What are the changes in the leadership model that are necessitated by a focus on experimentation, a focus on more data and a focus on using some of these methodologies?

[0:33:15.6] ST: Well, I think first of all, leaders have to acknowledge that sometimes they're part of the problem, rather than just being part of the solution. There's a word for those leaders out there in the community. It's called a HiPPO, the highest paid person's opinion. I think we all know that hippos are very dangerous animals. When the HiPPOs are out there, when they're circulating in an organization, it's very difficult for employees to challenge them.

What is, in fact, the role of these senior leaders? Well, I've defined three important roles in these kinds of environments. Of course, there are still some decisions, like what's the strategic direction and what acquisitions to make, that may not be testable anyway. When it's testable, there are three things which I think are really important. First of all, the leader has to set a grand challenge that can be broken into testable hypotheses.

Why is that important? Well, if you have an environment where a lot of people are running lots of experiments, you want to make sure that the experiments are aiming in a certain direction, rather than just doing things willy-nilly. There has to be an overall program that these experiments push forward. That's what I call the grand challenge: what is the grand challenge here that we're aiming towards? Once you have a grand challenge, you may not be able to test it directly. For example, it could be to create the best online user experience in the industry. You've then got to break that down into lots of small hypotheses that all aim towards that goal.

The second one, and that one is really important as well, is that senior leaders have to put in place the systems and the resources to make it possible. You can't expect organizations to suddenly run a lot of experiments if the resources and the systems are not in place. It's things like what I talked about before: infrastructure, tools, and so on. Then they also need to think about the right organizational design. If people are starting to experiment, which groups start out? Where's the expertise in my organization? How do I roll it out? What are the decision rights? And so on.

Then the third role, which I think is just as important, is to be a role model. Now what does it mean to be a role model? It means that the leaders have to live by the same rules as everyone else. It also means that their own ideas have to be subjected to these kinds of tests. That's very difficult. One CEO told me that this is hard for most CEOs. You can't have an ego, thinking that you always know best. It involves going into a meeting and telling people, "I just don't know," admitting that you're wrong, having intellectual humility and so on.

Francis Bacon, the forefather of the scientific method once said and I really love that quote, Matt. “If a man will begin with certainties, he shall end in doubts. But if he will be content to begin with doubts, he shall end in certainties.” You have to have that. That's the challenge. I think a lot of the leaders have to look in the mirror and really ask themselves whether their approach is really the right approach in this world that we're currently operating in.

There's a fun story at Booking.com, where a new CEO came in and the team had some discussion around what the best logo design is. The CEO basically said, "I've decided this is the logo that we're going to go with." People then looked at him and said, "Well, that's an interesting suggestion. We'll run the test and we'll let you know what happens." You need that healthy culture, where even the senior leaders can be challenged.

[0:37:20.4] MB: Yeah, that's such a great point. Oftentimes, one of my favorite Peter Drucker quotes is that the bottleneck is always at the top of the bottle. In the same vein, it's so easy for leadership to sometimes get in their own way around looking at the data, or putting their own opinions aside, etc.

[0:37:39.3] ST: Sometimes, Matt, even leaders with the best intentions get it wrong. There's another great story: Ron Johnson. I don't know if you're familiar with it. Ron Johnson, together with Steve Jobs, created the Apple Store. It's really fascinating, because the Apple Store is, by any measure, perhaps the most successful retail concept created in the last decade, and enormously successful.

JCPenney, another big retailer in the US, is looking at Apple and seeing all these amazing things happening, and they decide, "Why don't we hire Ron Johnson as CEO, with a mandate to do the sorts of magical things that he did for Apple?" At the time, Ron was a retail god, I mean, by any measure.

He gets hired as CEO with a big incentive package. He comes to JCPenney and starts to implement a bold new plan. He does the kinds of things that he did at Apple, such as eliminating coupons, branded boutiques, new technology, and all sorts of things. Seventeen months later, JCPenney is fighting for survival. Sales have plunged. Losses are soaring. Johnson loses his job, he's out, and they bring the old CEO back in with a mandate to restore all the things they did before Johnson arrived. The question is, what actually went wrong? I mean, they had lots of data and so forth.

If you listen to the folks there, the people on the board and others, they will tell you, "Part of the problem is that we didn't run the tests. We didn't run the experiments." We don't know; it's a counterfactual. But the experiments probably could have at least given you an indication that some of these changes were not going to work for the kinds of customers that go to a JCPenney.

Even Ron Johnson later reflected on this. He doesn't consider himself to be an arrogant person, and rightfully so; he actually comes across as quite modest. He referred to this as situational arrogance. It's not that you're generally arrogant; it's that you get so confident in your results, because you've been so successful, that you become situationally arrogant. The kind of context he was in just didn't transfer into the context that JCPenney had.

Again, even as a senior leader, even when you're really successful, you've always got to look in the mirror and ask, "Is what I'm doing really true?" and even run the test. We've seen it happen at Snap Inc. and many other companies, where senior leaders got a little bit ahead of themselves. They didn't do enough testing and they paid the price.

[0:40:42.9] MB: Such a great insight. I want to bring back one other topic that we touched on earlier and just get your sense around this. Is there a certain organizational scale that this starts to kick in at? Or asking this in a different way; I can see this totally makes sense at a Fortune 500, a big company, huge budget. You could have a whole department that's doing this. For somebody who's in a small business, or a startup, or there's a sense of resource scarcity, how do you think about implementing this experimentation mindset and methodology at a smaller scale, at an organization that may not have the budget, or the opportunity to pursue it at that big of a level?

[0:41:26.4] ST: Yes. Even smaller companies that don't have the budgets or the resources can in fact adopt the same kinds of approaches. In fact, in these kinds of environments, it may be even more valuable. By the way, research by one of my colleagues has shown that they do adopt a lot of the tools, in one space for sure: what's called A/B testing. It's one kind of experiment, and there are lots of tools out there. They adopt those tools, and it actually helps them, because the tools end up being less expensive than heavily investing in market research, which they often don't have the resources for either. Rather than doing a lot of market research and trying to figure out what works and what doesn't through more qualitative methods, they can just test it. That's one issue.
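As a rough sketch of the arithmetic behind the A/B-testing tools mentioned here (a minimal two-proportion z-test in Python; the visitor and conversion counts are entirely made up for illustration):

```python
import math
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided
    return p_b - p_a, p_value

# Hypothetical traffic split: 5,000 visitors per variant.
lift, p = ab_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"observed lift: {lift:.1%}, p-value: {p:.3f}")  # significant at the 5% level
```

Commercial tools wrap this kind of test (plus guardrails like sequential-testing corrections) in a dashboard, which is why small teams can adopt them without building infrastructure.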

The other issue that often comes up, Matt, is the issue of sample size. Maybe we're a startup with very small sample sizes, or a brick-and-mortar business. We're not like a Booking that has 500 to 700 million visitors a month. We may have a much, much smaller number of visitors to our website. Or if we are in a brick-and-mortar environment, we may only have a few stores to experiment in. It turns out that even in small-sample environments, you can run experiments.

There are, again, analytical techniques available that allow you to get meaningful results from small-sample environments; some of these methods are described in the book.

There's another thing which is important too. It turns out that when you make bigger changes, you end up needing smaller sample sizes. It has to do with the statistical concept of power. If you make very small changes, then of course, you need larger sample sizes. The intuition is quite clear: you have a lot of noise in the background, and you want to detect the changes relative to that noise. The bigger the changes, the easier they are to detect, so you can get away with smaller sample sizes.
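The power intuition can be made concrete with the standard sample-size formula for comparing two means. This is an editorial sketch, assuming the conventional two-sided 5% significance level and 80% power; effect sizes are in standard-deviation units:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Sample size per arm to detect a difference between two means,
    with effect_size expressed in standard-deviation units (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.2))  # small change:  393 per group
print(n_per_group(0.8))  # big change:     25 per group
```

Quadrupling the effect size cuts the required sample by a factor of sixteen, which is why Thomke advises low-traffic organizations to test bigger changes.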

I encourage small organizations that perhaps have much less traffic, or that are in brick-and-mortar environments, to make bigger changes. There's also the question of what you use experiments for. There are different kinds of experiments you can run. You can certainly run optimization experiments. These are the kinds of experiments that, say, an Amazon will run on their websites to make sure that everything is optimized; that's what all the big players essentially do.

You can also run exploration-type experiments, where maybe you're just exploring a direction. Now, that's not going to give you causality, because you may be changing too many variables at the same time to get a meaningful sense of causality for any one individual variable. It may give you just a sense of direction, which can then be followed up by smaller, more isolated experiments that can teach you about causality.

You're mixing, going back and forth. You can toggle between exploration-type experiments and optimization experiments. There are lots of different ways of doing this. Again, I try to outline all these different ways in the book.

[0:45:02.9] MB: For listeners who want to concretely implement this in their lives in some way, what would be one action step that you would give them to start implementing more experimentation in their lives or their business?

[0:45:17.0] ST: Well, in the beginning, you first need to acknowledge, to be aware, that experimentation matters. I always tell people experimentation is the engine of innovation. If you want to innovate, you need to experiment.

Now most people, in fact all people, would say, "That's a good thing. I understand that I need to experiment more." Then the question is, what's the next step? The next step is you need to adopt some rigorous framework. You have to build some discipline around it, rather than thinking about experiments as, "Okay, we're just trying something." I think that's an important starting point. Be committed to building an organizational capability around it.

It also means that you can't do it alone. You need people around you. Then once you start and once you have some framework in place, it doesn't have to be the ideal experiment, but it needs to have some elements of what a good experiment is. Once you have that in place, you can start thinking about designing experiments. What would be involved, for example?

Well, the ability to write down a good hypothesis. We use the word hypothesis all the time, but you need to understand what a good hypothesis is and what a bad one is; maybe train people and give them templates. That's just an example of what I mean by a framework. Then once you have that in place, you've just got to get going, so you get better at it over time, and then scale it.

People sometimes get a little nervous when they hear, “Oh, okay. The companies are running a thousand experiments, even tens of thousands of experiments a year.” You have to always remember that all these companies started small. They all started with a handful of experiments. Then over time, they just got better and better and they gradually increased scale.

I think that would be my recommendation. Just get going on it. Don't overthink it. Experimentation is going to be part of the competitive game going forward, whether you're in digital, moving into digital, or not digital at all. In fact, some CEOs who are doing this at large scale told me that unless you do this, you're going to be dead. I mean, that's a pretty big endorsement. That's my advice. Get going on it.

[0:47:34.0] MB: Where can listeners find you and the book and your work online?

[0:47:38.4] ST: The book is, of course, available in all bookstores, online and physical. It's out there in all the usual ones; Amazon, Barnes & Noble, independent bookstores and so forth. If you want to learn more about what I do, you can find me online. I'm at Harvard Business School. I'm not going anywhere. I'm here. I've been here for almost 25 years now. You can find me at www.thomke.com, and that will take you directly to my website at Harvard Business School. You can also go directly to Harvard Business School and search for me.

If you want to contact me, you can send me a LinkedIn request; in the request, just tell me where you heard me, so I can make the connection. If you've got a question, send me an e-mail. It's very simple as well: t@hbs.edu. Lots of different ways to get to me.

[0:48:34.9] MB: Well Stefan, thank you so much for coming on the show, for sharing all this wisdom, great insights into the power of experimentation.

[0:48:43.5] ST: Thanks, Matt. Thanks for having me.

[0:48:45.4] MB: Thank you so much for listening to the Science of Success. We created this show to help you our listeners, master evidence-based growth. I love hearing from listeners. If you want to reach out, share your story, or just say hi, shoot me an e-mail. My e-mail is matt@successpodcast.com. That’s M-A-T-T@successpodcast.com. I’d love to hear from you and I read and respond to every single listener e-mail.

I'm going to give you three reasons why you should sign up for our e-mail list today by going to successpodcast.com and signing up right on the homepage. There's some incredible stuff that's only available to those on the e-mail list, so be sure to sign up, including an exclusive curated weekly e-mail from us called Mindset Monday, which is short, simple, and filled with articles, stories, and things we found interesting and fascinating in the world of evidence-based growth in the last week.

 

Next, you're going to get an exclusive chance to shape the show, including voting on guests, submitting your own personal questions that we’ll ask guests on air and much more. Lastly, you’re going to get a free guide we created based on listener demand, our most popular guide, called How to Organize and Remember Everything. You can get it completely for free along with another surprise bonus guide by signing up and joining the e-mail list today. Again, you can do that at successpodcast.com, sign up right at the homepage, or if you're on the go, just text the word SMARTER, S-M-A-R-T-E-R to the number 44-222. 

Remember, the greatest compliment you can give us is a referral to a friend either live or online. If you enjoyed this episode, please leave us an awesome review and subscribe on iTunes because that helps boost the algorithm, that helps us move up the iTunes rankings and helps more people discover the Science of Success. 

Don't forget, if you want to get all the incredible information we talk about in the show, links transcripts, everything we discussed and much more, be sure to check out our show notes. You can get those at successpodcast.com, just hit the show notes button right at the top. 

Thanks again, and we'll see you on the next episode of the Science of Success.