Masterclass: Achieving innovation excellence (Part 2)

Dr. Peter Temes, Founder & President, ILO Institute

Transcript:

Peter Temes (00:10):

For our next conversation, I want to invite you to come closer if you want. It's not a bad idea; it's not a big group. Can you all hear me okay? Okay. I want to ask a question to the group before we get started. Actually, two questions. Question number one: what's the biggest challenge your organization is facing right now? And remind us who you are and where you work.

Audience Member 1 (00:43):

Okay. So my name is Laurie. I work at Amica Insurance, and we're a very old insurer; we're over a hundred years old. So right now, I think that our biggest thing is how we deal with change, and I know that sounds really general, but it seems to be a daily and constant issue. There's so much coming at us, and I really think that some of the senior leadership is really afraid to take the plunge when things have been operating as they always have.

Peter Temes (01:12):

Right. How do you deal with change?

Audience Member 1 (01:16):

Piecemeal?

Peter Temes (01:17):

Yeah.

Audience Member 1 (01:17):

So it's like you have to baby-feed things. You have to be like, hey, I heard about this really great company and I think they could be an agent of change for us. Why don't we start with just a conversation?

Peter Temes (01:30):

Yeah.

Audience Member 1 (01:33):

And it's really like baby steps.

Peter Temes (01:36):

I want to jump here to a pure innovation slide. This applies pretty much across any big organization. So here's the notion. Most large organizations are oriented either toward efficiency or toward discovery, and most are oriented toward efficiency. Most functions inside a large organization are oriented toward efficiency or discovery, and most are toward efficiency. The biggest challenge for change initiatives, and even for strategy generally, is that the accountability measures, the success measures for people, are almost always the efficiency measures, and they're absolutely different from, and often the opposite of, the discovery measures. So when you are running a big organization and you want efficiency, you're going to have a lot of repeating of what works, repeated success, process control, high specialization, narrow and efficient selection of people and of tasks, relatively low autonomy. You're going to serve existing markets with familiar tools and familiar ways of working. But when you start doing discovery work, you realize it's almost exactly the opposite, right? You're going to do a lot of testing and learning, which is kind of inherently inefficient compared to knowing the right answer ahead of time. You're going to be doing a lot of internal market stuff, which means you're going to be experimenting with who's good at doing what, and with different approaches to the same thing, until the internal market in the firm gives you an answer about what works best. Every choice is going to be open. You can have multiple paths. That's how discovery works. I've been in this situation and I've seen it a hundred times, right? A brave CEO or business unit head says, we need to do this, I'm going to put a little money on the table. Or someone internally rises up and finds the extra leverage, time, and energy to start doing the work. And then they get the phone call that says, every Thursday we're going to talk about every cent you spend and every hour your team is investing, to make sure it's aligned with strategy. That is not helpful. This chart explains a lot. Honestly, it's usually only useful if you're very, very senior and you can think about this and tell people to change because of it. It's also really useful when you have a job offer that you haven't accepted yet and you can say, hey, wait a minute. I literally had this. I was hired by the president of a big institution, and he and I had a really clear agreement about what I was supposed to do, and frankly about what I'm good at and what I'm not good at. And then I was told, we've got a vice president who's going to be really helpful to the four people who are your peers, and to you. I'm like, awesome. And then he started hanging around. I'm like, well, that's interesting. And he's asking, I thought he was asking me for advice about a lot of stuff. And then his assistant called and said, Chris wants to start a one-hour weekly call with all his direct reports. And I was like, oh my God, am I a direct report to Chris? What? And it was exactly this, right? And I think often people who lead innovation, people who are brought in or empowered to lead a change initiative, are in a situation like this, and they have personal talent and personal skills and maybe some good strategies for absorbing the pain. And it's like they're holding the two live wires together. They can't do it forever, but they're extraordinary people.
And I could name four or five whom we've worked with as innovation leaders in big firms who are so brilliant in terms of politics and human relations. And the key that we've seen is the difference between transactional and transformational leadership. And this comes from leadership theory.

(05:29)

I wrote a dissertation about the civil rights movement. Martin Luther King, we know as a transformational leader, but he began as a transactional leader. And the difference is, the transactional leader gives the group what it already knows it wants. The transformational leader helps the group want something different and better. So King, at the age of 26, is brought in to give a speech in Montgomery, Alabama, when this movement led by a group of local women and long-suffering veterans has stood up; other people started it, and he was invited to give a speech, and it was just so good. But in that movement about who could ride the bus in the front of the bus and in the back of the bus, the group knew what it wanted, and they asked King to help get it. And miraculously, he did. And they had a couple more victories in that mode.

(06:24)

And then he started talking about things like a beloved community and non-violence and walking arm in arm with people who don't have your best interest at heart. He went from extraordinary transactional success to then elevating people and helping them want something different and better. I've seen this seven or eight times in big companies we work with. Someone comes in; a brilliant guy comes into Pepsi and he says, okay, I think I see an agenda for big change. And by the way, the Pepsi case is really interesting. So Pepsi is delivered mostly in trucks on local routes. Most of the truck is filled with Pepsi, Diet Pepsi, different variations of Pepsi, 80% of the truck. And then they have 50 other branded beverages, and that's the back of the truck. They get slotted in. Where's the margin? The margin's in the back of the truck. The 20% of non-Pepsi-branded items have much higher margins. So that's the coconut water; it's stuff people, probably from our demographic, are spending $4 and $5 for, not a 75-cent Coke or Pepsi. And in very short order, coming in from the outside, he said, okay, this is the opportunity to do something big. Nobody would come with him on that journey that day. Nobody would. He also did a very interesting maneuver for making change in a big organization. He'd come from Samsung; we worked with him at Samsung. Samsung is an efficiency-oriented group. The most important way to be a hero at Samsung is to save a billion dollars as an executive. That is not the case at Pepsi. Pepsi is brand oriented. The most important way to be a hero at Pepsi is to champion a brand, champion an international market. And what he recognized, he asked this very interesting question. A lot of us as agents of change will ask, is the CEO on my side? That big advocacy is what will help this initiative work. And that's true, but the army of people in the middle of the firm, hundreds and hundreds of them, are the ones who can block you, even if the CEO supports what you want to do, even if the business unit head does. So, this fellow's name is Luke Mansfield; he's now chief strategy officer at Harley-Davidson, because he got fired from Pepsi; he did too good a job. What he did is he said, okay, first, who here in this company makes up that army of people in the middle who I need as my most important allies? And he said, it's the hundreds of brand managers and the hundreds of country managers. And then, all right, what do they want? Forget what I want; I already have my strategic vision of what the future should be here. What do they already know they want now? And he said, they want to have meaningful, measurable growth in sales, both brand by brand and market by market. And he did this brilliant thing. He said, how many, I think I even have a slide on this, but it's not important, how many beverage brands do we own that are making money in at least one market but haven't been tested in most other markets? And his answer was 20, right? Kero Coco: in 2009, Pepsi bought the number one selling coconut water in Brazil, called Kero Coco, sold in a little cardboard box. I hate coconut water. And they sold it like crazy in Brazil. They professionalized it, they increased the margins, they crossed into some new demographics in Brazil, and it wasn't being sold anywhere else. He made a grid: 20 branded products that were killing it in one market and were mostly untested in others, and the 20 biggest markets.
So that's 400 unique items: this brand in this market, this brand in that market. And then he went out and he said, okay, I want to do market testing to see which of these brands is almost sure to be a big winner in which of these markets. And then I'm going to bring that data to the brand manager and the market manager to show them where this easy, low-hanging fruit is, so they can post good numbers on the board in 24 months. And the first thing he had to do, and this is the innovation toolkit, was 400 in-market consumer research studies. So he goes to Nielsen, which is Pepsi's marketing agency of record, and he says, guys, here's what you need to know. This is unofficial. This is my initiative. I have no budget. Can you do this for me? And they go away and come back and say, good news, Luke, we can do it for you for only $5 million. And he says, okay, see my remarks above, I have no money, but maybe if you tell me everything you'll do, write down all the things you'll do. You can see some of where this is going; inevitably, it's very sneaky. That will help me get budget, which is literally true. And they give him the phone book. And then he assembles this crew, and this is one of the answers to the question of how you do new stuff that you don't have permission to do. How do you do experiments with new technologies? How do you operate off the books in some ways? So he's got a staff at that point of six people, mostly doing stuff that was already being done, right? They're waiting to see what he can do. He goes out, and this is one of the most incredible resources that most of us are not taking advantage of, and you don't have to be senior in your organization to take advantage of it: he recruits three brand-new, technology-oriented market research agencies that were recently venture-capital funded. Once you've taken in five or 10 or 20 million in venture capital, you don't need another half-million or million-dollar customer. You need a story to tell. You need a success story working with Pepsi. You need to be the hero. You have enough operating money, for a while at least. So these people volunteered, three different startup market research companies. It was different kinds of math; it was all about statistical innovation using big data and the right new chips and all that stuff. They sat at the table. He got a couple of interns. He got the least-worst country managers, like one or two, and the least-worst brand managers.
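
To make the mechanics of that grid concrete, here is a minimal sketch in Python of the bookkeeping involved: a cross of proven brands and big markets, scored and ranked so the testing budget goes to the likeliest wins first. The brand and market names, the random scores, and the already_sold set are hypothetical placeholders, not Pepsi's actual data or model; the real version replaced the random priors with the 400 in-market research results described above.

```python
from itertools import product
import random

random.seed(1)

brands = [f"brand_{i}" for i in range(20)]    # proven winners in at least one market
markets = [f"market_{j}" for j in range(20)]  # the 20 biggest markets

# Hypothetical prior for each (brand, market) cell, standing in for the
# in-market consumer research result: estimated probability the brand wins.
grid = {(b, m): random.random() for b, m in product(brands, markets)}

# Skip cells where the brand already sells (placeholder set), then rank the
# remaining ~400 combinations so testing budget goes to the likeliest wins.
already_sold = {("brand_0", "market_0")}
candidates = sorted(
    (cell for cell in grid if cell not in already_sold),
    key=lambda cell: grid[cell],
    reverse=True,
)

for brand, market in candidates[:5]:
    print(f"test {brand} in {market}: estimated win prob {grid[(brand, market)]:.2f}")
```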

(12:30)

And he had MBA interns and buddies and friends. And over the course of a month, they created a model. They did spend; he was able to find $500,000 to do these tests. Lots of interesting details. The statistical modeling they did was very innovative, which meant the validity of all 400 tests increased as they did them. So he had to redo the first five or six, because they were almost random in their results. But as the system learned, the results were really good with relatively little effort. And he had 100 in-market successes that over about three years drove another billion in revenue. That was his goal. He reached it. And in the end, he was able to start doing the big stuff, because he had a hundred new friends in the middle of the org whom he had made money for. Now move this over to AIG. So I mentioned in the last session, we started doing work with AIG in 2013. A guy named Merley had replaced the central chief innovation officer, whom we'd worked with a lot earlier; everyone ignored her, but she was really interesting inside the firm. He was hired by the new CEO of the property and casualty group, who had been installed by the federal government, which was still the leader of the receivership for the company. And Merley asked us to help him with some internal strategy and some external research. Here's what he did. Very smart guy. He had come from Farmers, and he wanted to create a digital data innovation team. And he started spending money, he had some money, on identifying all the data that was in-house but not being used. He started looking at all the small firms who could help him, scraping social media, looking at every possible bucket, some proprietary data, some non-proprietary data. And they did all that kind of PhD-level stuff to rate risk. So if I'm spending $2 billion on an oil platform to build it, I'm probably going to insure it. I'm probably spending 10, 20 million a year on risk premiums. That's the kind of customer these guys go after. And to be an underwriter at AIG in some of those sectors, at those levels, is like being an investment banker. And there were two characteristics of that. Number one, big dollars: if you have a good year, you make enough money that you don't have to work anymore. But also hyper-competitive and very transparent, very much like at an investment bank. Everybody knows how good a year you're having, and the level of competition is just brutal. So he recruits two underwriters, again, the least worst. And he just says, look, ignore it if you wish, but share a little bit of data about your next big underwriting decision, and I'll tell you things about the risk profile that you don't know.

(15:26)

And he gets enough traction with these two. And he starts. It's a pair of glasses; you all know this. We're insuring the world with blurry vision. I see how much risk is there, I'm going to insure it for $10. And I put my glasses on: oh crap, there's much more risk, no, I won't insure it at any price. Or there's less, I'll insure it for seven, and I'll underbid everybody and have a bigger book of business but with less real risk, so I don't have to pay out the proceeds. So in a year, these two have good years, and people come to them and say, what are you doing? They say, well, I'm working with this bozo, I think he's a jerk, but man, his numbers really helped me make more money. And then it ripples out. So that's that kind of mechanism of change, but based on hitting people where they are first: the transactional leadership. I'm going to give you what you know you want, what you're already asking for, and I'm going to build the social capital to then become the kind of leader who has enough trust that you'll do something with me that may sound crazy. But it will start in small bites, and we'll start really making that change. That's what we've seen. So that was a long riff on your question, or your answer rather, about your most important priority. And the answer was how we can make change. Who else would like to share a current most important priority or opportunity?
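
A minimal sketch of that "pair of glasses" mechanism: start from the underwriter's baseline expected-loss estimate, sharpen it with external data signals, and let the adjusted number drive the quote. Every signal name, weight, and threshold below is a hypothetical placeholder; this illustrates the pattern, not AIG's model.

```python
def adjusted_annual_loss(baseline_loss: float, signals: dict[str, float]) -> float:
    """Scale the baseline expected annual loss by data-driven risk multipliers."""
    multiplier = 1.0
    for _name, factor in signals.items():
        multiplier *= factor  # e.g. 1.4 = 40% more risk than the baseline assumed
    return baseline_loss * multiplier

def quote(baseline_loss: float, signals: dict[str, float],
          loading: float = 0.25, max_loss: float = 50e6) -> str:
    """Quote a premium from the adjusted loss, or decline above risk appetite."""
    loss = adjusted_annual_loss(baseline_loss, signals)
    if loss > max_loss:
        return "decline: risk exceeds appetite at any price"
    return f"premium ${loss * (1 + loading):,.0f}"

# Example: the platform looked like $8M expected annual loss with blurry
# vision, but hypothetical scraped weather and maintenance data say otherwise.
print(quote(8e6, {"storm_exposure": 1.4, "maintenance_history": 0.9}))
```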

Audience Member 2 (16:51):

We can't get data that's reliable or consistent. We built a portal for the end user to come in and try to see the coverages that they just bought from us as a brokerage. And it has been... I've worked in tech, in all different kinds of industries, and I've come into this place for the past year, and it's shocking to me. My agents, when I ask them how they service these people, it's like, oh, well, they call us, and we hang up with them, and then we call the carrier and do the service for them.

Peter Temes (17:25):

Yeah. Have you heard of a company called PatientsLikeMe.com? It's very small. Anybody heard of them? Really interesting. And again, if any of your work touches on the medical world, and even just as fellow sufferers with human bodies: HIPAA is filled with restrictions on what you can do with data. And of course, because you have these very bureaucratic systems and structures in healthcare, HIPAA says you can't do this, and operating people say you can't do that either, just to be sure, right? Because it's too hard to do it right, we'll just do it twice as much; we'll overbuild. PatientsLikeMe began almost 20 years ago, and you might not know this, but a lot of the commercial internet, the most active early online communities before Netscape, were what they called disease communities. I did some work in a prior life, a long, long time ago, for Prodigy Internet. Remember Prodigy? Anybody? Yeah, Prodigy, one of the real pioneers, the closed garden. Initially you couldn't even go to the public internet; it was just a thousand different pages that were tightly curated. Their early business, they told me, was driven by two things: pornography, but mostly words, which somehow I think is less terrible. I don't know. Is that true? All right, mostly words, and disease communities. So my kid, my grandma, someone I love, or I, have a disease that has a prevalence of one in 2,000. That's still a lot of people out of 300 million Americans, but it means there's probably nobody in my neighborhood who has the same problem. Who do I talk to? These online communities began to aggregate around certain kinds of diseases; they called them disease communities. And it would be like, here are my kid's symptoms, what helps? Because the doctor's not helping me. And you create strong social connections as well as sharing of information in that kind of environment. So PatientsLikeMe, as the internet matured, tackled a somewhat different problem with the same dynamic. It was started by a couple of families who had kids with certain diseases. And what they did was invite people, ordinary folks, to go onto the site and say, here's the medicine that my doctor prescribed, here's the dosage, here's what's happened. And they circumvented HIPAA by having the patient disclose voluntarily; there was no third party involved. And this thing became a really, really big database. After a few years, and I think to their credit, they had big notices saying, we will anonymize and aggregate your data and share it, at a cost, with drug companies, so they can help develop better drugs for people like us. But it was really clear: we're selling your data after it's been anonymized. And they had to have compliance people and all that. There was amazing value once they reached a few hundred thousand folks. Amazing value. And a lot of people, just anecdotally, a lot of people said, my doctor said he was giving me the appropriate dose; everyone else is getting a different dose; what the hell is going on? Or, my kid has debilitating leg cramps: 400 milligrams of magnesium before bed. Who knew? My doctor didn't know, but this community of people knew. So that was super valuable. That opportunity, and you have to spend money on it, you have to cultivate it, to get people to voluntarily share data that regulatory barriers would otherwise block, is underutilized. In fact, I had that on a different slide.
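
A minimal sketch of that anonymize-and-aggregate step: keep only group-level statistics, and suppress any group too small to hide an individual. The field names, the k threshold, and the records are hypothetical; this illustrates the general k-anonymity-style pattern, not PatientsLikeMe's actual pipeline.

```python
from collections import defaultdict
from statistics import mean

# Voluntarily self-reported records: (condition, drug, daily_dose_mg, improved)
reports = [
    ("leg_cramps", "magnesium", 400, True),
    ("leg_cramps", "magnesium", 400, True),
    ("leg_cramps", "magnesium", 200, False),
    # ... hundreds of thousands more in the real database
]

K = 20  # hypothetical minimum group size before a group may be shared

groups = defaultdict(list)
for condition, drug, dose, improved in reports:
    groups[(condition, drug, dose)].append(improved)

# Export only aggregates, and only for groups large enough that no single
# person's report can be singled out.
export = {
    key: {"n": len(vals), "improved_rate": mean(vals)}
    for key, vals in groups.items()
    if len(vals) >= K
}
print(export)  # empty here: the toy sample is below the k threshold
```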

Audience Member 2 (21:02):

We have you sign into your carrier to give us the data directly.

Peter Temes (21:05):

So I can sign in and essentially force my carrier to send the data to you. And how do you manage that? How do you incentivize people to do that?

Audience Member 2 (21:15):

We've found actually that the faster quoting process is a win for you as well as for our agencies. One of the things that HUB has done really well: we've acquired a lot of agencies through all the M&A that they do. And so the amount of advice that you get directly from the brokerage, even on policies we don't sell you, gets you to come over to us.

Peter Temes (21:38):

Yeah, that's really interesting. Many of us know that in the corporate world, people will volunteer frank information to be part of an industry study in exchange for getting a copy of the report. And what we've found, dealing with workers especially, but consumers as well, is that anyone you want to be a data donor, you have to treat as a data customer. So if you're asking me, or giving me the opportunity, to share or to activate the sharing of my personal data, you need to tell me exactly why. You need to show me the last report you did, with the stuff that you're allowed to share, that gives me the value of the sharing. I'm not just a donor; I'm a customer. And the more transparent you can be, the less resistance there is to that privacy concern. Another question or concern, or answer to my question?

Audience Member 1 (22:32):

So I have too much data. I do a lot of catastrophe work, quake. So I do collect a lot of data, about the sources of data available, how to validate the data. But more importantly, data is just a description, and one of my challenges today is to transform my description into prediction. So I do have models; there are plenty of models that have existed for 20 years, cat models, stochastic models. They're unreliable. Yes. I mean, every time there is a new event, it's not working. And then the capability to absorb all this data, knowing that most of it is not reliable, and then give the teams something they can use to make decisions, consistent, quality information. It's missing.

Peter Temes (23:33):

Yes,

Audience Member 1 (23:34):

They're not available. And I spent 30 years at AIG before this company, so yeah, they're probably better. But our whole industry is, right, definitely in this place. That is, the ability to develop accurate models that are going to help, we're not there. I mean, out of all the vendors, and I speak to a lot of people, nothing really is there. We address a small piece of the process all the time. So my big issue today is that I flash to the underwriter a lot of description, and I hope for the best for them to make the best decision. And I have a little bit of a model that gives us description. But when you give a human being 50 points of information on the risk, they'll use five.

Peter Temes (24:28):

Right, exactly.

Audience Member 1 (24:28):

To make the decisions, because it's too much for the human brain. And they'll select the points that are not the most relevant for the decision to be made, but the ones...

Peter Temes (24:40):

Right?

Audience Member 1 (24:41):

So the big issue is that it's great to have data, but how can you translate it into something that can be a prediction or even a prescription?

Peter Temes (24:50):

Yes,

Audience Member 1 (24:50):

I think we're years away from it.

Peter Temes (24:53):

I think so. But there are two drivers to what you're talking about. One is where we are technically, but the other is social and organizational: how are we making use of people's intelligence? We talk a lot about something we've labeled narrative intelligence. And here's the example in terms of data. So Emerson, no, not Emerson, it's Cummins. Cummins makes engines that are small enough to put in a pickup truck but big enough to drive a steamship. The really big engines are often so large they have to be shipped in pieces. And then they have technicians, who are very highly paid, who swarm over a site, and in a week, two weeks, a few days, they put it together, often just with hand tools, because you've got these enormous things that you have to physically ship separately. Often they'll go by barge down rivers.

(25:42)

So at one point, about eight years ago, they started putting RFID chips into all the hand tools, which created data. They're doing hundreds of these builds of these really big engines every year. Some of them are great, some of them are not great; they have A, B, and C builds. Now they're looking at the difference between an A build and a C build, because they want to know: what are they doing differently? So they've got the hand tools, and initially this was just so people would stop losing the hand tools, like, where's the tool that cost $800? And, this guy left it in his pickup truck, so we can find it. Now, far more important, and this is something we got involved in: you could animate it. In fact, the native state of that data wasn't a spreadsheet; it was a visual image. You look at this site and you can create a moving image; you can animate how the tools moved over, let's say, two weeks. And it looks like a bird's nest, because most of the tools are moving, right? And you have to look at the speed. So what we had them do was play those animations, set them up on PCs with a speed dial, because this is a new kind of data. This is data they never had before. And they had the blessing of having no expectations and not knowing where they thought the value in the data lay. They didn't already have a set of questions the data was going to answer. The only question they wanted to answer was: what's the difference between a really good build and a lousy build? And they had enough examples of each. And the two things we said were: have people watch this and let them vary the speed, and have as wide a diversity of people watching as possible. So there's a guy named Alf Bingham who used to run research and development for Eli Lilly, the drug company.

(27:26)

He started something called InnoCentive. If you haven't seen it, you should go see it. It's a place where, for a substantial fee, big companies and government agencies can post open challenges, and it's highly curated. So people who are very sophisticated, from all over the world, go to it and try to solve problems based on bounties, right? Alf loves the phrase optimal distance. He says, when you open source a problem, at first the solutions you get come from the lunch server, the truck driver. And sometimes those are great solutions. But as you get closer to an answer over time, you don't usually get the PhD specialist in the area winning, because if it's a problem that made it to InnoCentive, the experts have already tried and failed to solve it. You tend to get people with optimal distance. So NASA very famously had a contest to create the next generation of glove for the spacesuit. And the guy who won, perfect optimal distance: not a fabric scientist, not someone with a PhD, but someone with a master's degree in material science who was working from his garage in Maine. He was a third-generation glove maker in traditional glove making in Maine. So it was these unexpected strands of experience and resources, but also that master's level: he knows what some of the fundamental technical challenges are in a way that I wouldn't, right? The optimal distance. So what happens now? You have these bird's nest animations. This is what a great build looks like, and this is what a lousy one looks like. And if I watch it in real time, I can't tell the difference. It's barely moving, it's dense. So I go faster, I go slower. And interestingly, it wasn't just the scientists, not just the data people. The people who created the best insights, and there were two takeaways from this experiment that were worth a lot of money, were people who had some training, but like data bootcamp training, 90-day bootcamp training. And they were natively really intelligent. They were working in jobs that were somewhat related to technology: film degrees, English degrees, music degrees. And here are the two big things that they found. Number one, the really good builds are a little bit less dense. After staring at that long enough, you say, huh, they're using somewhat fewer tools on the really good builds. How is that even possible? And the answer is, you're crawling around inside this machine as you're building it. The technicians, who are very highly paid, have a tool belt. They also have a much bigger toolbox on the edge of the site. Every so often they need a tool that's not on the belt. They climb off, they get that tool, and they come back. It's not just that it costs time. Their work is so sophisticated that they get into a flow state, right? Like a musician or a writer, they're in that groove, and when they have to climb off and go get the tool, they break out of it. So they become less efficient before they leave, and they're less efficient for a half hour coming back, getting into the groove. So, repurposing tools: the best people doing that work take a tool that's built to do this, and they know it well enough that they do that with it instead. They substitute. They expand the range of utility of any one tool. They have off-label uses of tools. This is something you can teach. This is something you can procure for. They hadn't thought to do it; they started doing it. The other thing you realize, when you're watching at the right speed, is that the really good builds have a couple of extra breaks every day.
Now, these are union jobs, so usually when they're on a break, the boss can't talk to anybody. The break is the break; it's a negotiated benefit. But the good site supervisors took extra time, typically twice in a shift, to pull everybody off the engine and say, what has surprised us so far today? What should we do differently going forward? Just that check-in turned out to be really, really valuable. So they found that, and those things have become the new normal, right? It's that ratchet thing I described in the last session, like Starbucks: the best practice today becomes the standard practice tomorrow. The reason they found it is because these are people who are trained to have narrative intelligence. I mean, you go to a movie... anybody have a good friend or a spouse who's a film major? No? Oh, you need to get out more. You need to widen your circle. Go to some film festivals. Yeah.
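
A minimal sketch of that "bird's nest" replay idea: tool positions over time, drawn as trails and played back at a speed the viewer controls. The tool paths here are simulated random walks standing in for real RFID logs, and the speed dial is a simple slider; the actual Cummins setup would have differed.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.widgets import Slider

rng = np.random.default_rng(7)
n_tools, n_steps = 12, 2000
# Simulated tool paths: random walks around a build site, standing in for
# real RFID position logs (units: meters).
paths = np.cumsum(rng.normal(0.0, 0.05, size=(n_tools, n_steps, 2)), axis=1)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)
trails = [ax.plot([], [], lw=0.5)[0] for _ in range(n_tools)]
ax.set_xlim(paths[..., 0].min(), paths[..., 0].max())
ax.set_ylim(paths[..., 1].min(), paths[..., 1].max())
ax.set_title("Tool-movement replay (hypothetical data)")

# The "speed dial": how many log steps to advance per animation frame.
speed = Slider(fig.add_axes([0.2, 0.05, 0.6, 0.03]), "speed", 1, 50,
               valinit=5, valstep=1)
clock = {"t": 0}

def update(_frame):
    clock["t"] = min(clock["t"] + int(speed.val), n_steps)
    for trail, path in zip(trails, paths):
        trail.set_data(path[:clock["t"], 0], path[:clock["t"], 1])
    return trails

anim = FuncAnimation(fig, update, interval=30, cache_frame_data=False)
plt.show()
```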

Audience Member 3 (31:53):

I come from a little bit of an automotive background. What comes to mind for me is, okay, what if they're using said $800 tool improperly?

Peter Temes (31:58):

Yes.

Audience Member 3 (31:59):

Then you go back to an engineer and say, hey, we need to either, A, build a new tool for that purpose, or, B, re-engineer part of this engine to improve efficiency.

Peter Temes (32:07):

Oh, that's interesting. Yeah. It's all of the above, all of the above, and hopefully driven by the user of the tool and some good stories about that. But Toyota is always kind of having contests to re-process the steps in a function. As soon as you have a process gain, where let's say you reduce the weight of the engine in a Jeep by 20 pounds, now, because it's smaller and weighs less, you can have one fewer bolt bolting it into the chassis. With one fewer bolt, there's room for something else. It's that cascade of change. It's very hard to predict, and you need to kind of discover it experimentally. Go ahead.

Audience Member 3 (32:45):

That brings up, it wasn't necessarily to save weight; they went from steel to aluminum, yeah, for payload, and thereby sell more trucks by saying, hey, you get more payload capacity?

Peter Temes (32:59):

Yeah, I like it. But a lot of it's unintentional and unanticipated. When you have a materials science breakthrough that takes weight out of the engine, it's not because someone wanted to have one less bolt so that they could put in something here instead of over there, which is why you need the people on the shop floor to constantly do that kind of experimentation, and you need people who can do the analytics. So go to a movie with a film theory graduate student. They're annoying as heck, because they're seeing things that you're not seeing. They're trained to see the story on top and the story below, and the symbolic meaning and the hidden meaning and the connections to a hundred other things. And I think that's kind of cool, but nobody likes it. I do some theater work, I'm terrible at it, and I was just sitting with this wonderful group of people in Seattle, where I live; we're putting on Neil Simon's Brighton Beach Memoirs. Anybody know that movie, that play, that show? Yeah, it's good. The movie was terrible; the play's great. And I grew up in the next neighborhood. I'm the only Jew in the cast, and the whole thing is about Jews. I'm the only one from Brooklyn. My natural speaking voice is the accent they want. It's ridiculous. And one of the lines is this kid, this annoying 15-year-old version of Neil Simon, the famous playwright, saying something like: I, Eugene Jerome, in this place, which is part of this place, in Brighton Beach, in Brooklyn, in New York. And suddenly I realized, oh, and I thought I would share this with my castmates. I said, oh, that's James Joyce. He's riffing on what James Joyce did in Portrait of the Artist as a Young Man. I'm looking around, and these aren't my grad school buddies. They're looking at me like, shut up, we're busy. I made a connection because I was trained as a scholar of literature. I think everyone is interested in that; I find it fascinating. It's that level under the level of the text. Yes, it can be very annoying, but people who have that training... and, interestingly, not music theory majors but music performance majors, this is the universal symbol for the violin, music performance majors. Anybody here have a music performance degree or background? Probably one or two people do; you're just shy. One of the really cool things: if you get a degree in music performance from any legitimate university, you're probably vastly better than most people who try to play that instrument. Are you good enough to be the 1% of the 1% who can make a living at it? Probably not. I mean, just statistically, probably not. I can't tell the difference between you being in the top half of 1% and being 20 times better; my ear is not sophisticated enough. But regardless of how well you play, if you reach that point in your music education, you have spent years and years and years learning how to listen to music at a much deeper level than I have. So you may play great or lousy, but you listen like a professional. IBM has a long history of hiring people from second-tier universities who have music performance degrees and a tech degree as salespeople, because they know that if you can take the genius with which you listen to a violin concerto, having studied violin for 30 years, and turn that listening skill toward your customer to really hear what they have to say, you'll be a better salesman. There's just no question about that. The data that floods in, without a preconceived model yet, is something that we need to listen to.
In the same way, how do you get the data to speak to you in different ways, and how do you get people to hear it? And this is kind of the open sourcing of where the significance is: people finding the correlations between this and this that are humanly meaningful, because we are so different as people, we bring different needs and expectations to the table, and some of us are trained to see deeper levels of meaning. The more we expose more people to the data in that kind of environment, tell us what you hear, the better. It's a contest, just like the contest on the floor in Japan: when you're building a Toyota, your team has to take as many steps as possible out of the process of mounting the engine in the Jeep.

(37:09)

And these guys are fighting against you. They're going to find things management can't anticipate. And it's because of the constant return to discovery and competition that you get there. The more we can do that with novel expressions of data, the more we're likely to see. And it will always feel like an accident, but it's the kind of accident that drives evolution. You know, you have a billion people, and one or two of them have a slight mutation, but they survive better. So it's oriented toward progress, even though there's a great degree of randomness in it. So the way we as data professionals can accelerate that positive deployment of data is to create more opportunities for people from more backgrounds to interact with the data, to make the data visible in some way, make it an animation if a spreadsheet doesn't work. The more we can do that, the more we'll discover where some of the unanticipated value in the data is and how it matches the kind of work that people do. Okay, I don't know if that was a helpful answer.

Audience Member 1 (38:10):

It takes a lot of time. A team of two people, after two years, developed two products. The quality of this definitely shows...

Peter Temes (38:22):

Yeah.

Audience Member 1 (38:22):

A lot of research. But I think in industry we need to find a way to speed up those developments, because what you describe takes way too much time. And then we spend a lot of time trying to find them and implement them in our industry, and that's part of the issue. We've got plenty of ideas, and we spend a lot of money trying to get these items, but the pace of delivery...

Peter Temes (38:50):

Yeah, that makes perfect sense to me. That's certainly what we see. However, I think the question is how you have a kind of dual-purpose discovery. And what I mean by that is, instead of saying we need to take dedicated resources to understand data and its applications, if we can capture the way people are interacting with data already, and put more data in their path for them to interact with as they're doing their core work, we can pull more lessons from it. I mean, it's not a panacea.

Audience Member 1 (39:23):

But the issue is the biases. The more you expose them to the data, the more those biases will be in the way.

Peter Temes (39:30):

Absolutely.

Audience Member 1 (39:33):

We're really preoccupied by those biases, because you're not sure they'll use the data in the most right, adequate way. And when you do analysis, even for an experienced human being, I can tell you that intuition some of the time is unreliable, more than years of experience would suggest, because plenty of human beings use knowledge from the past to predict. And it's very, very difficult to change the mindset of a person who relies on some experiment from 20 years ago, and to...

Peter Temes (40:24):

Right.

Audience Member 1 (40:26):

So that's why, even with the unstable weather... but I know that, working with a standard corporate division, that's way too large.

Peter Temes (40:36):

Yeah.

Audience Member 1 (40:36):

I mean, for what industry?

Peter Temes (40:38):

Yeah.

Audience Member 1 (40:39):

So you need to bring in people with different types of backgrounds. But then you're making investments for a very, very long period of time.

Peter Temes (40:50):

Yeah, I agree. The question is, how do you capture the investments you're already making? How do you put them to separate purposes? I have a chart, I think it's in this deck, all about predictive data, which I think is very important. And it's really about pricing and pricing models. If you ever look at pricing elasticity, the simple version of it is you have two bars across the middle of this chart. One is the gut, informal sense of the most you can sell something for and the least you'd want to sell it for. So this is: I own a hardware store. My grandfather started it, three generations of hardware. I can tell you, I know how much a hammer should cost. No one's going to pay more than 30 bucks for the best hammer, and it's ridiculous for me to sell it for less than eight. If I sell it for more than 30, I'm losing business. If I sell it for less than eight, I'm giving away margin. So that's my range. And then some industry firm comes in and aggregates all the data, every hammer sold across the country for the last six months. Some were priced at $150, some were priced at 50 cents, and that data actually defines a better, smaller range. I'm actually losing business at $26. I actually never have to charge less than $11. Something like that, based on a million transactions. That's a good price elasticity model. The challenge is, it will always shift. So we say, that's my model. I paid a lot of money to be on the distribution of it, and I've changed my pricing. I'm selling more and I'm making more. But we call it predictive; it's really descriptive. The day it was published, it started to change. So then we have to refresh it every once in a while. And my initial gut range goes like this, but this thing will shift. And the only way you can really discover where it shifts is by being stupid for a brief period of time. You have to start selling hammers for $4 and for $150, so you can see the real market response. And the best thing about that kind of modeling is you don't need a theory for it. You don't need to understand why. You're just diagnosing: if I charge this, this is the result I get. Now, you can spend plenty of time building a theory. The theory almost doesn't matter. Having more discipline and doing more price testing in a disciplined way gets you more. Let me roll back to the question. I think we've only had three people answer it. Yes, ma'am.
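
A minimal sketch of that disciplined price testing: quote prices across, and deliberately beyond, the gut range, log what sells, and pick the revenue-maximizing price from the observed response rather than from a theory. The demand numbers below are invented for illustration.

```python
# Test prices span and exceed the gut range ($8-$30), including the
# deliberately "stupid" prices at both ends.
test_prices = [4, 8, 11, 15, 20, 26, 30, 150]
units_sold = {4: 900, 8: 700, 11: 620, 15: 500,   # hypothetical observed demand
              20: 340, 26: 220, 30: 90, 150: 1}

revenue = {p: p * units_sold[p] for p in test_prices}
best = max(revenue, key=revenue.get)
print(f"best observed price: ${best} (revenue ${revenue[best]:,})")

# No theory of *why* demand falls off is needed; the model is descriptive,
# so refresh the test periodically, since the curve shifts the moment you
# start acting on it.
```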

Audience Member 2 (43:23):

One of the things that's interesting about working in these really big companies has been that they're sort of more attuned to the idea that they should do research. And then they find out what the research says, and then I can't change their mind afterwards. The data is the data, right? Because they looked at the data six months ago.

Peter Temes (43:50):

So one of the first studies we ever did, at the request of one of our members, was to identify which industry sectors are consistently the most innovative. And we were defining innovation at that time as generating more new products and services and serving different groups of customers over time, profitably. And the answer, the number one sector, and this is almost 20 years ago, was the strategy consulting business: a cluster of multi-billion-dollar companies. You have McKinsey, BCG, Bain, five or six others. And what's really interesting is that every five years or so, they're really selling something different. I mean, yes, they're selling strategy advisory services, but for three years they were making all their money advising people on how to set up call centers in India and China. And then they started focusing more on data. And it was like, wow, they are really changing what they're fundamentally selling quite frequently. And if you look at the top 10 customers of any one of those firms, they change almost completely every five to 10 years. So we say, great, we will declare those the most innovative. What do they do that's different? Number one, they hire for ability over experience. That's really interesting. It's the third-tier consulting firms that hire someone because they bring a book of business with them. At the very high level, they don't want you to bring business with you. If you had $5 million of revenue in your two-person firm, they want you to leave that behind. That's not interesting to them. They want you to take their set of 20 hundred-million-dollar accounts in your sector and make them 10% better each. Right. The other thing, and this is to your point, that surprised us: in terms of strategic voice in the management of the firm, they value sales much more than marketing. And the same is true in investment banking. When I was a lot younger, one of my first really exciting consulting gigs after I left an academic job was doing a project for the only, at that time the only, vice president of marketing at Goldman Sachs. I was like, oh man, this must be the most important person in the world, because in most firms the head of marketing has a really big strategic voice. The guy doesn't even have a window. He's working out of a closet, and mostly he's creating brochures for partners who've cooked up new services. And a partner at Goldman in a good year is making 10, 20 million dollars, at least in those days before they went public. And you look at why that is: the partner is having breakfast, lunch, dinner, and drinks with their biggest customers, the most senior people.

(46:28)

The partner is customizing the offering after every encounter with a customer. And each individual deal could be worth a hundred million dollars, or could be worth a lot less, but the dollar size of the deal makes that customer sales interaction strategically more important. With marketing, unlike sales, what you see is the creation of a model, and then the model stays in place for a fixed period of time. And then you plan for a new model; you improve it or you replace it. And then that new model is the orthodoxy for a certain period of time. Innovation trades off that model-driven, research-driven, planning-driven approach for much more fluid responsiveness to the customer. Jeff Bezos said this about Amazon, and I think it's one of the most brilliant things he's ever said; it's gotten so little discussion, it shocks me. He said, not that long ago, before he stepped out of the CEO role: look, we are famously customer obsessed. And it's true. I've done plenty of work with Amazon; they are paying a lot of attention. Interestingly, when you look at discovery and efficiency, most of the firms we know are one or the other, with a little bit of overlap. Two companies, Amazon and Toyota, are completely overlapped. They are almost a hundred percent discovery oriented and a hundred percent efficiency oriented. They're measuring everything they do in every market interaction every minute, but they feed those insights into a new model that everybody has to follow, almost instantly. That's really interesting. It makes it hard to work there.

Audience Member 3 (48:01):

So yeah, I think there are two points here, because we've implemented predictive models, and the discipline that you described is what I think drives the success of it, especially in commercial underwriting, because the biases that underwriters have are so powerful that their judgment, versus what the model tells them, can basically undermine your entire investment in the model. And then the last piece is that every year we iterate based on what we've learned over the course of the year, and it's made significant improvements, by sticking to the discipline.

Peter Temes (48:49):

A hundred percent. And by the way, if you haven't read the book Moneyball or seen the movie Moneyball, right? That's the first stop on that trajectory. What's really interesting, and most of you probably know the book or the movie: Michael Lewis, a great nonfiction writer who started out writing about business, wrote that book about Billy Beane, who at the time was the general manager of the Oakland A's. Billy Beane was really borrowing 30 years of hard, slogging work on data by a guy named Bill James, who is the sabermetrics guy, if you're a baseball fan or a stats fan. Bill James was the geeky, spectrum-dwelling genius whom a tiny circle of people took seriously. And it took decades for an executive in baseball, who was not an intellectual, who was not a stats guy, who just wanted to win, to pick it up and play with it. And he started punching above his weight. And it took another decade for folks to realize what he was doing, and he had to fight the orthodoxy every day. And what was the orthodoxy? That guy is not athletic. Well, who cares whether he's athletic if he's getting on base? The orthodoxy was, you've got to hit the long ball. And just look at the data. If you follow baseball, and I assume most of you are at least mildly fluent in baseball, what's really interesting, especially if you have an athlete in your family or you've taught at a school with big athletics: you get this kid, let's say he's 22 and he's in the major leagues. In high school, he was almost certainly the best baseball player who ever played at that high school. And if it's a typical American town, everybody loves him and he looks great, very athletic. And what does he like doing the most? If he's a typical 17-year-old genius at baseball, he likes hitting the ball. He can hit home runs. He can hit everyone's pitching in his town. Then he goes to college, maybe, and it's harder, and he learns more skills and he gets better. He's still awfully good. If he's going to get to the majors, he gets to the majors. And if he's working for Billy Beane, Billy Beane is going to give him one bit of advice when he's at the plate, he, the best ever in his whole state. And Billy Beane's advice is: don't swing. Don't swing. Statistically, you're much better off getting on base with a walk than with a hit. Why? Because the most valuable player on the field is the pitcher. The pitcher has more predictive value for who wins and who loses. And if you get on base with a walk, you're going to be thrown 10, 11 pitches. If you get on base with a base hit, three, four. So you're wearing down their best tool three, four times as quickly. And you can pull it off; there's a skill. You stand in the right way and you read the ball as it's coming, so you get on base with a walk. Nobody wants to do that, because they want to be the hero. Right? Base stealing: if you read that book, don't try to steal a base. The likelihood of getting out is so much greater than the contribution to the likelihood of eventually scoring. But people want to play the sexy game where you're running like crazy, and you want to steal the base. It's your thing; you're good at it. So we had Billy Beane come to one of our events in San Francisco 15 years ago, 15 people in the room, and he was asked by so many people: isn't the game worse? Don't people stop coming out if nobody is stealing bases and hitting home runs? And he just said this. He said: people come out when you win. Period. That's probably true in our companies too, right? Yeah.
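
The base-stealing point can be made with a line of arithmetic. Here is a minimal sketch comparing run expectancy before and after a steal attempt; the run-expectancy values are rough, commonly cited league averages, used purely for illustration.

```python
# Run expectancy (expected runs scored in the rest of the inning), rough
# league-average values for illustration only.
re_runner_on_first_0_out = 0.85   # stay put: runner on 1st, 0 outs
re_runner_on_second_0_out = 1.10  # steal succeeds: runner on 2nd, 0 outs
re_bases_empty_1_out = 0.27       # caught stealing: bases empty, 1 out

# Break-even success rate p solves:
#   p * RE(success) + (1 - p) * RE(caught) = RE(stay put)
p = (re_runner_on_first_0_out - re_bases_empty_1_out) / (
    re_runner_on_second_0_out - re_bases_empty_1_out)
print(f"steal attempt only pays off above ~{p:.0%} success")  # ~70%
```

The failure costs so much more than the success gains that the attempt only makes sense for runners who succeed roughly 70% of the time or better, which is why the data says most players shouldn't try.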

Audience Member 3 (52:24):

You mentioned Toyota and Amazon overlapping one another, doing something with the data that they have. What have you found that is a major difference between these two enormous global companies and other large ones?

Peter Temes (52:38):

Yeah.

Audience Member 3 (52:39):

Just what you were saying, that they don't move, they're just... what's going on?

Peter Temes (52:43):

Yeah, a whole bunch of things. One of the things that happens at Amazon and at Toyota, at scale: when you become more efficient and more innovative, you lose a lot of your margin. You actually will have more bad quarters and more bad years, but within a very narrow range. You're going to lose less, you're going to make less, but you're going to consistently add more money over time. It's a lot like investing: buy the good stock with good earnings and hold it. In the long run, you're going to be ahead of the sexy go-go stocks. In probably a lot of companies, and I think the insurance business is a really big one, when you have a novel product, when you make a good gut call, the margins can be crazy high, especially if your firm has the discipline not to squander it on more cost, which is usually what happens, right? There is, I wish I could remember it, a business principle with someone's name on it, like the Peter Principle, that says the resources required to perform a task will eventually equal the resources available. And it's really true. You start making a lot more money in a big publicly traded company, and you'll see your staffing go up and your costs go up. It takes a lot of discipline not to let that happen. So what you see with Amazon and with Toyota is that the intelligence lives in the system, not in the individual. It's not as rewarding for certain personality types. So Google and Amazon are really interesting to compare. Google still tends to hire for the genius. They're still kind of in love with the hero, and that's fine. What it predicts is they're going to have less experimentation and fewer but bigger successes, less predictably. They'll stay smaller and probably have bigger margins. With an efficiency-oriented innovation strategy, and again, it's efficiency and discovery together, you're probably going to have more stability and steadiness. You'll probably have lower margins, but you'll probably have lots more sales. And you'll probably also be delivering a lot more value to a lot more people, if you believe that one of the elements of what we're all doing with our work lives is actually doing good things for large numbers of people. Amazon and Toyota make better contributions, I think, but it's less fun, and you don't get to be the hero as much. What I was saying before about Bezos, the thing he said: look, we're relentlessly customer focused, right? We're obsessive about the customer. And I look at my competitors, and they see us doing it, and so they try to do it too, but they can't. Why? They're relentlessly focused on their competitors, not on their customers. And that's a fundamentally different orientation. And when you look at how to benchmark how innovative a company is... forget benchmarking. So many times we've been asked to do reports or to help initiatives by someone inside a big company who says, our culture is not an innovative culture; we need to change our culture so we have more of a culture of innovation; how do we do it? And if you leave them alone, they'll say, okay, we're going to hire the culture consultant. We're going to get all our key people in a room, lock the door, and look at each other and talk about each other. As opposed to saying, it's really clear: you need to be relentlessly close to the customer. I have been in meetings in big companies where the boss will say, proudly, see that empty chair in this meeting?
We always have an empty chair at every meeting as a symbol of the customer, because the customer always needs that presence. And because I usually try to tell the truth, I say: you idiot, put a customer in the chair. Don't have an empty chair so you can project onto it what you think the customer is. Put a customer in the chair, and if you need to have 10 customers in the chair, have 10. Anytime you start talking about culture to change culture, and it's only you in the room, you're reinforcing the common problem that most of these places have, which is an inward-looking culture. The innovative culture is an outward-looking culture, by definition. I think we're close to the end. We have two minutes left. Anybody have a short speech? Yes.

Audience Member 1 (56:39):

So I want to second that: research has to be... we want to innovate digitally. My question is, right now I'm on the innovation side of the house, and now ChatGPT is running everywhere, right? We've kind of paused the projects to figure out how we can integrate with ChatGPT. Our tech department is researching it, and we're getting more skeptical about it. Do you have any success story that you know of, it's funny, I know it's not your job, any company that has used it in keeping with the business?

Peter Temes (57:20):

Yeah, so I know of several that are using it badly, including, right now there's a strike, I believe at Gannett, the big newspaper chain, very much over this issue. Let me give you the example of higher ed. I spent a lot of years teaching in universities. If you pick up the Chronicle of Higher Education, the big trade publication, people are going bananas because ChatGPT is writing all the papers now. Right? That's an exaggeration, but it's also, who cares? Here's my view as a professor. We've had this terrible technology of ink on paper, the paper that I ask you to submit to me. And then, if I'm in a big school, I'm going to have my assistant read it. That was never a good idea. So ChatGPT takes an absurd model and pushes it to its absurd conclusion. How do we compensate for that?

(58:08)

We say, okay, you're all in my class; we're going to have to schedule about 10 hours at the end of the semester. Come on in. Give me a written paper, that's fine, but come on in and let's talk about what you learned. Let's talk about what you're still curious about. I want to assess your learning and your ability by looking you in the eye and talking to you, human to human. If ChatGPT takes the nonsense and makes it simultaneously more powerful and valuable, but also less directly about you, we can plug that in as a resource, but it should force me to spend more time looking at you as a human. So there are several companies we know, and this is happening not systematically, although it's beginning to happen systematically, where ChatGPT is allowable as an advisory, augmenting tool. And that seems to be where this is heading, right? In the same way, and we can talk about the difference between an LLM and the kind of data analytics that goes into radiology, the current model we have is: you do breast imaging looking for a potential tumor, and you have some really good tools that will highlight things that you, as the individual physician, should pay more attention to. But it's always on you. It's up to you. And we've been spending a fair amount of time talking with healthcare people about this. The smarter ones are doing exactly that, especially around mental health issues, but also especially around radiology, because the radiology example has been there for a decade now, and you get a net better diagnosis with the automation plus the live physician, and you wouldn't want either one of them alone anymore. So this is almost certainly where we're heading. Yeah, I think that's it. Thank you very much. There is one more of these. I don't imagine any one of you is a glutton for punishment to the degree that you'll come for the third in a row, but if you'd like to, you're welcome. I will be here, I think it's 15 minutes from now, right? For the last hour. Thank you very much.
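
A minimal sketch of that "automation plus live human" pattern: the model may flag and rank cases, but the recorded decision is always the physician's. The threshold, field names, and scores below are hypothetical placeholders, not any real radiology system's interface.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    model_score: float  # hypothetical tumor-likelihood score from an imaging model

FLAG_THRESHOLD = 0.3  # deliberately low: the model highlights, it never decides

def triage(cases: list[Case]) -> list[Case]:
    """Order the worklist so flagged cases get the physician's attention first."""
    flagged = [c for c in cases if c.model_score >= FLAG_THRESHOLD]
    rest = [c for c in cases if c.model_score < FLAG_THRESHOLD]
    return sorted(flagged, key=lambda c: c.model_score, reverse=True) + rest

def final_call(case: Case, physician_diagnosis: str) -> str:
    # The model's score is advisory context only; the recorded outcome is the
    # physician's diagnosis, never the score alone.
    return f"{case.case_id}: {physician_diagnosis} (model score {case.model_score:.2f})"

worklist = triage([Case("a", 0.12), Case("b", 0.81), Case("c", 0.44)])
print(final_call(worklist[0], "benign, follow-up in 12 months"))
```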