Masterclass: Achieving innovation excellence (Part 1)

Dr. Peter Temes, Founder & President, ILO Institute

Transcript:

Peter Temes (00:10):

Our journey to be a more digitally enabled industry, a sector that is not just picking up the phone and making a deal, but really data-driven, really automated, really dedicated to serving customers big and small through that kind of data-driven value. Right? As we continue this journey for the whole sector, how do the bigger companies work with the smaller companies? How do the bigger companies make and sustain the kind of change that really adds value and makes those organizations themselves more sustainable? Right now, just from some snippets, I think this room is mostly smaller companies working with bigger companies, and I think we are a small enough group. And forgive me a bit of housekeeping, because I have a back thing, and it is not that I am old and crippled, it is that I am charming and worldly, right? I am going to be sitting here and moving around a little bit. But let's take a minute and just whip through: could we do a less-than-one-sentence introduction for everybody so we know who's in the room? Why don't we start over here with this gentleman in blue? Who are you? Where do you work? What do you do?

Audience Member 1 (01:22):

We are at VO Solutions. We offer innovation platforms and deliver solutions for carriers.

Peter Temes (01:34):

Gotcha. Thank you.

Christina (01:36):

Christina Amy Brown. I work here in insurance and on an application.

Peter Temes (01:43):

Awesome. Morning.

(Audience Member) 2 Mike (01:45):

Mike from Michigan Planners. We are an employee benefits agency.

Peter Temes (01:50):

Great. Morning.

Audience Member 3 (Jeff) (01:51):

My name is Jeff. I am the Founder of, we work with Property Adjustments there second time.

Peter Temes (01:57):

Cool.

Audience Member 4 (01:59):

Hutchinson Property Company, and I work on the platform solutions side of our business.

Peter Temes (02:06):

Yeah. Thank you.

Audience Member 5 (02:17):

Hello everyone. I am the solutions lead for Know Solutions for the Insurance sector.

Peter Temes (02:24):

 Wonderful. Thank you.

(Audience Member) 6 Ken (02:26):

Ken Kleiner, here in insurance, Director for Technology.

Peter Temes (02:33):

Great. Thank you.

(Audience Member) 7 Matt (02:35):

I am Matt O'Malley. I am the US Country Manager for Property and Casualty.

Peter Temes (02:41):

Gotcha. Thank you.

Audience Member 8 (02:42):

My name is, I am the chief officer for property at our company, which has $26 billion in capital.

Peter Temes (02:50):

Great. Thanks very much. Awesome

Audience Member 8 (02:59):

On the business side as well, we have a small MGA that handles direct-to-consumer for micro and small businesses.

Peter Temes (03:08):

That's great. Thanks.

Audience Member 10 (03:09):

Mitch Anderson with Trackable AI.

Peter Temes (03:14):

Interesting.

Audience Member 11 (03:16):

I am with company. We are software as a service in the content management space. So we help big companies manage all their complex technical documentation content and then push it out to people who need it to do or get answers to their policy questions.

Peter Temes (03:35):

Super. Thank you.

Audience Member 12 (Shannon) (03:36):

Good morning. I am Shannon Huffman. I am from S & P. We provide a lot of data and analytics for insurance.

Peter Temes (03:43):

Great.

Audience Member 13 (03:44):

Good morning. I work for Nationwide Innovation.

Peter Temes (03:49):

Oh, That's great. Are you out of Columbus? Yes. Yeah, we've done a lot of work with Nationwide.

Audience Member 14 (03:54):

Good morning. My name is Laura, director of Northwestern Mutual on the field experience side. So to make it, Awesome car technologies. We manage services.

Peter Temes (04:13):

Great.

Audience Member 15 (Lexi) (04:15):

Lexi Sprague. I work for an agency owned by Hub International. So, personal lines.

Audience Member 16 (Courtney) (04:24):

Hi, I am Courtney Cooper. I work for The Hartford. I am on their digital customer experience team.

Peter Temes (04:28):

Fantastic.

Audience Member 17 (Naveen) (04:29):

Hi I am Naveen. I am a co-founder.

Peter Temes (04:36):

Fantastic. All right, so as I said, I am Peter, Peter Temes. I want to keep this relatively informal. I have a lot of content I could share, but I want you to kind of guide me in terms of what's most useful. I think it is really positive that we have a mix of organizations of different sizes, and in fact, if you want to simplify it to a buy and sell side, we've got buy side and sell side here, which I think is very, very healthy. I'll tell you that when I started this institute in 2005, so almost 20 years ago, a lot of big organization leaders, C-suite people, VP plus, they would come in and say, we want to learn how to do what the little guys do, right? We are old and slow and we want to be a startup in a garage. How can we be that way as a big company? And for the most part, we do not hear that question anymore. It is not a smart question. The better question is: how can we do what only we can do, because we are big and global and have a lot of capital and a lot of relationships? How can we do our thing working in the appropriate way with the smaller companies who have that energy and that creativity? And what some of that comes down to is how we can externalize the risk of experimentation, which has a high cost of failure if we do it ourselves, to little firms that can do the trying and the failing and the getting back up and recovering faster and smarter in ways that enrich us. And the best example of this model is to look at what happened during Covid. I mean, only a company like Pfizer or Johnson and Johnson can inoculate a billion people in a matter of months, but it is predictable that it was BioNTech and three or four other really tiny companies, tiny as the whole cycle began (they're not so tiny anymore), that could do some of the fundamental drug discovery, some of the refinement, really, really fast. I think the same is true in the insurance space, especially at that interface with hardcore technology, especially today when we are seeing a rapid maturing of some really impactful kinds of new technology. And let me ask you, how many of you are looking at large language models, ChatGPT and related AI stuff right now? Is that a yes? Yeah. Not much over there, huh? Yeah, I mean the potential of these interactive tools is really enormous, specifically for the insurance sector. And depending on what side of the business you are in, whether you are property casualty or direct to consumer, one of the things that we've seen very clearly is disaggregating risk analytics. So instead of saying, you are a woman of a certain age, you live in this zip code, and I know things about how you work and how you drive, so you are in a group, not of a billion potential customers, I am going to nail you down to 5,000, and in that demographic we feel like we know you. When we get to the point where I can say, no, I know you, we are now at the point where I can offer you more value, more profitably. But how do I do that? These are the tools that are really getting us there. And how many of you have used a version of ChatGPT or a similar large language model for search or chat? Yeah, I am glad. So interesting. Did you see that New York Times article that came out about four months ago? Yeah. One of the tech reporters from the New York Times goes to Bing's almost-ready, pre-release version of its ChatGPT-powered search, and the reporter just, he has got his fingers moving on the keyboard and he is saying, give me an answer to this question. Interesting. Give me an answer to that question.
He is on that exchange for three hours. And this is something that Microsoft hadn't anticipated. By the time he is done with his conversation, the ChatGPT function on Bing literally told him, you do not love your wife, you love me. That's interesting. That's really interesting. Unanticipated. I mean, hundreds of thousands of test users, millions of test sessions before Microsoft let that flavor of Bing out into the world, and they had not anticipated that level of weirdness in how the chat function would work. And one of the first things they did when they saw those results is they said, ah, no one can be on one chat for more than 20 minutes. But this is interesting. Why? Because the agent, the AI agent, gets to know you too well, and it gets the motive not just to give you what you are searching for, but to give you what it thinks you really want. So compare that to a Google search, and it is fun to do this now because Bing has gotten much better. I am now defaulting to Bing, which I never thought I would do, because Google, which has always seemed awesome, is not as awesome as what the ChatGPT version of search on Bing has been doing for me lately. When you put words into a Google search box, even though Google might know a lot about you, especially if you have Gmail, I mean, it really knows, it reads your email and it tries to dial in results based on who you are and sell you ads based on who you are, if you look at how words are analyzed by the core Google search algorithms, they're pretty flat. You put five or six words in that search box, and it is a very straightforward applying of those words in some interesting ways, but not that interesting compared to the real genius it has, which is looking at the billions of websites and making matches. That's the genius that was their competitive advantage, and they're still really, really good at it. The thing about these large language models is they have the same kind of intelligence facing in both directions. This new technology is brilliantly better at understanding you, the searcher, especially with repeated searches and repeated interactions, and understanding who you might really be as one person, not as a group. Well beyond the words that you are giving it, it is looking at things like: how fast do you type? Why are you using this word instead of that word when they kind of mean the same thing, and what does that tell me about you? And it is doing that at lightning speed, even if there's no secondary data about you that it is pulling from, and there will be more and more of that secondary data about you that it will be tapping over time. So that's where you start getting the demographic of one. On the one hand, it is fabulous because I want the good stuff. I mean, when I watch Netflix, and the same with Amazon Prime, I am a big Mrs. Maisel fan. Any Mrs. Maisel fans here? I grew up in a Jewish home in New York. It is my people. I mean, because I watched Mrs. Maisel and that great documentary about the making of Curb Your Enthusiasm, and I am a Muhammad Ali fan, just those three data points should be enough that it should be right on with this next recommendation. It almost never is. I mean, you can see it has got the list of a hundred things that are kind of like Mrs. Maisel and it'll forward that to me. But as this technology matures, I should really be getting the level of recommendation that makes me happier.
That kind literally improves my life, because there's nothing like sitting on the couch watching a show that makes me think, this is really my story, or this relates to struggles my family has had or opportunities that I want to help share with other people. That kind of specificity is where we are heading with all of these new technologies that are maturing right now, and especially with insurance, because of risk management. I mean, we know this: the rise of enterprise, and more than just enterprise, the rise in life expectancy, the rise in human health, has been directly tied to managing risk. Humans have a bias toward risk aversion. We tend to overestimate the downside and underestimate the upside when we make key decisions. But if we have the tool that says, no, the downside will only be this bad and no worse, we take better, smarter risks and we do more. So this is where we are heading. This is what the stakes are right now. So having said all that, let me show you a little bit of what I prepared here. This is our logo. Not very interesting. The goose is there because a lot of people in big companies who fund innovation, who say, we are going to stand up this group, believe that it is a golden egg and not a goose, and we try to make the point about the goose: you are not going to have a better quarter.

(13:10)

You are not going to have a better year because you've started to formalize an innovation program. You are going to build muscle, you are going to build habits, you are going to build culture that will give you better quarters and better years over time. It is a golden goose that's going to give you a lot of eggs, but do not think you are going to have short-term return. These are most of the current members of our organization. So these are companies and government agencies. I think there are three or four big insurance companies on this list who are paying to participate in events that we host. We host about 40 roundtable events a year that last a full day, that are usually 15 to 20 people, often with very, very notable guest speakers over lunch. And we talk about who's doing what, what the big challenges are, how folks are using ChatGPT in some cases, how folks are reorganizing: once we have a new function like scanning data, interacting with customers, rating risk, a new function that's more automated, what do we do with the people who've been working for us? What kinds of organizational change do we have to make now that we've wired in the technology and gotten it to work? We do a handful of other things. We are also executing research for all our members on a regular basis. This is a very ugly slide, but this is the very specific content, which is kind of your menu. The schedule today is three one-hour sessions like this. My understanding is, of course, you are not obligated. You can walk out at any time, and there are snacks and little cakes just on the other side of that door, just to really raise the stakes. And I am going to talk a little bit about some of the stronger kind of secular principles of innovation, how innovation in large organizations works more generally, but this is the whole range of stuff that I thought you might be interested in. And I want to follow your lead about where to go deep on insurance-specific things. So these are some of the experiences we've had working with big insurance companies around innovation and around new technology. With AIG: I think we are mostly all old enough to have lived through the global financial crisis. A good bit of it really was AIG's fault, right? We know that. Yeah. They were one of our members. We were very close to them as this was all unfolding. What's really interesting about AIG is that once it basically collapsed, our government, the US government, took it over, owned it for a little while, and then let it back into the wild, where it kind of regrew to a meaningful degree. When it regrew, it regrew based on data. Some of you may know that AIG had the pioneering, absolute best data analytics function in the insurance sector for a good five years, and we were right in the thick of that, working with a guy named Ley Bois who pioneered it. He had a really smart organizational strategy, practically. He knew the science. He was a great leader of teams. He was very politically savvy, but he made one or two moves that really allowed the transformation of how they underwrote, especially for large property and casualty. I'd be very happy to talk about that in detail and share some of that. Munich Re: we did a report for Munich Re, oh, probably 12 years ago. That was really fascinating, and it is very important for understanding how technology and insurance connect.
So the question was: can we talk with CEOs, chief risk officers, others at big companies who are potentially Munich Re clients, and ask them what they would like to be able to buy insurance for that they cannot, right? The thing that you would like to buy that does not exist. And all these very sophisticated people came up, often, with answers that were like dumb answers. Like, gosh, I wish we could insure against the weather. It is like, well, you can. They kept naming products that were already in the market that they thought weren't in the market, and these were people who might do a seven-figure underwriting deal with Munich Re. How interesting that the map of what's available is so cloudy for them. And even as the work we are doing and the work that a company like Munich Re is doing is getting more expensive in terms of what innovation costs them, and more technical, that basic side of just communication and closeness to the customer was where the biggest gap was. And we have to continually go back to that if we are going to have value coming out of this. HCC, Tokio Marine HCC, Houston Casualty Corp, I think, was what it was originally called. Again, working with them as a member, they were trying to decide about eight years ago: should we be in the business of cyber insurance for companies? And what we found by really giving a good look at this, and again, very similar to that point about Munich Re, is that most of the cyber policies being sold to companies had very little value, but it was all about checking a box. So if I say, okay, CVS, right? You have billions of dollars of potential liability because of data breaches, because of other kinds of failures, you could start sending the wrong drugs to the wrong people because certain cyber systems aren't really tuned up. We will give you cyber insurance. We'll cover all those risks up to a payout of $50 million for this premium. And for a company like CVS, that's nothing. That's worthless. It is probably worse than worthless, because it makes you feel like you have a real risk backstop when in fact you do not. It is just too small a number for them. There was virtually nothing in the market about eight years ago that really was priced in a way that made sense to mitigate the risk of cyber vulnerabilities among large corporates. So why was this whole sector flourishing? It was flourishing because of governance concerns. Boards of directors of public companies, especially small and mid-sized public companies, had bylaws that said, and they would update them every year: you must have cyber insurance. I mean, you have got to have it, right? It is kind of like someone who says, and we probably all know people like this: I have health insurance. I haven't used it for years, thank God. What's your deductible? $12,000. That's not really insurance, is it? I mean, it is something else. But the idea that there are predictable costs and there's a premium that you pay, and they kind of match and add more value, that was missing then as cyber insurance was emerging as a field, and we are certain it will be missing now as new categories of insurance, especially around liability with LLMs, are rising. The best competitive space, the most powerful space to play in to help big companies drive innovation, is to not be bullshit. I mean, it is to not sell something that has very little value but gets to check a box. I think that's important. I think it is a root competitive long-term position.
The best way to position yourself for long-term growth is not to be the flavor of the month, but to really have something more valuable because you are deploying new technology well. Yeah, Progressive. How many of you know about the Progressive device they pop in the car for auto insurance? Yeah. You guys, it makes a lot of sense. So if you are a terrible, terrible, terrible driver, but you still feel compelled to drive, maybe for work, maybe for your family, how do you get insurance? I mean, generally you can. Sometimes there's a proxy discrimination issue. We did a lot of work for New York Life around this with auto insurance. Multiple felony convictions make you a terrible risk for auto insurance, but some people are more likely to be convicted of a crime than others for the same behavior. So there are some states where you cannot use felony convictions as a screen; that's proxy discrimination. So Progressive basically said, forget even people with felonies. Well, I do not know, does anybody have an answer to this? Can you be legally blind and get auto insurance? I know you can be legally blind and get a license. I know that's true in many states. I suspect if you have the license, for a certain price you can get auto insurance. How would Progressive underwrite you with an auto policy if you are legally blind? I mean, I wouldn't want to do that. You are blind, stop driving. So what they do is they put a device in the car and they say, you look like a terrible risk, but if you do not drive like a jerk, if you actually follow the rules, we will be happy to underwrite you. You'll have a fuller life. We'll get your premium. And because it is a prosocial solution, it is not just rating the risk, it is actively reducing the risk in ways that are safer for everybody. We all benefit. They built a billion-dollar business doing that. Then we had a guy from Tesla about two years ago come into a room, and he was describing how he was running Tesla's new captive insurance agency. You buy a Tesla, you get a hard-to-resist offer from Tesla for your auto insurance. I said, oh, awesome. Is it like Progressive where you put a device in the car? And he basically said, no, idiot, the car is the device. The car is always pushing data forward, and that's a nice clue about where we are heading. We will have far better underwriting as we have that data coming from every user and looking at everything. The only question is how we enable our firms to digest that data and to really make sense of it quickly enough, meaningfully enough, and humanely enough, so that our default is not, oh, more risk, no, but our default might be a little more like Progressive saying: more risk? Here's how our new technology tools can help lower the real risk, right? Yeah. We talked about New York Life. Yeah. Oh my gosh. I do not know how many of you touch the health insurance area. One of the really cool things that comes from the kind of work I get to do is that sometimes I hear stories that I never hear anywhere else, from people who I really believe, who are credible in the room and cannot talk about it publicly. I'll give you two examples. One was Walgreens. Have any of you, I am sure we've all actually been in a commercial retail pharmacy where there's a little clinical desk, there's a nurse, or there's someone who's not a nurse but there's a nurse or a doctor on a screen rolling around on little robot wheels, and we can get a prescription for some very basic stuff. The minute clinic, you've seen these.
So here's what Walgreens knew 10 years ago. When you have a critical mass of those things within a single zip code, which means you go into a Walgreens here, you go into a Walgreens there, and you always learn to expect it, it is a kind of normal thing. It does not look like innovation to you. You are a regular citizen and you are like, yeah, there's always a nurse in the Walgreens, and for 30 bucks I can get a basic prescription if I need to. When they have a critical mass in a zip code, population health improves, because remember, they're filling all these prescriptions. They have some really interesting proxies for how healthy people are in that zip code. And once it is more likely than not that every one of those stores has one of those, people are more likely to use them. And when people are more likely to use them, the whole zip code gets a little bit healthier. Cool. What's the mechanism? They're pretty sure it is because those people, they're still underutilized. The average person at that little counter is not just churning and churning and churning; they are doing a lot of waiting. They will take people by the hand, especially poor people, especially elderly people, and say, I am going to give you this prescription right now and I am going to walk you back so you can get it. They have better compliance. If you get your prescription in the pharmacy, you are more likely to get it filled and to take it. And because of all the non-infectious diseases that are really driving population health issues, which is mostly hypertension and diabetes, the drugs really do work if you take them. So they're making this great population health impact. And then he said, and we cannot talk about it. It is like, why can you not talk about it? Because it'll get shut down as a human trial once you start talking about it. A new commercial activity is plenty regulated, but not the way you have to regulate new drug trials or new healthcare treatment trials. Once you start saying, hey, by the way, a big benefit of this is it makes people healthier, now you have this whole FDA protocol. Well, if you are trying to experiment with making people healthier, slow down and start spending a lot more money, and maybe we'll even stop you until it is proven. So it is not something they talk about. But there are more and more opportunities, and more and more actual deployments of technology and new business models, that are having that impact. And one of the things to look for is where that latent value is already emerging, but quietly, which is why gatherings like this, I think, are so important. So Leidos is another one of these. Leidos spun out from SAIC, which is a giant defense contractor, and one part of Leidos has a gigantic contract with the Veterans Administration healthcare system. They were moving from rooms full of nurses supervised by doctors taking incoming calls for treatment approvals. So I am a vet and I say, I am feeling bad, and this is all behavioral health. Maybe I am feeling suicidal, maybe I just cannot go to work. I have a problem. I need to see someone. It was nurses and doctors who would take those calls for years and ask 10 questions and then get you an appointment or not, direct you here or direct you there. They saw the opportunity to move to an automated response system, right? An interactive voice response system, an IVR, which is like the precursor to chatbots. You call, press one if you want this, press two if you want that. And if you have good semantic analysis, it can actually make good decisions about you.
But the delivery was backed up. So they'd already given notice to all the nurses and all the doctors, and they didn't have any tool to put in place, and the nurses and doctors were going home. And what they did is they took a very early AI implementation and they just rolled the dice. And by rolling the dice, they saw that the triage, based on both the words people were using and also the tone and intensity of their voices, the triage in terms of who needed immediate care, was measurably better than nurses and doctors doing it. That's interesting. And they're not talking about it. I've heard this from a couple of people at Leidos when they were doing it. They're not talking about it because they were in that situation where they had to improvise, and they were breaking a lot of rules to keep the wheels turning for the VA health system before that IVR system was ready, which it now is, and it is now two generations later. But how interesting that in that moment, really, of crisis, they had a short-term solution that was better than the long-term solution they were waiting for. How do you capture and collect the value of those experiments? That's an innovation discipline. We talk about the number one big best practice for innovation in large organizations being systematically lowering the cost of failure. And I'd love to talk more about that. But once you believe that, and you lower the cost of failure in terms of time, in terms of money, in terms of brand and customer impact, and in terms of impact on the people in your organization who might or might not be part of teams that are likely to fail, but fail in important ways that chart the territory of what is possible, then ask: where is the lowest cost of failure? The lowest cost of failure is in experiments that you've already paid for but not paid attention to. And in most organizations, that's a big set. And having the discipline to look around your organization, and to look around your customers' organizations, and be able to see them and find them and learn from them, document them, that's really important. Yeah. Final thought. How many people here are familiar with the machine learning modality called federated learning?

(29:11)

In some ways, I am glad we are going to talk about that, because that's really powerful. And again, it is something people aren't talking about, but it is being used more and more and more. Anybody have an Android phone? Yeah, Android, an Android phone. Yeah, (a) you are using it already, and (b) it is using you. And we'll talk about that. So these are some of the instances, some of the specific lessons. And then I wanted to talk a little bit, if you are interested, about what some of the key emerging technologies are for insurance companies, like LLMs and ChatGPT. We think computer vision and large-scale image capture, combined with those large language models, really are about to have a pretty big leap forward. And I am sure there are some people in this room who know more about that than I do. Verification engines, oh my gosh: are these images real? I mean, if I am doing insurance somehow based on satellite imaging and sensors, from cars or elsewhere, and I am like, man, you know, you just had an auto accident, you tell me what happened. Okay, now I am going to go to the footage, because there are so many cameras everywhere. Now I am going to look at data from a satellite. Your roof has a hole in it. Let me talk to my satellite. No, it does not. I can see it. We have the ability to do this now, if we trust what we are seeing. And if you've begun seeing some of these deep fakes online, you know how big a challenge that is, but it is not a challenge that we cannot meet. And verification engines, looking at the signatures of what's true and what's not true based on visual images and based on other kinds of interactions between people and organizations, that's something you are not hearing a lot of conversation about. But there's fundamental technology being developed, and in some cases deployed, and we need to talk about that more. By the way, if you've raised children as I have, if you've been a teacher, if you just interact with a lot of people, some of whom are not trustworthy, you know that some people are better at understanding when something's nonsense and when it is not. I mean, there are naive people who believe anything, right? And there are human biases that we know exist that you can play upon. There are ways you can build trust by getting people nervous about other people, so they like you more. And yet most of us know that there are signatures of truth. If you listen carefully, if you have the appropriate level of skepticism, you are probably able to tell, at least at the extremes, when someone is really full of crap. Our society, our politics, would be better if more people had those skills. And if you look at, for example, what's wrong with Facebook, we've all lived through this: beginning about four or five years ago, suddenly it seemed like everybody hated Facebook all at once. The best summary I've heard about what's been going on there is this statement: the problem with Facebook is the people on Facebook. And I think that's true. I mean, we can argue about whether the algorithms are creating more intense experiences, good and bad, but the real problem is that people manipulate and lie and try to sell things and try to serve their own interests, and it just happens, in some ways, more efficiently on Facebook. So how do we address not just the effect but the cause? How do we distinguish between human behavior that ought to create trust and human behavior that feels like it might, but really shouldn't? So you can digitize that.
You can create verification engines about images, about when someone is likely to be telling the truth or not. Somebody here probably knows this from bank fraud, did you know this? That repeating digits are a sign of fraud. Like, you look at the account number that someone gives you for a fake account, and there are certain patterns of digits that we just naturally default to if we are making stuff up. I mean, beautiful to know that, I think. Can we create those kinds of algorithms around human relationships that are worthwhile and socially beneficial? And can we build tools to really do that? Compliance reporting technologies, man, are these becoming more important, because there are more and more rules and regulations, sure, but there's more and more uncertainty. There are more things that companies we work with could be doing that they're not, because the law is unsettled, regulations are unsettled, and the potential risk is enormous relative to the potential gains. The better our compliance reporting technologies are, the more big organizations can do to try to provide value, because just the capturing of intent and activity in a way that's auditable gives them a lot of liability reduction. And even though that sounds a little wonky, that's very, very important for what insurance companies can do. And then these are maybe the takeaways more suited to big companies to think about, but I suppose enterprise-serving companies in this space want to know this too, so they can lean more into the things that the big companies should be wanting. How do you create partnerships to explore new technologies, especially if you are a big, big company? The number one answer is: have a portfolio model for partners. I mean, it is crazy to me when I sit with people who are spending many millions of dollars, for really big companies and high-stakes stuff, and they say, we think we found the vendor. We can only choose one because we have to really inculcate them into our culture and trust them a lot. It is like, what do you mean? No, do not pick one new vendor. Pick seven, give them each a little project, and see who actually does it. See who does it well, see what actually works. And there are plenty of stories about that. Rita Gunther McGrath has a brilliant book called Discovery-Driven Growth. It is already 20 years old. I recommend that highly for the portfolio theory about how to do new things and adopt new technology. And then, ecosystems for task selection. That's really interesting. What happens when your organization is only good at doing one thing, or five things, or a hundred things? It is limited, rather than being able to choose dynamically among a thousand things or 5,000 things. And when you are really good at one thing, it is often not the thing you were built to do. When Amazon, which was making no money selling groceries and books and toys and electronics online 15 years ago, is approached by folks who say, you know what? It is fun to buy little things on your site all day long, but what we really like is that data center model that you've built. Man, you guys can run a really big data center better than we can. Would you do this for us? And AWS emerges and is still the engine of profitability there. Looking at the task selection, and creating a kind of marketplace so that all the different things you do can be exposed to and verified and appreciated by a larger number of observers, will help you identify where your strength really is.
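[Editor's note: the repeated-digits point above lends itself to a small illustration. Below is a minimal, hypothetical sketch in Python of the kind of pattern check being described: it scores how repetitive the digits of an account number are. The weights, thresholds, and function name are illustrative assumptions, not any bank's actual fraud rule; a real system would be calibrated on labeled fraud data.]

```python
from collections import Counter

def repeated_digit_score(account_number: str) -> float:
    """Return a crude 0-1 score of how 'repetitive' an account number's digits are.

    Two simple signals, blended: the longest run of the same digit, and the
    share of the string taken up by its single most common digit. Both the
    weights and the idea of blending are illustrative, not calibrated.
    """
    digits = [c for c in account_number if c.isdigit()]
    if not digits:
        return 0.0

    # Longest consecutive run of the same digit, e.g. "7771234" -> 3
    longest_run, current_run = 1, 1
    for prev, cur in zip(digits, digits[1:]):
        current_run = current_run + 1 if cur == prev else 1
        longest_run = max(longest_run, current_run)

    # Share of the most common digit, e.g. "1112223" -> 3/7
    most_common_share = Counter(digits).most_common(1)[0][1] / len(digits)

    return min(1.0, 0.5 * (longest_run / len(digits)) + 0.5 * most_common_share)

if __name__ == "__main__":
    for acct in ["4837291056", "7777771234", "1212121212"]:
        print(acct, round(repeated_digit_score(acct), 2))
```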
And often you need to be in a bigger ecosystem so you can spend more time doing what you are most good at. So that's a very big menu. I think we are probably halfway done already. Now, nine, ten, more than halfway done. So that's the insurance-specific stuff. This is the best practice in any industry for any large organization: lower the cost of failure. The big organizations that are best at doing innovation over time, that seem to be better at succeeding when they try new stuff, are generally not better at succeeding. They're generally experimenting more frequently, because when they try something new and it does not work, it does not cost them much money. They do it fast, they learn quickly, it does not burn their reputation in the market, and it does not alienate people on staff who trusted the change leaders and got burned, right? Systematically lower the cost of failure. A lot of our research is case studies and toolkits for how to do that. And once you start doing that, once you start paying attention to lowering the cost of failure, I think you do tend to pay more attention to all the work that happens closer to the customer, and to the way that everyone does something different even if they're in the same job on your org chart. Very specific example. In the beginning of the pandemic, we were doing work with Starbucks, 14,000 Starbucks-branded stores in the US, and really pretty dramatically, suddenly, the whole operating environment changes. And what we worked with them on was figuring out how to collect the best practice that emerged dynamically in these uncertain waters among those 14,000. The idea was: you've got 14,000 store managers, right? You've got 900 regional area managers. Every one of them is experimenting, because they do not know what to do. How can we tell who did this one thing best among those 14,000, capture that, and socialize it to everyone else, right? That's the challenge. We start paying attention. Instead of saying, let's get a committee to think about the best new things to do, we say, wait a minute, we have an army of people running stores who are already doing the experimentation. They have no choice but to do the experimentation. How can we pay attention? How can we mobilize a couple of senior people who can create habits and use platforms to identify the best thing that happened in our stores today, and really make that something we can talk about and push out to everybody tomorrow? So the best practice today becomes the standard practice tomorrow, and that's a model that we just call the ratchet, right? Like a ratchet wrench. Every gain gets locked in, and then the next gain is on top of it, and you get better, better, better, better, whatever. I think I will just show two more slides and we'll talk. Oh, matching. Innovation was forever thought of as a creative function. Let's build something new. Let's create something new. That's largely an obsolete model today. Because of internet technology, because of the speed with which we can collaborate and learn from each other, innovation has shifted, in our opinion, from fundamentally a creative function to a matching function. In every large firm, in every big insurance company, and in most small startup firms, you've got a group of people, some of whom have solved a problem over here that someone else over there still has.

(39:50)

We've worked inside firms where someone in New Jersey says, I need to identify a contractor to help us solve this problem, and someone in the same organization in Singapore has just finished writing the check to complete the project. They already did it. They do not know. Whose job is it in the big firm to map all the emerging needs and all the emerging resources? If you are a chief innovation officer and you are not sure where you should start, that's a really, really good place to start. There's a lot of value in that, but it goes further than that. Look at how work works, and look at how risk underwriting works, I think, in a similar way. Let's say you are an Uber driver, and this is why the giraffe is here. The instinct is to say the sick baby. I do not want to say the sick baby. I want to say the giraffe on the lawn. Yeah. So I am driving my Uber. By the way, has anybody ever made a living, even just for a week at a time, driving a car, driving a vehicle, a bus? Yeah, me too, man, in Brooklyn a long time ago. Uber is better than what I had then, because then it was dangerous. No one knew where you were and people paid in cash. I am very attracted to Uber. I think I would like to do it, and I think I will do it when I am a little less busy. So if you are an Uber driver, you have, interestingly, not a special-purpose device. You have your smartphone, and you have a menu of tasks. Do I want to take this call? Yes or no? Do I want to take this call? Yes or no? And yeah, it is a tilted playing field. I'd like to see that platform be less biased and less manipulative, but fair enough. And suddenly my partner in life calls and says, oh my God, do you remember our neighbor down the street who works at the zoo? Do you remember that he said one day he was going to bring home a giraffe, because he could? There's a giraffe in front of our house right now. You have got to come home and see it. Okay, so now I am driving my Uber. I am hoping to make another 120 bucks today, but I am like, come on, a giraffe in front of my house. I want to see it. So I make the decision. I do not ask my boss. I do not file paperwork. I do not plan ahead. I make a decision in that moment. I am going to turn off my app, I am going to turn off the money, I am going to go home. I want to spend two hours with the giraffe in front of my house, and I think it is a story I'll be telling my grandkids. I think it is worth it. It is my matching of the value of that experience versus driving for these two hours, matched against my life, my family, my community. And it is really good that we have a platform like this that can let me do that without the intermediation of people who aren't good at making decisions about my life. And then I'll go back on Sunday, I'll work an extra two hours, or I'll wake up earlier tomorrow. My choice, my life, matched against the relative value. Think about what happens if you are driving a city bus. You are a city bus driver. There is nobody on your route that day. Nobody wants to take the bus that day. You do not go home to see the giraffe. You drive that empty bus up and down all day. That's a non-matching example of what could be more of a matching function.
So this is happening. This idea of innovation as a matching function is happening both at the very high level, where we are looking at how to map emerging resources to emerging needs, but it is also happening at a very direct level as the nature of work is changing. Management itself is changing. More management is happening where the manager is not managing in toward a team, but managing out toward the edge of the organization, because most of our organizations are able to create and deliver more value that is still stuck in the firm. There's more upside in helping the value get out, and educating people at the edge of the firm and in the user base, than there is in tuning up more efficiency in the teams that we are working on. Now, it is not absolute, it is not one or the other, but as you see, and I wonder if your firms have this as well, as you see more and more management be exterior in its focus, that inside management function is being replaced more and more by self-management and automation, like the Uber app. The Uber app is a really good example. The special-purpose devices, interesting that they're not cell phones, that inside an Amazon warehouse you would have strapped to your arm, that's a worse example. It is the same basic idea, but it is mostly one-directional. And we are doing a little bit of work with them. I think they want to change, they want to get smarter, and I think they will. They're smart people. So that's a lot. What do you think? Questions, comments, thoughts, requests for more or less? I have got a lot more slides, but we have those for people who wish to come back. Good lord, gluttons for punishment. If you want to come back for one or the other of the next two, we can go deeper on any of these, but anything that I've mentioned of particular interest, any thoughts, any rebuttals? Yes.

Audience Member 1 (44:44):

You Mentioned (Speaker Inaudible)

Peter Temes (44:46):

Yes, thank you very much. So six years ago, we were doing work with PNC Bank, and their head of anti-money laundering said, hey, here's a research assignment for you. I've been hearing about this new federated learning thing as an anti-money laundering tool. What is it? How does it work? What's the use case? And we really dug in. We got to sit with the people at Google. It was developed at Google, by the engineering team that built it and, I believe, still runs it, led by a guy named Robert Ragnar. And here's how it works. Federated learning is a way to take many, many, many pools of data, relatively small pools, but many, many of them. So remember, Google owns Android, right? There are something like, is it 4 billion Android devices in the world? Most of those devices have onboard data on your phone. It is usually a mix of very, very personal data and data that you might want to share. What federated learning does is it says: for all these pools of data, man, that data has really interesting significance for the models we have for how we operate, for what we do centrally with our core computing. Wouldn't it be great if we could see all of it? Now, we cannot go into your phone without your permission and take your data off and play with it. We also cannot go into your phone without your permission and see correlations and then write down your name and say, this is a person who's one of these and one of those. But the privacy theory behind federated learning is: we can go into your phone without your permission and structure your data. We can analyze it. We can say, someone who does this also seems to do that, and we cannot write down your name. We cannot attribute any of that to you. We cannot copy the data. But the model that we are trying to improve improves based on what we learn on your data. Here are the specifics with anti-money laundering. If we are a big bank, we have models for who might be a money launderer, right? Based on a thousand people in jail. We know who they are and we have some data about them. We say, here are 800 characteristics of a money launderer. And when someone applies for an account at my bank, we are going to try to take that model and say, are you a money launderer? Yes or no? Right? But now with federated learning, I can take that model and I can send it out to, let's say, 50 million phones or 500 million phones, and I can say, okay, who fits this model in those 50 million or 500 million? Well, here are 10,000 more likely money launderers I never even knew existed. I wish I could write down their names and never do business with them, but I cannot. But what I can do is say: these new money launderers, who I now know exist, it is not just 800 things they do, it is 8,000 things they do in common that really make them money launderers. I can now take that list of 8,000 things they do. I cannot block anyone, but I can now say my model is 10 times better, because I've learned something by structuring that data. And now that model can be turned toward legitimately obtained, personally identified data, and I can keep more money launderers out. That's how federated learning works. It can also rate risk. It can also approve people for loans. Anything that involves a model that can be dramatically improved by looking into a lot of small devices is something that federated learning can really do well. Now, we started having this conversation with Google and with the banks and other insurance companies six years ago.
It has surprised me that there is so little said about federated learning, but I think it is because people are using it and they do not want to talk about it, because it really works and it is very scary to laypeople, and maybe it ought to be scary to sophisticated people. Federated learning can be a tool. Intel has a chip, a federated learning chip they've built focused on healthcare, because your genomic data is a pool of data much like that. What if I could send an agent into everyone's genome and just look at what connects with what, right? Is it 3 billion bits of data in the average strand of DNA? There's an actual answer to how many bits, but yeah, so I am not going to write down who you are and what diseases you have and whether my daughter should marry you or not, but I am going to just understand the correlations between this and this and this in human genetics. And then my model is smarter and I can help more people. Best case. So that's federated learning. We know it is deployed more in China than elsewhere. We know that it has a natural money-saving application in anti-money laundering. We know that healthcare is a big play. Very recently, Intel has started doing more thought leadership about it, because they want to make it a strategic part of reviving their chip business. And I believe it is one of those under-discussed new technology approaches that we should probably all be paying more attention to.
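[Editor's note: a minimal sketch of the idea described above may help. In federated learning as the speaker frames it, a central model is sent to many devices, each device improves it on data that never leaves the device, and only the updated model parameters come back to be averaged. The simulated data, the logistic-regression "model," and the plain averaging scheme below are illustrative assumptions for a toy run, not Google's production system or any bank's AML model.]

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_X, local_y, lr=0.1, epochs=5):
    """One device: improve the shared model on data that never leaves the device.

    The 'model' here is just logistic-regression weights; only the updated
    weights (never local_X or local_y) are returned to the server.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-local_X @ w))        # sigmoid predictions
        grad = local_X.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_weights, devices):
    """Server: average the weights returned by all participating devices."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

# Simulate 50 devices, each holding a small private pool of labeled data
n_features = 8
true_w = rng.normal(size=n_features)
devices = []
for _ in range(50):
    X = rng.normal(size=(20, n_features))
    y = (X @ true_w + rng.normal(scale=0.1, size=20) > 0).astype(float)
    devices.append((X, y))

w = np.zeros(n_features)
for _ in range(10):                 # ten federated rounds
    w = federated_round(w, devices)

print("correlation with true model:", np.corrcoef(w, true_w)[0, 1])
```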

Audience Member 1 (49:50):

What do you think are the risks associated with that?

Peter Temes (49:53):

Yeah, I think there are three or four categories of risk associated with federated learning. One is that someone will write my name down. I mean, someone will look at all my data on my phone and say, I never want to give you a loan, I do not want you living in my neighborhood, I do not want to sell you insurance, and I know who you are. And I broke the rule and found some clever way to justify it. I mean, if the only thing stopping misuse of data is a policy or regulation, it is probably eventually going to be misused. Is the net gain greater than the potential net loss? I mean, for me, I have found that privacy, which I cherish, is usually priced too high. And when I am traveling, I'll show my passport. I was one of the first people I knew to get an American passport that had that little indicator on it that there was a microchip in it. You know what I am talking about? I mean, all of a sudden, about 15 years ago, two passports ago for me, when you got an American passport, there was an RFID chip, like, sewn into it. My daughter had recently been in India. She was a high school student on a study trip. She spent three months in a city called Varanasi. Some of you might know it, also known as Banaras. And while she was there, the whole city erupted in what they call intercommunal violence, right? Muslim versus Hindu violence. The army shut down the whole city, and there's our 17-year-old daughter. We kind of freaked out a little. She was traveling with this very, very progressive, left-leaning travel group, which we liked. We called their offices, and they said, yeah, we have news. Here's what we are doing. I said, great, what does the embassy say? They said, well, we prefer not to talk with the embassy. It is like, what? You prefer not to talk to the US Embassy when we have got a dozen American teenagers in the middle of what could become a war zone? And politically, they're like, yeah, the embassy, they're all CIA. No, only some. Only some of them are CIA. But when my daughter is there, they're my CIA, right? I called the embassy, the embassy in New Delhi, and they were brilliant. They were like, okay, hold on. How do you spell it? Who else is there? What do you know? We have no record of this. Every eight hours you are going to get a call. I mean, they moved this to a yellow status. Every time a shift changed in the embassy, someone took the football and sent us a note or called us, and it was great. And for three days it was really frightening. Less privacy, better. Do I want that chip? When I think about traveling around the world, do I want to be found as an American? I mean, if someone's out to kill Americans, no. Right? That's when I take my little lead envelope and I put the passport in it so it does not transmit, my Faraday cage, my portable Faraday cage. But more often than not, I am vastly better served by that lack of privacy. Mentioning the intelligence community: do you know there used to be a practice, until the early eighties, that if any active US intelligence agent was revealed to have homosexual tendencies, they were fired? Why? Because the theory was they are blackmailable, because an enemy, knowing that secret, can get them to do almost anything, because the secret is so terrible and embarrassing. Today, we live in a world where that secret is likely not to be a secret, nor terrible, nor embarrassing. Less privacy, and I think we are better served.
And there are plenty of openly gay service people everywhere, and I think most of us would probably agree that's a much better world to live in. The difference between privacy and secrecy is a very interesting difference, and some people make fortunes based on the dark side of privacy. I think we have to figure out the right ways to use technology so that we can dial it in and out. Have any of you come across the Creative Commons website? Really? Yeah. So the dude who's now running the ethics center at the law school at Harvard, very, very interesting guy, he was special master during the Microsoft antitrust case years ago. He started this, and it is really interesting. If you've ever published anything, or if you are a musician, I mean, it is literally true, that phrase, you cannot even get arrested, right?

(54:04)

It is like, yeah, I have got Bob Dylan playing down the street. Everyone wants to buy tickets. Me over here with my guitar, I cannot even get arrested. Nobody's paying any attention to me whatsoever. So when you publish something, when you publish music, when you publish the written word, typically the phrase we all see is all rights reserved. But a guy like me as a writer, I mean, I've written books, I've written articles, I do not want to protect it. I want it to go out into the world. I want people to read it. I cannot get arrested as a writer. I do not want to reserve all rights and make the cost of someone publishing or sharing what I am doing higher. I want to give it away and push it out. So what they did at Creative Commons is they kind of lawyered up about a dozen different settings, and they said, here, if you have any creative work, put it up on our site, click this button. You can have some rights reserved. You can say, this work is available for anyone if you do not change it. You can say, this work is available for anyone, but if you do change it, you have to make the list of changes you make transparent and take my name off it. I mean, it is, again, all these different options that are super interesting. Some rights reserved. We need policy. We need privacy policy options that are some rights reserved, some privacy reserved, that can let us be more intelligent about what we share and what we do not. Why does that matter in the insurance industry? It matters a hundred percent in the insurance industry. I think we all can see that. And the digital tools we are all in the business now of trying to develop and deploy are the ways in which we'll be able to say, I am willing to change my behavior in exchange for this risk underwriting. My firm is willing to operate differently. Anybody heard of the Hartford Steam Boiler company? Yeah. So they pioneered the greatest insurance business model innovation, beginning in the late 1800s. They were insuring steam boilers when steam boilers were literally the engines of the industrial revolution, and they blew up a lot, right? New technology, very dangerous, under high pressure. And what they would do is they would go in and assess, and some of this is the trick, because we use it a lot but we do not talk about it much: they assess the current state of risk in a place that they want to underwrite, and then they say, we are willing to underwrite this at a lower premium than that current state of risk warrants, but you have to let us send someone in and make you safer. So if you have a risk of nine, and their initial engineering assessment says we can get that down to a four, they'll underwrite you at a seven. Best deal you can get, you are likely to take it, and they're going to hold you at that rate of seven while you drop to four for as long as they can. Win, win, win. And not just for the buyer of insurance, not just for the seller of insurance, but for the rest of us, who live in a world where the boilers aren't blowing up as much, right? That's where we want to be. Are we just about done here? Oh, one minute. Yeah. Meaning we are one minute over. No extra charge for one minute. Any other thoughts or comments? Alright, we are back here soon, right? If you really want more of this, I do not know, 9:45? We are going to probably just go deeper on some of these things. If you have something in mind, if you want to come and talk to the group, that's available to you too. We've got lots more content if you are interested.
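[Editor's note: the Hartford Steam Boiler example above is a worked piece of pricing arithmetic, so a tiny sketch may help. The function and the simple weighting below are illustrative assumptions drawn from the speaker's nine/seven/four example, not HSB's actual underwriting formula.]

```python
def quote_with_engineering(current_risk: float, achievable_risk: float) -> float:
    """Price between the risk you see today and the risk you believe engineering can reach.

    Using the speaker's example: current risk 9, achievable risk 4, quote at about 7.
    The insurer keeps margin while the buyer still sees a better deal than today's risk,
    and the underlying risk actually falls. The 0.6 weighting is an illustrative assumption.
    """
    return achievable_risk + 0.6 * (current_risk - achievable_risk)

print(quote_with_engineering(9, 4))  # -> 7.0
```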
Thank you very much guys.