Manulife's chief AI officer on responsible AI, part one


Takeaways

  • Developing responsible AI guidelines 
  • AI adoption resources
  • Safety, organizational values and empowerment

Jodie Wallis, the first global chief AI officer for Manulife, spoke with Digital Insurance about responsible AI and how the insurer is deploying the technology. Wallis was previously the global chief analytics officer. Manulife recently released its Responsible AI Principles.


Manulife was also ranked in the top ten for AI implementation by the Evident Insurance AI Index.

This conversation has been lightly edited for clarity. 

Editor's note: Read on for part two of this conversation.

Can you share about this new position?

It's not so much a new position as a new title that reflects the position that I've been in for the last little while. Over the last few years, everything that the team has been doing has been AI related, either machine learning or Gen AI. And so we felt the time was right to reflect that in the title. And so my direct reports will also have the same changes to their titles. Rather than being vice presidents of advanced analytics, they will be VPs and heads of AI for their businesses. It's more of a reflection of the work we're doing than it is a change in the work we're doing.

What has changed related to technology during your time at Manulife?

If you think of 2016 to 2023, I describe that as the first two phases of AI. Between 2016 and 2020, at least, we were experimenting. We were demonstrating the value that machine learning models could bring to our existing operations, the kind of incremental improvements we could make with machine learning over and above traditional statistical techniques. We really got the businesses and the functional leaders on board over that period of time, and then kind of said, 'Okay, now it's time to scale.' That's when we made some major investments in our data and AI platforms. So between 2021 and 2022 we completely revamped our platforms. We moved to a modern cloud stack based on Microsoft Azure. We migrated our enterprise data lake over to the cloud environment we created, and we created globally consistent capabilities for our AI teams.

Phase two was taking what we had learned in the first phase and scaling those machine learning models. And then Gen AI came in. It was introduced in late 2022, but we'll call it 2023 by the time we figured out what we wanted to do. From 2023 till now is the Gen AI phase, or the third phase. And so we have continued to build machine learning models where it makes sense for our business partners. But we've also introduced a significant set of Gen AI use cases. They're all called AI, but it's a completely different class of solutions and class of use cases. And we kind of joke that between 2023 and now we've gone through the same set of cycles as we went through from 2016 to 2023.

Machine learning was really about incremental value in unique cases where prediction capabilities could make a difference for the business. With Gen AI, it's a little bit different. It's more about where we can leverage its core capabilities of search, summarization and content generation to do things at scale. It's a different class of solutions. So what do I mean by at scale? We should be able to build something once in one part of the organization and then scale it to all other parts of the organization. We have over 100,000 distribution partners worldwide. So we started with sales enablement in one market, Singapore, where we use Gen AI to, first of all, understand everything there was to understand about the customer, to leverage the leads that our machine learning models generated, and also to look across all customers with similar profiles to understand their situation, to create a set of conversation starters or talking points for the agents to engage with the customer. That's a very relevant kind of starting point, rather than a generic campaign or even a personalized campaign, which is still not one-to-one. We use it to generate those talking points, to create emails or other types of communications, and also to give the agents the ability to understand Manulife's products and processes more deeply, so that they can service the customers more quickly.

That same kind of distribution and sales enablement solution is now live in seven different markets across different product lines. So that's an example of scale. 

Another really great example is the work we're doing in the investment research space. Things in the news happen very quickly. Tariffs, for example, have been something that we've been dealing with all of 2025, and every time a new tariff is announced, or a change is made, or the U.S. president says something, it changes the outlook of our portfolios. To continually redo that work every time there is a new statement would be essentially prohibitive. So we're able to use what we built, an investment research analyst based on Gen AI, to say, 'Here are the latest announcements; run them through all of our portfolios and help us understand what the impacts are to us and to our customers.'

So that's another example, it's scale, but in a different way. It's the ability to ask and answer the same question with an underlying change in the political landscape or the financial landscape, over and over again without redoing the work every time.

What was the process to develop the AI guidelines?

Responsible AI has been something that my team and I have cared about and have paid attention to since the beginning of the AI journey. But a couple of years ago, we said we'd really like to share some guidelines with everybody. We'd like to share them with our customers. We'd like to share them with our teams, with our employees. We'd like to share them with our shareholders. And so we took some of the work that we had been doing more internally, and said, you know, which of these are really important to share? And so we came up with a preliminary list, and then we set about engaging with different stakeholders around the organization to get their perspectives. So, for example, our chief people officer, to get her perspective in terms of impact on colleagues and the inclusion of colleagues. We talked to our customer-facing teams to understand what their customers were hoping to see from us in terms of responsible AI. So we went through this process of socialization before we landed on the six that we published, and we never intended them to be final. We really feel they will evolve. I mean, we already made one change to one of them to reflect what was happening in the environment. We really do feel like they will change and they will evolve over time. And that is part of our job, is to say, 'Are these still right? Have we missed anything?' I don't think they'll dramatically shift to something completely different, but they will evolve over time.

Can you speak to some of the principles?

First and foremost we think about safety: safety of our customers, safety of our colleagues, safety of the organization. And that's really achieved through sound delivery processes and sound governance. The second thing that stands out to me is really doing AI in a way that is consistent with our organization's values. So we're doing it consistent with our values around our Code of Business Ethics, but also consistent with our values around sustainability. So for example, it's not just the biggest and latest model that we're deploying; it's what is the most efficient model that meets the accuracy requirements of a use case, because then we know we are being consistent with how we think about compute resources and power consumption and things like that.

The other one is really about empowerment. So one of the principles is about making sure that everybody in our organization has the opportunity to work with AI in a way that's meaningful to them. So that means we need to give them the training resources, but not just the training resources, we really need to give them the change management and adoption resources so they can understand how to incorporate AI into their day to day. 
