Aflac sells trust: Keith Farley, SVP, individual voluntary benefits


Digital Insurance spoke with Keith Farley, senior vice president, individual voluntary benefits at Aflac.

Farley discussed his philosophy around human-centered artificial intelligence and how the supplemental insurer is focused on customers when it comes to innovation. 

The conversation has been lightly edited for clarity.

Could you explain your philosophy around human-centered AI?


In previous roles, I was working on our innovation team and in robotics, basically figuring out all the ways to bring technology in. I was a technologist. My role was to leverage the technology to create efficiencies and a better customer experience. Now, being more on the business side, I'm all about customer experience. So I get to see where those two intersect, because there's a core me that wants to automate everything and make everything AI and simple and fast. But then there's the business side of me that understands the customer experience. Especially in the type of work that we do, when people come to us, it's one of their worst days. They've been injured in an accident, or they have an illness, or a loved one does, or maybe they've lost a loved one. So they don't always want high-tech solutions. Sometimes they want that human touch. Sitting where I do now, coming from the technology side but working on the business side, I just find it very interesting to find that balance for the company.

Walk me through how you balance the technology and the business side.

I think you really need to know as a brand that just because you can do something doesn't mean you should do something. So you really need to know: what are the moments that matter, what counts, and what do you want to make sure you keep in control of with a human experience? I like to remind myself that the first word in artificial intelligence is artificial. And sometimes you don't want something artificial. You want something real, and maybe the A you're looking for is authentic. So we try to balance those moments of authenticity, where you want a human touch, where you want to talk to someone on a phone or have someone walk you through something, against those other moments where it's just transactional.

I'll give you an example of that. If you're changing an address because you've moved to a new location, that's transactional. I just need you to take my new address, or I need to update my credit card payment, or I want to add a dependent because we've had a child, or a child has turned 26 and we need to remove them. These are, for the most part, transactional things. Whereas when you have a stage three or stage four cancer diagnosis, and you're scared and you're worried and you want to know what coverage you have, that may not be the time for a chatbot or an AI. That's when you want to talk to a real person in a real place and have a conversation. So what we try to do is automate the simple so we can service the complex. If we get rid of all of those simple, transactional requests by leveraging mobile apps and leveraging AI with different automation, then it frees up our time to spend talking to someone who has had a heart attack or a stroke, or has been diagnosed with cancer, or who really needs to talk through what their coverages are.

How do you handle deploying these new technologies to customers?

You need to be upfront and let them know when they're dealing with something AI. So don't try to pass off a chatbot as a human experience. Let them know, 'Hey, we have a chatbot that can service you and maybe help you with your simple requests and save you time so you don't have to wait in this line.' But let people know exactly what they're getting. 

I think for us, another thing is making sure that if we have AI in the decision process, we've trained the AI so it knows the rules, what is black and white, with no gray area. But what we have also done is that if an AI is making a claims decision, the AI can only approve claims; we do not allow it to deny claims. So if it makes a mistake, it will only be to the benefit of the customer and the detriment of the company, not the other way around. If an AI that is looking at one of our claims (we have a project called code-based processing) determines that we're going to deny the claim, it will actually kick it over to a human to review. It will say the AI is suggesting we deny this claim based on these criteria, but a human will have to look at it, process that claim and make the final decision. Whereas if the AI says, 'I've looked at all the data and we should pay this claim,' we let that go through.

I think it's not only letting customers know when they're dealing with an AI and when they're not, being upfront, but also having your own mindset about what you're going to let the AI do. What we've said is: you can only help people, you can do no harm. And if it's something that would be a denial, even if it might be a correct denial, we're still going to send it through a human to make that final decision.
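The approve-only routing Farley describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern (automated approvals pass through; any suggested denial is queued for a human), not Aflac's actual code-based processing system; the function and label names are invented for this example.

```python
# Illustrative sketch of an "approve-only" AI claims policy.
# All names are hypothetical; only the routing rule comes from the interview.

def route_claim(claim_id: str, ai_decision: str) -> str:
    """The AI may auto-approve a claim, but a suggested denial
    is never final: it is routed to a human reviewer instead."""
    if ai_decision == "approve":
        return f"claim {claim_id}: auto-approved"
    # Any non-approval outcome (deny, uncertain, etc.) goes to a person.
    return f"claim {claim_id}: routed to human review"

print(route_claim("A-100", "approve"))  # auto-approved
print(route_claim("A-101", "deny"))     # human review
```

The asymmetry is the point: an AI mistake can only favor the customer, never harm them.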

How do you see AI being used in fraud detection?

That's a great place where we see AI and massive data models, where a human could never catch a pattern because you could never look at all that data, but you can put it through a machine and the machine can surface the outliers instantly and say, maybe you need to look at these couple of areas, because something makes them an outlier. So that is some success we've seen from our fraud detection. Using AI there, it's just the volume of data a machine can ingest compared to what it would take a human to do the same; it can instantly see a pattern and draw our attention to it. And then we can send the human team in to double-check and say, 'Hey, is there something going on here?' Because from our standpoint, at any company, fraud raises the prices of everything for everyone. So we want to keep it out, not only because it's wrong and we don't want it there, but also for economic reasons. Fraud is expensive, and I want to pay claims to people that rightfully need the money and have coverage, not to people that are trying to beat the system. So AI has been a huge help there.
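The outlier-flagging idea above can be illustrated with a very simple statistical sketch: flag any claim amount far from the mean, then hand those indices to a human team. This is a toy z-score example under assumed thresholds, not the model Aflac uses; real fraud detection would combine many features, not just amounts.

```python
import statistics

def flag_outliers(amounts, z_threshold=3.0):
    """Return indices of claim amounts more than z_threshold
    standard deviations from the mean -- candidates for human review.
    Threshold and method are illustrative assumptions."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical; nothing stands out
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > z_threshold]

# Fifty ordinary claims and one extreme one: only the extreme is flagged.
claims = [100] * 50 + [10_000]
print(flag_outliers(claims))  # [50]
```

The machine narrows millions of records down to a short list; humans make the actual fraud determination, mirroring the division of labor described in the interview.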

If fraudsters are going to use these tools to send more attacks, we need to use the tools to ward off more of those attacks. It is sort of interesting that it's the same tools being used on either end. They're using massive data to look at where they could take advantage; we're using massive data to see where they might try to take advantage. We also have our own threat actor team internally that basically tries every day to break into the company and commit fraud against it, to see if they can be successful so we can close those doors. Many large organizations have those red teams, but as our red team starts using AI, it can expand the impact.

Are there other areas where AI can bring innovation?

I think what we start looking at with AI is predictive modeling. A straightforward example: if you've had a certain kind of accident or a certain kind of illness, then based on the data from the 69 years we have been in existence, we can tell you with pretty good confidence exactly what kind of treatments you're going to have and how many physical therapy sessions you're going to need. We know these things because we've got claims data going back to 1955 telling us that. And we can start predicting on day one. When you break your leg, we can say, 'Hey, over the next 12 to 18 months, here are all the procedures you're going to have.' Can we just pre-process that for you and say, 'Look, we already know, before you've even gotten your full diagnosis and treatment plan, based on the volume of people that AI has helped us pull together, that this is likely the treatment you're going to have'? So then we start thinking, can we start pre-processing those claims? We can't pre-pay the claims because you haven't experienced the loss yet, but we could predict that loss and be able to say, 'Let's get everything ready and make it simple for you, because there's a high likelihood that you're going to come back to us in 60 days with physical therapy, and we already knew that. We already have it pre-loaded and we were just waiting for you to confirm.'

The other thing we look at is: is there a time when people are comfortable with us just having access to their medical records, to know what happened to them and pay them automatically? Now, we're not fully there yet, and there are a lot of questions and regulation around it. But I would allow McDonald's to track me on the way to their store so that when I get there, the cheeseburger is ready. The reality is, Uber is tracking me, Google is tracking me, Delta Air Lines, all these folks are tracking me, because I want to know where my bag is in relation to me with Delta, for example. And with Uber, I want them to pick me up where I am, not where I say I'm going to be or where I was. So I think that as we get more comfortable with sharing that kind of data, there are a lot of things that can make insurance a lot easier, including recommending what you might need based on people that do things like you do, have similar hobbies, or are a similar age in a similar region. It's interesting to think about where it could go. What I always try to do is make sure we apply the brakes as necessary and say, 'Look, you don't need to be there first, right? You need to be ready to go there. But you don't need to be there first, because whoever gets there first is potentially making mistakes along the way.'

That's something that we've said: we can do nothing that would have a negative impact on our customers. So we can't get so entwined with technology that we would do something that negatively impacts the customer. And that's really my role, to say, 'I love it too, and I want to do it just as much as the next person. Let's just make sure it's done in a way that is fair to the customer, respects everybody's privacy and makes it easier for them, not just easier for us.'

What kind of conversations around data privacy are happening at Aflac?

I think there's an expectation we all have, especially when dealing with a large organization, that the organization is making the proper investments to secure your data. With Aflac, we have medical information on people and financial information through credit cards, bank accounts, things like that. So we feel there's a very high bar to keep, because we're a company that sells trust. We don't have a physical product; you can't drink it, you can't sit in it and drive it somewhere. It's just trust that we sell, so I'd say it's paramount for us to maintain that trust, and data protection is one of the big pieces of that.

Any other technology that Aflac is focusing on?

We've had a lot of recent success with our mobile app. It allows people to, while they're still at the doctor's office, take a picture of the paperwork, send it to us directly through the app and file their claim in seconds. 

We look to the customer for feedback: what do you want, and when are you ready for the next thing? So we do a lot of focus groups to understand what customers are looking for. We often compare ourselves to other companies, not insurance companies: how would Amazon do it? How would FedEx do it? How would Marriott Hotels do it? And if that's how they would do it, then what's the expectation of insurance for us?

A lot of it is determined by customer priority, but I would say our mobile app has been a huge success in terms of the number of people digitally filing claims with us. It's a lot easier to automate on the back end if we receive it digitally, and it's much faster and easier for customers than mailing or faxing us. We still get those today, but we're trying to encourage more and more people to use the mobile channel.

My Special Aflac Duck

We have a product that we've worked with a company to create called My Special Aflac Duck. It's an animatronic robotic companion given to children who are fighting cancer, so they have a companion through their diagnosis and all of their procedures. The Aflac Duck reacts to different inputs: if you start to pet the duck on its back, it starts making a purring sound. Ducks don't really purr, but this one kind of does; it responds to that action. And it has these emojis, so that if a child is asked by a doctor or a nurse how they're feeling, and maybe the child cannot put into words what's going on, they can use the duck, and the duck will act out that emotion, which could be happy, sad, nauseous, silly, all of those different things. It gives the child a companion. It has a port on it so that if a child is receiving chemo through a port, the duck can go with them, and the child can use the port on the duck so they're not alone at any part of their journey.