7 reasons why artificial intelligence needs people

As we reflect on the definition of “work” in the business world, it’s clear that much of today’s work is the exception processing of tasks we have not yet been able to fully automate. And with artificial intelligence changing the way we build software, we can now tackle this last mile of enterprise automation, the part we have been unable to solve over the past thirty years.

As AI projects roll out over the next few years, we will need to rethink the definition of the “work” that people will do. And in the post-AI era, the future of work will become one of the largest agenda items for policymakers, corporate executives and social economists.

Despite the strong and largely negative narrative around the impact on jobs, the bulk of the effect of automating work through AI will be a “displacement” of work, not a “replacement” of it. It’s easy to see how the abacus-to-calculator-to-Excel progression created entirely new work around financial planning, reporting and enterprise performance management.

Similarly, AI will accelerate the future of work, and the resulting displacement of jobs will extend a transition already under way rather than start an entirely new discussion. As some work gets automated, other jobs will be created, in particular ones that require creativity, compassion and generalized thinking.

A NAO humanoid robot, developed by Softbank Corp. Photographer: Krisztian Bocsi/Bloomberg

Shifting away from the larger impact of AI on jobs and zooming in on how AI gets operationalized, it’s clear that people will be needed in many different ways to make AI work. From my observations across dozens of AI rollouts, here are seven areas where human knowledge and expertise will be a critical requirement in AI projects.

1. Being an ethical compass

First and foremost, selecting the right AI applications is one of the most important decisions people can make. Are we using AI for the right reasons: to help people with special needs by enhancing eyesight, to provide translation in hearing aids for better communication across cultures, to increase diversity in the hiring process? Or are we using it for the wrong applications, from influencing elections to guiding the criminal justice system? People provide the ethical compass needed to determine AI’s use cases.

2. Bringing context

We need people in the loop to orient AI toward the right goals. This already happens with supervised learning, where people label the datasets before the algorithms are run. But even in unsupervised or reinforcement learning, people need to ensure the algorithm is driving toward results that matter for the business, for instance in pharmacovigilance, when prioritizing between life-threatening and benign side effects of a medication.
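As a small, hypothetical illustration of that labeling step (the reports and severity labels below are invented), domain experts tag adverse-event reports by severity so the supervision signal reflects the priority that actually matters clinically:

```python
# Hypothetical labeling step for a pharmacovigilance triage model.
# Reports and severity labels are invented for illustration; in practice,
# trained reviewers assign these labels before any model is fit.
labeled_reports = [
    {"report": "patient reports mild headache after dose",      "label": "benign"},
    {"report": "anaphylactic reaction within minutes of dose",  "label": "life_threatening"},
    {"report": "temporary drowsiness the following morning",    "label": "benign"},
    {"report": "severe arrhythmia requiring hospitalization",   "label": "life_threatening"},
]

# The labeled set becomes the supervision signal: the model is trained to
# surface life-threatening reports first, which is the goal that matters.
labels = [r["label"] for r in labeled_reports]
print(f"{labels.count('life_threatening')} of {len(labels)} reports flagged for urgent review")
```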

In addition, we need people to contextualize AI’s results. For example, an AI algorithm predicting failure from sensors on aircraft engine parts will not know to interpret the data differently if the aircraft is flying over the Sahara versus the North Pole. People must feed this context into the algorithm; otherwise predictions can be off. It is more powerful to combine a moderately strong AI algorithm with human domain expertise than to use the most powerful deep learning model without any such context.
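To make that concrete, here is a minimal, hypothetical sketch (using scikit-learn, with invented sensor values and an invented environment encoding) of how expert-supplied context can be fed to the model as an input alongside raw sensor readings rather than left for the algorithm to guess:

```python
# Hypothetical sketch: adding expert-supplied context (operating environment)
# as a feature next to raw sensor readings. Data and encodings are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [vibration, exhaust_temp_C, environment], where environment is
# encoded by a domain expert: 0 = temperate, 1 = desert, 2 = polar
X = np.array([
    [0.20, 610, 0],
    [0.35, 650, 1],
    [0.33, 648, 0],
    [0.50, 700, 1],
    [0.22, 615, 2],
    [0.48, 695, 2],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = part failed within the next N flight hours

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# The same sensor readings can now score differently over the Sahara vs. the North Pole.
reading = [0.34, 649]
print(model.predict([reading + [1]]))  # desert context
print(model.predict([reading + [2]]))  # polar context
```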

3. Providing governance

In a digital workforce, planning should take into account the inputs, outputs and various exchanges involved in every process. We cannot just toss a robot into the mix and let it do its own thing. For instance, a minor change to a web form can throw off robotic process automation working elsewhere.

Mapping out how machines and other systems will work together can prevent potential hiccups and obstacles. When managers only have to oversee people, they can easily see when an employee clocks in, clocks out, or does not show up for work at all. Since AI is virtual, it is harder to tell whether a robot is working. For instance, if a random password change blocks a bot from logging in, not only does the work stop, but we may not notice for hours or even weeks. In a hybrid workforce, we should be able to see easily whether and how AI is working, just as we can with human employees. People drive governance.
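As one illustration of what that governance can look like in practice, here is a small, hypothetical monitoring sketch (the bot names, timestamps and 15-minute threshold are invented) that flags a digital worker as stalled when it stops reporting heartbeats, for example because a changed password blocks its login:

```python
# Hypothetical bot-health check: flag digital workers that have gone silent.
# Bot names, timestamps and the stall threshold are invented for illustration.
from datetime import datetime, timedelta, timezone

STALL_THRESHOLD = timedelta(minutes=15)

# Last heartbeat each bot wrote to a shared log or queue.
last_heartbeat = {
    "invoice-entry-bot": datetime.now(timezone.utc) - timedelta(minutes=3),
    "claims-intake-bot": datetime.now(timezone.utc) - timedelta(hours=6),  # login blocked
}

def stalled_bots(heartbeats, now=None):
    """Return bots whose last heartbeat is older than the stall threshold."""
    now = now or datetime.now(timezone.utc)
    return [name for name, seen in heartbeats.items() if now - seen > STALL_THRESHOLD]

for bot in stalled_bots(last_heartbeat):
    # In a real rollout this would page an owner, not just print.
    print(f"ALERT: {bot} has not reported progress; check credentials and upstream forms.")
```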

4. Handling complexity

The new man-meets-machine dynamic will make current jobs easier to do, but people’s focus will shift to higher-value and more complex work. With AI taking over some tasks, employees are free to look at bigger issues and concerns. Take banking, for example. Chatbots in the world of finance can now handle routine issues such as payment status, allowing human agents to dedicate their time to more complicated customer cases, such as identifying the root cause of payment delays. This evolution toward more complex work is not new; when spreadsheets were invented, the financial analyst’s role went from reporting to planning.

5. Preventing bias

As mentioned in another column, the two big reasons for AI bias are a lack of diversity in data samples and rushed or incomplete training of algorithms. In both cases, people with domain knowledge are key. Industry or process experts can help think through potential biases, train the models accordingly, and govern the machines to keep them from falling out of line. Diversity in the teams working with AI can also address training bias. When only a select few work on a system, it becomes skewed toward the thinking of a small group of individuals. Bringing in a team with different skills and approaches leads to a more holistic, ethical design and uncovers new angles.
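One simple place for domain experts to start is an audit of how well each group is represented in the training sample. The sketch below is a hypothetical version of such a check, with invented group labels and an arbitrary 20% threshold that a real team would set with its own experts:

```python
# Hypothetical representation audit for a training set. Group labels and the
# 20% threshold are invented; a real audit would be defined with domain experts.
from collections import Counter

training_rows = [
    {"applicant_region": "urban"}, {"applicant_region": "urban"},
    {"applicant_region": "urban"}, {"applicant_region": "suburban"},
    {"applicant_region": "suburban"}, {"applicant_region": "rural"},
]

MIN_SHARE = 0.20  # flag any group below 20% of the sample

counts = Counter(row["applicant_region"] for row in training_rows)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    if share < MIN_SHARE:
        print(f"WARNING: '{group}' is only {share:.0%} of the sample; review before training.")
```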

6. Managing change

Managing this next-gen, hybrid workforce is very different from managing people alone. We have to think about how we handle the change, make sure robots are holding up their end, and see that employees have the right skills to work with their new AI coworkers. Putting seasoned employees alongside machines is a dramatic change, to put it lightly. AI design is moving from “humans in the loop” to “computers in the group.” Sometimes we forget to consider change management because we are eager to get AI going and start seeing results. That does more harm than good, creating problems down the line through rushed projects and negative sentiment among employees. Any project has to start with a clear change management plan to prevent problems from both people and technology.

7. Connecting creativity and compassion

When Apple introduced the iPhone, we had no idea what new jobs it would create. In its wake came an entirely new creative economy of applications, ecommerce, ride sharing, wearables, online communities and video gaming, all invented by people who saw the potential of the technology and applied their creativity to it. Just as creativity boomed after the iPhone’s launch, new creative applications will emerge from AI. We don’t know what they are yet, but people will drive that future. Much has been written about AI’s ability to create: art, poetry and music are some recent and prominent examples. AI can help enhance creativity, but humans bring the compassion that must go hand in hand with it. Kai-Fu Lee, in his latest TED Talk, explains how humans can thrive in the age of AI by harnessing compassion and creativity.

Get ready for the hybrid workforce

People who can translate their knowledge into these seven areas will be more important than ever. There is a great opportunity now to reskill employees so they can apply their experience in meaningful ways, and to build stronger talent. At the same time, recruiting programs should look for “bilingual” people, those who already have both domain and digital knowledge and can walk in, make those connections, and do the job.

In the future workforce, people and machines will be equally important, combining industry knowledge with AI and automation. We have to start thinking more about how to manage this combination and make sure well-thought-out plans, structures and education are in place. Then we can really consider ourselves ready to work alongside AI.
