
This is a contribution from KPMG

Can Artificial Intelligence understand empathy?

Would we as humans be able to replicate our own empathy to train AI? The implications of empathic AI are immense and raise serious questions about our future society.

23 January 2019
By: KPMG, partner

Recently, I had the opportunity to chat with the head of psychiatry at one of the Netherlands' leading university medical centers. Our conversation turned to the way medical professionals are increasingly drawing on information from algorithms to make clinical decisions.

This head of psychiatry described how, thanks to AI, they now receive very accurate predictions (up to 80 percent in some cases) about the future behavior of some potentially very violent patients. The challenge for this professional was what to do with this information. Do they tell the patient, in the hope that the patient takes medication to reduce the chances of becoming violent? By doing so, the doctor might mitigate the risks to the nurses who care for the potentially volatile patient every day but who, under privacy regulations, cannot be told of the AI's predictions. But what if the AI prediction is wrong and the patient ends up being medicated unnecessarily? To make the decision, the doctor must weigh his or her own empathy for both patient and nurse, a faculty no algorithm has yet mastered.

Not that the AI community isn't trying. Whether in digital personal assistants in the home, so-called care robots in nursing homes, or even a new generation of kids' toys, the tech industry is working hard to make AI more empathic in its 'thinking'. Given AI's obvious superiority over humans in logical decision making, and its programmed patience in the face of human illogicality (as any automobile sat-nav system could attest), you might think that training an algorithm to act with empathy would be just another stage in AI's evolution.

I'm not at all sure. We've already demonstrated great success in teaching algorithms to provide computational recommendations and decisions far beyond the capabilities of the human mind. But when it comes to issues like empathy and emotion, the human mind is the greatest black box of them all. We humans make very complex, even irrational, evaluations before we actually make a decision. In many cases those decisions involve not just a logical but also an emotional evaluation, and we have a very hard time explaining why or how we make them. If we can't understand how empathy affects our own decision-making, how can we equip AI with that power, or assess whether it is getting it right?

The implications of empathic AI are immense and raise serious questions about our future society, questions that go far beyond the often-cited 'moral machine' conundrum of whom an AI-controlled automobile should prioritize in a crash scenario: the passengers in the car or the pedestrians in its path.

Consider another example, this one involving refugees, that recently played out here in the Netherlands. It involved a brother and sister from Armenia who had arrived ten years earlier as babies. The highest court ruled that the children could be sent back to Armenia, a country they had never really known, but to which their mother had been deported a few years earlier. They had no father in their lives and were living with a loving foster family who were prepared to keep caring for them as their own. Amid fierce public protest, the immigration ministry decided the right thing to do was to let the children stay.

In this case, an exception was made to the rule (essentially the agreed definition of success) based on a feeling of empathy. Would we as humans be able to replicate our own 'empathy algorithm' to train AI to make such an exception?

There's absolutely no doubt about the value AI can bring to all parts of society. It will transform our decision-making, bringing levels of insight, accuracy and predictive planning far beyond anything we as humans have achieved in the past. Most of the time, the issues we'll be asking AI to solve will have a clear objective. From time to time, though, there will be challenges that require an empathic understanding of all the factors involved. Until we humans fully understand how our own sense of empathy works, we'd better make sure we understand the implications before we install it in another black box.

By Sander Klous, partner, KPMG in the Netherlands
