May 16

Your emotions in the AI of the beholder.

What does it mean for you, if Emotional Artificial Intelligence can detect and analyze your emotions and moods in a Zoom call?


Why should you deal with this issue?

AI systems already possess superhuman abilities in various applications and are constantly improving. This creates gigantic opportunities, but also enormous risks. It is up to us humans to shape the digital age in such a way that our freedoms, fundamental rights, and ways of life are preserved. As unbelievable and distant as it may sound now, we are on the threshold of a new era: an epoch in which most decisions are made by AI systems and humans increasingly lose their status as the most intelligent "life form".

The coming age of hyperintelligence.

Artificial intelligence is getting progressively better. Assuming even the smallest imaginable rate of progress, we will inevitably reach a point where we are surrounded by AI systems whose capabilities far exceed our human ones. It is already the case today that the AI system AlphaGo cannot be beaten by any human at Go (side note: extremely interesting follow-up questions arise here). As Sam Harris puts it, Garry Kasparov will not someday have a lucky day and land a lucky punch against a modern chess engine. No human will ever win against computers at chess again. NEVER AGAIN. Just as we can't out-calculate our smartphone or navigate to a destination better than Google Maps. The same is true for more and more disciplines of daily life and work, from diagnosing diseases to driving to military applications. Whether we like it or not, we are heading into the age of hyperintelligence (the Novacene). This hyperintelligence is not human. It is superhuman, and if we are not careful, also encroaching. After all, who is to stop a superhuman machine from crossing our human boundaries, disregarding our privacy, or acting unethically? Machines that can recognise, analyse and use emotions are currently infuriating human rights organisations. What emotions does that trigger in you?

Superhuman abilities

What does it mean when we talk about "superhuman abilities" in the real world? Unbeatable at chess and arithmetic? Self-driving supercomputers that banish accidents and traffic jams to a dark past in which humanoid apes, sipping coffee and typing messages, piloted 2-ton steel monsters? These are all "just" technical processes that we might even be happy to hand over to a computer. But what if it is about our humanity itself? About our feelings and emotions, our private and intimate sphere? Everyone has experienced heartbreak, existential worries, or depressive phases; we are not always well. Often it is important and right that we can hide this. But exactly that could change very soon.

The software company Zoom is working on an artificial intelligence that can detect emotions in video meetings using facial and speech recognition and analyse them via Natural Language Processing and other AI techniques. Zoom is not the only tech company working on this technology. AI-based features that assess people's emotions have already been spotted in digital classrooms. They are also being used in self-driving cars to detect signs of road rage and drunk driving. So this new technology is not a distant vision of the future; it is knocking loudly on our door. Time to open up, take a look at what exactly is happening, and decide how we want to deal with it.
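To make the mechanism a little more concrete: at its core, any emotion-recognition system maps input features (facial expressions, tone of voice, spoken words) to emotion categories. The toy Python sketch below is my own illustration, not Zoom's implementation; it scores the words of an utterance against hand-picked keyword lists, where a real system would instead use trained models on audio and video.

```python
# Toy emotion classifier: counts emotion-laden keywords in an utterance
# and reports the dominant category. Purely illustrative — real systems
# use machine-learned models over faces, voice, and full sentences.

EMOTION_KEYWORDS = {
    "joy": {"great", "happy", "excited", "love"},
    "anger": {"angry", "unacceptable", "furious", "hate"},
    "sadness": {"sad", "sorry", "disappointed", "worried"},
}

def classify_emotion(utterance: str) -> str:
    """Return the emotion category with the most keyword hits, or 'neutral'."""
    words = set(utterance.lower().split())
    scores = {emotion: len(words & keywords)
              for emotion, keywords in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_emotion("I am really excited and happy about this deal"))  # joy
print(classify_emotion("Let us review the quarterly numbers"))            # neutral
```

Even this trivial sketch shows why the technology is sensitive: the speaker never consented to being scored, and the output could be logged, aggregated, and used against them.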

Don't panic.

New technologies require care and safety mechanisms. Let's compare AI with a familiar technology like electricity. Both can be dangerous, but both also have the potential to drastically improve our society, and certainly no one wants to go about their daily life without electricity. Rational pragmatism is therefore required. To set up the right safety mechanisms, we need a risk analysis and an awareness of the dangers of an AI that can detect, analyse and use emotions. The following scenarios should be avoided at all costs:

First: Your "emotional data" is acquired without your knowledge or consent and used to your disadvantage. Conceivable negative scenarios are manipulation in negotiations or job interviews, or the exploitation of emotional vulnerabilities for fraud. Psychological profiles have already been used to influence elections (see: Cambridge Analytica, The Great Hack). Emotion recognition would open up completely new dimensions here, unfortunately in a negative sense.

Second: We are gradually evolving on a societal level into a digital surveillance state in which we are losing our freedom bit by bit via centralised control mechanisms.

The solution

Potentially dangerous technologies should be regulated, or at least subject to democratic oversight. Elon Musk, one of the leaders in AI development, has been advocating for years for an AI authority comparable to other federal agencies. But no such authority currently exists. There is also the fact that nations handle the rights of their citizens differently, which leads to conflicts in the use of globalised systems. One example is the US big tech companies like Microsoft, Facebook or Google, whose products and services are not compatible with the EU's GDPR. However, our economic world is built on the opportunities these new systems make possible. It is therefore very much up to us humans, through communication and debate, to negotiate a new consensus on which our fundamental rights and freedoms can endure in the digital age. In this case, perhaps more than ever, a global agreement is needed.

We are all challenged to help shape a positive vision of the future. A first step is to engage with the different topics. To know what is happening. To get connected so that we can have a say and, subsequently, help shape it.

Book recommendations

My questions for the HI swarm intelligence

Do you have any interesting articles, links or tips about Emotional AI? Then feel free to let me know in the comments. I'm really looking forward to sharing knowledge with like-minded people and am always on the lookout for new input and exciting perspectives. Let's go, humans.
