The turning point in AI

"2001: A Space Odyssey", written by Arthur C. Clarke

One of the greatest sci-fi films of all time is "2001: A Space Odyssey", co-written by Arthur C. Clarke and directed by Stanley Kubrick.

It was the first time that Artificial Intelligence was so realistically integrated into a film, in the form of an intelligent supercomputer, HAL 9000, which was helping mankind find the origins of a mysterious artifact buried beneath the lunar surface. One of HAL's iconic characteristics was that it could communicate directly with humans in natural language; in other words, it had a Voice User Interface.

However, now in 2021, already 20 years after the fictional timeline of Space Odyssey, the state of Voice UIs is far from HAL's capabilities — and hopefully equally far from the system vulnerabilities that led to a series of unfortunate events in space…

Every breakthrough technology has a "turning point": the moment it starts making a big impact on people's lives and scales up to disruption.

For instance, back in the 2000s, when the mobile revolution happened, the turning point was the excellent touchscreen of the first iPhone, which enabled a finger-touch UI on smartphones without the need for a stylus. The market for touchscreens had been growing quietly for years, but it was Apple in 2007 that changed everybody's mind about touch UIs by introducing capacitive screens, in which a slight electrical charge reacts to the human body's own electrical field rather than to pressure.

That moment was the turning point for mobile computing — the moment that eventually led to a spectacular 80% of the global population carrying a smartphone, a digital device whose primary UI is based on touch!

AI's turning point is set to manifest this decade, in the form of seamless, insightful voice recognition across all major accents and languages.

A turning point where Alexa and Siri become HAL's rightful successors, able to actually comprehend phrases like the following and turn them into actionable responses:

"Hey Siri, err, please send a message to my friends from the basketball club, err, actually not all of them, take out John, he is sick… but no, include John after all and he can decide whether or not to come, and as I told you, err, send them a message about dinner on Tuesday at… what was the name of the Chinese rest… ah yes, Din Tai Fung. Please also book a table there, I think we went last April, err and yes, everybody liked it".

Imagine that you can say all of the above in any accent or any language, or even switch between languages on the fly, and Siri can interpret it as any human would!

Yes, this will be the turning point for virtual assistants and AI: the moment when such technology becomes available, making the service both intuitive and unambiguous for people. It is then that we will see an exponential integration of voice assistants into devices.

We will henceforth take for granted that every device is capable of listening and, most importantly, understanding everything we say, in the way we say it. Just as touch interfaces made computers accessible to more people, voice interfaces will extend the merits of technology to almost every human.

The tech giants and many startups are working vigorously on this breakthrough, which will make AI part of our everyday lives. And immediately afterwards, more tech companies will follow, with an ecosystem of new services and UIs, capitalising on the fact that machines can now "understand" natural language the way people do.

Needless to say, what happened with apps in mobile stores after the introduction of the new touchscreen in 2007 is set to happen with AI, as soon as voice recognition steps up to a seamless, natural user experience.

And we can all rest assured that the AI disruption will arrive on a much bigger scale than anything we have witnessed so far.