AI at the movies
Today’s public perception of Artificial Intelligence (AI) is massively influenced by movies like Her or Ex Machina. ‘Super’ intelligent systems easily outperform human intelligence on almost all fronts and speak to ‘inferior’ humans in the voice of Scarlett Johansson – at least in the film Her. This type of AI is referred to as Artificial General Intelligence (or even superintelligence), and it is fair to assume that for the time being it is nothing more than science fiction.
The idea of superintelligence fuels wild speculation, and Nick Bostrom’s book ‘Superintelligence: Paths, Dangers, Strategies’ (2014) won’t have helped to calm people down. It is always fun when philosophers write books about technology-related topics rated as ‘popular science’. On a side note, as editor of “Global Catastrophic Risks” (2011), Bostrom had already demonstrated serious interest in very dark future scenarios. Although SUPERCRUNCHers should normally be ‘super’ excited about the idea of superintelligence, I am simply not: there is too little substance, it is too dystopian, and more interesting stuff is already happening out there. But hey, the movies are great entertainment, and it is a cool philosophical issue for the time being.
Taking a closer look at the different types of AI, there are a few key dimensions that help to illustrate them (see also the post on the ingredients of AI). They can be condensed into just two questions: Is the system tailored to one specific problem or task, or can it deal with a broad range of challenges? And how strong and flexible is the system in relation to human intelligence?
Although the terminology does get a little blurry, these questions allow us to distinguish at least two main forms of AI: narrow (or weak) AI and strong (or full) AI.
However, recent developments call for a more granular terminology, as we are observing a rapid growth of narrow AI, with applications for an increasing range of use cases. New, extremely powerful AI systems are emerging, some of them blending multiple narrow AI solutions so that they can adapt to new challenges more easily. The latter are still not strong AI; however – at least in my humble opinion – they are more than narrow AI. I simply wouldn’t want to put Watson where it stands today, or the work of arago, into the same bucket as a chess computer on steroids. Maybe we should at least refer to them as hybrid AI (see e.g. Ted Greenwald’s 2011 Forbes article)? So far there is no ‘official’ name for this chapter of the AI evolution, and purists will keep calling everything weak AI unless it fully matches human intelligence in all aspects.
Narrow (or weak) AI is tailored to a specific problem or task and cannot deal with other challenges without being re-trained and/or modified. However, the term weak AI is a little misleading, as it is by no means easy to develop such a system.
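To make that task-specificity concrete, here is a deliberately trivial sketch in plain Python (the keyword list and threshold are hypothetical, chosen only for illustration): a toy spam filter that does exactly one job. Within its narrow domain it works; confronted with any other task, it is useless without being re-trained and/or modified – the defining trait of narrow AI, only in miniature.

```python
# Toy 'narrow AI': a rule-based spam filter. The vocabulary and threshold
# below are made-up illustration values, not a real production system.
SPAM_WORDS = {"winner", "prize", "free", "urgent"}

def is_spam(message: str) -> bool:
    """Flag a message as spam if at least two trigger words appear."""
    words = set(message.lower().split())
    return len(words & SPAM_WORDS) >= 2

print(is_spam("URGENT you are a winner claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm, see you there"))            # False
```

Real narrow AI systems replace the hand-written rules with learned models, but the limitation is the same: the system encodes one task, and nothing outside it.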
The name only makes sense if we look at the second question mentioned above: how strong and flexible is the system in relation to human intelligence? Weak or narrow AI systems lack the flexibility of human intelligence; they fall short in terms of the scope of capabilities that comprise human intelligence, but they can be very powerful in their own domain. In fact, they typically aim to beat humans in their specific domain – and once they achieve this, they get a lot of media attention. DeepMind’s (now Google/Alphabet) AlphaGo and IBM’s Deep Blue are popular examples.
In fact, pretty much all AI that is operational today falls into the narrow AI category: Apple’s Siri – which goes back to an SRI International spin-off acquired by Apple and a speech recognition engine developed by Nuance Communications – Google’s Assistant, and Amazon’s Alexa. There are plenty of narrow AI solutions addressing use cases in numerous industries, ranging from healthcare to defense.
The required ingredients of a narrow AI solution heavily depend on the specific problem at hand. More or less application-independent, or core, requirements are:
Depending on the application, additional specific components might be required, such as:
Many of these specific components are already considered narrow AI solutions. Some of them still represent major challenges, especially creativity and social intelligence.
A strong AI (or full AI / Artificial General Intelligence (AGI)) system is as powerful and flexible as human intelligence and is not tailored to a specific problem or task. Strong AI has not been achieved (yet).
For many people it is daunting to think about a future after full AI has been achieved. The main concern is that once full AI is operational, it would quickly leverage all available infrastructure to propel itself to even higher levels. The concept of superintelligence implies that these self-improvement cycles will lead to a form of intelligence beyond our understanding and, more importantly (or worryingly), beyond our control.
Keep in mind that a full AI system features all aspects of human intelligence – including consciousness – which makes it hard to predict what this superintelligence will actually think when looking at what its human creators have produced. The extent to which AGI-triggered runaway technological growth would change human civilization is pretty much unpredictable at the moment – and therefore nothing more than wild speculation.
Back to business – which AI type is most relevant today?
AI is emerging on many fronts, and by now it should be clear that we are not talking about superintelligence at this stage: businesses leverage narrow AI solutions. Some call simple machine learning solutions AI; others deploy intelligent agents and don’t make much fuss about it, although their systems could rightfully be called AI.
In the next post I will discuss how a business can leverage AI without getting bluffed by overpromising sales pitches of almighty AI wonder-machines… Stay tuned!