Your cat is smarter than AI. True. So should you ignore it? AI that is, not the cat! Walter Hale brings the subject into focus as we enter a new decade where it promises to dominate.
Here are three statements about artificial intelligence (AI). Only one of them is true. Kind of.
- With AI, autonomous vehicles will be able to drive in all kinds of weather without human intervention.
- AI-powered robots can read as well as human beings.
- Your cat is smarter than AI.
You guessed it - the statement that is closest to the truth, as French neuroscientist Nicolas P Rougier has suggested recently, is that your feline friend probably is more intelligent than a robot. Before we pit cat against machine, let’s explore the other two statements that are both fake news.
You may be puzzled by the claim about the limits of autonomous vehicles, especially given that some of these groundbreaking machines are already being tested on British roads. Yet when pressed by the American business magazine ‘Fortune’, John Krafcik, the CEO of Waymo - which is pioneering this technology on behalf of Alphabet and Google - admitted that he could not foresee a time when these vehicles would be able to drive themselves in any kind of weather without human intervention.
The second item of fake news was perpetrated recently by Microsoft and parroted by a credulous or complacent media. As two American scientists, Gary Marcus and Ernest Davis, noted on the news site Quartz, in response to a Microsoft press release suggesting that robots can read a document - and answer questions about it - as well as we can: “This sounded much more revolutionary than it really was. Dig deeper and you would discover that the AI in question was given one of the easiest reading tests you can imagine - one in which all of the answers were directly spelled out in the text. The test was about highlighting relevant words not comprehending text.”
They argue that if we gave AI this statement to read: “Two children, Chloe and Alexander, went for a walk. They both saw a dog and a tree. Alexander also saw a cat and pointed it out to Chloe. She went to pet the cat.” and we asked AI who went out for a walk, it could tell us. If we asked it whether Chloe liked cats - which we would assume because she went to stroke one - AI would be nonplussed. As Marcus and Davis conclude: “Inferring what isn’t said is at the heart of reading and it simply wasn’t tested.” This narrative isn’t, they suggest, that unusual: “Practically every time one of the tech titans puts out a press release, we get a reprise of this same phenomenon: minor progress portrayed as revolution.”
This kind of hype makes it hard for businesses to know where they stand. It’s hard to assess how close AI is to changing the ways companies operate or how people work if we are continually being fed disinformation, Panglossian optimism and spurious claims. That has to be a concern: many print service providers have been caught on the bleeding edge of technology when they thought they were on the leading edge. All of which brings us back to the cat.
If a cat played a computer at chess, there could only ever be one winner. Yet if you judged your cat and a robot on their ability to walk properly, the cat would win every time. Indeed, the difficulty of getting robots to move in sync was one of the reasons Boeing decided to abandon full automation at the plant north of Seattle where it is producing its 777X jetliner and give the robots some human help. That is not a one-off. Even a visionary like Elon Musk, who wanted to automate as much of the car manufacturing process as possible, had to scale down his ambitions to meet his production targets.
This is one reason why, even though the totally automated print factory was being talked about in Japan in the mid-1980s, so few of them have materialised. The variety of jobs, specifications and substrates wide-format printers use has left many of them sceptical about the possibility of complete automation. At the recent Widthwise Round Table, there was a general consensus that AI-driven automation might be an option if you had machines that were effectively running the same kind of job all the time but, in reality, the participants wondered, how often was that likely to happen? As runs get shorter, and orders more varied, the bar for AI and automation keeps getting raised, and none of the printers around the table expected themselves - or their competitors - to be running a fully automated print plant within the next decade.
There is an argument that these are the kind of challenges that face every emerging technology. After all, it took the humble lightbulb 120 years to become commercially viable. It was invented by British chemist Humphry Davy in the early 1800s, then passed from one researcher to another until, in 1879, Thomas Edison figured out how to make an incandescent light bulb people would actually buy. Even then, it took another 40 years for electrical utilities to become stable, profitable businesses and they did that by looking beyond the light bulb and wooing consumers with electric toasters, electric irons and electric trams. Though the resources being devoted to AI are exponentially greater than those dedicated to the lightbulb, the tasks being asked of machine learning, robots and all other kinds of AI are also exponentially more complex. If we recognise that problem, the argument goes, we should be wary of writing off AI too soon.
And we shouldn’t write off AI at all. It is already clear that it can prove immensely useful in gathering, vetting and analysing vast amounts of data, which, if managers are smart enough to use it, should make it easier to make better decisions. Imagine the impact on productivity - and morale - if all of the data that is being painstakingly entered into Microsoft Excel spreadsheets could be pulled in automatically and presented in an easy-to-digest form that required no greater technical expertise of the employee than the ability to read and deduce. The potential that comes from allying AI and the Internet of Things to improve production processes - from managing web-to-print more effectively to diagnosing faults before they occur - could also be transformational.
Yet let’s not get carried away by the hype. Marcus and Davis say that when another breakthrough is proclaimed, we should ask - stripping away the rhetoric - what AI actually did, how general the result is (for example, a robot may be able to read a paragraph of Lord Of The Rings, but could it also understand the news?), whether we can duplicate the result, and whether performing that task is an academic exercise or something that could be useful in the real world. And when a press release proclaims that robots can do something better than humans, it seems pertinent to ask which humans and how much better.
Their last suggestion, which takes us right back to driverless vehicles, is: how robust is the system? “Could it work as well with other data sets, without massive amounts of retraining?” they ask. How well, for example, could the system driving an autonomous vehicle cope with a detour that is not on its map?
French neuroscientist Rougier says that there is a fundamental flaw in AI that, despite the rapid development of machine learning, has still to be overcome. The ability to recognise an object, he argues, is not the same as understanding it. So if you fed a million images of a cat into the system, the algorithm could identify the species but it would still not be able to tell you what your pet was trying to communicate if it purred or rubbed itself against your legs. In contrast, show one kitten to a young child and they would instantly be able to recognise any other cat (even if they didn’t know the word yet). Rougier argues that for AI to be truly intelligent it needs to be able to ground digital symbols in the real world. It’s not enough for AI to have sense; it needs somehow to acquire the senses we rely on to make sense of the world.
Not everyone agrees with Rougier but you can see his point. The bottom line, as Marcus and Davis conclude, is that “AI really is coming eventually but it is a lot further away than most people think.”