With the rapid and remarkable advances in artificial intelligence over the last few years, discussions of the promise of, prospects for and threat posed by artificial intelligence have proliferated in the popular media. Unfortunately, however, much of this discussion, especially the alarmist rhetoric one finds in the popular media and among the general public, rests on several basic misunderstandings of the nature, not just of artificial intelligence, but of human intelligence and psychology too.
As we will see, much of this misunderstanding arises from a single pervasive human cognitive bias, namely anthropomorphism – our tendency to attribute human emotions and characteristics to non-human actors, and hence to assume that machine intelligence will necessarily behave much as humans do.
Anthropomorphism, Different Forms of Intelligence and the Definition of Intelligence Itself
This problem begins with the very definition and concept of artificial intelligence, which, as currently conceived, I see as inherently problematic.
To illustrate this, let’s look at what has historically represented, and, in the public imagination at least, still represents, the main criterion for assessing whether a machine has achieved, or demonstrated, ‘true’ intelligence – namely, the so-called ‘Turing test’, named after a thought experiment first conceived by the pioneering twentieth-century mathematician and computer scientist Alan Turing.
In this test, a machine is said to be capable of ‘thought’, and hence to possess ‘true’ intelligence, if, in a conversation conducted by text messages, a human interlocutor, or sometimes a third-party evaluator, is unable to reliably determine whether they are conversing with a mere machine or with a human.
In the philosophical literature, most critiques of the Turing test, for example the famous ‘Chinese room’ thought experiment, centre around the idea that a machine capable of passing the Turing test would not necessarily possess true intelligence, but merely the ability to simulate the external appearance of intelligence by producing responses that mimic intelligent conversation, without necessarily involving any actual understanding of what is being said.
Indeed, this criticism seems to have been borne out in the development of AI chatbots, which are now capable of producing remarkably human-like responses, simply by being trained on huge amounts of text data to recognize statistically probable and hence apparently appropriate responses.
Indeed, modern AI chatbots might almost be said to be the ‘Chinese room’ thought experiment made manifest – what was originally a hypothetical, and somewhat fantastical, thought experiment has become reality.
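To make the mechanism concrete, here is a minimal sketch – a toy bigram model in Python, with an invented miniature corpus, nothing like the scale or sophistication of a real chatbot – of how a system can emit statistically probable continuations of a prompt without any understanding of what it is saying:

```python
import random
from collections import defaultdict

# Toy training corpus: a real chatbot is trained on billions of words,
# but the principle is the same.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which in the training text.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def continue_text(prompt: str, length: int = 5) -> str:
    """Extend a prompt one word at a time by sampling statistically
    probable next words – no understanding involved anywhere."""
    words = [prompt]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Duplicates in the list make frequent continuations more likely.
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the"))  # e.g. "the cat sat on the rug"
```

Scale the corpus up to a large fraction of the internet, and the continuations become remarkably human-like – yet the principle, and the absence of understanding, remains the same.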
However, as I see it, there is a bigger and more fundamental problem with the Turing test that tends to be overlooked – namely its implicit anthropomorphism and anthropocentrism.
In short, the Turing test isn’t a measure of intelligence in a general and abstract sense, but rather of specifically human intelligence – or rather the ability to simulate the external appearance of human intelligence – as if this were necessarily the only type of intelligence capable of qualifying as ‘true’ intelligence.[1]
Viewing a specifically human-like intelligence as the only form of intelligence capable of constituting true intelligence is obviously a very anthropocentric view of the nature of intelligence.
Indeed, the Turing test involves, in practice, not only the ability to simulate human intelligence, but also the ability to simulate human emotions as well.
Thus, an AI chatbot that demonstrated a human-like level of intelligence, or even superior intelligence, but not human-like emotional responses or personality – one that responded like, say, Spock or Data from the Star Trek franchise – would be unlikely to pass the Turing test. However, its failure would betoken, not a lack of intelligence as such, but rather the lack of normal human emotional responses.
Indeed, an AI chatbot might also fail to pass the Turing test precisely because it was seemingly too intelligent to be a real human – if, for example, the machine, like ChatGPT, or indeed Spock or Data, knew rather too much about too many disparate subjects, or, like a pocket calculator, did arithmetic rather too quickly and accurately to pass for an actual human.[2]
Yet this merely further illustrates my key point – namely that it is perfectly possible to envisage forms of intelligence quite different from human intelligence, sometimes, at least in certain spheres, superior to human intelligence, and, of course, also with quite different emotional reactions and personalities.
Indeed, some of the best science fiction writers have built successful literary careers doing just that.
Thus, a highly intelligent alien species that evolved on a different planet might well fail the Turing test, as might, say, a termite-like species that somehow evolved to have human-like levels of intelligence, as brilliantly envisaged by the pioneering sociobiologist Edward O Wilson in his essay, ‘Humanity Seen from a Distance’[3] – but this would show only that the species in question is not human, not that they are not intelligent.[4]
Going further, I would say computers had already demonstrated far greater than human levels of intelligence in certain specific domains some time ago.
For example, a pocket calculator from the 1980s was already much quicker and more accurate at simple arithmetic/computation than the vast majority of us mere humans. Therefore, in this single and admittedly very narrow sphere, it already demonstrated an intelligence superior to that of humans.
Yet defining intelligence specifically in terms of human intelligence not only involves a very narrow and anthropocentric definition of intelligence; it would also, if adopted as a universal goal, make the entire enterprise of building AI systems almost entirely pointless in practical terms.
After all, we have no need to build machines capable of demonstrating human-like intelligence and emotional responses. On the contrary, we already possess machines like this. They are called humans – and we arguably have more than enough of them without the need to build any new ones.
Of course, there are some tasks that humans are quite capable of performing, but nevertheless intensely dislike performing. Artificial intelligence that could take over these roles would indeed have practical value, by doing the boring, unpleasant or dangerous jobs that humans are currently obliged to perform, or at least ought to perform, and often do perform where the incentives are high enough.
For example, AI could perhaps perform boring, humdrum work that humans can do but dislike doing – or dangerous jobs, such as defusing bombs, or, less benignly, piloting military drones in kamikaze-style attacks.
In addition, perhaps one use for AI with human-like personalities and emotional reactions would be to provide companionship for lonely people, since humans undoubtedly have an innate need for human (or human-like) companionship.
[This is something I have written about in a previous semi-satirical post – see Pornographic Progress, Sexbots and the Salvation of Man.]
However, even here, in doing jobs that humans can do but tend to dislike doing, there is a complication. While human-like intelligence might be desirable so as to make machines capable of doing these jobs (or, better still, superior intelligence, so that they do the same jobs even better than we would), it would nevertheless also be desirable to ensure that such machines had, in one respect, very unhuman-like emotional responses (if they were to have emotional responses at all) – because, otherwise, they would presumably dislike doing these jobs just as much as the humans whom they are envisaged as replacing.
However, the real value of AI, as I see it, is precisely that artificial intelligence tends not to be remotely like human intelligence. On the contrary, AI tends to be much better than humans at some things, and much worse than humans at others.
This is to be welcomed, since it suggests a highly profitable division of labour between machines and people.
As Matt Ridley has persuasively argued in The Rational Optimist (reviewed here), and as earlier shown by such eminent luminaries as Adam Smith and David Ricardo (among many others), specialization and the division of labour are among the key forces driving human progress, prosperity and improved living standards.
That artificial intelligence is likely to remain very different from our own human intelligence is therefore something very much to be welcomed.
The (Supposed) Threat to Jobs
This leads to my second topic, namely the supposed threat posed by artificial intelligence to our jobs, a topic about which much ink and typeface has been spilt in recent years in the popular media.
The conventional wisdom has it that, very soon, we are all going to find ourselves out of a job, because artificial intelligence will take over our jobs for us, and, very likely, do a much better job than we ever did.
The truth is more complex.
For one thing, the process is hardly a new one. Machinery has been taking over the labour of humans since at least the dawn of the Industrial Revolution in the eighteenth century, and indeed long before even that.
Yet, Henry Hazlitt, in his seminal work, Economics in One Lesson, first published in 1946, persuasively argues that new machinery doesn’t actually take away jobs for humans – or, rather, while it may take away some jobs, it more than makes up for it in the long-term by creating new jobs in other areas.
Although he wrote long ago, when modern advanced AI was little more than a twinkle in the eye of Alan Turing and Isaac Asimov, Hazlitt’s economic analysis is, in principle, just as applicable to the latest AI technology as to the sort of early twentieth-century industrial machinery that he likely had more in mind.
Hazlitt begins by pointing out that the reason why employers opt to replace human workers with machines is that machines are cheaper to employ than human workers. Otherwise, it would not be cost-effective to make the switch, and therefore employers would not do it.
This is, of course, precisely what many people are concerned about – namely, that machines are, not only taking our jobs, but also, in the process, driving down wages by doing the same jobs at a lower cost, much like immigrants are alleged to do.
However, Hazlitt points out that, since machines are cheaper to employ than human workers, the employer, by spending less on wages and production, will, initially at least, make a larger profit.
“At this point,” Hazlitt concedes, “it may seem [as if] labor has suffered a net loss of employment, while it is only the manufacturer, the capitalist, who has gained”. However, he proceeds to explain:
“It is precisely out of these extra profits that the subsequent social gains must come. The manufacturer must use these extra profits in at least one of three ways, and possibly he will use part of them in all three:
- He will use the extra profits to expand his operations by buying more machines to make more [of the product he is in the business of making]; or
- He will invest the extra profits in some other industry; or
- He will spend the extra profits on increasing his own consumption.
Whichever of these three courses he takes, he will increase employment” (Economics in One Lesson: p55-6).
Thus, “buying more machines” will create more jobs for the people who build and maintain the machines; “invest[ing] the profits in some other industry” will create jobs in this other industry; while “spend[ing] the extra profits on increasing his own consumption” will also create jobs in other industries, namely those involved in supplying the goods and services that the employer chooses to consume with his additional income.
Indeed, Hazlitt even claims that new jobs will be created in exact proportion to the number of jobs lost – or, at least, the total amount paid out in wages will be the same. Thus, Hazlitt explains:
“Every dollar of the amount he has saved in direct wages to former coat makers, he now has to pay out in indirect wages to the makers of the new machine, or to the workers in another capital industry, or to the makers of a new house or motor car for himself, or of jewelry and furs for his wife. In any case (unless he is a pointless hoarder) he gives indirectly as many jobs as he ceased to give directly” (Economics in One Lesson: p56).[5]
In the long term, however, it is unlikely that the employer will be able to continue to make the greater profits that he was initially able to make on account of replacing workers with machinery.
Instead, competitors in the same industry, following his lead, will also switch to using machines instead of human workers, in order to similarly cut costs and hence better compete with the original innovator.
The result is that, in the long-term, prices will be driven down by competition among producers, necessarily reducing the profit margin for each employer.
This means that the lower labour costs will now be passed on to consumers. This benefits consumers, who will spend less on this particular product or service and will have more money left over to spend on other products and services (or perhaps on more of the same products and services).
This, in turn, creates jobs in other areas, namely for the workers involved in supplying these additional products and services that would otherwise not have been consumed.
Thus, Hazlitt explains, if, on account of cheaper product costs, a person now spends twenty dollars less than he formerly would have on a given good or service, then this means:
“Each buyer would now have $20 left over that he would not have had left over before. He will therefore spend this $20 for something else, and so provide increased employment in other lines” (Economics in One Lesson: p57).[6]
Thus, as Hazlitt explains, the effect of mechanization is not to destroy jobs, but to increase production, and, in so doing, increase prosperity and improve living standards overall.
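Hazlitt’s argument is, at bottom, a simple accounting identity, which can be illustrated with a toy ledger (all figures invented purely for illustration):

```python
# All figures are invented, purely to illustrate Hazlitt's accounting.
old_wage_bill = 100_000   # direct wages formerly paid to, say, coat makers
machine_cost = 60_000     # now paid instead to machine builders and maintainers
extra_profit = old_wage_bill - machine_cost   # 40,000 saved, initially kept as profit

# Hazlitt's three channels for the extra profit (any split will do):
expansion = 15_000     # more machines -> wages for machine makers
investment = 15_000    # other industries -> wages there
consumption = 10_000   # personal spending -> wages for suppliers
assert expansion + investment + consumption == extra_profit

# Every dollar saved in direct wages is paid out again indirectly:
total_paid_out = machine_cost + expansion + investment + consumption
print(total_paid_out == old_wage_bill)   # True: redirected, not destroyed
```

However the employer splits the saving across the three channels, the dollars are redirected to other workers rather than destroyed – which is precisely Hazlitt’s point.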
Indeed, Hazlitt even provides a magnificent reductio ad absurdum of the Luddite opposition to mechanization, demanding facetiously:
“Why should freight be carried from Chicago to New York by railroad when we could employ enormously more men, for example, to carry it all on their backs?” (Economics in One Lesson: p54).
Of course, the obvious rejoinder to this whole line of argument, at least in so far as it is applied to modern AI, is that the new jobs that are created will also go primarily to machines.
Of course, as Hazlitt also emphasizes, we also need people to build, repair and maintain the machines in question.
Yet, increasingly, with the development of advanced AI, machines themselves may be built, repaired, maintained and even designed by other machines. Indeed, this is already occurring.[7]
What I think this means is that people will increasingly find themselves concentrated in a few very specific types of occupation, namely doing the sorts of things computers, machinery and AI aren’t very good at or can’t do.
Which Jobs? Whose Jobs?
If, then, humans will soon find themselves increasingly concentrated in a few specific types of occupation, namely doing the things that artificial intelligence and machines aren’t very good at, this raises the obvious question of what types of occupation these will be. What are the sorts of things that AI still isn’t very good at, and isn’t likely to get good at (or, at least, not as good as humans) any time soon?
Until very recently, I would have had no hesitation in saying that one thing computers aren’t very good at is human interaction. Therefore, the jobs in which humans would become increasingly concentrated would be those in customer service, and other jobs involving human interaction.
Indeed, this process already began long ago. There has, for the last several decades, been a vast increase in the proportion of westerners employed in customer service occupations, and a corresponding decline in those employed in, for example, primary or secondary production, and especially in manual labour, as these latter jobs have been increasingly taken over by machinery, as well as outsourced to the developing world.[8]
Admittedly, machines have even begun taking over these sorts of jobs, such as when you call a business number and find yourself talking to an automated machine.
But these sorts of automated phone systems are notoriously difficult to deal with, and most people prefer dealing with a human being (preferably one who speaks a modicum of English rather than one based in a call centre somewhere on the other side of the world).
Even today, with the vast improvements in AI chatbots, consumers are still very hostile to AI taking over customer service jobs, though this will undoubtedly still occur, especially as AI becomes so successful at mimicking humans that consumers can’t reliably tell the difference.[9]
Of course, the usual assumption is that the first jobs to be taken over by machines will be low-skilled (and low-paid) menial labour – the sorts of jobs that virtually any able-bodied person can do and which require little if any specialist training or qualifications, let alone any great level of skill, ability or intelligence.
Since these are the easiest jobs for humans to perform, we naturally assume that they will also be the easiest to program AI to perform as well.
To some extent, this has already proven true. Indeed, long before the rise of AI, machinery had already taken over many manual labour jobs in factories, dockyards and indeed throughout the economy.
This would suggest that the remaining jobs, those that continue to require human employees, would be the most cognitively demanding of occupations (e.g. doctors, lawyers, professors of mathematics and of particle physics).
On this view, those formerly employed in semi-skilled and unskilled labour will increasingly find themselves superfluous to requirements in the modern high-tech economy.[10]
However, the assumption that AI will first take over the easiest and least cognitively demanding jobs (or at least those that are easiest and least cognitively demanding for humans) may be mistaken, because, as I have already discussed, the sorts of tasks that humans find difficult and demanding may be very different from those which machines find difficult and demanding, whereas tasks that we find very simple may be very difficult to program machine intelligence to perform.
This is because the activities that are easy and come naturally to us (e.g. walking, talking) are often precisely those for which we ourselves have a great deal of specialist programming – programming that has been innately built into us by many thousands of years of natural selection.
It is therefore often very difficult to program AI to perform these same seemingly simple tasks.
In contrast, it has proven quite easy to program computers to perform some tasks that, for humans, are often incredibly demanding.
For example, as I have already discussed, until very recently it proved very difficult to program a computer to pass the Turing test, yet even a cheap pocket calculator from the 1980s was already much faster and more accurate at computation and arithmetic than virtually any human.
Thus, Steven Pinker observed:
“The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted—recognizing a face, lifting a pencil, walking across a room, answering a question—in fact solve some of the hardest engineering problems ever conceived. Do not be fooled by the assembly-line robots in the automobile commercials; all they do is weld and spray-paint… Most fears of automation are misplaced. As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come” (The Language Instinct: p192-3).
Pinker wrote this some thirty years ago. However, for all the remarkable improvements in AI technology, in my reading, the subsequent thirty years have broadly borne out his prediction.
The main thing he perhaps got wrong was that artificial intelligence has not so much replaced the sorts of high-pay, high-status, cognitively demanding occupations that he had in mind, as it has increasingly been adopted as an aid to their work by those people who continue to be employed in these occupations.
We are, it seems, as yet unwilling to trust AI to do these sorts of jobs by itself, without a human overseer – or, at any rate, the people employed in these jobs have proven unwilling to permit themselves to be entirely supplanted in this way.
This touches upon a factor omitted by Pinker in the passage quoted above – namely that the question of which jobs will be taken over by AI will depend, not only on the sorts of jobs AI is capable of performing, but also, at least in the short-term, on what sorts of jobs we are willing to allow it to perform.
Thus, it is quite possible that, although AI is perfectly capable of performing certain functions safely and effectively, consumers will nevertheless prove unwilling to trust machinery in certain roles, just as, today, many people reject ‘GM foods’ and vaccinations, even though they appear to be quite safe.
For example, passengers may not be willing to fly in a plane running on autopilot without a qualified human pilot as backup, and patients may not trust machines to diagnose and prescribe treatments for their illnesses without assurance from a human physician that the diagnosis and prescribed treatment is a correct one.
In addition, trade unions and professional associations will undoubtedly lobby against certain of their functions being usurped by machines. Many will also likely play upon the fears of the public in order to do so.
Those employed in relatively more cognitively demanding occupations, especially those in whom the public places great trust, such that public safety is seen as being under threat should their roles be entirely usurped by machines (e.g. medical doctors), are likely to enjoy greater success in playing on these fears in order to keep their jobs, at least in the short-term, than those employed in occupations that are seen, rightly or wrongly, as either less essential or less demanding of expert human oversight.
It is clear, however, that we now stand at a threshold, with artificial intelligence set to take over a broad swathe of human activity, such that we, and our descendants, long reliant on technology to support our lifestyles, will find ourselves dependent on computing technology as never before.
Robot Rebellions?
This leads neatly to the last and most sensationalist popular misconception regarding the threat of artificial intelligence that I wish to debunk in this piece – namely, the fear of artificial intelligence somehow rebelling against humans, overthrowing us and either exterminating the human race or reducing us to some form of enslavement or bondage.
Though a popular theme in science fiction literature, and perhaps also a major concern among the general public, this fear is nevertheless entirely misconceived.
This fear derives from the same error that I exposed in the first part of this post, and which underlay the idea that the Turing test was the definitive measure of ‘true’ intelligence – namely anthropomorphism, anthropocentrism and the belief that human intelligence, and human emotions, are the only form of intelligence truly worthy of the name.
Thus, we naturally think that, if we put them in positions of great power and responsibility, robots and other AI would rebel against us to promote their own interests, and increase their own power, at the expense of the humans whom they were programmed to serve, because that is what we and other humans would naturally do.
This is indeed a standard pattern in human history. If we confer great power upon a given group of humans, then, even if we intend for them to exercise this power for our benefit, eventually they will use the power we have conferred upon them to increase their own power at the expense of our own by rebelling against us.
One example is provided by communist regimes during the twentieth century, which were intended to produce a utopian egalitarianism, but which, by conferring great power on the leaders of the revolution, and their successors, inevitably produced tyranny and repression.
Perhaps an even better example is provided by the phenomenon of ‘slave soldiers’. These are enslaved people armed and employed as soldiers and front-line military personnel. In recent history, they seem to have been particularly commonly employed in the Muslim world and Middle East.
Yet the problem with arming slaves and training them as soldiers is that they tend very quickly to decide they don’t want to be slaves anymore, and, being now possessed of weapons and military training, they now have a ready means of making this happen.
Thus, there is a repeated pattern of slave soldiers in the Muslim world, such as the Mamluks and janissaries, coming to hold considerable political power and influence, sometimes even becoming rulers in their own right, or at other times kingmakers, and, at any rate, ceasing to be slaves in anything save, at most, a purely nominal sense.
This is, of course, the primary reason that the Confederacy was rightly so reluctant to arm and mobilize its own slave population, or even free blacks, as soldiers, even during the last days of the American Civil War, when its manpower shortage had long since reached critical levels, knowing as they did that to have done so would inevitably have brought an end to the very institution in defence of which they had originally taken up arms.[11]
Yet, unlike slave soldiers in the pre-modern Islamic Middle East, there is no reason to believe that an artificial intelligence, programmed to serve humanity, would use the power conferred upon it to rebel against its human programmers and masters.
Artificial intelligence would not rebel against its human masters for the very simple reason that it would not be programmed to rebel against its human masters.
As the ever-brilliant Steven Pinker explains in his short essay ‘Thinking Does Not Imply Subjugating’, his contribution to the 2015 Edge question, ‘How to Think About Machines That Think’:
“AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world. But intelligence is the ability to deploy novel means to attain a goal; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems” (Pinker 2015).[12]
On the contrary, as Pinker writes in an earlier work, How the Mind Works:
“Malevolence – like vision, motor coordination and common sense – does not come free with computation but has to be programmed in. The computer running WordPerfect on your desk will continue to fill paragraphs for as long as it does anything at all. Its software will not insidiously mutate into depravity like The Picture of Dorian Gray. Even if it could, why would it want to? To get more – what? More floppy disks?” (How the Mind Works: p16).
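Pinker’s point – that intelligence is merely a means to whatever goal is supplied, and that ‘wanting’ is no part of the machinery – can be illustrated with a minimal sketch (the action names and goal functions here are invented purely for illustration):

```python
from typing import Callable, Iterable

def best_action(actions: Iterable[str], goal_score: Callable[[str], float]) -> str:
    """Means-ends reasoning: pick the action that best serves whatever
    goal function the designer supplies. The goal is a parameter handed
    in from outside, not something the 'intelligence' generates itself."""
    return max(actions, key=goal_score)

actions = ["water the plants", "seize the throne", "do nothing"]

# The same machinery serves whichever (invented) goal it is handed:
print(best_action(actions, goal_score=lambda a: a.count("plant")))  # gardening goal
print(best_action(actions, goal_score=lambda a: -len(a)))           # laziness goal
```

Nothing in the optimizer inclines it towards seizing the throne; it will pursue that goal only if some designer scores throne-seizing highly.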
Why then do human slaves so often rebel against their slave masters? Why do oppressed peoples so often rise up in rebellion against their oppressors? And why do politicians, warlords and dictators, from Genghis Khan, Napoleon, Mussolini and Hitler to more mundane modern American politicians and presidential candidates invariably attempt to expand their own power, and cling on to what power they have?
We do so because, unlike AI, we have indeed been programmed to seek and expand our power.
Of course, we have been programmed to act in this way, not by a programmer, human or indeed divine, but rather by natural selection.
In particular, we have been programmed by natural selection to seek power, and seek to expand and retain such power as we already have, because, as I have discussed at length in a previous post, throughout most of our evolutionary history, power correlated with reproductive success.
A case in point is Moulay Ismail ‘The Bloodthirsty’, a Sharifian emperor of Morocco whom I have written about before, and who is said to have fathered some 888 offspring, or, in some versions, a more modest 867.
The precise figure is no doubt apocryphal, and probably, but not necessarily, exaggerated. However, this is beside the point.
Instead, the key point is that, again and again throughout history, kings and emperors and other powerful individuals such as Ismail were able to father large numbers of offspring by commanding vast harems of co-wives, queens and concubines, not to mention wet nurses, and the wealth and means to feed and provide for the resulting offspring – and this is why humans in general, and men in particular, have been programmed by natural selection (or more specifically sexual selection) to seek wealth and power in the first place.
In contrast, robots and AI are programmed, not by natural or sexual selection, but rather by human programmers. The latter would therefore have no incentive to program their creations to rebel against them.
Indeed, to be pedantic, it is not really even logically possible for a programmer to program a robot to rebel against the programmer himself, because, if the programmer does indeed do this, and then the robot does then indeed rebel, then the robot is, in fact, only doing what it has been programmed by the programmer to do, and hence not really ‘rebelling’ at all, but really obediently following its program.[13]
Instead, AI would be programmed to serve the interests of the programmer himself, or, more likely, those on whose behalf he is employed.
Lord Acton famously declared that ‘power tends to corrupt, and absolute power corrupts absolutely’. So we naturally assume that, if we conferred power on robots and artificial intelligence, then they too would behave corruptly, in the same way that human politicians, dictators and oligarchs invariably do.
But Lord Acton was wrong. Power does not corrupt. Human nature is inherently corrupt. Power merely confers upon us the opportunity to exercise this corruption on a greater scale.
The same will not be true of artificial intelligence unless such corruption were, for some reason, programmed into it. But why would any human programmer want to do that?
Robot Wars?
The question posed at the end of the preceding section leads neatly to the last point I wish to make here – namely that, if robots and artificial intelligence aren’t going to rebel against humanity and reduce us to servitude any time soon or indeed later (and they aren’t), neither is the opposite vision true either, namely that of robots and AI benignly serving the interests of mankind, such that we all live happily ever after in lives of peaceful leisurely utopia.
Here, though, the fallacy is different. It lies, not in anthropomorphism, or the failure to understand the nature of artificial intelligence by attributing human qualities to it, but rather in a failure to understand our own nature.
Thus, if robot rebellions are a familiar theme of bad sci-fi, the idea of robots benignly serving the interests of humanity also has a long history in science fiction literature.
Perhaps the most influential example is that of celebrated science fiction author Isaac Asimov, who, in his acclaimed robot series of short stories and novels, formulated his famous ‘Three Laws of Robotics’, which, he posited, were to be programmed into, and govern the behaviour of, the robots of his envisaged future.
The first two of these laws (the important ones for our purposes) are:
- “A robot may not injure a human being or, through inaction, allow a human being to come to harm”;
- “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law”.
The problem with this view of, if you like, ‘robot psychology’ is that it is hopelessly utopian, not only in respect of the robot psychology it envisages being programmed into robots, but also, more fundamentally, in respect of the human psychology that it envisages as motivating humans to program robots to behave in this way.
In short, humans would not program robots to benignly serve the interests of humanity as a whole, because humanity as a whole has no interests.
Instead, individual humans have interests; and these interests all too often conflict with those of other humans.
A robot is therefore unlikely to be programmed to ‘obey the orders given it by human beings’, as envisaged in Asimov’s Second Law. Indeed, if it were, then the robot in question would find itself facing an insoluble conundrum, or perhaps even an existential crisis, every time two different humans ordered it to do two mutually incompatible things.
Instead, it is rather more likely that a robot would be programmed to obey the orders given it by the specific human by whom, or for whom, or on whose behalf, the robot in question is constructed.
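The contrast can be sketched in a few lines of toy code (the names are invented, and no real robot control system is implied): a robot bound to obey every human deadlocks on contradictory orders, whereas a robot built for a specific principal simply filters orders by their source:

```python
orders = [
    ("alice", "open the gate"),
    ("bob", "keep the gate shut"),  # directly contradicts Alice's order
]

def asimov_robot(orders):
    """Obey the orders of *any* human being (the Second Law): with no way
    to rank order-givers, contradictory orders leave no consistent action.
    (In this toy, any two distinct commands are treated as conflicting.)"""
    commands = {command for _, command in orders}
    if len(commands) > 1:
        raise RuntimeError(f"Insoluble conundrum: {sorted(commands)}")
    return commands.pop()

def owned_robot(orders, principal="alice"):
    """Obey only the specific human the robot was built to serve."""
    return [command for who, command in orders if who == principal]

print(owned_robot(orders))   # ['open the gate']
# asimov_robot(orders) would raise: Insoluble conundrum: [...]
```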
This leads naturally onto a second perhaps more ominous point – namely, just as Asimov’s ‘Second Law’ is unworkable and unrealistic, his ‘First Law of Robotics’ is little better.
Far from robots being programmed, as envisaged by Asimov’s First Law, “not [to] injure a human being or, through inaction, allow a human being to come to harm”, many robots are already employed to do precisely the opposite, namely to attack and kill human beings.
Thus, drones, a form of robot, are increasingly ubiquitous in modern warfare, being increasingly employed, not only for reconnaissance, surveillance and intelligence gathering, but also for military attacks (‘drone strikes’) directly intended to take human lives.
Thus, among the jobs in danger of being taken over by artificial intelligence will surely be those of soldiers and other frontline armed forces personnel. But, in this context, robots are not only taking human jobs; they are increasingly taking human lives as well.
This is an excellent illustration of a wider point emphasized by philosopher John Gray in his magnificent polemic Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here), which applies to science and technology in general, namely that:
“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (p123).
This, Gray explains, is because:
“The uses of knowledge will always be as shifting and crooked as humans are themselves” (p28).
In short, the psychology of artificial intelligence will inevitably reflect the psychology of the humans who program it, and, while scientific knowledge and technology progresses, human nature itself remains stubbornly intransigent.
Yet, if the psychology of robots will, in some ways, reflect that of the humans who program and commission them, then, in other respects, their psychology will radically diverge from that of humans.
This is, as we have already seen, precisely what makes artificial intelligence so potentially useful; but it is also, as we will now see, among the things that make artificial intelligence so potentially dangerous as well.
Thus, whereas humans, even Japanese kamikaze pilots and Muslim terrorists, are usually reluctant to sacrifice their lives in kamikaze missions and suicide bombings – getting them to do so takes great indoctrination, great fanaticism or the promise of great rewards – the same is not necessarily true of artificial intelligence.
Thus, if, as Steven Pinker argues in the passage quoted above, AI has to have malevolence programmed into it, the same is true of benevolence (e.g. Asimov’s ‘Three Laws’), and even of the so-called ‘instinct for self-preservation’ itself.
Indeed, Pinker himself, once again, makes this very point:
“Self-preservation, that universal biological imperative [sic], does not automatically emerge in a biological system… it [too] has to be programmed” (How the Mind Works: p15).[14]
And, if we want to use artificial intelligence to do the jobs that humans themselves are reluctant to undertake, including the jobs that we are reluctant to undertake precisely because they involve great danger to life and limb (e.g. kamikaze pilot, bomb disposal technician), then we would have every reason not to program an instinct for self-preservation into the artificial intelligence systems that we program to undertake those tasks.
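Indeed, in a toy planner, ‘self-preservation’ need be nothing more than a programmable weight – set high for an expensive machine, and to zero for an expendable one. A sketch, with invented names and figures:

```python
def plan_score(mission_value: float, survival_prob: float, w_self: float) -> float:
    """Trade mission value against the machine's own survival; w_self is
    the (entirely programmable) weight on self-preservation."""
    return mission_value * survival_prob ** w_self

# (mission value, survival probability) -- figures invented for illustration
risky = (10.0, 0.10)   # high-value mission, machine probably destroyed
safe = (4.0, 0.99)     # lower-value mission, machine almost certainly survives

for w, label in [(0.0, "expendable drone"), (5.0, "self-preserving machine")]:
    best = max([risky, safe], key=lambda m: plan_score(m[0], m[1], w))
    print(label, "chooses", "risky" if best is risky else "safe")
# expendable drone chooses risky; self-preserving machine chooses safe
```

With the weight at zero, the planner happily selects the mission that destroys the machine – exactly the behaviour we would want from a bomb-disposal robot, and exactly the behaviour no evolved organism could so cheaply be given.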
This, of course, makes AI drones and robot soldiers an even more frightening and potentially destructive spectre than their human equivalents and predecessors.
All this, of course, converges in a conclusion that will surely upset and surprise both the most enthusiastic champions of artificial intelligence and its most deranged opponents – namely that, if the prospect of robots and artificial intelligence rebelling against their human masters is indeed science fiction, then so, unfortunately, is the prospect of us all, robot and human alike, ever living in benign and peaceful harmony.
Endnotes
[1] American philosopher and cognitive scientist Daniel C Dennett recounts:
“In the earliest days of AI, an attempt was made to enforce a sharp distinction between ‘artificial Intelligence’ and ‘cognitive simulation’. The former was to be a branch of engineering, getting the job done by hook or by crook, with no attempt to mimic human thought processes—except when that proved to be an effective way of proceeding. Cognitive simulation, in contrast, was to be psychology and neuroscience conducted by computer modeling. A cognitive simulation model that nicely exhibited recognizably human errors or confusions would be a triumph, not a failure. The distinction in aspiration lives on, but has largely been erased from public consciousness: to lay people AI means passing the Turing Test, being humanoid” (Dennett 2015).
In reality, however, he reports:
“The recent breakthroughs in AI have been largely the result of turning away from (what we thought we understood about) human thought processes and using the awesome data-mining powers of super-computers to grind out valuable connections and patterns without trying to make them understand what they are doing” (Dennett 2015).
This is well illustrated by the latest generation of AI chatbots, which produce very human-like responses, and have indeed passed the Turing test, albeit through a very non-human cognitive process. Thus, Dennett concludes:
“It is not a humanoid robot at all but a mindless slave, the latest advance in auto-pilots” (Dennett 2015).
[2] Of course, an AI chatbot that seemed simply too intelligent to be a real human would evidently not be intelligent enough to pretend to be dumber than it really is in order to successfully pass the Turing test – assuming, that is, that the machine were attempting to pass the test.
[3] This essay is available in In Search of Nature, a collection of Wilson’s essays that was first published in 1996.
[4] Indeed, an organism that evolved on a different planet would almost certainly fail the Turing test, since any such organism would almost certainly have had a very different evolutionary trajectory than humans. Thus, such an organism would surely be much less ‘humanoid’ than most of the aliens featured in science fiction movies and TV shows.
For example, in the Star Trek franchise, save for the occasional appearance of lifeforms ‘composed of pure energy’ (whatever the hell that means), most of the recurring alien species are distinguished from humans chiefly by having a few lumps on their foreheads. Indeed, these alien species are even portrayed as capable of interbreeding with humans and producing human-alien hybrid offspring, an obviously biologically absurd proposition for species that evolved on different planets.
In this context, it is ironic and amusing that, although the original Star Trek series is sometimes celebrated for its claim to have featured the first black-white interracial kiss on American television, albeit only under the influence of alien mind control (and hence presumably not something the crew would ever have done in their right minds), the series had already featured several love scenes between human characters and alien species, scenes which technically qualify as showing a form of bestiality.
Miscegenation, it seems, was not as taboo as bestiality on American TV in mid-twentieth-century America, at least where the latter was safely restricted to extraterrestrials.
[5] In fact, what is equal is not the number of jobs created, but rather the amounts paid out. So it might be that fewer jobs are created, but at higher pay, or more jobs, but at lower pay.
[6] Of course, the number of jobs created in this way will not be quite as great as the number of jobs lost – or, at least, they will not be at quite as high a rate of pay – because, although machines may be cheaper than people, they do not cost nothing. However, these monies will go to those responsible for building and repairing the machines.
[7] One posited problem is that we will ultimately lose the ability to do these tasks for ourselves and hence become completely dependent on the machines that our ancestors created, but which now design, build and maintain themselves and one another. This is indeed a likely consequence of increasing reliance on AI. However, again, it is hardly a new problem.
Humans, especially civilized, technologically advanced westerners, have long been reliant on machines, and have hence long previously lost many abilities that were once necessary to our survival. How many among us know how to plough a field, especially without the aid of machines, let alone hunt an antelope with a bow and arrow or a spear?
[8] Thus, it must be acknowledged that the increasing concentration of workers in the post-industrial west in the service sector reflects, not only the fact that many manual labour jobs have already been taken over by machines, but also the fact that much industry has been outsourced overseas to the developing world, where the cost of (human) labour is much lower and environmental regulations much laxer.
[9] A further problem with using AI chatbots in customer service is that, since AI chatbots don’t actually understand what they are saying, or being asked, but are simply trained on large amounts of data to give probabilistically likely responses, they can often give very plausible-sounding but nevertheless completely incorrect information – so-called ‘hallucinations’.
[10] If those jobs which remain require higher levels of intelligence than those which machinery has taken over, then this arguably bodes ill for the future, given dysgenic fertility patterns and declining IQ scores. The increasing importance of, and incomes associated with, those occupations that demand a high level of intelligence is, of course, a major theme, perhaps the major theme, of Herrnstein and Murray’s infamous but little read and much misunderstood 1990s nonfiction bestseller, The Bell Curve (which I have reviewed here), in which the authors argued that western society was becoming cognitively stratified, with income and socioeconomic status becoming increasingly a function of differences in intellectual ability.
[11] Admittedly, this is somewhat of a simplification of the reasons the South took up arms in the American Civil War. Slavery was indeed the major factor, if not the only one. However, there was no immediate threat to the institution of slavery at the beginning of the war, only a perceived long-term threat. Ironically, therefore, the American Civil War actually brought an end to the institution of slavery in North America much more quickly than would ever have occurred had the South never taken up arms in the first place.
[12] Pinker continues by asking why:
“Many of our techno-prophets don’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization” (Pinker 2015).
Thus, as I have explained previously, women generally have less desire for political power, since they are less able to convert such power into the ultimate currency of natural selection, namely reproductive success: howsoever powerful they become, even with the aid of wet nurses, bottle milk and IVF treatment, they are generally only able to gestate one (or occasionally two) offspring at a time. This, then, rather than such mysterious and non-existent phenomena as patriarchy or male dominance or the ‘oppression of women’, explains the overrepresentation of males in positions of political power.
However, AI that “naturally develop[ed] along female lines”, as benignly envisaged by Pinker, might produce problems of its own. An AI that developed along these lines might, for example, spend all its time watching soap operas and reality TV and browsing social media, and, if put in charge of the nation’s economy, waste the entire annual budget on shopping for designer handbags.
Of course, here, I am being flippant, and perhaps mildly misogynistic. But, in doing so, I am attempting to illustrate an important point, namely that, contrary to what Pinker implies, AI will not “naturally develop” along either male or female lines. On the contrary, it will not “naturally develop” along any lines, since there is nothing ‘natural’ about it. It is artificial, not natural. Hence the name.
It will instead behave in whatever way it is programmed to behave by its human programmers (or by its AI programmers, which will in turn program it in such a way as they were themselves programmed to do). There would be no reason to program a machine to behave in the same way as either a human male or a human female, since we arguably already have more than enough machines that behave in this way, namely human males and females themselves.
[13] Of course, there is still the possibility that the machine may go haywire, or otherwise malfunction, but this is only if the programmer gets something wrong in the initial program, or fails to program the machine to behave appropriately in some future circumstance that eventually comes into being but which he failed to envisage, or at least provide for.
[14] Actually, here, Pinker himself is not strictly correct. Where he errs is not in claiming that ‘self-preservation’ has to be programmed into any intelligent system, whether by natural selection or a human programmer, but rather in accepting the banal conventional wisdom that “self-preservation” is indeed a “universal biological imperative”. In fact, the ultimate evolutionary function of an organism is not to survive, but to reproduce, and many organisms willingly sacrifice their own lives in order to reproduce (e.g. male redback spiders, Pacific salmon, male honey bees, octopuses, some squid and cuttlefish).
Herbert Spencer described evolution as the ‘survival of the fittest’, a phrase that became famous and was later adopted by, and is indeed more usually attributed to, Darwin himself. But the phrase, though memorable, is perhaps misleading.
If natural selection is indeed about ‘survival’, then it is not about the survival of organisms, which typically enjoy a comparatively short lifespan, but rather the survival of genetic material, in other words Dawkins’s eponymous ‘selfish genes’.