The Gatwick Project: could the AI arms race be humanity’s last?

In-depth

The race between the US and China for supremacy in artificial intelligence could have chilling ramifications for the future of warfare. Long before the advent of a superintelligence which could wipe out the human race, artificially intelligent weapons systems could become almost impossible to contain.

“If the development by the enemy as well as by us of thermonuclear weapons could have been averted, I think we would be in a somewhat safer world today than we are… However, it is my judgment in these things that when you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb. I do not think anybody opposed making it; there were some debates about what to do with it after it was made.” – J. Robert Oppenheimer, 1954

In 1939, a group of émigré scientists who had fled the fascist dictatorships of Europe wrote to the then president of the United States, Franklin D. Roosevelt. Their urgent missive was prompted by the discovery of nuclear fission just a year earlier in Nazi Germany. It warned the president that it might soon become possible to set up a nuclear chain reaction in a large mass of uranium, generating huge amounts of energy that could be harnessed for civilian purposes or used to create immensely powerful weapons. The letter, signed by Albert Einstein, urged the US administration to begin its own research into nuclear technology. Soon after, the Manhattan Project was born, culminating in the 1945 bombings of Hiroshima and Nagasaki, where the first use of the atomic bomb in the theatre of war killed an estimated 120,000 people.

But this was not the end of the nuclear arms race. After the second world war, the two remaining great powers, the United States and the Soviet Union, competed to develop and stockpile nuclear weaponry of ever greater destructive force, while smaller nations looking to deter their own obliteration raced to acquire similar capabilities. Today, the number of nuclear warheads in existence far exceeds what would be needed to destroy human civilisation, and every military engagement involving a nuclear power raises the prospect of such an ignominious end to humanity. Fears of nuclear armaments falling into the hands of non-state actors are as yet unrealised, but the probability increases with every new bomb manufactured. The lesson: once an arms race between two superpower rivals has begun, it is almost impossible to stop, and it breeds ever greater instability.

The dawn of the AI arms race

When DeepMind’s AlphaGo beat China’s world-number-one-ranked Go player Ke Jie in three matches out of three in 2017, the Alphabet subsidiary saw the victory as an exciting validation of its efforts: it had created an artificial intelligence capable of beating any human at a game that is orders of magnitude more complex than chess, years ahead of predictions. The 19-year-old Ke Jie expressed shock, referring to an earlier contest: “Last year, I think the way AlphaGo played was pretty close to human beings, but today I think he plays like the God of Go.” The DeepMind team may well have selected the game of Go because it posed a challenge of the right magnitude for its evolving technology, but from a geopolitical standpoint, the move could almost have been calculated to catch the attention of China’s leaders: an earlier, uncensored match between AlphaGo and Lee Sedol in 2016 was watched by more than 60 million Chinese viewers.

Just two months after AlphaGo’s victory, China’s State Council laid out a three-step plan to overtake the US in artificial intelligence by 2030, appointing four of its leading tech companies – Baidu, Alibaba, Tencent and iFlytek – as ‘national champions’ to lead AI innovation in self-driving cars, smart cities, computer vision and voice intelligence respectively. Whether spurred by DeepMind’s display of Go supremacy or not, the AI arms race was suddenly an official reality. China’s strategy to create a $1 trillion industry is driven primarily by economic concerns, but it also states that AI will be put into military service. The potential for creating weapons of hitherto unimagined power using machine intelligence is not lost on world leaders and military strategists. In September 2017, Vladimir Putin felt Russia to be sufficiently on the sidelines of this emerging rivalry to speak candidly about the true nature of the race:

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Soon, the technology sections of major publications all over the world were making plenty of space for analyses comparing the current and future AI capabilities of the US and China. Venture capitalist and former Google China president Kai-Fu Lee has become a minor celebrity by insisting that China is catching up much faster than most people realise, mainly thanks to better funding for AI entrepreneurs and an internet ecosystem that is largely closed to Western competition:

“As I’ve said before, building an AI superpower in the 21st century requires four conditions: lots of data, dedicated entrepreneurs, skilled AI scientists, and a friendly policy environment. China’s highly competitive startup environment is forging the world’s most shrewd and persistent entrepreneurs, and China’s weird “intranet” has created the world’s most data-rich internet environment, so when you add on the other two factors – the emergence of more AI scientists and the Chinese government’s policy support – Silicon Valley’s advantage will melt away.”

He points to the penetration of WeChat Pay and Alipay (China’s mobile payments industry is 50 times bigger than that of the US) as proof of China’s big data credentials, and compares China’s data resources to the oil reserves of Saudi Arabia. Lee has suggested that the US double its funding of AI research to remain competitive.

On the other hand, researchers at Oxford University’s Future of Humanity Institute published a report earlier this year comparing US and Chinese capabilities for developing AI technology across four key drivers: Hardware; Data; Research and Algorithms; and Commercial AI Sector. The US scored 33 and China 17, with China beating its rival in just one area: access to data. The report identifies hardware as China’s biggest challenge, pointing to US efforts to slow China’s progress in high-powered chip development:

“After the U.S. government banned Intel and other chip-makers from selling China high-powered Xeon chips, the Committee on Foreign Investment in the United States (CFIUS) has subjected China’s investments in U.S. chip-makers to harsher scrutiny. In September 2017, the White House blocked a state-backed Chinese investment fund from acquiring a US semiconductor company, marking only the fourth time in history that an American president had blocked a corporate acquisition on national security grounds.”

Both Intel and NVIDIA, the inventor of the GPU, a class of chip that has proven highly effective at executing deep learning tasks, are American companies. Google, Amazon and Facebook have all recently started designing their own chips to power artificial intelligence applications. This year, Google released the third generation of its Tensor Processing Unit (TPU), a specialist machine-learning chip which the Mountain View company makes available via its cloud services, as well as the smaller Edge TPU, designed to run machine learning models directly on devices.

However, in a recent interview, R. David Edelman, director of the Project on Technology, the Economy, and National Security at MIT, suggested that by treating AI-related hardware as ‘militarily sensitive’, the US government may inadvertently harm companies like Apple and Google, which do a great deal of business in China, to the benefit of their Chinese rivals:

“This is intended to help US companies be more competitive. The irony is it would almost certainly give Chinese companies that don’t face those same restrictions a sizable advantage in the playing field.”

Lack of access to US-made semiconductors may force China to develop chip-making prowess of its own. To that end, Beijing is currently raising 300 billion yuan ($44bn) via the China Integrated Circuit Industry Investment Fund Co. to boost the sector, following on from a 139 billion yuan financing round in 2014.

Superintelligence: doomsday machine or benevolent dictator?

The many billions being thrown at artificial intelligence R&D by the world’s largest corporations and richest governments raise an important question: what will the end result be? An AI-enabled utopia in which humans, freed from the need to do boring, mundane, or dangerous jobs, can pursue creative and artistic goals, all watched over by machines of loving grace (to borrow from Richard Brautigan’s poem)? Or a dystopia in which machine intelligence is used to control, enslave or even eradicate humans altogether?

Thinkers including Stephen Hawking, Elon Musk and Bill Gates have all warned that artificial intelligence research could lead to the development of a superintelligence – a machine that is not only more intelligent than humans, but which can also rewrite its own software to continually improve itself – capable of wiping out humanity. The machine demiurge wouldn’t even need to view people as a threat for this to happen, posits Hawking; it would just need to be pursuing, with ruthless efficiency, a goal that was ultimately incompatible with human life:

“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

Nick Bostrom, director of the aforementioned Future of Humanity Institute at Oxford University, illustrates this idea with his now famous paperclip maximiser:

“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Given the unpredictability of how a machine superintelligence would respond to human beings, even the ones who originally designed it with the goal of facilitating human flourishing, the ultimate destination of AI development is something of a crapshoot. MIT physics professor Max Tegmark sets out 12 possible scenarios in his book “Life 3.0: Being Human in the Age of Artificial Intelligence”, one of which is named “Benevolent Dictator”. In this scenario, the superintelligent entity “uses quite a subtle and complex definition of human flourishing, and has turned Earth into a highly enriched zoo environment that’s really fun for humans to live in. As a result, most people find their lives highly fulfilling and meaningful.” Even this optimistic scenario invites comparisons with sci-fi dystopias like those seen in The Matrix and Black Mirror.

Luckily, at least for those of us who are already alive, many experts in artificial intelligence see the emergence of a superintelligence happening many decades into the future, if at all. Current AI research, which focuses on deep learning by neural networks, is excellent at solving a broad range of narrow problems, argues Rodney Brooks, former director of the MIT Computer Science and Artificial Intelligence Laboratory, but developments in the field that would lead to even human-level intelligence are as yet undreamed of:

“Work is underway to add focus of attention and handling of consistent spatial structure to deep learning. That is the hard work of science and research, and we really have no idea how hard it will be, nor how long it will take, nor whether the whole approach will reach a fatal dead end. It took thirty years to go from backpropagation [the earliest learning algorithms loosely based on abstracted models of neurons] to deep learning, but along the way many researchers were sure there was no future in backpropagation. They were wrong, but it would not have been surprising if they had been right, as we knew all along that the backpropagation algorithm is not what happens inside people’s heads.”
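
For readers curious what the backpropagation Brooks mentions actually involves, the sketch below is a minimal illustration: a tiny two-layer neural network taught the XOR function using nothing but NumPy and the chain rule. The network size, learning rate and toy dataset are chosen purely for clarity here, and bear no relation to DeepMind’s systems or to any production code.

    # A minimal, illustrative sketch of backpropagation: a two-layer network
    # learning the XOR function. All names and values are invented for this example.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR, a problem a single-layer network cannot solve.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Randomly initialised weights and biases for a 2-4-1 network.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0  # learning rate
    for step in range(5000):
        # Forward pass: compute the network's prediction.
        h = sigmoid(X @ W1 + b1)    # hidden-layer activations
        out = sigmoid(h @ W2 + b2)  # output-layer activations

        # Backward pass: apply the chain rule to propagate the error
        # from the output back towards the input, layer by layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent step on every weight and bias.
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # tends towards [[0], [1], [1], [0]] as training succeeds

Modern deep learning stacks many more layers and vastly larger weight matrices on top of essentially this update rule; as Brooks points out, none of it claims to describe what happens inside people’s heads.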

Seeing the effectiveness of machine learning algorithms in tasks like image recognition and jumping to the conclusion that human-level intelligence is inevitable is, says Tegmark, like seeing a combustion engine and concluding that warp drives are just around the corner. While that doesn’t make the problem of how humanity should deal with a superintelligence go away, it does at least buy us some time.

A fate worse than extinction

Potentially life-eradicating superintelligence may be a dim and distant prospect, but more mundane applications of artificial intelligence to the battlefield are advancing rapidly. Google’s Project Maven provided the US Department of Defense with machine learning technology to help analysts identify targets in surveillance footage from unmanned drones, and the Mountain View company clearly saw the project as a foothold from which to win larger contracts with the Pentagon. In an internal email about Project Maven from September 2017, a Google staffer wrote:

“Maven is a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”

Google has since announced that it will not renew the Maven contract, after more than a dozen employees resigned in protest, but this and other leaked emails concerning Maven reveal the commercial incentives for large tech companies to supply the military with AI capabilities. Amazon (the clear front-runner) and IBM are among the companies currently competing for a contract to move the DoD onto the cloud and bring AI capabilities to the Pentagon’s data analysis, a contract that Bloomberg Government estimates will be worth $10 billion. Such initiatives pave the way to autonomous weapons systems: combinations of unmanned surveillance drones, robots, wearable devices, machine learning algorithms and big data analytics that will automate battlefield decision-making.

Aside from the military advantages conferred by these systems, which reduce the need for human combatants, some analysts have made moral arguments in their favour. Roboticist Ronald C. Arkin, for example, has posited that unmanned systems could behave “more ethically” than their human counterparts since, lacking emotions and an instinct for self-preservation, they could be programmed to gather all the available information before making a life-and-death decision.

Others are less sanguine, recognising the potential for sufficiently advanced artificially intelligent systems to restore the first-strike advantage for nuclear powers, for badly programmed killer robots to run amok, and for misuse by dictators and terrorists. Elon Musk (who has described artificial intelligence research as “summoning the demon”) joined a large number of AI researchers, scientists and companies in July 2018 in signing a pledge to forgo the development of lethal autonomous weapons:

“We the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.”

Slaughterbots, a video released by the Future of Life Institute in 2017, dramatises a near-future scenario in which inexpensive microdrones, combined with facial recognition technology and shaped explosive charges, are deployed to assassinate huge numbers of political targets with impunity. Unlike nuclear warheads, the software that powers autonomous weapons systems would be relatively easy to leak, copy and distribute. The drones flown over Gatwick airport two weeks ago may only have caused three days of disruption, but they might also foreshadow a chilling future in which weapons of mass destruction are available to anyone with the will and better-than-average technical abilities.

The Institute’s pledge also calls on governments to implement “strong international norms, regulations and laws against lethal autonomous weapons”. At present, 26 nations have explicitly endorsed a call for a ban on lethal autonomous weapons systems, while five states have rejected one outright: France, Israel, Russia, the United States and the United Kingdom, all of which are among the world’s top ten arms exporters. Unfortunately, the world may need to face the AI equivalent of a Hiroshima or Nagasaki before there is enough public outcry to push these countries towards an international treaty. But by then, it may already be too late.