Cyberattacks are becoming more frequent, sophisticated and destructive. Each day in 2017, the United States suffered, on average, more than 4,000 ransomware attacks, which encrypt computer files until the owner pays to release them [1]. In 2015, the daily average was just 1,000. In May last year, when the WannaCry ransomware worm crippled hundreds of IT systems across the UK National Health Service, more than 19,000 appointments were cancelled. A month later, the NotPetya ransomware cost pharmaceutical giant Merck, shipping firm Maersk and logistics company FedEx around US$300 million each. Global damages from cyberattacks totalled $5 billion in 2017 and may reach $6 trillion a year by 2021 (see go.nature.com/2gncsyg).
Nation states are partly responsible for this rise: they use cyberattacks both offensively and defensively. For example, North Korea has been linked to WannaCry, and Russia to NotPetya.
As the threats escalate, so do defence tactics. Since 2012, the United States has used ‘active’ cyberdefence strategies, in which computer experts neutralize or divert attacks with decoy targets, or break into an attacker’s computer to delete data or destroy the system. In 2016, the United Kingdom announced a 5-year, £1.9-billion (US$2.7-billion) plan to combat cyber threats. NATO also began drafting principles for active cyberdefence, to be agreed by 2019. The United States and the United Kingdom are leading this initiative; Denmark, Germany, the Netherlands, Norway and Spain are also involved (see go.nature.com/2hebxnt).
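To make the notion of a decoy target concrete, the sketch below shows the bare idea in Python: a fake service that accepts connections and records each source address and its first bytes of traffic, rather than serving real data. The port, log file and log format are arbitrary choices for illustration; operational decoys are far more elaborate.

```python
# Minimal sketch of a decoy target ('honeypot'): a fake service that
# logs connection attempts instead of serving real data. The port and
# log destination are arbitrary, illustrative choices.
import datetime
import socket

DECOY_PORT = 2222       # arbitrary port, posing as an SSH-like service
LOG_FILE = "decoy.log"  # assumed log destination

def run_decoy():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", DECOY_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            with conn:
                conn.settimeout(5.0)
                try:
                    data = conn.recv(1024)  # capture the first bytes sent
                except socket.timeout:
                    data = b""
                stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
                with open(LOG_FILE, "a") as log:
                    log.write(f"{stamp} {addr}:{port} {data!r}\n")

if __name__ == "__main__":
    run_decoy()
```

A decoy like this wastes an attacker’s time and yields intelligence (source addresses, tooling fingerprints) without touching the attacker’s machine, which makes it the least escalatory rung of active defence.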
Artificial intelligence (AI) is poised to revolutionize this activity. Attacks and responses will become faster, more precise and more disruptive. Threats will be dealt with in hours, not days or weeks. AI is already being used to verify code and identify bugs and vulnerabilities. For example, in April 2017, the software firm Darktrace in Cambridge, UK, launched Antigena, which uses machine learning to spot abnormal behaviour on an IT network, shut down communications to that part of the system and issue an alert. The market for AI in cybersecurity was worth $1 billion in 2016 and is predicted to reach $18 billion by 2023 [2].
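Antigena’s internals are proprietary, but the general idea of learning a baseline and flagging deviations can be illustrated with a simple unsupervised detector. The sketch below uses scikit-learn’s IsolationForest on synthetic network-flow features; the features, data and thresholds are assumptions for illustration, not Darktrace’s method.

```python
# Illustrative sketch of machine-learning anomaly detection on network
# traffic. All data is synthetic and the features are assumptions; this
# is not Darktrace's (proprietary) approach.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic: [bytes sent, bytes received, connections per minute]
normal = rng.normal(loc=[500, 2000, 10], scale=[100, 400, 3], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# New observations: one typical flow, one exfiltration-like burst
new_flows = np.array([
    [520, 2100, 11],     # resembles the baseline
    [50000, 300, 200],   # large upload and many connections: suspicious
])

for flow, label in zip(new_flows, detector.predict(new_flows)):
    if label == -1:      # -1 marks an outlier relative to the baseline
        print(f"ALERT: anomalous flow {flow}; isolating this network segment")
    else:
        print(f"ok: {flow}")
```

The appeal of unsupervised detection is that it needs no labelled attack data: anything sufficiently unlike the learned baseline triggers a response, which is how novel threats can be handled in hours rather than days.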
By the end of this decade, many countries plan to deploy AI for national cyberdefence; for example, the United States has been evaluating the use of autonomous defence systems and is expected to issue a report on its strategy next month [3]. AI makes deterrence possible because attacks can be punished [4]: algorithms can identify the source of an attack and neutralize it without having to identify the actor behind it. Currently, countries hesitate to push back because they are unsure who is responsible, given that campaigns may be waged through third-party computers and often use common software.
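As a toy illustration of source-level rather than actor-level response, the sketch below tallies alerts per originating address and emits a firewall rule for any source that crosses a threshold. The alert format, threshold and iptables rule are illustrative assumptions; an operational system would need attribution checks and human oversight before acting.

```python
# Toy sketch of automated, source-level response: block the attacking
# machine without attributing the human actor behind it. The alert
# format, threshold and firewall command are illustrative assumptions.
from collections import Counter

ALERT_THRESHOLD = 5  # alerts from one source before it is blocked

def block_rules(alert_lines):
    """Yield hypothetical firewall rules for sources that trip the threshold."""
    counts = Counter(line.split()[0] for line in alert_lines if line.strip())
    for src, n in counts.items():
        if n >= ALERT_THRESHOLD:
            yield f"iptables -A INPUT -s {src} -j DROP"  # example rule only

# Example alerts (documentation-range IP addresses)
alerts = ["203.0.113.7 port-scan"] * 6 + ["198.51.100.2 probe"]
for rule in block_rules(alerts):
    print(rule)
```

Note that the rule targets a machine, not a person: this is exactly the property described above, and also why automated push-back risks hitting innocent third parties whose computers have been hijacked.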
The risk is a cyber arms race [5]. As states use increasingly aggressive AI-driven strategies, opponents will respond ever more fiercely. Such a vicious cycle might ultimately lead to a physical attack.
Cyberspace is a domain of warfare, and AI is a new defence capability. Regulations are thus necessary for state use of AI, as they are for other military domains: air, sea, land and space [6]. Criteria are needed to determine proportional responses, to set clear thresholds or ‘red lines’ distinguishing legal from illegal cyberattacks, and to apply appropriate sanctions for illegal acts [7]. In each case, unilateral approaches will be ineffective; rather, an international doctrine must be defined for state action in cyberspace. Alarmingly, international efforts to regulate cyber conflicts have stalled.
We call on regional forums, such as NATO and the European Union, to revive efforts and prepare the ground for an initiative led by the United Nations. In the meantime, computer experts must be transparent about problems, limitations and shortcomings of using AI for defence. Researchers must also work with policymakers and end users to design testing and oversight mechanisms for this technology.