
Europe needs a plan for AI in the military realm

By Ulrike Franke

In Europe, 2019 was the year of artificial intelligence (AI). Governments put together expert groups, organized public debates and published national strategies designed to grapple with the possible implications of AI in areas such as health care, the labor market and transportation. European countries developed training programs, allocated investment and made plans for research cooperation. In 2020, the challenge for governments will be to show that they can fulfill their promises by translating ideas into effective policies.

But despite attempts to coordinate these efforts – most notably that of the European Commission, which called upon member states to maximize cooperation through the publication of AI strategies – there is one AI-relevant area in which Europe lacks coherence, and which generally receives too little attention. In fact, an analysis of official documents from various European countries suggests fundamental differences that may be difficult to bridge. This area is the use of AI in the military realm.

Despite a marked growth of work on the economic and societal consequences of the increasing use of AI in various areas of life, the use of AI in the military is largely absent from the public discourse in most European countries. In Germany in particular, officials seem uncomfortable discussing the subject, unless the focus is on whether and how to ban “killer robots,” or AI-enabled lethal autonomous weapon systems (LAWS).

In other countries – most notably France, but also the UK – there is more expert work on the topic, but this does not translate into a broader societal debate. Similarly, the academic discourse on AI in the military focuses on developments in the United States and China, and tends to overlook Europe.

This neglect is not helpful. It means that little information is available about European thinking on AI in the military, and that there is scant discussion of how European armed forces plan to use AI. Yet the fact remains that European companies are already developing AI-enabled military systems.

It would be a bad idea for Europe to try to sit this development out – or to approach it with an exclusively national focus. While no one can predict exactly how revolutionary it will be, AI is likely to have a considerable impact on how militaries operate, and on how wars are waged. As Europeans discuss plans for strategic sovereignty – both in the military and in the technology sector – military AI, which is relevant to both areas, deserves more attention.

One of the problems of the European, particularly German, debate on AI-enabled military systems is the focus on LAWS. These systems can carry out the critical functions of a targeting cycle in a military operation, including the selection and engagement of targets, without human intervention. The potential use of LAWS comes with a range of legal, ethical and political problems that are rightly being discussed in the United Nations. But while concern over LAWS, and work toward regulating them, are to be praised, European policymakers should not forget that military AI goes beyond killer robots.

AI is, for example, famously good at working with big data to identify and categorize images and texts. In a military context, AI can help sift through massive amounts of video footage, such as feeds recorded by drones. Or it can examine photographs to single out changes from one picture to the next – a useful function for indicating the presence of an explosive device planted in the time between when the photos were taken. Other intelligence-relevant AI applications include image and face recognition, translation, image geolocation and more.
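
To make the change-detection example concrete, here is a minimal illustrative sketch – not drawn from any fielded system – of how such a photo comparison might be scripted, assuming the OpenCV library and two roughly aligned photographs of the same scene. The file names, threshold and minimum region size are all hypothetical.

import cv2  # OpenCV 4.x assumed

# Hypothetical file names for two aligned photographs of the same stretch of road.
before = cv2.imread("road_before.jpg", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("road_after.jpg", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference, smoothed to suppress sensor noise.
diff = cv2.absdiff(before, after)
diff = cv2.GaussianBlur(diff, (5, 5), 0)

# Keep only strong changes and group them into candidate regions.
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 200:  # ignore tiny artifacts
        x, y, w, h = cv2.boundingRect(c)
        print(f"Changed region at x={x}, y={y}, size={w}x{h}")

An operational system would need far more than this, starting with precise image registration and learned classifiers, but the basic difference-and-flag idea is the same.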

AI can also support military logistics through predictive maintenance based on the analysis of various sensor inputs. AI is likewise likely to be deployed in cyberspace, where it can help actors both find and patch vulnerabilities. Because cyberspace imposes relatively few physical limitations, and because introducing AI there requires fewer organizational changes, AI-enabled weapons could enter the cyber realm comparatively quickly.
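
As a rough illustration of the predictive-maintenance idea, the following sketch fits an off-the-shelf anomaly detector (scikit-learn's IsolationForest) to synthetic readings from healthy engines and flags readings that drift away from that baseline. The sensor names, values and contamination rate are invented for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic training data: vibration and temperature readings from healthy engines.
healthy = np.column_stack([
    rng.normal(1.0, 0.1, 1000),   # vibration (arbitrary units)
    rng.normal(80.0, 2.0, 1000),  # temperature (deg C)
])

# Fit an anomaly detector on the healthy baseline.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings from the fleet; the last one drifts away from the baseline.
new_readings = np.array([[1.02, 81.0], [0.95, 79.5], [1.60, 93.0]])
flags = model.predict(new_readings)  # 1 = normal, -1 = flagged for inspection

for reading, flag in zip(new_readings, flags):
    status = "schedule inspection" if flag == -1 else "ok"
    print(reading, status)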

In many areas, AI can make processes faster, more efficient and cheaper. Such efficiency gains are important, especially for cash-strapped militaries. But technologies are truly groundbreaking only if they provide new capabilities or allow for tactics that go beyond what already exists. Artificial intelligence might be able to provide this in the areas of swarming and autonomous vehicles – including, but not limited to, LAWS.

Swarming refers to the combination of many systems – such as drones, unmanned boats or tanks – that can act independently but in a coordinated manner. Military swarms could provide new capabilities, such as flying sensor networks, flying minefields or coordinated and automated waves of attacks that deny the enemy a massed formation to fight.
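
The coordination principle behind swarming can be illustrated with a toy simulation in the style of the classic "boids" rules (cohesion, separation, alignment), in which each drone reacts only to nearby neighbors and there is no central controller. All parameters below are illustrative, not taken from any real system.

import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 30, 100
pos = rng.uniform(0.0, 100.0, (N, 2))   # drone positions in a 100x100 area
vel = rng.normal(0.0, 1.0, (N, 2))      # initial headings

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist > 0) & (dist < 25)  # each drone senses only locally
        if not neighbors.any():
            continue
        cohesion = offsets[neighbors].mean(axis=0) * 0.01            # drift toward the group
        alignment = (vel[neighbors].mean(axis=0) - vel[i]) * 0.05    # match neighbors' heading
        close = neighbors & (dist < 5)
        separation = -offsets[close].sum(axis=0) * 0.05 if close.any() else 0.0  # avoid collisions
        new_vel[i] = vel[i] + cohesion + alignment + separation
    vel = new_vel
    pos = pos + vel * 0.1

print("Positions spread after coordination:", pos.std(axis=0))

Even this toy version shows the key property: coordinated group behavior emerges from simple local rules, with no single node that an adversary could target to disable the whole swarm.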

Given these extensive areas of application – and given that, judging from past attempts to predict the impact of new technologies, there is a good chance the most important changes AI brings to warfare are not even on the list above – Europe cannot afford to disregard these developments.

Of the big three European states – Germany, France and the UK – France has shown the most interest in military AI. Defense was designated as a priority AI sector for industrial policy in the French 2018 national AI strategy. In 2019, France became the first European state to publish a military AI strategy. The country’s approach to AI is clearly geopolitical and driven by concerns over Europe and France becoming tech colonies of the United States and China.

The UK has published neither an overarching national AI strategy nor a military one, but it has produced a range of relevant documents, most notably from the Defence Science and Technology Laboratory (DSTL) and from the Development, Concepts and Doctrine Centre (DCDC), the Ministry of Defence's in-house think tank. However, these publications appear to target primarily the expert community.

Among the big three, Germany is the outlier. In its 2018 national AI strategy, the military, security and geopolitical elements of AI are notably absent. Defense is mentioned only in one sentence, which implicitly shifts all responsibility for this area to the ministry of defense. As this ministry traditionally publishes few doctrinal or strategy documents, it is unlikely that a German military AI strategy will see the light of day.

More importantly, the German political realm, spearheaded by the foreign ministry, seems to have taken the decision to deal with military AI primarily from an arms control angle. As a consequence, the German expert community focuses mostly on AI arms control and disarmament. Given the extent to which this angle dominates the debate, and how different it is from the French approach, it poses questions for joint French-German projects like the new Future Combat Air System fighter jet, which will rely heavily on AI elements.

Given the changes expected to be caused by AI in the military realm and given the level of attention paid to the issue in other countries – most notably the US, China and Russia – as well as European yearnings for strategic sovereignty, Europeans should pay closer attention to military AI. It is counterproductive to let valid concerns about LAWS marginalize the debate on all military AI.

ULRIKE FRANKE
is a policy fellow at the European Council on Foreign Relations in London, where she works on emerging technologies in warfare. She co-hosts Sicherheitshalber (For safety’s sake), a German-language podcast on security and defense.
