Military is the Missing Word in Discussions About AI Security

The author is Director of International Policy at Stanford University’s Cyber Policy Center and Special Advisor to the European Commission

Western governments are racing to establish AI safety institutes. The UK, US, Japan and Canada have all announced such initiatives, and the US Department of Homeland Security added an AI Safety and Security Board just last week. Given this strong emphasis on safety, it is notable that none of these bodies governs the military use of AI. Meanwhile, the modern battlefield is already revealing clear AI safety risks.

According to a recent investigation by the Israeli magazine +972, the Israel Defense Forces have used an AI-powered program called Lavender to flag targets for drone strikes. The system combines data and intelligence sources to identify suspected militants. It reportedly generated tens of thousands of targets, and the strikes that followed in Gaza allegedly caused excessive death and destruction. The IDF disputes several aspects of the report.

Venture capitalists are pouring money into “deftech”, the defense technology market. Tech companies are eager to join this latest boom and all too quick to sell the benefits of AI on the battlefield. Microsoft has reportedly pitched DALL-E, a generative AI tool, to the US military, while the controversial facial recognition company Clearview AI prides itself on having helped Ukraine identify Russian soldiers with its technology. Anduril makes autonomous systems and Shield AI develops AI-powered drones; both companies have raised hundreds of millions of dollars in recent funding rounds.

Although it is easy to point the finger at private companies hyping AI for war, it is governments that have carved the deftech sector out of their oversight. The EU’s landmark AI Act does not apply to AI systems with “exclusively military, defense or national security purposes.” The White House Executive Order on AI likewise includes significant carve-outs for military AI (although the Defense Department has its own internal guidelines): much of the order “does not cover AI when used as a component of a national security system.” And Congress has taken no action to regulate military uses of the technology.

This means that the world’s two largest democratic blocs have no new binding rules on what types of AI systems their militaries and intelligence services may use. They therefore lack the moral authority to press other countries to limit the use of AI in their own armed forces. A recent political declaration on the “Responsible Military Use of Artificial Intelligence and Autonomy”, endorsed by a number of countries, is nothing more than that: a declaration.

We must ask how useful political discussions about AI safety are if they do not address military uses of the technology. There is no evidence that AI-powered weapons can comply with international law on distinction and proportionality, yet they are being sold around the world. And because some of these technologies are dual-use, the line between civilian and military applications is blurring.

The decision not to regulate military AI has a human cost. Though demonstrably imprecise, these systems are often granted undue trust in military contexts because they are mistakenly seen as impartial. AI can indeed speed up military decision-making, but it can also introduce more errors and may fundamentally fail to comply with international humanitarian law. Human control over operations is crucial to holding actors legally accountable.

The UN has tried to fill the gap. Secretary-General António Guterres first called for a ban on autonomous weapons in 2018, describing them as “morally repugnant.” More than 100 countries have expressed interest in negotiating and adopting new international law to ban and restrict autonomous weapon systems, but Russia, the US, the UK and Israel rejected a binding proposal, causing the talks to collapse.

With nation states failing to protect civilians from military uses of AI, the rules-based international system must be strengthened. The UN Secretary-General’s High-Level Advisory Body on AI (of which I am a member) is one of several groups well placed to recommend bans on risky military applications of AI, but political leadership remains crucial to ensuring that any rules are followed.

It is critical to ensure that human rights standards and laws governing armed conflict continue to protect civilians in a new era of warfare. The unregulated use of AI on the battlefield cannot continue.
