AI and Warfare: Escalating issues or preventing loss of life?

Introduction

Devansh Sharma
3 min read · Dec 16, 2022

Artificial intelligence (AI) is playing a growing role in the conduct of warfare. Autonomous and semi-autonomous drones are becoming a practical and affordable instrument for striking conventional targets, as seen in recent conflicts in Ukraine, Azerbaijan, Syria, and Ethiopia. Future threats include drone swarms overrunning Canadian military stations and cyberattacks on critical infrastructure.

Predicting the future

Beyond the hyperbole common in science fiction, it is critical to understand the inherent risks of using AI in combat. The proliferation of lethal autonomous weapons systems (LAWS), such as military robots and drones, together with China’s emergence as a major international power, has begun to reshape the existing order. Managing this shifting geopolitical landscape may depend on Canada and other North Atlantic Treaty Organization (NATO) members participating actively in debates on regulating military AI.

The likelihood of adversaries attacking exposed AI systems is high, and current AI technology lacks resiliency. The military should therefore invest in AI applications that operate in uncontested areas: tightly governed systems such as diagnostic software and maintenance and failure-prediction tools can benefit the military while allaying worries about vulnerabilities. These technologies reduce exposure to adversary attacks, incomplete data, and similar risks, and they are more likely to be fielded quickly even if they are not the glamorous applications we frequently read about.

Where to draw boundaries

It is crucial to set boundaries for military AI research in order to manage this new reality. As a general-purpose technology, comparable to the steam and internal combustion engines, AI has the power to alter how fast and how widely conflicts are fought. Unfortunately, the laws of war governing the use of AI have not yet been settled, particularly the conditions for initiating war (jus ad bellum) and how AI should behave during combat (jus in bello).

AI should not be employed as a weapon of war. As military robots and drones become more accessible and prevalent, both state and non-state actors will gain access to LAWS; indeed, several states have already made substantial progress in deploying them. Although Canada and many other countries favour legally binding agreements that would restrict the development and use of autonomous weapons, most major military powers see enormous benefits in weaponizing AI.

Negotiating collective arms control agreements remains very difficult for countries like China, Russia, and the United States because of a lack of mutual trust. It is clear, though, that unrestrained military AI development is dangerous. In reality, LAWS are being built on the same well-funded algorithms that today drive seemingly benign industries such as autonomous vehicle networks, social networking, music streaming, and children’s toys.

Conclusion

Unfortunately, AI is still a moving target. Unlike nuclear material or genetically engineered diseases, whose destructive potential can be contained by restricting the underlying technology, artificial intelligence is just software. A “killer robot,” for example, is not the product of a single invention; it is a collection of technologies connected by evolving software algorithms.

Fortunately, the world’s nation-states have dealt with new technologies affecting global security before. Despite the diversity of viewpoints on AI and its weaponization, past negotiations on weapons of mass destruction (WMD) can serve as a framework for future accords, particularly when defining the norms of war.

During the Cold War, confidence-building measures such as regular communication, scientific cooperation, and shared scholarship were crucial for managing geopolitical tensions. In the years following World War II, the most powerful countries, including the United States, Britain, the Soviet Union, China, France, Germany, and Japan, pursued international control of nuclear weapons, chemical agents, and biological weapons.

As then, so now: global collaboration is necessary to manage a new generation of highly developed WMDs.
