April 21, 2025
Politics

Build Allied AI or Risk Fighting Alone


Argument: An expert’s point of view on a current event.

Cutting-edge systems need to be developed in tandem.

By Becca Wasser, a fellow in the defense program and co-lead of the Gaming Lab at the Center for a New American Security, and Josh Wallin, a fellow in the Defense Program at the Center for a New American Security.

[Image: Participants chat in front of an electronic image of a soldier before the closing session of the Responsible AI in the Military Domain summit in Seoul on Sept. 10, 2024. Jung Yeon-ji/AFP via Getty Images]

February 24, 2025, 11:13 AM

It’s 2029. From wildfires in California to catastrophic flooding in Pakistan, natural disasters are more common than ever—and hit harder. But amid fire and flood, advances in artificial intelligence enable the United States and other countries to deploy self-flying drones to find and rescue survivors, use machine-learning algorithms to streamline the delivery of lifesaving aid, and automate translation in real time to aid multinational coordination.

This future vision of more efficient, collaborative, and effective military cooperation is attainable, but only if we act now. The United States and its allies are increasingly incorporating rapidly advancing AI-enabled technology into their militaries to solve key operational problems, speed up responses, save lives, and even deter threats.
But each nation is developing its own capabilities; incorporating these systems into military activities at different paces; and creating its own policies to dictate when, where, and how military AI can be employed.

Washington and its allies must build a shared framework for the collective use of military AI. Failing to do so will risk the United States’ ability to operate alongside other nations against future threats ranging from natural disasters to great-power conflicts.

Military AI is already being developed and deployed.
The United States is building more than a thousand collaborative combat aircraft, which it describes as self-flying “loyal wingman” planes meant to support crewed fighter jets. Both Ukraine and Israel have claimed to use AI to analyze open-source data to identify targets for military strikes, while Bloomberg reported in 2024 that the United States has used AI to enable target selection in the Middle East. Prospective use cases abound in military medicine, logistics, maintenance, and personnel management.

But all these countries are branching off in their own directions. As with military hardware such as fighter jets, AI systems developed by different governments are not necessarily compatible. There is a risk of countries adopting different development paths and creating siloed systems. The United States rarely fights wars alone, preferring collective action to help reduce the burden on U.S. forces. Washington must bring along its allies and friends to achieve its vision of AI-enabled interoperable military forces for military coalitions to even be possible in the future.

The importance of interoperability—the ability of countries to conduct military operations together—was abundantly clear during a recent tabletop exercise that we ran for the U.S. Defense Department-led AI Partnership for Defense. This fictional exercise brought together government officials from more than a dozen nations to explore how AI could be employed in future military operations in 2029, and it represented a level of cooperation in AI employment that does not exist today. The exercise demonstrated that military AI has enormous potential to improve coalition military operations and save lives—but only if the United States and other countries take steps now to build interoperability by adopting, integrating, and employing these capabilities together.
In future high-speed conflicts, the United States and allied militaries will need to share large volumes of data parsed by AI to identify targets and help connect one military’s sensors to another’s shooters, which may be autonomous uncrewed platforms carrying out strike missions. But this more efficient vision of future warfare will not be possible if the United States and its allies fail to align their military AI investments, strategies, and employment.

Interoperability has long been a major challenge for military coalitions—one that even the most advanced forces have struggled to achieve. Multinational military operations are complex, requiring the integration of diverse equipment, rules of engagement, and skills. In past operations, these differences have slowed down decision-making, shifting the weight of effort to a select few countries and resulting in less effective outcomes. In Operation Inherent Resolve, researchers from the RAND Corp. found that the pace of airstrikes against the Islamic State was slowed by different interpretations of acceptable targets among coalition nations, ultimately pushing the bulk of strike missions onto the United States.

The advent of AI presents new challenges for interoperability. Across nations, there is varied understanding of this technology and no consensus among officials on how to develop and employ AI and autonomous systems. Various frameworks, guidance, and standards abound as each nation grapples with its own interpretation of its legal and ethical obligations. Key to interoperability is having shared capabilities that can communicate, interact, and work together.
But right now, nations are going down their own paths, developing unique systems that can’t easily communicate or share data—a process that may also be hampered by long-standing security restrictions.

For example, computer vision applications, which enable the autonomous identification of potential threats, may struggle in a coalition operation. AI systems developed by different nations may contradict each other and thus slow down the targeting cycle rather than speeding it up. Even systems developed using the same training data—the fuel that computer vision software relies on to learn about the world—are not guaranteed to perform identically due to other design parameters. If multiple countries employ autonomous systems during an operation, what should be done to ensure that they identify targets in a consistent way?

Autonomy also presents new challenges. Unlike existing military platforms—in which human operators can communicate, resolve disputes, and coordinate their actions to avoid accidents—autonomous systems will need to be programmed to operate together without relying on the common sense or relationships developed between human operators. Automating this historically human behavior and problem-solving is a new process, and it may not be possible when conflicts between autonomous systems emerge during fast-paced operations.
There are also critical issues that countries have not yet sufficiently grappled with, such as whether they are comfortable with another country’s autonomous capabilities operating alongside platforms crewed by their military personnel, or with AI creating mission plans for their forces to undertake. While the answers are undoubtedly context-dependent, national capitals reviewing every operation would lose valuable time and slow down operations during a crisis, when speed is essential—ceding the very benefit that autonomy is meant to provide.

In the face of the perceived risks of AI and autonomy, it may seem easiest to throw up our hands and abandon these emerging capabilities, but nations must understand the opportunity cost of failing to employ them. We saw this in our tabletop exercise, where participants had to weigh whether to employ a crewed capability over a more effective AI-enabled system. The trade-off was stark: accept risk in hopes of moving faster and saving more lives, or decline risk by using familiar platforms and sparse, well-trained operators, knowing that it could result in dramatically fewer lives saved. The United States and its allies will need to make these cost calculations and determine where and when they are willing to take risks.

As we are keenly aware, adversaries such as China are already pursuing these technologies, and the United States and its allies risk losing their military technological edge if they cannot safely harness AI. In Washington, D.C., and capitals around the globe, leveraging military AI is and will remain a critically important challenge. The pace and scale of AI development are rapidly growing and, along with them, the difficulty of interoperability. The Trump administration should prioritize military AI interoperability, particularly if it wants to prioritize efficiency and encourage others to take on greater responsibility in future crises.
The administration must work with other governments to smooth out differences in how defense AI systems are developed, maintained, and deployed in order to reap their benefits. Washington and its allies must reconcile their differing policy perspectives, guidance, and risk assessments for employing AI and autonomous systems well before a conflict begins. Efforts such as the AI Partnership for Defense are critical steps that have laid the groundwork for collaboration, but more work must be done.

Creating military AI interoperability between the United States and other nations is a tall order. Without it, nations will struggle to harness the benefits of AI in future coalition operations and may not be able to effectively respond to crises or deter threats. By prioritizing AI interoperability, the United States and allied nations can lay the groundwork for effective military operations and a more secure future.

Becca Wasser is a fellow in the defense program and co-lead of the Gaming Lab at the Center for a New American Security. Josh Wallin is a fellow in the Defense Program at the Center for a New American Security.
