Introduction
Artificial Intelligence (AI) has swiftly moved from research labs into the public spotlight – epitomised by tools like ChatGPT – and is now making inroads into the arena of modern warfare. Military planners around the globe are exploring AI’s potential on the battlefield, from autonomous drones and real-time data analysis to nuclear command systems. As great-power rivalries intensify, AI is becoming as strategically significant as conventional armaments, raising profound questions about nuclear strategy and global security. This article examines how AI is reshaping modern warfare with a focus on nuclear warfare, early warning systems, decision-making, cyberwarfare, and command and control, all while remaining politically neutral and grounded in real-world developments. It also looks at the role of ChatGPT-style language models in information warfare and decision support.
From the Cold War to the AI Arms Race
Military interest in automating warfare is not new. During the Cold War, both superpowers invested in early computer systems for defence. In the 1950s, the United States built the SAGE supercomputer network to process radar data and coordinate air defences against Soviet bombers, aiming to supplement “the fallible, comparatively slow-reacting mind and hand of man”. The Soviet Union, for its part, developed the semi-automated “Dead Hand” system – an array of sensors and communications designed to automatically launch nuclear retaliation if national leadership was wiped out. These historical efforts laid the groundwork for today’s AI-driven military systems.
Fast forward to the 21st century, and the race for AI supremacy is in full swing. Speaking to students in 2017, Russian President Vladimir Putin famously declared, “Artificial intelligence is the future, not only for Russia, but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world.” His remark underscored how major powers view AI as a game-changing strategic resource. National security imperatives are now driving massive investments in military AI, fuelling an “AI arms race” that some analysts deem as perilous as the nuclear arms race itself. The United States, China, and Russia – among others – are pouring resources into defence AI research, determined not to fall behind in a technology that offers colossal opportunities and equally significant threats.
AI in Nuclear Strategy and Early Warning Systems
One area of particular focus is the integration of AI into nuclear strategy – especially in early warning systems and nuclear command and control. Proponents argue that modern AI, especially machine learning algorithms, could sift through vast intelligence and sensor datasets far faster than human analysts, potentially detecting missile launches or adversary moves with greater speed and accuracy. In theory, an AI-enhanced early warning system might reduce the risk of human error or delay in detecting a nuclear attack, giving leaders precious extra minutes to respond. AI could also help filter out false alarms by cross-checking alerts with multiple data sources and even arguing against erroneous launch indications – essentially a high-tech safeguard against over-reaction. Such decision-support AI might have prevented some infamous close calls. For instance, in 1983 Soviet officer Stanislav Petrov received computer alerts of incoming U.S. missiles; he suspected a glitch and held off on retaliating, correctly averting nuclear war. Had an automated system been in control that night, the outcome could have been catastrophic. Advocates believe a well-designed AI could mimic Petrov’s wise scepticism by recognising false positives and advising restraint.
However, integrating AI into nuclear early warning and command networks is a double-edged sword. Technical limitations mean AI systems can behave unpredictably or misinterpret data in novel ways. Experts warn that overly trusting an algorithm’s “judgement” could rekindle old problems of false alarms in new forms. Machine learning models, in particular, are notorious for occasional brittle mistakes – confidently seeing threats where none exist if fed data outside their training, or being tripped up by subtle anomalies. A nuclear early warning AI might filter out one false alarm yet trigger another, potentially providing “faulty but seemingly accurate information to decision-makers”. This is especially dangerous given that nations like the United States and Russia maintain a launch-on-warning posture – nuclear forces on high alert ready to fire within minutes of detecting an attack. In a high-tension crisis, an AI error or misclassification could be magnified by hair-trigger deployment policies. Thus, while AI promises to aid nuclear decision-making by crunching data without human fatigue or emotion, it also introduces new failure modes that could increase the risk of inadvertent nuclear launch if not carefully managed. The challenge is ensuring that AI augments human judgement without overriding it or creating false confidence.
Command and Control: Humans in the Loop vs. Automation
The prospect of AI-driven decision-making in nuclear command and control (NC3) has prompted debate and some international consensus on limits. On one hand, military planners see benefits in automating parts of command systems – using AI to monitor communications, track submarines, or even recommend optimal responses – all to reduce human error and reaction time. China, Russia, and the U.S. are all reportedly experimenting with AI integration into their nuclear command networks, albeit opaquely. Chinese research in particular points to interest in using AI for command decision-support and perhaps even autonomous nuclear weapons systems down the line. These moves reflect a belief that AI could confer a strategic edge by accelerating and sharpening nuclear decision-making processes. Indeed, the very existence of Russia’s Cold War-era Perimeter (“Dead Hand”) system – which still serves as an automated backup to ensure a retaliatory strike if Russia’s leadership is decapitated – illustrates how automation in nuclear command is not unprecedented. Modern AI could take such automation to new levels, for better or worse.
At the same time, there is growing recognition that certain red lines should not be crossed. In a rare moment of accord, U.S. President Joe Biden and Chinese President Xi Jinping agreed in late 2023 that decisions to use nuclear weapons must remain under human control, not left to AI algorithms. The two leaders jointly affirmed the need for a human “finger on the button,” stressing the importance of prudence and responsibility in military AI development. This high-level understanding – the first of its kind – highlights widespread anxieties about fully autonomous nuclear launch systems. As of today, no country publicly claims to have handed over nuclear launch authority to a machine, and experts insist a “human in the loop” is a vital safety check. The 1983 Petrov incident is a stark reminder why: human intuition and caution prevented a calamity that a rigid computer might have provoked. Even Russia’s formidable Poseidon nuclear torpedo, an underwater drone carrying a massive warhead, is thought to have some autonomous capabilities but not complete independence in firing decisions. Policy-makers and militaries appear to recognise that, at least for now, AI’s role in nuclear command must be carefully constrained – assisting human decision-makers, not replacing them.
Autonomous Weapons and AI “Superweapons”
Beyond command networks, AI is increasingly being embedded in weapons themselves – including those with nuclear capabilities. The race to develop autonomous or semi-autonomous weapons is accelerating, and nuclear-armed states are no exception. Russia has openly boasted of new missile systems augmented by AI. The commander of Russia’s Air Force revealed plans for “cruise missiles with artificial intelligence” that can analyse their environment in flight and choose their own speed, altitude, course and even targets. Work is reportedly underway to give these missiles on-board AI “brains” to make decisions on optimal targeting without direct human control, according to Russian defence officials. Observers speculate that some of the futuristic weapons President Vladimir Putin unveiled in 2018 – such as the Avangard hypersonic glide vehicle and the Burevestnik nuclear-powered cruise missile – could incorporate this kind of AI-driven guidance system. If so, these would be among the first nuclear-delivery vehicles with a degree of machine autonomy, able to adjust their flight path and target selection en route.
Such developments illustrate AI’s allure in weapons design: a missile or drone that can react dynamically to countermeasures and hunt targets on its own promises greater effectiveness. Already today, many conventional systems have automated features (for example, naval close-in weapon systems that automatically shoot down incoming missiles in milliseconds). However, giving more lethal systems a self-directing capability raises serious concerns about control. Military analysts warn that the push to add AI to missiles, drones, and potentially robotic nuclear platforms is racing ahead of our ability to reliably control or predict these systems. An autonomous nuclear-tipped weapon that “learns” or adapts in real time could, in theory, pose a nightmare scenario if it were ever to malfunction or misinterpret intent. This is why any moves toward such weapons are being watched carefully by the international community. The norm so far is that nuclear weapons are kept under tight human control at all stages; AI challenges that norm by introducing a new degree of freedom – and uncertainty – in how these weapons might ultimately behave.
AI, Cyberwarfare, and Nuclear Security
AI’s impact on modern warfare also extends into the digital domain. Cyberwarfare is an increasingly critical front in any major conflict, and AI is turbocharging both offensive and defensive cyber capabilities. In the nuclear context, this raises the stakes. Advanced AI tools can rapidly scan networks for vulnerabilities or automate sophisticated cyberattacks that would have taken humans far longer to devise. Analysts caution that a concerted AI-driven cyber assault could, for instance, attempt to penetrate an adversary’s nuclear command and control systems or early warning networks. A successful hack or manipulation of data could sow chaos – imagine an AI falsifying missile launch data or jamming communication links at a critical moment. There is concern that AI-augmented cyber operations might make nuclear arsenals more vulnerable by exploiting weaknesses faster than defenders can patch them. On the flip side, AI is equally being deployed to fortify cybersecurity. Machine learning systems are used to detect anomalies in network traffic and to respond to intrusions at machine speed, forming a digital bulwark around sensitive nuclear infrastructure.
The intersection of AI and cyberwarfare introduces new dimensions of strategic instability. A key worry is inadvertent escalation: if one nation’s AI misidentifies a routine network glitch as a cyberattack on its nuclear systems, it could trigger retaliatory measures. Similarly, an AI-launched cyber strike that shuts down early warning radars or command links could be misperceived by the target as the prelude to a nuclear first strike, potentially prompting a panic response. Moreover, the information warfare aspect of cyber conflict has grown with AI. Experts point to the danger of AI-generated disinformation heightening nuclear tensions. For example, a deepfake video or convincingly fabricated intelligence report – mass-produced by generative AI – could falsely suggest an enemy is preparing a nuclear launch, putting leaders on hair-trigger alert. In the already high-stakes realm of nuclear deterrence, the advent of AI-enhanced cyber and information warfare is yet another variable that strategists and diplomats must account for. It underscores the need for clear communication and trust-building between nuclear powers, so that an algorithm’s mischief doesn’t ignite a real-world conflagration.
Geopolitical Flashpoints: Russia–Ukraine and China–Taiwan
The influence of AI on warfare is not hypothetical – it is visible in current conflicts and standoffs. The ongoing Russia–Ukraine war offers a sobering glimpse of AI’s battlefield potential. Observers have dubbed it a “battle of algorithms” as both sides deploy AI-powered reconnaissance drones, loitering munitions, and automated targeting systems in an effort to outsmart each other. Ukraine, strapped for manpower and resources, has leaned heavily on technology: AI-enhanced drones conduct surveillance and strikes, and machine-learning software helps analyse battlefield data to direct artillery more efficiently. Russia too reportedly uses AI for image recognition in targeting and to counter Ukrainian drones. This rapid adoption has made parts of the conflict an unprecedented duel between unmanned systems and AI-guided munitions – warfare increasingly driven by silicon intelligence as much as human strategy. Yet this technological race is playing out against a dangerous nuclear backdrop. Since the war’s outset in 2022, Moscow’s nuclear sabre-rattling has intensified, with President Putin repeatedly reminding the world of Russia’s vast nuclear arsenal and lowering thresholds in doctrine for potential nuclear use. Western officials have to weigh not only the conventional battles augmented by AI, but also the risk that any escalation – even a misfiring autonomous drone or misinterpreted AI warning – could spark a broader confrontation between nuclear-armed powers. So far, nuclear deterrence has held, but the conflict underlines how AI-driven weapons are being tested in the shadow of atomic weapons. Any miscalculation in this high-tech proxy war could have far-reaching consequences.
In East Asia, meanwhile, tensions between China and Taiwan (and by extension, China and the United States) represent another theatre where AI and nuclear strategy intersect. China has made AI a cornerstone of its military modernisation, aiming to leap ahead in “intelligentised” warfare to offset U.S. military advantages. This includes extensive research into using AI for command decisions, sensor fusion, and autonomous systems in a potential conflict – for example, coordinating drone swarms or optimising missile strikes with minimal human input. Beijing’s concept of “military-civil fusion” means advances from China’s vibrant commercial AI sector are quickly funnelled into the PLA’s projects. If a Taiwan crisis were to erupt, AI could play a significant role: we might see AI-assisted intelligence analysis, predictive algorithms anticipating each side’s moves, and AI-managed logistics and cyber operations. Importantly, the U.S.-China standoff over Taiwan also carries a nuclear dimension. China is a growing nuclear power (its arsenal is expanding toward parity with the U.S. and Russia), and the United States maintains its own nuclear forces in the Pacific as a deterrent shield. Both presidents have openly acknowledged that an armed clash must be averted due to the nuclear risks, as evidenced by their pledge to keep AI out of nuclear launch decisions. Nevertheless, the fog of war – potentially augmented by AI – could make crisis stability more precarious. Imagine AI-driven surveillance mistakenly classifying a routine military drill as a nuclear missile launch preparation, or an information warfare bot-net spreading a rumour that one side’s leadership has ordered nuclear alert. Such scenarios, while speculative, illustrate why defence analysts are urging caution as AI is woven into the fabric of military operations. In flashpoints like Taiwan, where high-tech militaries eye each other warily, maintaining human control and clear lines of communication will be critical to prevent AI-accelerated incidents from spiralling out of control.
The Rise of ChatGPT-Style AI: Information Warfare and Decision Support
While much of military AI happens behind closed doors, the emergence of large language models like OpenAI’s ChatGPT has showcased AI’s capabilities to the wider public – and militaries have taken notice. ChatGPT-style language AIs can generate fluent text, translate languages, and answer questions, which makes them attractive for a range of military applications short of pulling triggers. One immediate arena is information warfare. State-aligned propaganda operations have already begun exploiting generative AI to crank out disinformation on an industrial scale. For example, researchers uncovered a pro-Russian influence campaign that used large language models to produce and spread over 19,000 propaganda articles across the web, ranging from fake news stories to biased “translations” of Western news, all tailored to manipulate public opinion. Clues in the text (like tell-tale AI disclaimers) revealed that some content was likely drafted by an AI similar to ChatGPT. This operation, dubbed “CopyCop,” demonstrated how generative AI can vastly amplify the reach and speed of information warfare efforts. It is no longer the realm of a few trolls to craft false narratives – now AI can pump out persuasive narratives in bulk, complete with an authoritative tone. Western analysts also report that actors linked to China, North Korea, and Iran are leveraging AI tools to generate propaganda, fake personas on social media, and even to assist in cyber-reconnaissance and phishing attacks. In response, tech companies and governments are developing AI algorithms to detect and counter false content – essentially an AI vs. AI battle in the info-war domain. The launch of ChatGPT in particular has rung alarm bells: it delivered a proof-of-concept that with minimal prompt, an AI can produce convincing, human-like arguments or lies. This has spurred agencies to invest in countermeasures and to monitor how adversaries might use such tools to sway narratives during conflicts. In a nuclear crisis, controlling the information space is crucial; the prospect of AI-generated deepfakes or fake emergency alerts spreading confusion is a new challenge strategists must plan for.
Large language models are also being eyed as decision support assistants within military and government operations. Their ability to synthesise information could help analysts and commanders digest the deluge of data that modern intelligence gathers. In late 2023, the U.S. Central Intelligence Agency reportedly unveiled plans for a ChatGPT-style AI tool to aid in sifting through open-source intelligence (OSINT) – effectively a digital analyst that can read foreign news, social media, and reports, then answer questions or summarise findings for human officers. Such a system could flag relevant insights from millions of pieces of data in near real-time, a task impossible for human staff to scale. The U.S. Department of Defense has similarly been testing generative AI in its Global Information Dominance Experiments, with an eye toward future capabilities like drafting operations plans or providing on-the-fly advice to commanders. Even Microsoft built a version of its AI model for U.S. intelligence agencies that operates on classified networks, hinting at the demand for AI-driven analysis behind secure walls. The appeal is clear: in fast-moving scenarios – be it a battlefield situation or a nuclear standoff – a commander could ask an AI assistant for options (“What are the likely outcomes if we respond to incident X with a show of force?”) or to translate and summarise a flurry of intercepts from an adversary’s communications.
However, militaries are approaching ChatGPT-like tools with caution. Current large language models (LLMs) have well-known limitations: they sometimes fabricate information (“hallucinations”) and lack true understanding, meaning their answers, while fluent, are not guaranteed to be accurate. U.S. Air Force Secretary Frank Kendall noted in mid-2023 that generative AI systems like ChatGPT are “not reliable, in terms of the truthfulness of what [they produce]” – not yet ready for prime time in critical military tasks. He stressed that a lot more development is needed before such tools could be trusted to, for example, write valid operational orders or recommendations for real-world missions. The risk of an LLM confidently giving unsound advice is a serious concern if lives are on the line. Additionally, there are security worries: an AI that can access vast databases might inadvertently reveal sensitive information if not properly controlled, or could be vulnerable to adversarial manipulation (prompt hacking). Despite these hurdles, research is ongoing. Allies and rivals alike are racing to refine military-grade AI chatbots – ones that are more truthful, secure, and tailored to specific domains. The coming years may see ChatGPT’s progeny integrated as virtual advisors in command centres, helping filter noise from information and even suggesting courses of action (with human oversight). The goal is not to hand decision-making to a chatbot, but to leverage its speed and breadth of knowledge to augment human decision-makers in stressful, complex situations. If done carefully, such AI assistants could improve clarity and reaction time in fast-escalating crises; if done poorly, they could add to confusion.
Conclusion: AI’s Promise and Peril in the Nuclear Age
From the early warning radars of the Cold War to today’s autonomous drones and chatbots, artificial intelligence has steadily woven itself into the fabric of modern warfare. In the context of nuclear strategy, AI offers both promising advantages and sobering risks. It has the potential to enhance surveillance, crunch data free of human bias or fatigue, and provide leaders with better intelligence in split-second situations. AI might one day help prevent accidental nuclear war by cross-checking false alarms or by maintaining cooler judgement under pressure than a panicked human. Yet the other side of the coin is ever-present: AI could just as easily introduce new failure modes – from software glitches and training biases to malicious exploitation – that could trigger the very catastrophe we strive to avoid. The current trajectory of AI in military affairs is thus a cautious balancing act. Defence establishments are eagerly embracing AI’s power, deploying it in roles from processing satellite imagery to guarding networks and guiding weapons. But at the same time, officials are voicing the need for restraint and reliability, especially where nuclear weapons are concerned.
Real-world geopolitical tensions underscore why getting this balance right is so critical. In Ukraine, AI-enabled systems are clashing on the battlefield under the uneasy eye of the world’s nuclear powers. In the Taiwan Strait, any future conflict augmented by AI will play out in the shadow of U.S. and Chinese nuclear arsenals. Miscommunication or technical misfires in such fraught circumstances could escalate with lightning speed. And in the information domain, the rise of ChatGPT and its ilk shows how AI can sway narratives and perceptions that leaders rely on in a crisis. The coming years will likely bring even more advanced AI – smarter algorithms, faster decision loops, and weapons with greater autonomy. As these technologies arrive, militaries and governments will be forced to establish guardrails to prevent unintended escalation. The consensus emerging among nuclear-armed states is that humans must remain firmly in control of nuclear decisions, no matter how sophisticated AI becomes. In parallel, confidence-building and communication channels between rivals can help ensure that an AI’s mistake or a deepfake provocation is recognised for what it is, rather than igniting conflict.
Artificial intelligence is poised to redefine warfare in the 21st century, but it does not exist in a vacuum – it operates in a world still shaped by the ultimate deterrent of nuclear weapons. The rise of AI, from ChatGPT-like language models to autonomous weapons, is a story of both exciting innovation and cautionary tales. Navigating this new era will require cool heads, cooperation, and perhaps a touch of the wisdom shown by that lone Soviet officer in 1983. In the end, the goal is a future where AI serves to enhance stability and security, not undermine the fragile peace that has held in the nuclear age. Achieving that will be one of the great strategic challenges of our time, as artificial intelligence in modern warfare continues to evolve beyond ChatGPT and into uncharted territory.
Footnotes
- Alex McFarland, “With All of This Talk of Nuclear Weapons, Let’s Not Forget About the AI Arms Race,” The DEFCON Warning System (17 May 2023) – Quoting Vladimir Putin’s 2017 statement on AI leadershipdefconwarningsystem.com and discussing the military-driven AI arms racedefconwarningsystem.comdefconwarningsystem.com.
- Erin Hahn, “A Different Use for Artificial Intelligence in Nuclear Weapons Command and Control,” War on the Rocks (26 April 2019) – Describes the U.S. SAGE system as an early big-data air defence computer designed to aid human decision-makingwarontherocks.comwarontherocks.com.
- Peter Rautenbach, “On Integrating Artificial Intelligence With Nuclear Control,” Arms Control Today (September 2022) – Argues that modern AI/ML integration into nuclear command, control, and communications could reduce human bias and errorarmscontrol.orgarmscontrol.org, but also warns of unique AI failures (false positives, brittleness) in launch-on-warning posturesarmscontrol.orgarmscontrol.org. References the Soviet “Dead Hand” automatic retaliation system as an example of high automation in nuclear systemsarmscontrol.org.
- Zachary Kallenborn, “Giving an AI control of nuclear weapons: What could possibly go wrong?” Bulletin of the Atomic Scientists (1 Feb 2022) – Recounts the 1983 Stanislav Petrov incident where human judgement averted a false alarm nuclear launchthebulletin.org. Notes that Russia has developed the Poseidon nuclear-powered autonomous torpedo (with some AI capabilities) and that some experts have proposed automating nuclear launch decisionsthebulletin.orgthebulletin.org.
- Jamie Seidel, “Russia’s terrifying new ‘superweapon’ revealed,” The DEFCON Warning System (23 July 2019) – Reports Russian Air Force plans for cruise missiles with onboard artificial intelligence to autonomously adjust course and select targetsdefconwarningsystem.com. Suggests Putin’s touted Avangard hypersonic glider and Burevestnik nuclear cruise missile could feature such AI guidancedefconwarningsystem.com. Warns that the rapid adoption of AI in weapons could outpace our ability to control themdefconwarningsystem.com.
- “VR Nuclear War Training,” The DEFCON Warning System (17 July 2019) – Describes a U.S. Defense Department project using virtual reality and AI-powered Microsoft HoloLens headsets to train troops for nuclear and radiological scenarios, improving realism and situational awarenessdefconwarningsystem.com. Illustrates AI’s role in modern military training.
- Thomas Macaulay, “LLMs have become a weapon of information warfare,” The Next Web (9 May 2024) – Details how a pro-Kremlin influence operation (“CopyCop”) leveraged large language models to mass-produce over 19,000 propaganda articlesthenextweb.com, including AI-generated “translations” with tailored political biasthenextweb.com. Also notes the CIA’s development of a ChatGPT-style AI for open-source intelligence analysis and Microsoft’s deployment of a classified LLM for U.S. intelligence agenciesthenextweb.com.
- David Kirichenko, “The Rush for AI-Enabled Drones on Ukrainian Battlefields,” Lawfare (5 Dec 2024) – Observes that the Russia–Ukraine war has become a testing ground for AI in combat, with both sides using AI-guided drones and automation, turning the conflict into a “battle of algorithms”lawfaremedia.org and a “clash between algorithms”lawfaremedia.org. Highlights how technological innovation is accelerating amid the war.
- “Russia’s Nuclear Posture in 2025: Real Threat or Strategic Bluff?” The DEFCON Warning System (20 May 2025) – Notes Russia’s intensified nuclear sabre-rattling during the Ukraine conflict and revisions to its nuclear doctrine, lowering the threshold for nuclear usedefconwarningsystem.com. Provides context on the heightened nuclear tensions alongside conventional fighting in Ukraine.
- Fei Su and Jingdong Yuan, “Chinese thinking on AI integration and interaction with nuclear command and control, force structure, and decision-making,” European Leadership Network (2021) – Analyses Chinese open-source writings and finds strong interest in applying AI to military command and decision-making, including potential autonomous nuclear weapons systemseuropeanleadershipnetwork.org. Discusses China’s “military-civil fusion” strategy to harness civilian AI advances for military useeuropeanleadershipnetwork.org.
- Jarrett Renshaw and Trevor Hunnicutt, “Biden, Xi agree that humans, not AI, should control nuclear arms – White House,” Reuters (16 Nov 2023) – Reports that U.S. President Biden and China’s President Xi jointly affirmed the importance of maintaining human control over nuclear weapon launch decisionsreuters.com and agreed to develop military AI “in a prudent and responsible manner” given the risksreuters.com. Marks a notable bilateral acknowledgment linking AI and nuclear stability.
- Frank Kendall quoted in Theresa Hitchens, “Kendall: Air Force studying ‘military applications’ for ChatGPT-like artificial intelligence,” Breaking Defense (5 June 2023) – U.S. Air Force Secretary Kendall cautions that generative AI (e.g. ChatGPT) currently has “limited utility” due to truthfulness and reliability issues, and is not yet ready to draft operational orders or be trusted for critical decisionsbreakingdefense.combreakingdefense.com. Indicates ongoing USAF research into future uses of such AI under human oversight.
- Future of Life Institute, “Artificial Intelligence and Nuclear Weapons: Problem Analysis and US Policy Recommendations” (14 Nov 2023) – Outlines how AI could destabilise nuclear deterrence by integration into NC3, enabling more potent surveillance (threatening second-strike forces), and increasing cyber vulnerabilities and disinformation risksfutureoflife.orgfutureoflife.org. Emphasises that erroneous or manipulated AI outputs in nuclear contexts could heighten the probability of miscalculation or inadvertent escalation.