AI in Warfare: The Ultimate Unbundling of Power and Responsibility
How long does it take for a human pilot to identify a threat, verify it, and make a split-second engagement decision? Seconds. An AI-piloted drone can make that same decision in milliseconds. This dramatic compression of time is just one symptom of a monumental shift in global conflict. We are witnessing the ultimate unbundling of the warrior, a process that promises unprecedented efficiency at the cost of unimaginable risk.
This is the central argument of J.Y. Sterling's groundbreaking book, "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being." For millennia, the act of war, for all its brutality, was an intensely human affair. It relied on the bundling of human capabilities: a soldier's physical courage was bundled with their moral judgment; a commander's strategic intellect was bundled with their intuitive understanding of human fear and motivation. The military use of AI is systematically dismantling this bundle, isolating each function and optimizing it beyond human limits. The result is a battlefield where intelligence is detached from conscience, and lethality is separated from direct human responsibility.
This article explores the landscape of AI in warfare through the lens of The Great Unbundling.
- For the AI-Curious Professional, it provides a clear framework for understanding the technological and strategic shifts underway.
- For the Philosophical Inquirer, it confronts the deep ethical questions that arise when the decision to kill is handed to a machine.
- For the Aspiring AI Ethicist/Researcher, it offers a nuanced analysis grounded in the geopolitical and economic realities driving the weaponization of artificial intelligence.
The Bundled Soldier: How Warfare Once Relied on Human Integration
Historically, the effectiveness of a soldier or a commander was a direct result of their integrated, or bundled, capabilities. As explored in Part I of "The Great Unbundling", Homo sapiens' dominance has always stemmed from this unique fusion of skills within a single individual.
Consider the classic archetypes of warfare:
- The Sniper: Their role required the unbundling of sharp eyesight and a steady hand from the emotional weight of the task. Yet, it was the bundled human conscience that provided the crucial, final check on the decision to fire.
- The General: Their command depended on bundling analytical strategy with an empathetic, intuitive grasp of troop morale and enemy psychology. Military genius wasn't just about moving pieces on a map; it was about understanding the human heart.
- The Intelligence Analyst: This role demanded the ability to see patterns in disparate information (analytical intelligence) while also understanding the human motivations and cultural contexts that produced that information (emotional and social intelligence).
In each case, the assumption was clear: the mind that analyzes the target is the same mind that feels the gravity of the consequences. The artificial intelligence military revolution seeks to sever these connections entirely.
The Engine of Unbundling: Competition, Capitalism, and the AI Arms Race
The drive to introduce AI in war is not purely strategic; it is supercharged by the same forces that define our modern era: geopolitical competition and capitalism. As detailed in Part II of "The Great Unbundling", the profit-driven engine of innovation is now focused on militarization, creating an AI arms race that moves at a pace that defies governance.
Global military spending on artificial intelligence is projected to reach $38.6 billion by 2030, according to some market analyses. This massive investment is fueling the rapid separation of core military functions.
Unbundling Intelligence from Human Oversight
Modern intelligence gathering produces a firehose of data—petabytes of satellite imagery, signals intercepts, and open-source information. No human team can process it effectively. AI, however, can. This is the first unbundling: separating the act of "knowing" from human comprehension.
AI systems now perform threat detection by identifying patterns invisible to the human eye. While this provides an immense tactical advantage, it also introduces risk. An algorithm might flag an innocent convoy as a threat based on subtle data correlations that a human analyst, with their bundled contextual understanding, would dismiss. This unbundling of analysis from intuition is a cornerstone of modern artificial intelligence and national security.
Unbundling Lethality from Human Control
The most debated aspect of military AI is the development of Lethal Autonomous Weapon Systems (LAWS)—or "killer robots." These systems represent the ultimate unbundling of the kill chain. Such a system can independently search for, identify, target, and kill a human being without direct human input.
While major military powers like the U.S. currently mandate "meaningful human control," the strategic advantage offered by full autonomy is a powerful temptation. An autonomous drone swarm does not need to maintain a data link with a remote operator, making it immune to jamming. It can react faster and operate in environments too dangerous for human pilots. This pursuit of tactical advantage pushes military doctrine closer to a future where the most critical decision on the battlefield is made by a non-human entity.
An Unbundled Battlefield: Dangers and Doctrines of Artificial Intelligence in Warfare
When intelligence, targeting, and lethality are unbundled from human accountability, the very nature of conflict changes. This "Unbundled World," the focus of Part III of "The Great Unbundling", presents a landscape fraught with new and terrifying dangers.
The Speed of Conflict: The Risk of "Flash Wars"
Algorithmic warfare operates at machine speed. When one nation's AI-powered defense system detects a perceived threat from another's AI-powered offense, the resulting exchange could escalate into a full-blown conflict in minutes, or even seconds. This phenomenon, known as a "flash war," could occur faster than human diplomats or even military commanders can intervene. The 2010 "Flash Crash" in financial markets, where algorithms spiraled out of control, provides a stark warning for the far higher stakes of future warfare.
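The timescale mismatch at the heart of a "flash war" can be made concrete with a deliberately simple toy model. The sketch below is illustrative only: every number in it (per-decision machine latency, human intervention time, escalation threshold) is a hypothetical assumption, not a measurement of any real system. It shows how two tit-for-tat automated systems can complete a full escalation cycle long before a human veto window even opens.

```python
# Toy model (illustrative only): two automated systems, each responding to
# the other's last action. Machine reaction time is modeled in milliseconds;
# the human veto loop in whole seconds. All constants are hypothetical
# assumptions for illustration, not properties of any real system.

MACHINE_STEP_MS = 5       # assumed per-decision latency of each system
HUMAN_VETO_MS = 2_000     # assumed time for a human to notice and intervene
MAX_LEVEL = 10            # escalation level treated here as "full exchange"

def flash_exchange() -> int:
    """Return how long (in ms) two tit-for-tat systems take to escalate fully."""
    level, elapsed_ms = 0, 0
    while level < MAX_LEVEL:
        level += 1                     # each side matches and raises the last move
        elapsed_ms += MACHINE_STEP_MS  # one machine-speed decision cycle
    return elapsed_ms

elapsed = flash_exchange()
print(f"Escalation complete in {elapsed} ms")
print(f"Human veto window opens {HUMAN_VETO_MS - elapsed} ms too late")
```

Under these assumed numbers the exchange is over in 50 milliseconds, roughly forty times faster than the modeled human can react. The point is not the specific figures but the structure: any feedback loop between two machine-speed systems compresses escalation below the floor of human decision time.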
The Black Box Problem: The Illusion of "Meaningful Human Control"
A core ethical guideline for the military use of artificial intelligence is maintaining "meaningful human control." But what does that mean when the AI's decision-making process is inscrutable? A deep learning neural network may recommend a strike based on trillions of calculations. A human supervisor, given seconds to approve or veto, cannot possibly audit that logic. Their "control" becomes a procedural checkmark, unbundling the act of approval from any real understanding or responsibility.
The Devaluation of Human Life
The promise of sending machines to fight in place of people is politically attractive. It lowers the perceived domestic cost of war by removing the risk of soldiers returning in coffins. However, this very unbundling of risk from national aggression could make conflict more likely. If a nation believes it can wage a "clean" war with expendable machines, the threshold for initiating hostilities may drop precipitously, fundamentally devaluing the human lives on the other side.
The Great Re-bundling: Forging a New Human Role in Security
The trajectory of AI in warfare is not preordained. As argued in the final part of "The Great Unbundling", humanity's response to this technological disruption is what will shape our future. This is the "Great Re-bundling"—a conscious effort to re-integrate human values, judgment, and purpose into the new systems we create.
1. Re-bundling Ethics into Code: True safety doesn't come from a human having a veto switch; it comes from building systems that are fundamentally aligned with our ethical principles. This means moving beyond "human-in-the-loop" as a safeguard and toward "ethics-in-the-design" as a foundation. It requires a new type of collaboration between engineers, ethicists, military strategists, and international lawyers.
2. Cultivating New Bundled Skills: The soldier of the future may need fewer skills with a rifle and more with data analysis and AI supervision. The commander of tomorrow must be a "centaur," seamlessly blending their human strategic intuition with the analytical power of AI. This requires a radical overhaul of military training and education, creating a new generation of leaders who can master this human-machine partnership.
3. Forging Global Treaties: Just as the world came together to regulate chemical and nuclear weapons, a global framework for the weaponization of artificial intelligence is a civilizational necessity. This includes clear definitions of what constitutes "meaningful human control" and potential bans on specific types of autonomous weapons that defy ethical application.
Conclusion: Navigating the Unbundled Future of Combat
The integration of AI in warfare is the most potent and perilous example of The Great Unbundling. It isolates intelligence from understanding, action from accountability, and violence from human conscience. The challenge it presents is not merely technological; it is deeply philosophical. It forces us to ask what part of war, if any, is so fundamentally human that it must never be delegated to a machine.
The answer to that question will determine whether artificial intelligence leads to a more secure world or an automated, unaccountable form of destruction. Navigating the future of AI in warfare requires us to look beyond the technology and focus on consciously re-bundling our most cherished human values—wisdom, ethics, and accountability—into the very fabric of the systems we build.
To explore the full societal impact of AI through the powerful lens of The Great Unbundling, purchase your copy of J.Y. Sterling's "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being."
For ongoing analysis of AI's unbundling effect on every aspect of our world, from warfare to the workplace, subscribe to the J.Y. Sterling newsletter.