Why Does AI Not Keep Information Secure? The Great Unbundling of Trust
Have you ever wondered why a technology as intelligent as AI can be so poor at keeping a secret? In the first quarter of 2025 alone, the average cost of a data breach involving AI soared to over $7 million for financial institutions, and it takes organizations an average of 290 days to even identify and contain an AI-specific breach. This isn't just a technical glitch; it's a fundamental crisis of trust. The answer to why AI does not keep information secure lies not in its code, but in its very nature—a nature best understood through the lens of "The Great Unbundling."
As author J.Y. Sterling argues in The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being, AI is systematically dismantling the bundled capabilities that have defined human value for millennia. For centuries, our intelligence was inseparable from our accountability, our problem-solving from our understanding of consequences. AI shatters this bond, unbundling raw cognitive power from the human faculties that create security: consciousness, ethics, and lived experience. This article explores how this unbundling process is the root cause of AI's privacy failures and what we can do to confront this new reality.
For the AI-Curious Professional, this framework reveals the systemic risks that go far beyond simple cybersecurity threats. For the Philosophical Inquirer, it poses a critical question: can we ever program trust into a machine that cannot feel betrayal? And for the Aspiring AI Ethicist, it provides a new model for diagnosing and addressing the deep-seated privacy issues in artificial intelligence.
The Unbundling of Trust: Separating Intelligence from Secrecy
For most of human history, information security was a deeply human endeavor. A secret-keeper wasn't just a repository of data; they were an individual in whom we placed our trust. Their analytical intelligence was bundled with emotional intelligence (understanding the gravity of the secret), social awareness (knowing the consequences of a leak), and a physical presence (being directly accountable for their actions). To betray a confidence meant facing tangible social shame, economic ruin, or even physical danger.
The Great Unbundling, as detailed in Sterling's work, describes how capitalism's relentless drive for efficiency is financing the separation of these functions. AI represents the pinnacle of this process:
- Unbundled Intelligence: AI can analyze petabytes of data, identify patterns, and make predictions with superhuman speed and accuracy.
- Missing Consciousness & Accountability: It performs these tasks without any subjective "understanding" of what concepts like privacy, dignity, or confidentiality mean. It has no reputation to protect and no fear of consequences.
This is the core of the problem. We are building systems with immense intellectual capacity but no inherent sense of responsibility. When an AI system leaks sensitive medical records or proprietary source code, it hasn't committed a moral failure; it has simply executed a flawed instruction or been successfully manipulated. It has unbundled the ability to process information from the wisdom to protect it.
AI's Technical Fault Lines: Where Unbundling Causes Breaches
The philosophical concept of unbundling manifests as concrete, technical vulnerabilities that plague AI systems. These aren't just bugs; they are the logical outcomes of deploying intelligence without the other bundled human traits.
1. The 'Black Box' Problem: Intelligence Without Explainability
Many advanced AI models, particularly deep learning networks, are effective but opaque. They deliver an answer without showing their work. This unbundling of decision from justification makes it incredibly difficult to audit for security flaws or to understand if the model is relying on sensitive data in inappropriate ways. Without transparency, true accountability is impossible.
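To make the auditing problem concrete, here is a minimal, hypothetical sketch of one common probe: permutation importance, which measures how much a model's accuracy drops when a single column is shuffled. The synthetic dataset, the `sensitive_attr` column, and the scikit-learn model are illustrative assumptions, not anything from this article; a real audit would use the organization's own model and governance criteria.

```python
# Sketch: probing whether an opaque model secretly relies on a sensitive field.
# Everything here (data, column names, model choice) is an illustrative assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
benign = rng.normal(size=(n, 3))                            # ordinary features
sensitive = rng.integers(0, 2, size=(n, 1)).astype(float)   # stand-in for a protected field
X = np.hstack([benign, sensitive])
# The outcome secretly depends on the sensitive field -- the situation an audit should catch.
y = (benign[:, 0] + 2.0 * sensitive[:, 0] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# If the sensitive column dominates the importances, the "black box" is quietly
# relying on data it arguably should not use.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2", "sensitive_attr"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A probe like this cannot explain a deep network's internal reasoning, but it at least re-attaches a crude form of justification to the decision, which is the minimum transparency an audit requires.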
2. Adversarial Attacks and Data Poisoning
Because AI learns from data, it can be taught the wrong lessons. In adversarial attacks, malicious actors create deceptive inputs designed to fool the model. For example, researchers have tricked facial recognition systems with specially designed glasses and caused autonomous vehicles to misinterpret street signs with strategically placed stickers. In data poisoning, an attacker subtly corrupts the training data itself, embedding hidden backdoors or biases that can be exploited later. This directly attacks the AI's unbundled intelligence, turning its learning capability into a security liability.
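As a rough illustration of how an evasion attack works, the sketch below trains a toy logistic-regression model in NumPy and then nudges one input in the direction that most increases the model's loss, in the spirit of the fast gradient sign method. The data, the model, and the epsilon budget are invented for demonstration; real attacks target deep networks with far subtler perturbations.

```python
# Sketch of an evasion (adversarial-example) attack on a toy model.
# All values are illustrative assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Train a toy logistic-regression "victim" model with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

def predict(x):
    return int(1.0 / (1.0 + np.exp(-(x @ w))) > 0.5)

# Craft the adversarial input: move each feature in the direction that most
# increases the model's loss (the sign of the loss gradient w.r.t. the input).
x = X[0]
p0 = 1.0 / (1.0 + np.exp(-(x @ w)))
grad_x = (p0 - y[0]) * w          # gradient of the logistic loss w.r.t. the input
epsilon = 0.5                      # attacker's per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

# A perturbation this small looks like noise, yet it usually flips the label.
print("true label:            ", int(y[0]))
print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
```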
3. Inference Attacks and Unintended Leakage
An AI model can inadvertently memorize and reveal information about the data it was trained on. Through carefully crafted queries, an attacker can launch a membership inference attack to determine whether a specific individual's data was part of the training set. A 2023 report from Harmonic Security found that nearly 8.5% of prompts fed into public generative AI tools contained sensitive data, including intellectual property and customer PII. This leakage occurs because the model has memorized portions of its training data so thoroughly that they can be coaxed back out, effectively unbundling the final output from the confidential data that created it.
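The sketch below illustrates the simplest form of this attack, a loss-threshold membership inference: an overfitted model tends to assign much lower loss to records it was trained on, so the loss alone can betray membership. The dataset, the deliberately overfitted random forest, and the AUC readout are illustrative assumptions, not a description of any production system.

```python
# Sketch of a loss-threshold membership-inference attack on a toy model.
# Dataset, model, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, d = 400, 15
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

members, non_members = X[:200], X[200:]
y_members, y_non = y[:200], y[200:]

# Deliberately overfit: deep, unconstrained trees memorize the training records.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(members, y_members)

def per_record_loss(X_, y_):
    # Negative log-likelihood of the true label under the model.
    p = model.predict_proba(X_)[np.arange(len(y_)), y_]
    return -np.log(np.clip(p, 1e-12, 1.0))

losses = np.concatenate([per_record_loss(members, y_members),
                         per_record_loss(non_members, y_non)])
is_member = np.concatenate([np.ones(200), np.zeros(200)])

# An AUC well above 0.5 means the per-record loss leaks membership information.
print("membership-inference AUC:", round(roc_auc_score(is_member, -losses), 3))
```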
4. The Scale of the Breach: Centralized Data Hoarding
The unbundling of intelligence from the physical world has led to the creation of massive, centralized "data lakes" required to train powerful AI. While humans store information in a decentralized network of individual minds, AI demands vast, aggregated datasets. This concentration creates a single, high-value target for attackers, turning a potential privacy leak into a catastrophic breach affecting millions.
AI Privacy Issues: Examples of Unbundling Gone Wrong
The theoretical risks of unbundling are already playing out in the real world, creating significant privacy issues with artificial intelligence.
- Generative AI and Corporate Secrets: In 2023, engineers at Samsung accidentally leaked proprietary source code and confidential meeting notes by pasting them into ChatGPT for assistance. The AI, designed only to process and learn, unbundled the act of "helping" from the concept of "confidentiality," potentially absorbing the sensitive data into future training data.
- Facial Recognition and Unbundled Consent: Companies like Clearview AI have scraped billions of photos from public social media profiles to build facial recognition databases for law enforcement. This practice unbundles a person's identity from their consent to be identified, creating a powerful surveillance tool with enormous security risks if its database is ever breached.
- AI in Healthcare and Patient Privacy: A study highlighted by Metomic found that healthcare organizations experience AI-related data leakage 2.7 times more frequently than other industries. An AI system designed to predict patient outcomes might find correlations that inadvertently expose sensitive conditions, unbundling diagnostic insight from the ethical duty of patient confidentiality.
The Philosophical Challenge: Can an Unconscious Intelligence Ever Be Secure?
This brings us to the profound philosophical challenge raised by The Great Unbundling. Can we ever truly create secure AI if the system itself has no inner world? How can a system that doesn't "know" justice, "feel" empathy, or "understand" trust ever become a reliable guardian of our most sensitive information?
The entire basis of humanism is the bundled individual—the person who thinks, feels, acts, and bears responsibility. When we task an unbundled intelligence with duties that have always required a bundled human, we are conducting a risky civilizational experiment. AI privacy ethics isn't just about writing better rules; it's about confronting the possibility that a non-conscious entity is fundamentally unsuited for certain roles, regardless of its raw intelligence.
The Great Re-bundling: A Framework for Human-Centric AI Security
Acknowledging the inevitability of unbundling does not mean accepting a future of inevitable data breaches. The human response must be what Sterling calls "The Great Re-bundling"—a conscious effort to re-integrate these separated functions through new systems of control, policy, and oversight.
1. Policy and Regulation: Re-bundling Power with Accountability
The development of comprehensive AI regulations, like the EU's AI Act, is a critical first step. These frameworks act as a societal harness, re-bundling technological capability with legal accountability. They mandate transparency, risk assessments, and human oversight, enforcing the responsibilities that the AI itself cannot possess.
2. Technical Solutions: Privacy by Design as Re-bundling
A new suite of Privacy-Enhancing Technologies (PETs) aims to re-bundle data processing with security at a technical level.
- Federated Learning: Trains a central AI model on decentralized data (e.g., on individual mobile phones) without the raw data ever leaving the device.
- Differential Privacy: Adds carefully calibrated statistical "noise" to data or query results, allowing analysis of group trends while mathematically limiting what can be learned about any single individual (sketched in the code example after this list).
- Homomorphic Encryption: Allows AI to perform calculations on data while it remains fully encrypted.
These technologies are a form of technical re-bundling, forcing the AI to process information under strict, mathematically enforced privacy constraints.
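To show how one of these constraints is enforced mathematically rather than by policy, here is a minimal sketch of the Laplace mechanism that underpins differential privacy: a count query is released only after noise calibrated to the query's sensitivity and a chosen privacy budget epsilon has been added. The dataset and epsilon values are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy.
# The dataset and epsilon values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def noisy_count(records: np.ndarray, epsilon: float) -> float:
    """Differentially private count: adding or removing one person changes the
    count by at most 1, so sensitivity = 1 and the Laplace noise scale is 1/epsilon."""
    true_count = float(records.sum())
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: 1 = patient has the condition, 0 = does not.
has_condition = rng.integers(0, 2, size=10_000)

for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count = {noisy_count(has_condition, eps):.1f} "
          f"(true count = {has_condition.sum()})")
```

Smaller values of epsilon mean more noise and stronger privacy guarantees; larger values mean more accurate answers and weaker protection.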
3. Human Oversight: The Ultimate Re-bundling
Perhaps the most crucial strategy is insisting on a "human in the loop." For critical applications, the AI's analytical output must be treated as a recommendation, not a final decision. The final judgment, and the accountability that comes with it, must rest with a bundled human being who can apply context, ethics, and wisdom. This re-bundles the AI's unmoored intelligence with human-centric governance.
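In practice, this oversight can be engineered as a routing rule: the model produces a recommendation plus metadata, and anything high-risk or low-confidence is escalated to a named human reviewer. The sketch below is a hypothetical illustration of such a gate; the `Recommendation` type, the thresholds, and the risk labels are assumptions, not a prescribed standard.

```python
# Hypothetical "human in the loop" gate: the model's output is only a
# recommendation, and risky or uncertain cases are routed to a human reviewer.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float   # model's self-reported confidence, 0..1 (assumed field)
    risk_level: str     # "low" or "high", assigned by a separate risk policy (assumed field)

def route(rec: Recommendation) -> str:
    # Escalate anything high-risk or below an assumed confidence floor.
    if rec.risk_level == "high" or rec.confidence < 0.90:
        return f"ESCALATE to human reviewer: {rec.action}"
    return f"AUTO-APPROVE (logged for audit): {rec.action}"

print(route(Recommendation("release medical record to requester", 0.97, "high")))
print(route(Recommendation("deduplicate internal log entries", 0.99, "low")))
```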
Conclusion: Redefining Security in an Unbundled World
So, why does AI not keep information secure? Because it is the ultimate expression of unbundled intelligence. It possesses the analytical power of a thousand minds but lacks the accountability of a single, conscious individual. It can process our data but cannot grasp our values. Its security failures are not anomalous bugs but are inherent features of its design—a design that mirrors our economy's relentless pursuit of fractionated, optimized functions at the expense of integrated, holistic systems.
The path forward is not to halt innovation but to embrace the challenge of The Great Re-bundling. By weaving together robust policy, privacy-preserving technologies, and non-negotiable human oversight, we can begin to re-bundle intelligence with accountability and build an AI ecosystem that is not only powerful but also trustworthy.
To explore the full societal impact of The Great Unbundling, from the future of work to the nature of human purpose, read J.Y. Sterling's foundational book, "The Great Unbundling." For ongoing analysis and insights into navigating our unbundled future, subscribe to our newsletter.