How Has Generative AI Affected Security: The Great Unbundling of Trust
TL;DR: Generative AI has fundamentally transformed the security landscape by unbundling traditional security assumptions—separating the creation of convincing content from human intent, making sophisticated attacks accessible to novice threat actors, and forcing organizations to rethink trust verification in an era where seeing and hearing are no longer believing.
The question "how has generative AI affected security" has become one of the most pressing concerns for cybersecurity professionals, national security experts, and business leaders worldwide. Since ChatGPT's launch in late 2022, the number of detected phishing sites surged 138 percent, marking a dramatic shift in the threat landscape that represents what J.Y. Sterling calls "The Great Unbundling" in action—the systematic separation of capabilities that were once uniquely human.
The Unbundling of Security Assumptions
For centuries, our security frameworks have been built on bundled human capabilities: the assumption that creating convincing communication required genuine intent, that sophisticated attacks demanded technical expertise, and that trust could be established through familiar voices and faces. Generative AI represents the unbundling of these fundamental security assumptions, creating what Sterling describes as a world where "the person with ideas also feels passion, directs hands, and experiences consequences" no longer holds true.
Breaking Down Traditional Security Barriers
Generative AI is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats. This amplification effect demonstrates the unbundling principle: AI doesn't just replace human capabilities—it separates them from their original context and scales them beyond human limitations.
The most immediate manifestation of this unbundling appears in three critical areas:
Cognitive Unbundling: AI separates the creation of sophisticated content from human intelligence and experience. GenAI can -- in a matter of seconds -- collect and curate sensitive information about an organization or individual and use it to craft highly targeted and convincing messages and even deepfake phone calls and videos.
Technical Unbundling: Advanced attack capabilities are separated from technical expertise. The increasing performance, availability and accessibility of generative AI tools allows potentially anyone to pose a threat through malicious use, misuse or mishap.
Trust Unbundling: The separation of authentic communication from familiar identities through deepfakes and voice cloning, fundamentally challenging how we verify human identity and intent.
Generative AI Security Risks: A New Threat Matrix
Enhanced Phishing and Social Engineering
Traditional phishing relied on mass distribution and statistical success rates. Generative AI has unbundled this approach, enabling hyper-personalized attacks that scale beyond human capacity. Gartner found in a recent survey that 28% of organizations had experienced a deepfake audio attack; 21% a deepfake video attack; and 19% a deepfake media attack that bypassed biometric protections.
The sophistication of these attacks represents a fundamental shift. At Black Hat USA 2021, for example, Singapore's Government Technology Agency presented the results of an experiment in which the security team sent simulated spear phishing emails to internal users. Some were human-crafted, and others were generated by OpenAI's GPT-3 technology. More people clicked the links in the AI-generated phishing emails than in the human-written ones -- by a significant margin.
The Rise of Deepfake Threats
Deepfakes represent perhaps the most visible example of generative AI's impact on security. Voice phishing rose 442% in late 2024 as AI deepfakes bypass detection tools, demonstrating how quickly threat actors have adopted these technologies.
The implications extend far beyond individual fraud. Blackmail and Extortion: Cybercriminals can fabricate damaging/compromising videos of an individual and threaten to release them unless a ransom is paid. Even more concerning, Cyber criminals can use deepfake technology to create scams, false claims, and hoaxes that undermine and destabilize organizations.
Automated Malware and Code Generation
The unbundling of programming expertise from malicious intent has created new attack vectors. HP researchers in September reported that hackers had used AI to create a remote access Trojan. This capability allows threat actors to generate sophisticated malware without deep technical knowledge, democratizing advanced cyberattacks.
National Security Implications
Strategic Competition and AI Weaponization
The national security implications of generative AI's impact on security extend beyond individual attacks to strategic competition between nations. Strategic competitors, such as China and Russia, are making significant investments in AI for national security purposes.
Imagine what the Cuban Missile Crisis would look like in 2030 with all sides using a wide range of AI-applications ranging from imagery recognition to logistics management and generative analysis of adversary intentions. There would be a tendency to speed up the crisis even when it might make more sense to slow down decision-making and be more deliberate.
Government Response and Coordination
Recognizing these threats, the U.S. government has established comprehensive response mechanisms. The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce.
Enterprise Security Challenges
Data Privacy and Corporate Vulnerabilities
Corporate leaders are increasingly concerned about AI-related security risks. In KPMG's new Q2 2025 report, 69% of leaders cited concerns about AI data privacy, a significant increase from the 43% who cited it in Q4 2024.
The scale of data exposure has become staggering. AI tools like ChatGPT and Microsoft Copilot contributed to millions of data loss incidents in 2024, particularly social security numbers. This represents the unbundling of data security from traditional perimeter defenses.
The OWASP Top 10 for LLM Security
The cybersecurity community has developed new frameworks to address generative AI risks. The OWASP Top 10 for LLMs effectively debunks the misconception that securing GenAI is solely about protecting the model or analyzing prompts. These guidelines recognize that generative AI security requires a comprehensive approach that addresses the entire AI lifecycle.
Defensive Strategies and The Great Re-bundling
AI-Powered Defense Systems
In Sterling's framework, defensive responses represent "The Great Re-bundling"—conscious efforts to recombine capabilities in new ways to address unbundled threats. As generative AI becomes a tool for attackers, defenders are leveraging the same technology to build smarter, faster, and more adaptive cybersecurity systems.
Key defensive innovations include:
- Deepfake Detection Algorithms: Neural networks trained on large datasets of real and fake media to spot subtle anomalies
- AI-Based Email Filters: Natural Language Processing (NLP) models analyze the tone, structure, and intent of messages to flag suspicious content
- Behavioral Biometrics: AI monitors how users type, move their mouse, or interact with systems to detect anomalies that suggest impersonation
Human Risk Management
The evolution of security training reflects the need to address unbundled threats. The term that many are now adopting is "human risk management" (HRM), which represents a re-bundling of human awareness with technological defenses.
Research and consulting firm Forrester describes HRM as "solutions that manage and reduce cybersecurity risks posed by and to humans through: detecting and measuring human security behaviors and quantifying the human risk; initiating policy and training interventions based on the human risk; educating and enabling the workforce to protect themselves and their organization against cyber attacks; building a positive security culture".
Zero Trust Architectures
The move toward Zero Trust represents a fundamental re-bundling of security assumptions. AI enforces strict access controls based on continuous authentication and behavior analysis, acknowledging that traditional trust models cannot withstand the unbundling effects of generative AI.
Economic and Governance Implications
The Business Case for AI Security
Organizations are rapidly increasing security investments in response to generative AI threats. Business leaders surveyed by KPMG reported prioritizing security oversight in their generative AI budgeting decisions, with 67% saying they plan to spend money on cyber and data security protections for their AI models.
This investment pattern reflects Sterling's argument about capitalism as the "Engine of Unbundling"—the same market forces driving AI development are now funding its security solutions.
Regulatory and Policy Responses
The rapid evolution of generative AI threats has outpaced traditional governance mechanisms. Global regulation is incomplete, falling behind current technical advances and highly likely failing to anticipate future developments.
This regulatory lag exemplifies Sterling's thesis about the pace of unbundling defying governance. Traditional policy frameworks, built around bundled human capabilities, struggle to address threats that separate malicious intent from technical capability.
Future Outlook: Living with Unbundled Security
The Continuing Arms Race
The security implications of generative AI will continue evolving as the technology advances. Analysts worry about AI's potential to make certain types of attacks much more profitable because of the attack volume that AI can create.
This scaling effect represents the core challenge of unbundled security: when the creation of threats is separated from human limitations, the volume and sophistication of attacks can grow exponentially.
Building Resilient Systems
Success in this unbundled world requires what Sterling calls "The Great Re-bundling"—conscious efforts to create new forms of human purpose and agency in an AI-dominated landscape. For cybersecurity, this means:
- Continuous Authentication: Moving beyond point-in-time verification to ongoing trust validation
- Human-AI Collaboration: Re-bundling human judgment with AI capabilities for enhanced security
- Adaptive Defense: Creating systems that evolve as quickly as the threats they face
The Path Forward
America holds the lead in the AI race — but our advantage may not last. By working together we can build on and accelerate America's AI edge, boost our national security and seize the opportunities AI presents.
This competitive dynamic underscores Sterling's argument about the necessity of adaptation. Organizations and nations that successfully re-bundle their security capabilities around new AI realities will maintain competitive advantage, while those clinging to traditional bundled assumptions will become increasingly vulnerable.
Conclusion: Security in the Age of Unbundling
The question of how generative AI has affected security reveals a fundamental transformation in the nature of trust, verification, and protection. We are witnessing the unbundling of security assumptions that have governed human interaction for millennia—the breakdown of the connection between sophisticated communication and genuine intent, between technical capability and human expertise, between familiar voices and trusted identities.
Yet this unbundling also creates opportunities for re-bundling—new combinations of human insight and AI capability that can potentially create more robust security than ever before. The organizations and societies that understand this dynamic, that embrace both the challenges and opportunities of unbundled security, will be best positioned to thrive in an AI-transformed world.
The Great Unbundling is not just changing how we create and consume content—it's fundamentally altering the nature of security itself. The question is not whether this transformation will continue, but whether we will actively shape its direction or simply react to its consequences.
Ready to explore more insights on AI's transformation of human value and purpose? Discover how "The Great Unbundling" framework applies across industries and institutions in J.Y. Sterling's comprehensive analysis of artificial intelligence's impact on the human condition. Get the book or join our newsletter for ongoing insights.