What Are Some Possible Reasons Cybercriminals Might Use Deepfakes? The Digital Identity Crisis

Discover why cybercriminals use deepfakes, plus essential deepfake protection strategies to safeguard your digital identity.


In early 2024, a Hong Kong finance worker transferred $25 million to fraudsters after participating in a video call with what appeared to be his company's CFO and colleagues. The entire executive team was generated by deepfake technology. This incident represents more than sophisticated fraud—it exemplifies what J.Y. Sterling calls "The Great Unbundling" in action, where artificial intelligence systematically separates human capabilities from their original context, creating unprecedented vulnerabilities in our digital age.

The Unbundling of Trust and Identity

For millennia, human societies relied on bundled verification systems: we trusted voices because they came from recognizable faces, believed words because they emerged from familiar mouths, and validated identity through physical presence. As Sterling argues in "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being," AI now separates these once-inseparable elements, creating what he terms "the unbundling of human authenticity."

Deepfakes represent one of the most disturbing manifestations of this unbundling: visual appearance, vocal patterns, and behavioral mannerisms can be artificially reconstructed without a person's consent or knowledge. Understanding the possible reasons cybercriminals might use deepfakes requires examining how these technologies exploit the gaps created when trust mechanisms become separated from their human origins.

Primary Motivations: Why Cybercriminals Weaponize Deepfakes

1. Financial Fraud and Business Email Compromise

The most immediate and lucrative application involves CEO fraud and business email compromise schemes. Cybercriminals use deepfakes to:

  • Authorize fraudulent transfers: Impersonate executives in video calls to authorize wire transfers or financial decisions
  • Bypass verification protocols: Overcome voice-based authentication systems used by banks and corporations
  • Create urgency: Generate convincing "emergency" scenarios requiring immediate financial action
  • Target specific individuals: Craft personalized attacks using publicly available video content from social media or corporate websites

Real-world impact: The FBI reported that business email compromise schemes resulted in over $2.7 billion in losses in 2022, with deepfake-enhanced attacks representing a growing subset of these crimes.

2. Social Engineering and Psychological Manipulation

Deepfakes amplify traditional social engineering by adding unprecedented authenticity to deceptive communications:

  • Emotional manipulation: Create fake distress calls from family members to extract money or sensitive information
  • Authority exploitation: Impersonate government officials, law enforcement, or corporate leaders to compel compliance
  • Relationship exploitation: Generate fake romantic interactions for long-term financial scams
  • Trust establishment: Build credibility in initial contact phases before transitioning to traditional fraud techniques

3. Identity Theft and Account Takeovers

Cybercriminals leverage deepfakes to circumvent biometric security measures:

  • Facial recognition bypass: Defeat security systems that rely on facial verification
  • Voice authentication defeat: Overcome voice-based password systems and phone-based verification
  • Document fraud: Create convincing video "proof" of identity for account recovery processes
  • Multi-factor authentication circumvention: Combine deepfakes with other stolen credentials for comprehensive account takeovers

4. Extortion and Blackmail Operations

The creation of compromising deepfake content opens new extortion vectors:

  • Reputation damage threats: Create fake compromising videos of public figures, executives, or private individuals
  • Sextortion schemes: Generate explicit deepfake content for blackmail purposes
  • Professional sabotage: Threaten to release damaging fake videos affecting career prospects
  • Political manipulation: Create fake content to influence elections or damage political opponents

5. Market Manipulation and Insider Trading

Financial markets become vulnerable when deepfakes target key decision-makers:

  • False announcements: Create fake videos of CEOs making market-moving statements
  • Earnings manipulation: Generate false financial guidance from company executives
  • Merger and acquisition fraud: Fabricate fake discussions about corporate deals
  • Regulatory manipulation: Create fake government official statements affecting market sectors

The Deeper Implications: Beyond Individual Crimes

Sterling's framework reveals how cybercriminals using deepfakes represent more than isolated criminal activity—they exploit the fundamental breakdown of bundled human verification systems. This creates three concerning trends:

The Erosion of Epistemic Trust

When visual and auditory evidence becomes unreliable, society loses shared methods for establishing truth. Cybercriminals exploit this uncertainty by making victims question their own perceptions and rely on external "verification" that criminals control.

The Commoditization of Identity

Deepfake technology transforms human identity into a commodity that can be bought, sold, and manipulated. Cybercriminals treat personal likenesses as raw materials for fraud, fundamentally changing how we understand individual ownership of identity.

The Weaponization of Empathy

Human empathy—our evolved capacity to respond to others' distress—becomes a vulnerability when deepfakes can artificially trigger these responses for malicious purposes.

Essential Deepfake Protection Strategies

Individual Protection Measures

Technical Safeguards:

  • Use multi-factor authentication beyond biometric systems
  • Implement code words or verification questions with family members
  • Regularly monitor your digital footprint and limit publicly available media
  • Install deepfake detection software when available
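The code-word safeguard above can be sketched in a few lines. This is a minimal illustration, not a security product: the code word, function name, and usage are assumptions for the example, and the key point is that a pre-agreed secret verified out of band defeats a deepfake that only clones face and voice.

```python
import hmac

# Hypothetical helper for a family/colleague code-word check.
# The code word itself is an illustrative assumption; in practice it
# would be agreed in person and never spoken over the channel being
# verified until challenged.
def verify_code_word(expected: str, supplied: str) -> bool:
    # hmac.compare_digest avoids the timing side channel that a plain
    # string comparison (==) could leak to an automated attacker.
    return hmac.compare_digest(expected.encode(), supplied.encode())

print(verify_code_word("blue-heron-42", "blue-heron-42"))  # True
print(verify_code_word("blue-heron-42", "blue-heron-41"))  # False
```

A deepfaked caller can reproduce a voice from public footage, but cannot answer a challenge whose answer was never recorded anywhere online.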

Behavioral Adaptations:

  • Establish verification protocols for high-stakes communications
  • Maintain healthy skepticism toward unexpected video or audio requests
  • Create communication channels that can't be easily replicated
  • Develop media literacy skills to identify potential deepfakes

Organizational Defense Strategies

Policy Implementation:

  • Establish clear verification procedures for financial authorizations
  • Create "cooling-off" periods for unusual high-value transactions
  • Develop incident response protocols specifically for deepfake attacks
  • Train employees on deepfake recognition and response procedures
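The cooling-off policy described above can be expressed as a simple rule check. The threshold amount, the 24-hour window, and the field names below are illustrative assumptions, not an industry standard; the sketch only shows the shape of such a policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative policy parameters (assumptions, not a standard):
HIGH_VALUE_THRESHOLD = 50_000        # flag transfers above this amount
COOLING_OFF = timedelta(hours=24)    # mandatory delay before release

@dataclass
class TransferRequest:
    amount: float
    requested_at: datetime
    verified_out_of_band: bool       # e.g. callback on a known number

def may_release(req: TransferRequest, now: datetime) -> bool:
    """Return True if the transfer may be executed under the policy."""
    if req.amount <= HIGH_VALUE_THRESHOLD:
        return True                  # routine transfer, no extra controls
    if not req.verified_out_of_band:
        return False                 # requires secondary verification first
    return now - req.requested_at >= COOLING_OFF
```

Under this sketch, a $250,000 request "authorized" on a video call is blocked until it has been confirmed through an independent channel and the cooling-off window has elapsed—exactly the delay that defeats the manufactured urgency deepfake fraud relies on.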

Technical Infrastructure:

  • Implement deepfake detection tools in communication systems
  • Use blockchain-based verification for critical communications
  • Develop alternative authentication methods that don't rely on biometrics
  • Create secure communication channels for sensitive discussions
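One non-biometric authentication method from the list above—a shared-secret message authentication code—can be sketched as follows. The key and message format are illustrative assumptions (a real deployment would use managed, rotated keys), but the mechanism is standard HMAC: a deepfake can imitate a face or voice, yet cannot produce a valid tag without the secret key.

```python
import hashlib
import hmac

# Illustrative shared secret; real systems would load this from a
# key-management service and rotate it regularly.
SECRET_KEY = b"rotate-me-regularly"

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 tag proving the sender holds the key."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a received tag in constant time."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"approve wire transfer")
print(verify(b"approve wire transfer", tag))   # True: untampered
print(verify(b"approve wire transfers", tag))  # False: altered message
```

Pairing every sensitive instruction with such a tag shifts trust from "does this look and sound like the CFO?" to "does the sender hold a secret that synthetic media cannot forge?"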

Societal-Level Responses

Regulatory Frameworks:

  • Support legislation criminalizing malicious deepfake creation and distribution
  • Advocate for platform liability regarding deepfake content
  • Promote international cooperation on deepfake-related crimes
  • Encourage development of industry standards for deepfake detection

Educational Initiatives:

  • Promote digital literacy programs focusing on synthetic media
  • Support research into deepfake detection technologies
  • Develop public awareness campaigns about deepfake risks
  • Create resources for victims of deepfake-based crimes

The Future Landscape: Preparing for Escalation

Current deepfake technology represents only the beginning of this threat landscape. As AI capabilities improve, we can expect:

Increased Sophistication:

  • Real-time deepfake generation during live conversations
  • Cross-modal deepfakes combining multiple sensory inputs
  • Personalized deepfakes based on minimal source material
  • Integration with other AI systems for comprehensive identity theft

Broader Accessibility:

  • Lower technical barriers for deepfake creation
  • Increased availability of deepfake-as-a-service platforms
  • Mobile applications enabling widespread deepfake generation
  • Reduced costs making deepfake attacks economically viable for more criminals

Evolving Countermeasures:

  • AI-powered deepfake detection systems
  • Blockchain-based identity verification
  • Biometric systems that account for synthetic media
  • Legal frameworks specifically addressing deepfake crimes

The Great Re-bundling: Human Response to AI Manipulation

Sterling's framework suggests that the reasons cybercriminals use deepfakes form part of a broader pattern in which AI unbundles human capabilities. The human response—what he calls "The Great Re-bundling"—involves consciously rebuilding trust mechanisms that account for AI's capabilities.

This means developing new forms of verification that combine multiple modalities, creating human-centered authentication systems, and building social structures that preserve human agency in an age of synthetic media.

Conclusion: Navigating the Unbundled Future

The question of why cybercriminals might use deepfakes reveals the broader challenge of maintaining human security and dignity as AI capabilities expand. Cybercriminals exploit deepfakes for financial gain, social manipulation, identity theft, extortion, and market manipulation—but their success depends on our continued reliance on verification systems designed for a pre-AI world.

Effective deepfake protection requires both individual vigilance and systemic adaptation. We must develop new forms of digital literacy, create robust verification protocols, and build social structures that preserve human agency while acknowledging AI's transformative power.

The future belongs not to those who fear this technology, but to those who understand its implications and consciously choose how to rebuild trust in an age where nothing digital can be taken at face value. As Sterling argues, our response to AI's unbundling of human capabilities will determine whether we maintain human dignity or surrender it to technological inevitability.

Ready to explore more about AI's impact on human society? Discover J.Y. Sterling's comprehensive analysis in "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being"—available now for readers seeking deeper insights into navigating our AI-transformed world.


This analysis draws from J.Y. Sterling's framework in "The Great Unbundling," which provides essential context for understanding AI's broader implications for human society. For more insights on AI, technology, and human adaptation, visit jysterling.com.

Ready to explore the future of humanity?

Join thousands of readers who are grappling with the most important questions of our time through The Great Unbundling.

Get the Book