
Should AI Be Made Illegal? The Great Unbundling Reveals Why Simple Bans Won't Work

The question "should AI be made illegal?" has gained urgency as artificial intelligence reshapes every aspect of human society. From Goldman Sachs predicting 300 million jobs exposed to automation to ChatGPT reaching an estimated 100 million users within two months of launch, the pace of AI development has triggered calls for everything from temporary moratoriums to outright bans. Yet as author J.Y. Sterling argues in "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being," the question itself reveals a fundamental misunderstanding of what we're facing.

The real question isn't whether AI should be made illegal. It's why governments should regulate AI strategically rather than reactively, and why blanket prohibitions will fail in the face of capitalism's unbundling engine.

The Flawed Logic Behind AI Prohibition

Why AI Should Not Be Regulated Through Outright Bans

The impulse to make AI illegal stems from a natural human response to perceived existential threats. When faced with technology that appears to threaten human relevance, the instinct is to stop it entirely. However, this approach fails to grasp the fundamental forces driving AI development.

The Economic Reality: AI represents the systematic unbundling of human capabilities—analytical intelligence, emotional intelligence, physical dexterity, and creative expression—into discrete, optimizable functions. This unbundling isn't driven by technological inevitability but by capitalism's relentless pursuit of efficiency and profit. As Sterling explains, "The profit-driven mechanism financing and directing unbundling operates at a pace that defies governance through simple prohibition."

The Global Competition Factor: If one government regulates AI through bans, other nations will continue development regardless. China's AI investments, the EU's regulatory approaches, and emerging economies' adoption strategies create a prisoner's dilemma. Unilateral prohibition amounts to unilateral disarmament in a global technological race.

The Innovation Imperative: History shows that transformative technologies resist prohibition. The internet, despite early fears about security and social disruption, became essential infrastructure. Similarly, AI's benefits—from medical diagnosis to climate modeling—make complete prohibition both impractical and potentially harmful to human welfare.

Why Should Artificial Intelligence Be Regulated? The Unbundling Framework

Understanding the Real Threat

The question shouldn't be whether AI needs to be regulated, but how to regulate it effectively. Sterling's Great Unbundling framework reveals why traditional regulatory approaches fail:

The Bundled Human Assumption: Traditional regulation assumes humans remain central to economic and social systems. Employment law, privacy regulations, and safety standards all presuppose that humans control the technologies they create. The unbundling of human capabilities breaks this assumption.

Current Unbundling Examples:

  • Labor Markets: AI systems now perform tasks previously requiring human judgment, from legal document review to creative content generation
  • Intelligence Operations: Machine learning models process information at scales no human can match, separating problem-solving from conscious understanding
  • Social Connection: Algorithms curate human interaction, unbundling authentic community from optimized engagement

The Regulatory Challenge

Why should artificial intelligence be regulated? Because unbundling occurs faster than social adaptation. Sterling notes that "our social structures, myths, and economies assume the person with ideas also feels passion, directs hands, and experiences consequences." When AI systems make decisions affecting humans without experiencing those consequences, traditional accountability mechanisms break down.

The Accountability Gap: Current AI systems can pass the bar exam without understanding justice, diagnose diseases without experiencing suffering, or create art without feeling beauty. This separation of capability from consequence creates unprecedented regulatory challenges.

The Scale Problem: AI operates at speeds and scales that make traditional oversight impossible. High-frequency trading algorithms execute thousands of transactions per second; recommendation systems influence billions of people simultaneously. Human-paced regulation cannot match machine-speed execution.

The AI Regulation Debate: Beyond Simple Solutions

What Effective AI Governance Requires

The AI regulation debate often presents false choices: unrestricted development versus complete prohibition. Sterling's framework suggests a third path—strategic regulation that acknowledges unbundling while preserving human agency.

Principle-Based Regulation: Instead of prescriptive rules that become obsolete as technology evolves, regulation should establish principles that scale with AI capability. Transparency requirements, accountability standards, and human oversight mandates can adapt to new AI applications.

Sectoral Approaches: Different applications require different regulatory frameworks. Healthcare AI needs safety validation; financial AI requires stability oversight; social media AI demands transparency about algorithmic decision-making. One-size-fits-all approaches ignore the diverse ways AI impacts human systems.

International Coordination: Effective AI governance requires global cooperation. The EU's AI Act, China's algorithmic regulation, and emerging frameworks in other nations create a complex landscape that national bans cannot address.

The Counter-Current: The Great Re-bundling

Sterling's most compelling insight concerns humanity's response to unbundling. "The Great Re-bundling" represents conscious human effort to re-bundle capabilities in new ways, creating new forms of human purpose and value.

Human-AI Collaboration: Rather than replacing humans entirely, optimal AI deployment often involves human-AI teams that leverage complementary strengths. Doctors using AI diagnostic tools, artists collaborating with generative AI, and researchers employing AI for data analysis represent re-bundling strategies.

New Artisan Movements: As AI commoditizes certain capabilities, humans increasingly value demonstrably human-created products and services. From handmade crafts to human-authored content, scarcity creates value.

Regulatory Innovation: New governance models emerge that account for human-AI hybrid systems. Algorithmic auditing, explainable AI requirements, and human-in-the-loop mandates represent regulatory re-bundling.

Practical Implications for AI Governance

What This Means for Policymakers

Focus on Outcomes, Not Technology: Effective regulation targets harmful outcomes rather than specific technologies. Instead of banning AI, regulate deception, discrimination, and dangerous applications regardless of whether they use AI.

Adaptive Frameworks: Regulatory systems must evolve with technological capability. Sunset clauses, regular review processes, and built-in adaptation mechanisms prevent regulations from becoming obsolete.

Stakeholder Integration: Effective AI governance requires input from technologists, ethicists, affected communities, and domain experts. Regulatory capture by any single group undermines legitimacy and effectiveness.

What This Means for Business Leaders

Proactive Compliance: Organizations that anticipate regulatory trends and implement strong AI governance practices gain competitive advantages. Self-regulation often prevents more restrictive government intervention.

Human-Centered Design: AI systems that preserve human agency and dignity face fewer regulatory obstacles. Designing for human oversight and control reduces regulatory risk.

Transparency Investments: As regulation increasingly requires AI system transparency, organizations that invest in explainable AI and algorithmic auditing capabilities position themselves for long-term success.

What This Means for Individuals

Informed Engagement: Citizens must understand AI's implications to participate effectively in democratic governance. The complexity of AI systems requires higher levels of technological literacy.

Strategic Adaptation: Individuals can pursue re-bundling strategies that leverage human strengths AI cannot replicate. Emotional intelligence, creative synthesis, and ethical reasoning remain distinctly human capabilities.

Collective Action: Effective AI governance requires collective human response. Individual opt-out strategies may provide personal protection but cannot address systemic challenges.

The Path Forward: Strategic AI Governance

Beyond the Prohibition Debate

The question "should AI be made illegal?" ultimately misses the point. As Sterling argues, the "inevitability of unbundling" means that prohibition strategies will fail. Instead, we need governance frameworks that acknowledge AI's transformative potential while preserving human agency and dignity.

The Regulatory Imperative: AI needs to be regulated not because it's inherently dangerous, but because it operates at scales and speeds that exceed human oversight capacity. Effective regulation creates guardrails that allow beneficial AI development while preventing harmful applications.

The Innovation Balance: Regulation should enable rather than stifle beneficial AI development. Regulatory frameworks that provide clarity and predictability encourage responsible innovation while deterring harmful applications.

The Democratic Requirement: AI governance decisions affect everyone, requiring democratic participation in regulatory design. Technocratic approaches that exclude public input lack legitimacy and effectiveness.

Conclusion: Embracing Complexity

The AI regulation debate reveals deeper questions about human value, technological progress, and democratic governance. Sterling's Great Unbundling framework shows that simple solutions—whether unrestricted development or complete prohibition—ignore the complex dynamics driving AI development.

Effective AI governance requires nuanced approaches that acknowledge both the benefits and risks of unbundling human capabilities. Rather than asking whether AI should be made illegal, we should ask how to regulate AI systems in ways that preserve human agency while enabling beneficial applications.

The stakes are too high for simplistic answers. As AI systems become more capable, the quality of our governance frameworks will determine whether unbundling leads to human flourishing or human obsolescence. The choice remains ours—but only if we act thoughtfully and deliberately.

Ready to explore how AI is reshaping human value? Discover the complete framework in J.Y. Sterling's The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being. Learn why traditional approaches to AI governance fail and what effective regulation really requires.


J.Y. Sterling is the author of "The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being," offering a groundbreaking framework for understanding AI's impact on human society. His work provides essential insights for navigating the complex relationship between technological progress and human flourishing.