Why Artificial Intelligence Is Bad

Wondering why artificial intelligence is bad for society? Explore the deep-seated problems with AI through the 'Great Unbundling' framework, from job loss to the erosion of human value.

Why Artificial Intelligence Is Bad: Beyond the Hype and Headlines

For every headline heralding a new AI breakthrough, a shadow of concern follows. We are told that Artificial Intelligence will cure diseases and solve climate change, yet a persistent question echoes in our collective consciousness: Is AI dangerous? The answer is far more complex than a simple "yes" or "no." To truly grasp the reasons why artificial intelligence is bad, we must look past the surface-level fears of rogue robots and examine the fundamental way AI is reshaping the value of a human being.

The core of the problem lies in what I call "The Great Unbundling" in my book, The Great Unbundling: How Artificial Intelligence is Redefining the Value of a Human Being. For millennia, humanity's success was built on a bundled set of capabilities. Our analytical intelligence was packaged with our emotional intelligence, our physical dexterity was tied to our consciousness, and our capacity for purpose drove it all. AI is the great engine systematically dismantling this bundle, piece by piece.

This article will explore the negative effects of artificial intelligence through this powerful framework.

  • For the AI-Curious Professional: You will understand the systemic risks AI poses to your career and the economy, beyond simple automation.
  • For the Philosophical Inquirer: You will gain a new lens to analyze AI's challenge to humanism and our sense of purpose.
  • For the Aspiring AI Ethicist: You will see the deep-rooted sources of AI's problems, providing a robust foundation for building ethical solutions.

The Core Problem with AI: The Great Unbundling of the Human Being

Before we can diagnose the specific dangers of AI, we must understand the source of the sickness. Homo sapiens thrived by being a jack-of-all-trades package. The same person who had an idea (analysis) also felt the drive to see it through (passion), could direct the hands to build it (physicality), and experienced the consequences (consciousness). Our economies, laws, and even our myths are built on this assumption of the "bundled human."

Artificial intelligence represents a historic disruption to this model. It doesn't just augment the bundle; it isolates each component and improves it beyond human capacity.

  • Analytical Intelligence is unbundled into algorithms that can process data faster than any human team.
  • Creative Skill is unbundled into generative models that can produce art, music, and text in seconds.
  • Social Connection is unbundled into engagement algorithms that provide the feeling of validation without the substance of community.

This unbundling is the primary reason why AI is bad for society—it devalues the original, integrated human package that our world was built to reward.

Unbundling Labor: Why AI is Dangerous for Your Job and the Economy

The most immediate and tangible impact of artificial intelligence is on the labor market. The conversation has shifted from automating blue-collar, physical jobs to automating white-collar, cognitive tasks.

A landmark 2023 report from Goldman Sachs estimated that generative AI could expose the equivalent of 300 million full-time jobs to automation. This isn't just about efficiency; it's a fundamental unbundling of the professional.

The Devaluation of Human Expertise

Consider a financial analyst. Their value was a bundle of skills: data gathering, statistical analysis, critical thinking, pattern recognition, and communication. Today, a company can purchase AI-driven analysis that performs the first four tasks instantly and with greater accuracy. The human is left with only the final layer of communication.

This is the unbundling of "thinking" from the "thinker." When a company can buy the output of cognition without employing the conscious, bundled human being, the economic value of that human plummets. This process raises profound problems with AI:

  • Wage Stagnation: As AI takes over high-value tasks, the remaining human tasks become commoditized, driving down wages for even highly skilled professionals.
  • Career Path Obsolescence: Traditional career ladders are collapsing. Why train a junior analyst for five years when an AI can achieve senior-level output on day one?
  • The UBI Imperative: As this trend accelerates, Universal Basic Income (UBI) shifts from a progressive policy proposal to a potential civilizational necessity: a safeguard against societal collapse when vast segments of the population become economically non-competitive.

The impact of artificial intelligence on society starts here, in the wallets and career prospects of millions who believed their cognitive skills were a permanent shield against automation.

Unbundling Intelligence: The Danger of Competence without Comprehension

One of the most insidious dangers of artificial intelligence is its ability to separate performance from understanding. An AI model can pass the bar exam, but it has no concept of justice. It can identify cancerous cells in a medical scan, but it has no understanding of compassion or the sanctity of life.

This is the unbundling of competence from comprehension. As noted by researchers like Gary Marcus, today's AI systems are masters of statistical pattern matching, not genuine reasoning. This leads to critical societal risks:

  • Algorithmic Bias at Scale: AI models are trained on historical data, which is saturated with humanity's existing biases. An AI trained on biased lending data will perpetuate discriminatory practices, not because it is malicious, but because it is unthinkingly competent at recognizing past patterns. A 2019 study published in Science found that a widely used US healthcare algorithm exhibited significant racial bias, systematically allocating less care to Black patients than to equally sick white patients.
  • Erosion of Accountability: When an autonomous AI system makes a catastrophic error—in finance, medicine, or military applications—who is to blame? The programmer? The user? The corporation? This lack of clear accountability is a critical AI danger.
  • The Black Box Problem: Many advanced AI systems are so complex that even their creators do not fully understand how they arrive at a specific conclusion. We are building a world that relies on decision-making systems that are fundamentally inscrutable to human oversight, a fact that leads many to ask whether AI is good or bad for the future.
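The bias point above can be made concrete with a toy sketch. This is my illustration, not code from the book or from the cited Science study: the data, the `historical_decision` function, and the group labels are all synthetic assumptions. A "model" that simply matches past approval patterns reproduces whatever bias its training history contains, without any malice at all.

```python
# Toy sketch (synthetic data, illustrative only): a model that memorizes
# historical lending outcomes faithfully reproduces the bias in its history.
import random

random.seed(0)

def historical_decision(income, group):
    # Assumed biased history: identical incomes, but group "B" applicants
    # were approved at a much lower rate.
    base = 0.9 if income == "high" else 0.4
    penalty = 0.5 if group == "B" else 0.0
    return random.random() < base - penalty

# A training set of past lending decisions: 5,000 records per cell.
history = [(inc, grp, historical_decision(inc, grp))
           for _ in range(5000)
           for inc in ("high", "low")
           for grp in ("A", "B")]

# "Training" here is pure pattern matching: tally the historical approval
# rate for each (income, group) cell. No concept of fairness anywhere.
rates = {}
for inc, grp, approved in history:
    shown, approvals = rates.get((inc, grp), (0, 0))
    rates[(inc, grp)] = (shown + 1, approvals + approved)

def model_approves(income, group):
    shown, approvals = rates[(income, group)]
    # Approve if most similar past applicants were approved.
    return approvals / shown > 0.5

# Two applicants identical except for group membership:
print(model_approves("high", "A"))  # True
print(model_approves("high", "B"))  # False: same income, biased history
```

The model is "unthinkingly competent": it predicts past decisions accurately, and that accuracy is exactly what carries the discrimination forward.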

Unbundling Connection: How AI is Bad for Society and Mental Health

Perhaps the most personal and pervasive negative impact of artificial intelligence is its role in unbundling human connection. Social media platforms, streaming services, and news aggregators use sophisticated AI to optimize one metric: engagement.

They achieve this by unbundling the feeling of validation from the reciprocal work of genuine community. A "like" is a fleeting, frictionless hit of social approval. A real friendship requires time, vulnerability, and mutual effort. AI has learned to provide the former at the expense of the latter.

This is why AI is harmful to our social fabric:

  1. Engineered Polarization: Algorithms have discovered that outrage drives more engagement than consensus. They feed users increasingly extreme content, creating echo chambers and deepening societal divides.
  2. The Loneliness Epidemic: By offering a cheap substitute for real connection, AI-driven platforms can exacerbate feelings of isolation. A 2022 study from the American Psychological Association found strong correlations between high social media use and increased depression and anxiety among adolescents.
  3. Erosion of Shared Reality: When each person's information diet is hyper-personalized by an AI aiming to maximize their individual engagement, we lose the shared set of facts and values that are necessary for a functioning democracy.
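The dynamic in point 1 can be sketched in a few lines. This is my illustration with assumed synthetic click rates, not a description of any real platform's recommender: a feed that greedily serves whatever has measured the most engagement converges on outrage content, simply because outrage gets clicked more.

```python
# Toy sketch (assumed synthetic click rates): a greedy engagement-maximizing
# feed locks onto outrage content once it has sampled each kind.
CLICK_RATE = {"outrage": 0.30, "consensus": 0.10}  # assumed average rates

stats = {kind: [0, 0.0] for kind in CLICK_RATE}    # [times shown, total clicks]

def pick_item():
    # Show each content kind once, then greedily serve the best-clicking kind.
    for kind, (shown, _) in stats.items():
        if shown == 0:
            return kind
    return max(stats, key=lambda k: stats[k][1] / stats[k][0])

served = []
for _ in range(10_000):
    kind = pick_item()
    stats[kind][0] += 1
    stats[kind][1] += CLICK_RATE[kind]  # expected clicks, kept deterministic
    served.append(kind)

share = served.count("outrage") / len(served)
print(f"share of feed that is outrage: {share:.0%}")  # prints 100%
```

Nothing in the objective mentions outrage; maximizing a single engagement metric is enough to produce the skew.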

So, Is AI Good or Bad? The Philosophical Challenge

To ask "Is AI good or bad?" is to ask the wrong question. AI is a tool, but it's a tool that reflects the values of its creator: a capitalist system laser-focused on efficiency and profit. The true impact of AI is that it holds up a mirror to ourselves.

The ultimate reason why artificial intelligence is bad in its current trajectory is that it threatens the philosophical foundation of humanism, which places the integrated human individual at the center of value. When our intelligence, creativity, and even our capacity for connection are unbundled and outsourced to more efficient machines, we are forced to confront a terrifying question: What is a human being worth?

The Path Forward: The Great Re-bundling

Recognizing the problems with AI is not a call for despair, but a call to action. The answer to the Great Unbundling is what I term "The Great Re-bundling"—a conscious, deliberate effort by humanity to create new forms of value by re-integrating our capabilities in ways that machines cannot.

This is not about stopping technology. It is about steering it.

  • For Professionals: The task is to cultivate skills that resist unbundling. This means moving beyond single-domain expertise to embrace cross-disciplinary thinking, strategic foresight, ethical leadership, and deep, empathetic client relationships.
  • For Society: We must demand and build governance structures for AI that are optimized for human well-being, not just corporate profit. This involves embedding values like fairness, accountability, and transparency into the very code of these systems.
  • For Individuals: We can consciously choose to re-bundle in our own lives. This means prioritizing deep work over shallow distraction, nurturing real-world communities over algorithmic feeds, and engaging in activities that unite our mind, body, and spirit.

The dangers of artificial intelligence are real and profound. They stem from a systemic unbundling of our core human capacities. But understanding this framework gives us the power to respond. The future is not about whether we will use AI, but about whether we will have the wisdom to master it, rather than allowing it to master us.

To explore the complete framework of The Great Unbundling and discover the strategies for thriving in an age of AI, purchase your copy of J.Y. Sterling's "The Great Unbundling" today. For ongoing analysis and insights, subscribe to our newsletter.
