
The AI Backlash Is Coming and I Think That’s Healthy


I’ve spent the last two years deeply embedded in AI tooling, building with it, writing about why Claude Code stuck in my workflow, rethinking my entire approach around it. And lately, I’ve been noticing something that I think is both inevitable and necessary: the backlash is building. People are getting frustrated. And honestly? I think that’s a good thing.


The Backlash Is Real, and It’s Justified

Let me start by saying something that might surprise you, given how much I use and advocate for AI tools: most of the criticism is valid.

When people complain about AI-generated content flooding their feeds, they’re right. When designers get frustrated that their craft is being reduced to a prompt, they have a point. When writers feel threatened by tools that can produce passable prose in seconds, their anxiety is legitimate. When developers worry about AI-generated code being pushed to production without proper review, that concern is well-founded.

The backlash isn’t coming from Luddites who hate technology. It’s coming from skilled professionals who see the quality of their industries declining because AI is being applied carelessly, indiscriminately, and often by people who don’t understand the domains they’re automating.

I’ve seen this firsthand in the WordPress ecosystem. Plugins generated entirely by AI with no human review. Blog posts that read like they were written by a blender, technically coherent but devoid of insight, personality, or genuine expertise. Client deliverables that look polished on the surface but crumble under scrutiny because no one with real experience vetted the architecture.

This isn’t an AI problem. It’s a human problem. It’s what happens when people use powerful tools without the judgment to wield them well.

What “AI Slop” Actually Means

There’s a term gaining traction that I think perfectly captures the problem: AI slop. It’s the digital equivalent of junk food: technically edible, widely available, and ultimately unsatisfying.

AI slop is everywhere now. It’s the LinkedIn post that starts with “In today’s rapidly evolving landscape” and says absolutely nothing in 500 words. It’s the product description that’s grammatically perfect but could apply to literally any product in the category. It’s the blog article that ranks on Google for six months before anyone realizes it’s factually wrong. It’s the customer support response that addresses your question with a wall of text that somehow doesn’t answer it.

AI slop isn’t about AI being bad at writing. It’s about humans being lazy about editing.

The problem isn’t that AI generates low-quality output. It actually generates remarkably high-quality output when used well. The problem is that AI makes it so easy to generate content that the bar for “good enough to publish” has dropped through the floor. Why spend four hours writing a thoughtful article when you can generate ten mediocre ones in the same time? Why carefully consider your response to a client when AI can draft something “professional-sounding” in seconds?

The economics of content creation have fundamentally shifted. The cost of producing content has approached zero. And as with anything that becomes nearly free to produce, the market gets flooded with low-quality versions that make it harder for the genuinely good stuff to stand out.

I see this in my own WordPress niche. There are sites now that publish 20-30 “articles” a day, all AI-generated, all targeting long-tail keywords, all providing the bare minimum information wrapped in maximum word count. They’re not trying to help anyone. They’re trying to game algorithms. And for a while, it works, until Google catches up, readers catch on, and the whole thing collapses.

The Quality vs. Quantity Trap

Here’s what I think is the most dangerous idea in the current AI discourse: the belief that quantity can substitute for quality if the quantity is large enough.

This thinking goes something like: “Sure, each individual piece of AI content might not be great, but if I produce 100 of them, at least some will perform well, and the sheer volume will generate traffic/leads/revenue.” I’ve heard this logic from marketers, agency owners, and even developers who should know better. I wrote about a healthier approach in how I use AI to run product and strategy while my team handles development.

The trap is that this works, briefly. Volume can generate short-term metrics that look impressive in a spreadsheet. But it creates a toxic cycle:

  1. You flood channels with AI-generated content
  2. Metrics go up initially
  3. Audience trust erodes because they can tell the quality has dropped
  4. Engagement per piece plummets
  5. You produce even more content to compensate for declining per-piece performance
  6. The channel becomes noise, and your brand becomes associated with that noise

I’ve watched several WordPress-focused publications fall into this trap. Sites that used to publish two or three deeply researched tutorials per week started publishing two or three AI-generated pieces per day. The traffic numbers looked great for about three months. Then readers stopped coming back because every article felt interchangeable with the last one. Then Google’s helpful content updates started penalizing the thin content. Then the site was worse off than before they started using AI.

The lesson is simple but easy to ignore: AI should be used to make fewer, better things, not more mediocre ones.


Why Businesses Are Pushing AI Too Hard

I have some sympathy for the businesses that are over-indexing on AI right now. The pressure to adopt is enormous, and it’s coming from every direction.

Investors want to hear “AI” in every pitch deck. Competitors are loudly proclaiming how AI has transformed their operations (often exaggerating wildly). Tech media breathlessly covers every new AI capability as if it’s the second coming. Employees are either terrified of being replaced or frustrated that their company isn’t “keeping up.”

In this environment, the rational response for many business leaders is to adopt AI as visibly and broadly as possible, whether or not it actually makes sense for their specific use cases. It’s a classic case of FOMO-driven decision making, and it leads to some genuinely terrible outcomes.

The patterns I keep seeing in businesses that push AI too hard:

The “replace everyone” approach: A company fires its writing team, replaces it with AI, quality craters, and it quietly rehires writers three months later. I’ve seen this happen at least five times in the WordPress publishing space alone.

The “AI first, strategy never” approach: Implementing AI tools without a clear understanding of what problem they’re solving. “We’re using AI!” becomes the goal instead of “We’re solving X problem, and AI is one tool we’re using to do it.”

The “automate the relationship” approach: Using AI for customer interactions that require empathy, nuance, and genuine understanding. Nothing makes a customer feel less valued than a chatbot that clearly doesn’t understand their problem but keeps generating cheerful, unhelpful responses.

The “ship it fast” approach: Using AI to accelerate development without investing proportionally in testing, review, and quality assurance. The code is generated faster, but the bugs ship faster too.

The common thread in all these patterns is that AI is being treated as a replacement for human judgment rather than an amplifier of it. That’s the fundamental error, and it’s the root cause of the backlash.

The Human Touch That Can’t Be Replaced

I want to be specific about what I mean when I say “human touch” because it’s easy for this to become a vague, sentimental argument. I’m not talking about some mystical human quality that machines can never replicate. I’m talking about concrete capabilities that AI currently lacks and that matter enormously in professional work.

Genuine expertise and lived experience

When I write about WordPress development, I’m drawing on over a decade of building real products used by real people. I know what breaks in production because I’ve been woken up at 3 AM by those breakages. I know which architectural patterns lead to maintenance nightmares because I’ve lived through those nightmares. AI can synthesize information about these topics, but it hasn’t felt the pain that makes the advice authentic.

Readers can tell the difference. Maybe not always consciously, but there’s a texture to writing that comes from genuine experience that AI-generated content consistently lacks. It’s the unexpected detail, the counter-intuitive insight, the specific example that could only come from someone who was actually there.

Contextual judgment

AI is excellent at answering questions within well-defined boundaries. It struggles with the kinds of judgment calls that professionals make dozens of times per day. Should this feature be built at all, or is the client solving the wrong problem? Is this code technically correct but architecturally misguided? Is this design accessible to real users in real contexts with real constraints?

These questions require understanding that goes beyond the immediate task: understanding of business context, user behavior, technical debt, team dynamics, and long-term consequences. AI can help inform these decisions, but it shouldn’t make them.

Emotional intelligence in professional relationships

When a client sends a frustrated email, the appropriate response isn’t just technically accurate; it needs to acknowledge their frustration, demonstrate understanding, and rebuild confidence. When a team member is struggling, the right intervention isn’t information; it’s empathy. When a stakeholder is nervous about a deadline, what they need isn’t a status update; it’s reassurance from someone they trust.

AI can generate text that sounds empathetic. But it doesn’t feel empathy, and that difference matters more than we might think. People are remarkably good at detecting when compassion is performed versus felt, especially in high-stakes professional situations.


What Healthy AI Adoption Actually Looks Like

So if the current state of AI adoption is unhealthy (too fast, too indiscriminate, too focused on replacement rather than augmentation), what does a healthy version look like? Here’s what I’ve come to believe after two years of intensive use.

AI as a thinking partner, not a content factory

The most valuable way I use AI isn’t to generate output; it’s to think through problems. I’ll describe a technical challenge to Claude and ask it to help me evaluate different approaches. I’ll use it to stress-test my assumptions. I’ll have it review my code not to find syntax errors but to question my architectural choices.

This is fundamentally different from “AI, write me a blog post” or “AI, build me a plugin.” It’s using AI the way a senior developer uses a whiteboard conversation with a colleague, as a tool for clarifying your own thinking, not as a substitute for it.

Human review as a non-negotiable

Nothing AI produces should go to a client, a customer, or production without meaningful human review. Not a cursory glance. Not “looks fine.” Actual critical review by someone who understands the domain and the context.

I spend probably 30-40% of my working time reviewing and refining AI output. That might sound like it defeats the purpose, but it doesn’t. The AI gets me to 70-80% in a fraction of the time. I then invest focused energy on the remaining 20-30% that requires expertise, taste, and judgment. The total time is still dramatically less than doing everything from scratch, but the quality is maintained because a human is in the loop at every decision point.

Transparency about AI use

I think we need to normalize being honest about how we use AI. Not in a confessional way, but in a straightforward, professional way. When I use AI to help me write documentation, I don’t pretend I typed every word. When AI helps me debug a complex issue, I don’t claim I figured it out entirely on my own.

This transparency matters because it sets realistic expectations. Clients who know I use AI tools understand why I can deliver faster than traditional timelines. They also understand that my expertise is what makes the AI output reliable: I’m not just a prompt jockey; I’m a senior developer who uses AI to amplify my capabilities.

Unhealthy AI Adoption             | Healthy AI Adoption
Replace human workers with AI     | Augment human workers with AI
Maximize output volume            | Maximize output quality
AI generates, humans publish      | AI drafts, humans refine and decide
Hide AI use from clients          | Be transparent about AI in your workflow
Automate everything possible      | Automate what benefits from automation
AI decides, humans implement      | Humans decide, AI implements
Cut costs by replacing expertise  | Increase value by amplifying expertise

My Own Complicated Relationship with AI

I’d be a hypocrite if I wrote this entire piece without examining my own relationship with these tools. So let me be honest.

I use AI extensively. It’s woven into almost every part of my professional workflow. I build with it, write with it, think with it. My productivity has increased dramatically because of it. My business model literally depends on AI tools being as good as they are.

And yet, I have concerns.

I worry about what happens to the pipeline of junior developers when AI handles all the tasks that juniors traditionally used to learn from. If AI writes the boilerplate, builds the CRUD interfaces, and generates the test suites, where do new developers get the foundational experience they need to develop the judgment that makes AI useful in the first place?

I worry about the homogenization of creative output. When millions of people use the same AI tools to generate content, there’s an inevitable convergence toward the mean. The edges get smoothed off. The weird, distinctive, personal voice that makes great writing great gets averaged into competent blandness.

I worry about the speed at which we’re adopting these tools without fully understanding the second-order effects. We’re running a massive, uncontrolled experiment on knowledge work, education, creativity, and human expertise. The benefits are exciting. The risks are real and largely unknown.

I worry about my own dependency. There are days when I realize I haven’t written a single line of code without AI assistance. Am I maintaining my skills, or am I slowly atrophying? Would I still be as effective if these tools disappeared tomorrow? I don’t have a comfortable answer to that question.

Using AI well requires holding two truths simultaneously: these tools are genuinely transformative, and we’re probably using them wrong in ways we won’t fully understand for years.

Finding the Balance

So how do I reconcile being a heavy AI user with having serious concerns about how AI is being adopted? It comes down to a few principles I’ve developed over time.

Principle 1: AI handles the mechanical, I handle the meaningful

There’s a clear line in my work between mechanical tasks and meaningful tasks. Mechanical tasks are things that have a right answer and a known process: generating boilerplate code, formatting data, creating test scaffolding, researching API documentation. AI handles these beautifully.

Meaningful tasks are things that require judgment, creativity, or relationship skills: deciding what to build, how to structure a system, what a client really needs (versus what they asked for), how to handle a delicate professional situation. These stay firmly in my domain.

The line isn’t always clear, and it shifts as AI gets more capable. But maintaining the distinction keeps me from falling into the trap of outsourcing my thinking to a machine.

Principle 2: Invest the time savings in quality, not quantity

When AI saves me three hours on a task, I have a choice. I can use those three hours to take on more work (the quantity path) or I can use them to make the current work better (the quality path). I’ve learned, sometimes painfully, that the quality path is almost always the right choice.

Those saved hours go into more thorough testing, better documentation, deeper client conversations, and the kind of polish that distinguishes professional work from amateur work. The result is fewer projects but better outcomes, higher client satisfaction, and stronger referrals.

Principle 3: Maintain skills that exist independent of AI

I deliberately practice writing code without AI assistance. Not all the time; that would be willfully inefficient. But regularly enough that my fundamental skills don’t atrophy. I do code katas. I read source code directly instead of asking AI to summarize it. I debug problems by reading stack traces before reaching for AI help.

This might sound like the professional equivalent of chopping wood when you have a chainsaw. But I’ve seen what happens to people who become completely dependent on tools they don’t understand: when the tool fails or produces bad output, they have no way to evaluate or correct it. Maintaining independent competence isn’t nostalgia. It’s professional insurance.

Principle 4: Stay human in human-facing work

Every email I send to a client, I write myself. Every difficult conversation, I have personally. Every piece of strategic advice, I think through with my own brain (sometimes with AI helping me explore options, but never making the final call). The human-facing parts of my work are where trust is built, and trust is the foundation of everything else.


What I Tell My Team About AI Use

Even though I’m largely solo now, I work with freelancers and collaborators regularly. I also advise other WordPress businesses on their development practices. Here’s the framework I share when people ask me how to think about AI in their work.

  • Use AI to learn faster, not to skip learning. If AI generates a solution you don’t understand, that’s a learning opportunity, not a finished product. Take the time to understand why the solution works. You’ll be better at using AI and better at your craft.
  • Never publish anything you haven’t personally reviewed and would stake your reputation on. If an AI-generated piece of content or code has your name on it, it should meet the same standard as if you’d created it from scratch. The tool doesn’t absolve you of responsibility for the output.
  • Be the expert that makes AI useful, not the operator that makes AI run. Anyone can write a prompt. The value you bring is knowing whether the response is good, how to improve it, and what’s missing. That expertise is what clients and employers actually pay for.
  • Don’t automate empathy. If a task requires understanding how someone feels (a customer complaint, a team conflict, a client’s anxiety about a deadline), handle it yourself. AI-generated empathy is worse than no empathy at all because it’s recognizably fake.
  • Talk about how you use AI openly. The stigma around AI use helps no one. Pretending you don’t use it when you do creates unrealistic expectations. Being open about your workflow lets you set appropriate expectations and builds trust.

The Backlash as Course Correction

Coming back to the central point: I think the AI backlash we’re seeing is healthy because it’s a necessary course correction. The initial wave of AI adoption was characterized by unbridled enthusiasm, unrealistic expectations, and a gold rush mentality. The backlash is the market, the profession, and the culture pushing back against the excesses.

This pattern isn’t new. Every transformative technology goes through it. The early internet had a dot-com bubble followed by a crash that wiped out companies built on hype rather than value. Social media had its utopian phase followed by a reckoning with misinformation, mental health impacts, and platform manipulation. Mobile apps had their gold rush followed by app fatigue and a consolidation around genuinely useful tools.

In each case, the backlash didn’t kill the technology. It killed the bad implementations of the technology. The internet survived the dot-com crash, the nonsense companies died, and the ones providing real value thrived. Social media is still here, but with more nuanced understanding of its effects. Mobile apps are still central to daily life, but we’ve stopped downloading fifty apps and started using five really well.

AI will follow the same arc. The backlash will separate the signal from the noise. Companies that used AI thoughtfully will continue to benefit. Companies that used it to cut corners will face consequences, from customers, from search engines, from the market. Professionals who used AI to become better at their craft will thrive. Those who used it to avoid learning their craft will struggle.

The question was never “will AI change everything?” It always was, and still is: “will we be thoughtful enough to change things for the better?”

What I Think Happens Next

Predictions are dangerous, but here’s what I think we’ll see over the next year or two as the backlash reshapes the AI landscape:

  1. “Human-made” becomes a selling point. We’re already seeing this in art, writing, and craft goods. I think it will extend to professional services. “Human-reviewed code,” “written by a real developer,” and similar labels will carry premium value. Not because AI is inherently bad, but because human involvement signals care and accountability.
  2. AI disclosure becomes standard. Whether through regulation or professional norms, I think we’ll see increasing expectation to disclose significant AI use. This is healthy. Transparency allows people to make informed decisions about the content and services they consume.
  3. Quality differentiation intensifies. As AI makes it easy for everyone to produce “good enough” work, the market premium for genuinely excellent work will increase. The gap between “AI-generated adequate” and “expert-crafted exceptional” will become the primary competitive axis in creative and knowledge work.
  4. AI literacy becomes a core professional skill. Knowing how to use AI effectively, and equally important, knowing when not to, will become as fundamental as knowing how to use email or spreadsheets. The professionals who thrive will be those who can articulate exactly how AI fits into their workflow and why.
  5. The “AI wrapper” businesses will collapse. Companies whose entire value proposition is “we put AI on top of X” without any domain expertise or proprietary advantage will face a reckoning. When everyone has access to the same AI models, the wrapper isn’t the value; the domain knowledge is.

Embracing the Tension

I don’t think the answer to the AI backlash is to pick a side: “AI is amazing” or “AI is destroying everything.” Both positions are too simple for a genuinely complex reality.

The healthiest stance, as far as I can tell, is to hold the tension. Use these tools. Benefit from them. Build with them. But also question them. Push back when they’re being applied badly. Advocate for quality over quantity. Maintain your independent skills. Be transparent about your usage. And listen to the critics, really listen, because they’re often pointing at real problems even when their proposed solutions are wrong.

The backlash is coming, and in many ways it’s already here. I welcome it, not because I think AI is bad (I genuinely don’t), but because the correction will make AI adoption better, more thoughtful, and ultimately more sustainable.

We went through the “move fast and break things” phase. Now it’s time for the “move thoughtfully and build things that last” phase. And honestly, that’s the phase where the real value gets created anyway.

The tools are extraordinary. The question is whether we’ll use them with the care they deserve. The backlash is the world’s way of saying: not yet, but maybe soon, if we’re willing to learn from our mistakes.

I think we will. I have to think we will. Because the alternative, a world drowning in AI slop where genuine expertise is devalued and every human interaction is mediated by a language model, isn’t a world any of us want to build.


I write about the intersection of AI, WordPress development, and building a sustainable tech business on this blog. These are personal reflections from someone deep in the trenches, using these tools every day while trying to use them responsibly.

Varun Dubey
