
When a Client Tried to Replace Us with AI

9 min read

Last year, one of our longest-standing clients sent an email that stopped me cold. They wanted to “explore using AI to handle development going forward.” After six years of working together, they were wondering if they still needed us at all.

I won’t pretend that didn’t sting a little. But I also knew exactly where this was coming from. They had seen the demos. They had watched the YouTube videos. They had read the blog posts about how AI can write code in seconds. It all looked amazing, and honestly? I get it. If I hadn’t spent the last decade doing this work, I might have believed the same thing.

What followed over the next three months was one of the most clarifying experiences of my career. Not because it proved AI wrong, but because it revealed something much more nuanced about where AI actually fits, and where it genuinely does not.


How It Started

The client ran a mid-sized membership community. Over the years, we had built out a lot of their infrastructure – custom integrations, payment flows, member data handling, automated emails, reporting dashboards. It was not glamorous work, but it held everything together.

Their new CTO had joined a few months prior, and he was enthusiastic about AI. Genuinely so, not in a buzzword way. He had done real experiments and had seen AI produce code that looked, on the surface, completely functional. So he proposed a trial: they would use an AI coding tool to build their next feature instead of bringing us in.

The feature they wanted to build was not complicated in concept. A reporting dashboard that pulled member activity data, filtered it by different time ranges and segments, and displayed it in a clean UI. Straightforward on paper.

“We figured if AI can build anything, let’s start with something manageable and see how it goes.”

I respected the honesty. They were not trying to cut us out permanently. They genuinely wanted to test the premise. And they kept us in the loop throughout, which I appreciated more than I expected to.


Week One: The Demo Looks Great

The first week, I heard nothing. Then the CTO sent over a screenshot. The dashboard looked really good. Clean layout, the charts were rendering, the filters worked in the demo environment. He was clearly excited.

I sent back a congratulatory reply and felt a small knot in my stomach. Not out of jealousy, but because I recognized that screenshot. I had seen that phase of a project before. It’s the phase before you connect it to real data, real users, and real edge cases.

Week two, things got quieter. Week three, I got a call.

What Actually Happened

The dashboard worked perfectly with test data. When they pointed it at their actual member database, it started returning wrong numbers. Not dramatically wrong, just slightly off. Off enough that someone who knew their metrics would eventually catch it, but not so far off that anyone would notice at a glance.

The CTO dug in and realized the AI had generated queries that did not account for how their member data was actually structured. Their database had gone through two migrations over the years, and the schema had some quirks that made total sense if you knew the history, but were not documented anywhere obvious. The AI had no way of knowing.
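To make that concrete, here is a deliberately simplified sketch of the kind of gap he was describing. The table, column names, and the particular quirk below are invented for illustration, since the post doesn't share the real schema; the point is only that a query can look perfectly reasonable against the visible schema and still be wrong against the data's history.

```python
# Hypothetical illustration only: this is not the client's schema.
# It shows the general failure mode, not their actual data model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (id INTEGER PRIMARY KEY, status TEXT, legacy_status INTEGER);
    -- Rows created after the second migration use the status column...
    INSERT INTO members VALUES (1, 'active', NULL), (2, 'active', NULL);
    -- ...but rows that predate it still carry the old integer codes.
    INSERT INTO members VALUES (3, NULL, 1), (4, NULL, 1), (5, NULL, 0);
""")

# What a tool working from the visible schema alone might plausibly generate:
naive = conn.execute(
    "SELECT COUNT(*) FROM members WHERE status = 'active'"
).fetchone()[0]

# What the undocumented migration history actually requires:
aware = conn.execute(
    "SELECT COUNT(*) FROM members WHERE status = 'active' OR legacy_status = 1"
).fetchone()[0]

print(naive, aware)  # 2 vs 4 -- "slightly off", not obviously broken
```

Nothing in the naive query is wrong on its own. It is only wrong against a history that never made it into any prompt.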

There was also a permissions issue. Their members had different tiers, and the dashboard was not correctly scoping what data each admin could see. It was not a security catastrophe, but it was the kind of thing that would have caused real problems if it had shipped.
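The permissions gap followed the same pattern. As a rough, hypothetical sketch (the tiers and admin roles below are made up; the post doesn't describe their actual model), the difference is whether the scoping happens in the data layer at all, rather than only in the UI:

```python
# Purely illustrative names and tiers -- not the client's real permissions model.
from dataclasses import dataclass

@dataclass
class Admin:
    name: str
    max_tier: int  # highest member tier this admin is allowed to see

MEMBERS = [
    {"id": 1, "tier": 1, "activity": 42},
    {"id": 2, "tier": 2, "activity": 17},
    {"id": 3, "tier": 3, "activity": 63},
]

def dashboard_rows_unscoped(admin: Admin) -> list[dict]:
    # The version that "works" in a demo: every admin sees every member.
    return MEMBERS

def dashboard_rows_scoped(admin: Admin) -> list[dict]:
    # The version the system actually needed: restrict by the admin's tier
    # before anything is aggregated or charted.
    return [m for m in MEMBERS if m["tier"] <= admin.max_tier]

support = Admin("support admin", max_tier=1)
print(len(dashboard_rows_unscoped(support)))  # 3 -- exposes tiers 2 and 3
print(len(dashboard_rows_scoped(support)))    # 1
```

The fix in situations like this is usually unglamorous: push the restriction as close to the data as possible so every chart and export inherits it automatically.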

The AI wrote code that worked. It just didn’t understand the system it was plugging into.

That is a subtle but important distinction. The code was not bad. In isolation, you could look at it and it made sense. But software does not live in isolation. It lives inside an existing system, with a history, with undocumented decisions, with specific constraints that evolved over years. That context is not in any prompt.


The Part Nobody Shows You in the Demo

AI demos are almost always fresh-slate scenarios. Start a new project, generate a component, watch it work. That is genuinely impressive. For new things, on a clean surface, with no prior decisions baked in, AI can produce working code fast.

But most real client work is not that. Most real work is:

  • A system that has been running for three years and has accumulated decisions nobody quite remembers
  • An integration that depends on a third-party service behaving a certain way, until the day it doesn’t
  • A performance issue that only shows up under real traffic, not in a demo environment
  • A data migration that has to happen without losing anything, on a live system, with users online
  • A security requirement that isn’t written down anywhere but that experienced people know matters

AI can help with pieces of these problems. It genuinely can. But it cannot hold the full context of a real system across a real client relationship, and that context is often most of the job.

We Got Called Back In

The call I got in week three was: “Can you help us figure out what went wrong and fix it?”

Of course I said yes. And I want to be clear that I am not telling this story to make the client look foolish. They made a reasonable bet. AI tools had improved dramatically. The premise was not unreasonable. They just ran into the gap between “AI can generate code” and “this code works correctly in our specific situation.” I’ve written before about how a major client project changed the way we approach this kind of work, and that context made it easier to navigate this situation without panic.

It took our team about a week to understand what the AI had built, identify where the data logic was off, and rework the query structure to match their actual schema. Another few days to sort out the permissions model. Not an enormous amount of work, but work that required someone who understood their system from the inside.

The AI got them 60% of the way there very fast. We got them the rest of the way because we understood what the remaining 40% actually required.

That ratio is not always the same. Sometimes AI gets you 80% of the way. Sometimes 30%. It depends entirely on how much context the problem requires.

There is also the time factor. Once we finished, we ran the dashboard through their full data range and caught two more edge cases before it shipped. That final verification pass – the “does this actually hold up against our real data over the last two years” check – is something AI cannot do for you either. Someone has to know what to look for.


What I Actually Think About AI and Development

I use AI tools in my own work every day now. That is not a grudging admission. They genuinely help. I use them to think through approaches, to write boilerplate faster, to catch things I might have missed, to explore options I had not considered. For certain tasks, they are a real productivity gain. I wrote about which specific AI tools have actually stuck in my development workflow – because the list is shorter than most people expect.

But here is what I have noticed: the more I understand about a problem, the better AI performs as my collaborator. And the less I understand, the more likely I am to accept something that looks right but has a subtle flaw I don’t catch until later.

The same dynamic is even sharper for someone without deep technical experience. AI can produce output that looks confident and complete but contains assumptions that don't hold up in context. You only know what questions to ask if you already know what matters.

  • AI is excellent at generating options when you know how to evaluate them
  • AI is poor at knowing which constraints it doesn’t know about
  • AI makes you faster if you already know what “correct” looks like
  • AI can mislead you if you don’t have the expertise to verify its output

This is not a criticism of the tools. It is just an accurate description of how they work. They are autocomplete at a much higher level. Extremely powerful autocomplete, but autocomplete nonetheless.


The Honest Version of Where This Goes

I do think AI will change what clients need from developers over time. Not eliminate the need, but shift it. Less time spent on straightforward implementation, more time spent on architecture decisions, system understanding, debugging subtle problems, and knowing what the right questions are in the first place.

For junior developers, I think this creates real pressure. If AI can handle the entry-level work, where do people build the experience they need to handle the harder work? That’s a genuine tension the industry hasn’t figured out yet.

For experienced people, I think the honest answer is: your value is not in writing code. It’s in understanding systems, understanding clients, understanding what can go wrong and why, and being able to navigate from a broken state back to working. AI does not have that. Not yet, and maybe not for a while.

The client from this story is still a client. After the dust settled, the CTO told me something that I’ve thought about since. He said: “I expected AI to replace the work. What I didn’t expect was how much of the work was actually knowledge about us specifically.”

That is it, really. The knowledge about a specific situation, a specific client, a specific system built over a specific history. That is not in the training data.


What This Means If You’re a Client Reading This

Test AI tools. Genuinely. Use them where they fit. They are not a gimmick and they will save you real time and money on certain things.

But be honest about what you’re testing. If you’re using AI to build something on a clean slate with no integration requirements, you might be delighted. If you’re using it to extend a system with three years of context and constraints, budget more time for the gap between “the demo worked” and “this works in production.”

The best use I’ve seen clients make of AI is in collaboration with people who understand their system. Not as a replacement for that understanding, but as a way to move faster once it exists. That combination is genuinely powerful.

The worst use I’ve seen is treating AI as a shortcut around the knowledge-building that complex work actually requires. That shortcut is usually expensive to fix later.


What I Learned

That three-month stretch taught me something I should probably have already known: the value of long-term client relationships is not just in the accumulated work. It’s in the accumulated understanding. Every decision we made together, every bug we traced, every migration we navigated, every time we said “remember when we tried that and it didn’t work” – that is not written down anywhere, but it lives in the relationship.

AI cannot inherit that. It starts fresh every time. That is sometimes its greatest strength. For complex systems with history, it is a real limitation.

I also learned not to be defensive about it. The client was right to test. The question of where AI helps and where it doesn’t is worth asking honestly. I would rather work with clients who ask those questions directly than ones who quietly wonder but never say anything.

And if you’re a developer reading this and feeling anxious about AI: I understand that feeling. What helped me was getting specific. Not “will AI replace developers” but “which parts of what I do can AI do, and which parts require what I’ve learned over years of actual work?” The answer to the second question is where your energy is worth putting.

If you’re navigating decisions like this – whether to bring in outside help, how to use AI tools responsibly, or how to think about your development roadmap – I’m happy to talk through what I’ve seen work. No pitch, just a real conversation. Reach out here.

Varun Dubey

We specialize in web design & development, search engine optimization and web marketing, eCommerce, multimedia solutions, content writing, and graphic and logo design. We build web solutions that evolve with the changing needs of your business.