How I Use AI to Run Product, Strategy, and Execution While My Team Handles Development
I still have a team. Let me get that out of the way because the AI narrative these days makes it sound like everyone is going solo. That is not what happened here. What happened is that AI changed which responsibilities I keep and which ones I delegate, and that shift has been the most productive change in how I run things in years.
Here is the setup: product planning, execution strategy, revamping existing products, launching new products, all of that sits with me now. Custom development work sits with my team. I use AI tools heavily to handle the stuff that used to require three or four people on my side, and my team brings fresh perspective to the code and client work that benefits from human collaboration.
This is not a “replace your team with AI” story. This is a “restructure who does what” story. And if you are running a product company or agency with a team, I think this approach might work for you too.
Why I Changed the Responsibility Split
For years, my workflow looked like most small agency or product company setups. I would handle client communication, high-level planning, and some architecture decisions. My team would handle development, testing, and deployment. Product planning was scattered, sometimes me, sometimes a senior developer, sometimes nobody until a deadline forced decisions.
The problem was not that people were underperforming. The problem was context switching. I dug deeper into this in my post about the hardest part of running a dev agency: getting your team to own their work. Every time I pulled a developer into a product strategy discussion, that was a day of deep coding work lost. Every time I tried to delegate product decisions to someone whose strength was writing clean PHP, I got technically sound answers that missed the business context.
The best division of labor is not about who can do what. It is about who should be doing what based on where they create the most value.
When AI coding assistants became genuinely useful, not the early autocomplete days, but the real codebase-aware tools like Claude Code, which I covered in detail here, I realized I could take on a lot more of the planning, research, architecture, and execution work myself without burning out. Not because AI replaced my team, but because AI replaced the parts of my workflow that used to eat up entire days: researching competitors, drafting specs, writing boilerplate, scaffolding projects, generating documentation, auditing code quality.
What Sits With Me Now
The responsibilities I handle personally have expanded significantly. Here is what my plate looks like and how AI makes each one manageable:
Product Planning
Every product decision, what to build next, what to deprecate, which feature requests to prioritize, comes through me. I use AI to speed up the research phase dramatically. Instead of spending two days analyzing competitors and market positioning, I can get a comprehensive landscape analysis in an hour. I still make the decisions, but the information gathering that used to bottleneck everything is now fast.
For our WordPress plugins and themes, this means I personally evaluate every feature request against our product roadmap, analyze support tickets for patterns, and decide what ships in the next release. Before AI, this analysis work would either eat my entire week or just not get done properly.
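To make the ticket-pattern analysis concrete, here is a deliberately tiny sketch of the idea. The helper, keyword list, and sample tickets are all hypothetical; in practice the real export goes through Claude, but the underlying operation is just tallying which themes recur across tickets:

```python
from collections import Counter
import re

def ticket_patterns(tickets, keywords):
    """Tally how many tickets mention each tracked keyword.

    `tickets` is a list of plain-text ticket bodies and `keywords` is the
    list of feature/problem terms to track. Both are placeholders here;
    a real run would pull from a support-system export.
    """
    counts = Counter()
    for body in tickets:
        text = body.lower()
        for kw in keywords:
            # whole-word match so "export" does not match "exported" partially
            if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text):
                counts[kw] += 1  # count each ticket at most once per keyword
    return counts.most_common()

# Hypothetical sample data to illustrate the shape of the output
tickets = [
    "The export button fails on large sites",
    "Please add CSV export support",
    "Block editor crashes when I add the widget",
]
print(ticket_patterns(tickets, ["export", "block editor", "performance"]))
```

The output is an ordered list of (keyword, ticket count) pairs, which is exactly the "patterns" signal that feeds the next-release decision.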
Execution Strategy
How something gets built matters as much as what gets built. I write the technical specs, define the architecture, outline the implementation approach, and set the quality bar. AI helps me draft these specs faster and more thoroughly. I can describe what I want, get a detailed implementation plan, review it, refine it, and hand my team a clear blueprint instead of a vague ticket.
This was a game changer. My developers used to spend significant time figuring out the approach before writing a single line of code. Now they get a clear spec with architecture decisions already made, file structures defined, and edge cases documented. They can focus on what they do best, writing clean, reliable code.
Product Revamping
We have over 48 plugins and multiple themes. Some of them need modernization, updating to newer WordPress APIs, adding block editor support, improving performance, refreshing the UI. I personally drive every revamp: audit the current state, decide what changes, write the migration plan, and often prototype the new approach with AI assistance before handing it to my team for full implementation.
AI is particularly valuable here because I can quickly audit an entire plugin codebase, identify deprecated patterns, map out what needs to change, and even generate the scaffolding for the new approach. What used to be a multi-week planning exercise for a major revamp now takes me a few days.
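The first audit pass over a plugin can be sketched as a simple scan for known-deprecated calls. This is a minimal illustration, assuming a local checkout of the plugin; the deprecated-function list below is illustrative, not an authoritative WordPress deprecation list:

```python
import re
from pathlib import Path

# Illustrative list only; swap in whatever deprecated calls matter to you.
DEPRECATED = ["get_currentuserinfo", "create_function", "screen_icon"]

def audit_plugin(root):
    """Scan every .php file under `root` and report deprecated call sites."""
    hits = []
    for path in Path(root).rglob("*.php"):
        source = path.read_text(errors="ignore")
        for name in DEPRECATED:
            # match `name(` so bare mentions in comments are less likely to hit
            for m in re.finditer(re.escape(name) + r"\s*\(", source):
                line = source.count("\n", 0, m.start()) + 1
                hits.append((str(path), line, name))
    return hits
```

A scan like this does not replace the AI-assisted audit, but it shows the kind of mechanical groundwork that turns a multi-week planning exercise into a few days.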
New Product Development
When we launch something new, I handle the entire concept-to-prototype phase. Market research, feature definition, competitive analysis, initial architecture, proof of concept, all me with AI. By the time my team gets involved, there is a working prototype, a clear spec, and a realistic scope.
This is where AI has had the biggest impact. I can go from idea to working prototype in days instead of weeks. The prototype might not be production-ready code, but it proves the concept, identifies the hard problems early, and gives the team something concrete to build from.
What Sits With My Team
Custom development is in my team’s hands. And I mean genuinely in their hands, not “they code what I dictate” but “they own the implementation and bring their own expertise.”
| My Responsibility | Team Responsibility |
|---|---|
| Product roadmap and priorities | Implementation approach and code quality |
| Technical specs and architecture | Code review and testing |
| Feature scoping and edge case identification | Client-facing custom development |
| Prototype and proof of concept | Production-ready code |
| Quality standards and security requirements | Deployment and maintenance |
| Content, marketing, and positioning | Support escalation handling |
| New product launches | Bug fixes and patch releases |
The key insight here is that my team brings fresh perspective. When I am deep in product thinking, what the market needs, what users are asking for, what competitors are doing, I can develop blind spots about implementation. My developers catch things I miss because they are coming at the code with clean eyes. They question my architectural assumptions. They find simpler solutions to problems I have overcomplicated.
Having your team handle development with fresh eyes is not delegation out of laziness. It is a quality control mechanism. The person who designed the system should not be the only one building it.
Custom client work especially benefits from this split. Clients get a team of developers who are focused entirely on building great solutions, not developers who are also half-distracted by product planning meetings and roadmap discussions. The quality of our client work has gone up since I stopped pulling developers into every strategic conversation.
The AI Tools That Make This Work
I want to be specific about what I use and how, because vague “AI helps me work faster” claims are useless.
| Task | Tool | How I Use It |
|---|---|---|
| Codebase analysis and auditing | Claude Code (CLI) | Read entire plugins, identify patterns, generate architecture docs |
| Rapid prototyping | Claude Code + Cursor | Build working prototypes from specs in hours |
| Content and documentation | Claude | Draft blog posts, user docs, developer docs, changelogs |
| Competitive research | Claude + web search | Analyze competitor features, pricing, positioning |
| Code quality review | Claude Code with WPCS | Pre-review code before my team sees it |
| Bug triage and analysis | Claude Code + Basecamp | Analyze support tickets, identify root causes, write fix specs |
| SEO and content planning | Multiple MCP tools | Keyword research, content calendar, trend analysis |
The common thread is that AI handles the research, analysis, and first-draft work. I handle the judgment calls, decisions, and quality standards. My team handles the production implementation and client delivery.
What Changed in Practice
Let me walk through how a typical product cycle works now versus how it used to work.
Before: The Old Way
- Feature request comes in from support or my own observation
- I write a vague ticket: “Add X feature to Y plugin”
- Developer spends 2 days researching the best approach
- We have a meeting to discuss the approach
- Developer builds it over 1-2 weeks
- I review and request changes (because my vague spec missed things)
- Another week of revisions
- Ship it 3-4 weeks after the original request
Now: The Current Way
- Feature request comes in
- I use AI to research the approach, audit the existing code, and draft a detailed spec with architecture decisions, file changes, edge cases, and test criteria
- Sometimes I prototype it myself to validate the approach
- Developer gets a clear spec, builds it in 3-5 days
- First review has minimal changes because the spec was thorough
- Ship it 1-2 weeks after the original request
The time savings come from two places: my research and spec writing is faster with AI, and my team’s implementation is faster because they get better specs. Nobody is working harder. The work is just better organized.
The Fresh Perspective Problem (And Why Teams Matter)
One thing I have learned the hard way: when you use AI to build something, you develop a false sense of completeness. You have been deep in the problem, you have discussed every angle with your AI assistant, and everything seems covered. Then your team looks at it and immediately spots three things you missed.
This is not an AI limitation, it is a human limitation. When you are the person who designed something, you have already rationalized every decision. You cannot see your own blind spots. Having a team that comes to the code fresh, without the baggage of all your design decisions, is incredibly valuable.
Real example: The settings page I was sure was perfect
I spent an afternoon using AI to build a new settings interface for one of our plugins. It looked great. The code was clean. Every option was logically organized. I was ready to ship it. My developer looked at it the next morning and asked: “What happens when someone has 200 entries? This will load everything on page load.” He was right. I had tested with 10 entries. The pagination and lazy loading I had skipped because “it is just a settings page” would have caused real problems for power users. Fresh eyes caught what my AI-assisted deep dive missed completely.
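The fix my developer pointed toward is ordinary pagination: hand the screen one slice of entries plus a total, not everything at once. As a language-agnostic sketch (the real plugin is PHP/WordPress; the helper below is purely illustrative):

```python
def paged_entries(entries, page, per_page=20):
    """Return one page of entries plus enough info to render pager links."""
    total = len(entries)
    pages = max(1, -(-total // per_page))  # ceiling division, at least 1 page
    page = min(max(page, 1), pages)        # clamp out-of-range page requests
    start = (page - 1) * per_page
    return {
        "items": entries[start:start + per_page],
        "page": page,
        "pages": pages,
        "total": total,
    }

# With 200 entries the screen now renders 20 rows, not all 200 at once
result = paged_entries(list(range(200)), page=2)
```

The point of the anecdote stands either way: the concept is trivial, and it still took fresh eyes to notice it was missing.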
This is why I will never go fully solo even though AI makes it theoretically possible. The quality of output from a team that challenges each other is fundamentally better than one person, no matter how good their tools are.
How I Toggle Between Responsibilities
The hardest part of this setup is not the AI tools or the team management. It is the constant context switching between very different types of work. In a single day, I might go from product strategy for a new plugin launch to debugging a customer issue to writing a blog post to reviewing a team member’s PR.
Here is how I manage it:
- Morning blocks for deep work. Product planning, spec writing, and prototyping happen before noon. These need uninterrupted thinking time and that is when I have it.
- Afternoon blocks for reactive work. Code reviews, team questions, support escalations, and content publishing happen after lunch. These are interruption-friendly tasks.
- Theme days when possible. Mondays for product planning across all products. Tuesdays and Wednesdays for execution, building, prototyping, spec writing. Thursdays for review and team sync. Fridays for content and marketing. This does not always hold, but having a default structure helps.
- Basecamp as the single source of truth. Every task, every decision, every spec lives in Basecamp. If it is not in Basecamp, it does not exist. This prevents the “I told you in Slack three weeks ago” problem.
- AI sessions have clear start and end points. I do not leave Claude Code running in the background while doing other work. When I am in an AI-assisted work session, I am focused on that task. When I am done, I close it and move to the next thing. This prevents the “let me just quickly check one more thing” trap that can eat hours.
Mistakes I Have Made With This Approach
This setup is not perfect and I have made real mistakes figuring it out.
Over-speccing
Because AI makes it easy to write detailed specs, I sometimes over-specify things. A 15-page spec for a feature that a good developer could figure out from a 2-paragraph description is not helpful, it is micromanaging disguised as thoroughness. I have learned to match spec detail to task complexity. Simple bug fix? One paragraph. New feature with architectural implications? Full spec. Routine enhancement? Bullet points.
Not Trusting the Team Enough
Early on, I would prototype something with AI and then expect my team to implement it exactly as I had prototyped it. That defeats the entire purpose of having a team. Now I prototype to validate the concept, then let my team implement it their way. Their way is often better because they think about things I do not, maintainability, backwards compatibility, edge cases in production environments I have never seen.
Taking On Too Much
AI makes you feel like you can handle everything. You cannot. I went through a phase where I was personally managing product strategy for all our plugins, writing all the content, doing all the competitor research, and still trying to review every PR. I burned out in about six weeks. The fix was simple: not everything needs my personal attention. Some product decisions can be made by senior team members. Some content can be written by others. AI expanded my capacity but it did not make it infinite.
The Numbers
I am not going to pretend I have perfect before/after metrics, but here is what I can observe:
| Metric | Before This Setup | Now |
|---|---|---|
| Time from feature request to spec | 1-2 weeks | 1-3 days |
| Time from spec to shipped feature | 2-3 weeks | 1-2 weeks |
| Revision rounds per feature | 2-3 rounds | Usually 1 |
| Products I can actively manage | 3-4 at a time | 8-10 at a time |
| Blog posts per month | 2-3 | 15-20 across all sites |
| New product launches per quarter | 0-1 | 1-2 |
The biggest change is not any single metric but the compounding effect. When specs are better, development is faster. When development is faster, we ship more. When we ship more, we learn faster from real users. When we learn faster, the next round of product planning is better informed. It is a virtuous cycle.
Communication Patterns That Keep This Working
The split only works if communication between my side and the team side is structured. Vague hand-offs kill the whole model. I have settled on a few patterns that prevent the most common breakdowns.
Every spec I write has a “Decisions Already Made” section and an “Open for Team Input” section. The first section covers architecture choices, technology selections, and scope boundaries that I have already evaluated and decided on. The second section explicitly lists areas where I want the team’s input, performance approach, testing strategy, deployment sequence, error handling patterns. This prevents the ambiguity that leads to either the team second-guessing settled decisions or silently accepting choices they could improve.
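As a hypothetical skeleton (the two section names come from my specs; the bullet contents are illustrative), the top of a spec might look like:

```markdown
## Decisions Already Made
- Architecture: settled, not up for relitigation
- Technology selections: libraries, APIs, minimum supported versions
- Scope boundaries: what is explicitly out for this release

## Open for Team Input
- Performance approach
- Testing strategy
- Deployment sequence
- Error handling patterns
```

Two headings are all it takes to tell the team which parts are closed and which parts are an invitation.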
Weekly sync meetings are short and structured around three questions: what shipped, what is blocked, and what surprised you. The “what surprised you” question is the most valuable because it surfaces the gaps between my specs and reality. If a developer says “the API response was shaped differently than the spec assumed,” that feeds directly into making the next spec better. These meetings never run longer than 30 minutes because everything that needs detailed discussion happens asynchronously in Basecamp threads where the context is preserved and searchable.
Pull request reviews are where I stay close to the implementation without micromanaging it. I review every PR for our product plugins, not to check code formatting or variable names, but to verify that the implementation aligns with the product intent. If the spec said “users should be able to filter by date range” and the implementation uses a single date picker instead of a range picker, that is a product decision that needs discussion, not a code quality issue. Keeping PR reviews focused on intent rather than implementation details respects the team’s ownership of the code while ensuring the final product matches the vision.
Who This Approach Works For
This is not for everyone. It works well if:
- You are a technical founder or lead who can understand code at the architecture level
- You have a team of developers who are self-directed and can work from specs
- You manage multiple products or projects simultaneously
- You are comfortable with AI tools and willing to invest time learning them deeply
- You value output quality over output quantity
It does not work well if your team needs hands-on mentoring, if you are not technical enough to evaluate AI output, or if your products require deep domain expertise that AI cannot help research.
What I Would Tell You If You Are Thinking About This
Start small. Pick one product or project and try the split: you handle planning and specs with AI, your team handles implementation. See how the quality and speed compare. Do not restructure everything at once.
Invest in your spec-writing process. The quality of what your team builds is directly proportional to the quality of what you hand them. A great spec with clear architecture decisions, defined edge cases, and explicit quality criteria will get you better results than any AI tool.
Trust your team’s fresh perspective. When they push back on your approach, listen. They are seeing things you cannot see because you are too close to the problem. That fresh perspective is worth more than any efficiency gain from AI.
AI did not make my team less important. It made the division of responsibilities clearer. I do the thinking and planning. They do the building and challenging. We all do better work because of it.
The future of running a product company or agency is not AI replacing teams. It is AI enabling leaders to take on more strategic work while their teams focus on what humans do best, collaborate, challenge assumptions, and build things that work in the real world.