
Monthly Reflection: What February Taught Me About Building Better Products

19 min read

I started doing monthly reflections a while back, mostly as a personal practice. Write down what happened, what I got right, what I got wrong. I do not always publish them. But February was one of those months where enough things shifted that I think it is worth sharing – both for myself to have the record, and because some of what I learned might be useful to other people building products.

February was dense. We were deep into WP Vanguard development, dealing with some technical decisions I had been avoiding, trying to balance client work with product work, and running headlong into a few assumptions I have been carrying for a long time that turned out to be wrong.

Here is what the month actually taught me.


The assumption that kept slowing me down

I had this belief – held for years, never fully examined – that building the right product was mostly a technical problem. Get the architecture right. Write clean code. Solve the hard engineering challenges. If the product works well under the hood, everything else will follow.

February pushed back on that hard.

We had a feature in WP Vanguard that I thought was done. The logic was solid, the implementation was clean, the tests passed. But when I put it in front of actual users – people who run small WordPress sites, people who are not security experts – they did not understand what to do with it. The output was technically accurate and practically useless. Not because the code was wrong. Because I had designed it for someone who already understood the problem I was trying to solve.

I have run into this before. I know this trap. But knowing a trap exists and reliably avoiding it are different things.


What I ended up doing: I sat with three people while they used that feature. Did not explain anything. Just watched where they stopped, where they scrolled back, where they looked confused. Rewrote the output framing based entirely on those three sessions. The underlying logic did not change at all. The usefulness changed dramatically.

The lesson, which I keep relearning: good products are not primarily a technical achievement. They are a communication achievement. The engineering enables the product. The communication is the product. I wrote about this dynamic when I explained why I built the WP Vanguard security scanner free – the hardest part was not the scanning logic, it was figuring out how to explain findings to someone who does not live in security.

The specific feature that broke this open

Let me be more specific because the vague lesson is less useful than the concrete example. The feature in question was the detailed breakdown view – when a user clicks into a specific security finding to understand exactly what was detected and why it matters. I had built it to show the raw technical detail. What check ran, what value was returned, what the expected value should be, what the CVSS score was for known vulnerabilities.

That is genuinely useful information. For a developer. For a security professional. For someone who can read a CVSS score and immediately understand what it means in terms of actual risk.

For a bakery owner who built their website on WordPress three years ago and has never looked at security settings: completely opaque. The three people I tested with all hit the same wall in the same place. They could see something was wrong. They could not figure out what to do about it. Two of them closed the window and went back to the summary grade, which told them something was wrong without explaining what. One of them just gave up entirely.

What changed after those sessions: every finding now leads with a plain-language explanation of what the problem actually means for the site owner. Not what the technical issue is – what it means for them. “This means that attackers can more easily steal information your visitors type into your site.” Then the details for people who want them. Lead with the consequence, follow with the cause. That ordering took about two hours to implement and made a bigger difference than anything I had shipped in the three weeks before it.
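The reordering is simple enough to sketch in a few lines. This is an illustration only, with hypothetical field names – the actual WP Vanguard data model is not shown anywhere in this post – but it captures the consequence-first structure described above:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields, for illustration only.
    check_name: str   # which check ran
    observed: str     # what the scanner saw
    expected: str     # what a secure site would return
    consequence: str  # plain-language impact for the site owner

def render_finding(f: Finding) -> str:
    """Lead with the consequence, follow with the cause."""
    lines = [
        f.consequence,            # what it means for them, first
        "",
        "Technical details:",     # depth for those who want it
        f"  Check:    {f.check_name}",
        f"  Found:    {f.observed}",
        f"  Expected: {f.expected}",
    ]
    return "\n".join(lines)

finding = Finding(
    check_name="HTTPS redirect",
    observed="pages served over plain http://",
    expected="all traffic redirected to https://",
    consequence=("This means that attackers can more easily steal "
                 "information your visitors type into your site."),
)
print(render_finding(finding))
```

The underlying data does not change at all – only the order in which it is presented – which is why the change took hours rather than weeks.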


Shipping less, but shipping better

I have a bias toward shipping. Get it out, get feedback, iterate. That instinct has served me well overall. But this month I noticed something: I was confusing activity with progress.

I was shipping things. Small improvements, minor additions, tweaks here and there. The changelog was active. But when I looked at the actual product a week into February, it was not meaningfully better than it had been at the end of January. It was busier. More features, more settings, more surface area. Not better.

The features I was shipping were not coming from clear thinking about what the product needed. They were coming from a list I had accumulated over months – things users mentioned, things I noticed, things that seemed like they should exist. I was working through the list without asking whether the list was the right list.

I spent part of February doing something uncomfortable: going through that backlog and deleting items. Not deprioritising them. Deleting them. Features that were never going to move the product forward in a meaningful way, that I had been carrying around as future obligations for no good reason.

The backlog is not the product strategy. It is a collection of ideas, some of which belong in the product and many of which do not.

After deleting probably 30% of the backlog, I felt lighter. And the work that remained felt more focused. When you are not constantly aware of the list of things you are not doing, it is easier to do the things you are doing well.

What the deleted items had in common

After I cleared them out, I noticed a pattern in what I had deleted. Most of the removed items fell into one of three categories. The first was features that would make the product feel more complete without actually making it more useful. Extra scan categories that would add to the report length but were not the kinds of issues most WordPress sites actually face. A comparison view showing how a site’s score stacked up against similar sites – interesting, probably, but not what someone needs when they are trying to decide what to fix.

The second category was features I had added after a single user request. Someone asked for it once, it seemed reasonable, I added it to the list as if that meant it should exist. But one request is not signal. It might be a genuinely good idea, or it might be something that one person wanted for reasons specific to their situation. The filter I should have been applying: would this feature help the majority of users, or would it add complexity for the many to serve the few?

The third category was the most honest to admit: features I wanted to build because they were technically interesting. Not because they were needed. Because they were fun problems to solve. I do not think there is anything wrong with enjoying the technical challenge. But putting technically interesting problems on a product roadmap and calling them features is a way of confusing my own enjoyment with user value.

This is a kind of bias I think most technically-minded builders have. We gravitate toward the interesting problems. The boring problems – the ones that are genuinely the most important to solve – get less attention because they are harder to get motivated about. The interesting problem of building a sophisticated vulnerability detection system got more of my time in January than the boring problem of making sure non-technical users could understand what the results meant. That ordering was backwards.


The thing about feedback loops

One of the genuine structural advantages of working on a product like WP Vanguard – something people use directly, in real time – is that the feedback loop is short. Someone scans their site and either gets value from it or they do not. You can see this in the data almost immediately.

For the first time in a while, I had access to real usage data. Not survey responses or support tickets – actual behavior. What people did with the scan results. Which findings they clicked into. Where they dropped off. What they shared.

February was the first month I sat down and actually read that data carefully, instead of glancing at the headline numbers. What I found was interesting and, in some places, humbling.

The feature I was least confident about – the overall security grade, the simple letter score – turned out to be the thing users found most valuable. They shared it. They referenced it when talking to their developers. They came back to see if their grade had changed after making updates. The thing I had almost cut because it felt too simple was the anchor the whole product depended on.

The feature I was most proud of – the detailed breakdown of each check, the technical explanation of every finding – was almost entirely ignored by non-technical users. They looked at the summary, saw the grade, read the top recommendations. The depth I had invested in was invisible to most of the people using the product.

I do not think the depth is wrong. Technical users find it valuable. But it reoriented my thinking about what the product is for and who it is for. The non-technical user who wants a simple answer is the primary audience. The technical user who wants to dig in is secondary. I had been building as if the opposite were true.

What the data also showed that I did not expect

There were two other things in the data that surprised me. The first was how many people scanned the same site multiple times in the same session. Not returning users checking in after making fixes – the same person, within the same hour, scanning the same URL two or three times. At first I thought it was a data artifact. When I looked more carefully, I realized what was happening: people were scanning, seeing a result they did not fully understand, and scanning again to see if anything had changed. They were using the re-scan as a way to check their understanding of the results, not to check for actual changes.

That told me something important: the results page was not giving people enough confidence that they understood what they were looking at. They needed something to do with the information and were not sure what that was, so they repeated the action that had given them the information in the first place. The product equivalent of rereading a sentence that did not quite make sense.
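The pattern itself is easy to surface once you group scan events by URL inside a short window. A rough sketch of the kind of query I mean, against an assumed minimal event log of (url, timestamp) pairs – the field names and numbers here are invented for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed minimal event log: one (url, timestamp) pair per scan.
events = [
    ("https://example-bakery.com", datetime(2025, 2, 10, 9, 0)),
    ("https://example-bakery.com", datetime(2025, 2, 10, 9, 4)),
    ("https://example-bakery.com", datetime(2025, 2, 10, 9, 21)),
    ("https://another-site.com",   datetime(2025, 2, 10, 11, 0)),
]

def repeat_scans(events, window=timedelta(hours=1)):
    """Find URLs scanned more than once within `window` of their first scan."""
    by_url = defaultdict(list)
    for url, ts in events:
        by_url[url].append(ts)
    repeats = {}
    for url, stamps in by_url.items():
        stamps.sort()
        count = sum(1 for ts in stamps if ts - stamps[0] <= window)
        if count > 1:
            repeats[url] = count
    return repeats

print(repeat_scans(events))
# → {'https://example-bakery.com': 3}
```

Returning users checking back after fixes fall outside the one-hour window, so this separates genuine re-checks from the confused re-scanning described above.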

The second thing was the share rate on certain grade outcomes. Sites that got a B or C grade were shared significantly more than sites that got an A or an F. The A made sense as a non-event – nothing to share about a clean bill of health. The F also made some sense – people might not want to share evidence of a serious problem. But B and C were the engagement sweet spot. Good enough that they did not feel embarrassed, bad enough that there was something concrete to show: “look, my site has this issue, I’m working on it.” That insight is shaping how I think about messaging around those grade outcomes.


On saying no to things that sound right

In February I had at least three conversations in which someone suggested a feature or integration for WP Vanguard that made complete sense on the surface. Reasonable people, reasonable suggestions, clearly thought through. And in each case, after sitting with the suggestion for a day or two, I decided not to do it.

This used to be harder than it is now. Not because I have become more decisive, but because I have gotten clearer about what the product is trying to be. When you are clear about that, the filter almost applies itself.

The suggestions I turned down were all good ideas in isolation. They would have made the product more capable in specific ways. But each one would have also added complexity, added maintenance burden, and pulled the product a bit further from the thing I am actually trying to build: the simplest, most useful security check a WordPress site owner can run in one minute.

Every feature that makes the product more powerful also makes it harder to understand at a glance. For WP Vanguard, that tradeoff tilts toward simplicity almost every time. The product that tries to do everything ends up doing nothing well.

The three specific suggestions I declined

The first was an integration with a popular uptime monitoring service. The person suggesting it had thought it through well: security and uptime are related concerns, site owners who care about one usually care about both, why not combine them? The argument was sensible. But combining them would have changed what WP Vanguard is. It would have become a site health tool with security features, rather than a security scanner. The focus would have blurred. I said no.

The second was a feature to track a site’s security grade over time and show a historical chart. Again, not a bad idea in principle. Who would not want to see whether they were getting better or worse? But it required storing site URLs and running periodic re-scans, which introduces data retention questions, privacy considerations, and infrastructure complexity that are not trivial. All of that for a feature that adds to the product’s scope without serving the core use case: check your site today, know where you stand today. I said no.

The third was a white-label option for agencies to put their own branding on the scanner. I understand why this is attractive. Agencies want to give clients tools that feel proprietary. And honestly, I have considered this before. But it would have required rebuilding significant parts of the presentation layer to be configurable, which was not a small project. Given everything else competing for attention, a feature that primarily benefits agencies – a secondary audience for this product – was not where the time should go in this phase. I said no, and I said it might be right for later if the product gets to a place where agency features make sense to prioritize.

The no that costs something

The hardest no was to a partnership inquiry that would have brought the product to a larger audience but would have required us to add significant complexity to the scanning pipeline. More checks, more data, more output. It looked like growth. On closer inspection, it was drift.

I said no. I am still not certain it was right. But I am more certain that saying yes to every opportunity that sounds like growth is how products lose their shape. There is a version of WP Vanguard that tries to be everything and ends up being nothing distinctive. I do not want to build that version.


A decision I had been avoiding about the roadmap

There was one technical decision I had been circling around for weeks and finally had to commit to in February: whether to build the deep scan as a separate product or as a premium tier of the same product.

This sounds like a simple pricing question. It is actually a question about product identity. A separate product says: the free scanner is complete as it is, and deep scanning is a different thing for a different audience. A premium tier says: the free scanner is the entry point, and paying unlocks more of the same thing. These two framings lead to very different decisions about how you build features, how you market, and what success looks like for the free version.

I had been avoiding the decision because both options had real arguments in their favor and I did not want to commit to one until I was more certain. But the avoidance itself was causing problems. I was making smaller feature decisions in February without clarity about which direction things were going, which meant some of those decisions were going to need to be undone depending on which way I eventually went.

I finally committed to the premium tier model. The free scanner is the product. Deep scanning is for people who want more of the same thing, not a different thing. That decision unlocked a bunch of smaller decisions that had been waiting on it, and I felt the effect immediately in the quality of the work that followed. Half the slowness in my product decisions during the first three weeks of February traced back to that one unresolved question.

The meta-lesson: when I notice I have been circling the same question for a while, that is a sign I need to make the call, even imperfectly. The cost of the wrong decision is usually lower than the cost of everything downstream being blocked while I wait for certainty that is not coming.


The balance between client work and product work

This is the tension that I think every service-business-turned-product-business deals with, and it does not resolve neatly. Client work pays the bills. Product work is the long-term bet. When a client has an urgent need, the product waits. When the product needs a week of uninterrupted thinking, the client gets half-attention.

February was a bad month for this balance. Client work expanded unexpectedly, and WP Vanguard got whatever was left over. Which was not nothing, but was not what the product needed in that phase of development.

I do not have a great solution to this. What I did do in the last week of February was block out product time in the calendar in advance for March – not as aspirational intentions but as actual appointments I treat the same way I would treat a client call. We will see if that works.

The thing I am most aware of: the window where a new product can be shaped is limited. You can go back and improve a product forever. But the early decisions – about what it is, who it is for, what problem it actually solves – those decisions are disproportionately hard to undo later. Giving product work whatever time remains after client obligations means making those early decisions poorly. That cost is not visible until much later.

What February showed me about the hidden cost

The client work that expanded in February was a legitimate project – a community platform build for a client who has been with us for a long time. Good work, work I care about. The expansion was not unwelcome in itself. But the timing was difficult, and what I noticed over the course of the month was something more subtle than just “less time for the product.”

It was the mental residue that follows from context switching. When I had two hours of product time after a full day of client work, those two hours were not equal to two hours of product time at the start of a fresh day. My thinking was noisier. It took longer to get into the problem. The decisions I made in those late-evening product sessions were lower quality than the ones I made when I had a morning blocked for product work.

I know this is not a new observation. Cal Newport has written a lot about deep work and the cost of attention residue. But there is a difference between knowing this intellectually and seeing it play out clearly in your own work over the course of a month. February was the month I really saw it. Client work does not just take time from product work. It takes cognitive quality. The best hours of thinking I had in February mostly went to client problems, not to the product questions I needed to think hardest about.

This is why the blocked calendar matters and why I am treating it more seriously than I have before. Product work needs the best cognitive hours, not the leftover ones. Otherwise you make slower, less interesting decisions about something that deserves your clearest thinking.


What I read and thought about

February was a good reading month, which usually happens when I have a lot of hard problems sitting in the background that I am not quite ready to tackle directly. Reading is how I think sideways.

Two things stuck with me. The first was a post about the difference between product vision and product strategy. The argument was that vision tells you where you are going, strategy tells you what you are specifically doing to get there, and most early-stage products have some version of the vision but no actual strategy – just a list of things to build. That matched my experience.

The second was more personal than professional – a long conversation with someone who has been building products for fifteen years longer than I have. She said something that I have been turning over since: “The biggest mistake I made early on was optimising for the product I wanted to build instead of the product my users needed to trust. Trust comes before adoption.”

That reframing – trust before adoption – changed how I am thinking about some of the WP Vanguard decisions going into March. Security products specifically live or die on trust. The UX, the communication, the simplicity – all of it is building trust or undermining it. I had been thinking about those things in terms of user experience. Thinking about them in terms of trust is a slightly different frame that I think points to better decisions. It also connects to something I think about with the open source bet – that the transparency of the platform itself is part of what builds trust with the people building on top of it.


What I got wrong

A few specific things, for the record.

  • I underestimated the onboarding friction. I thought the product was self-explanatory. It mostly is, for people who use scanners regularly. For people who have never used a security tool, there is more hand-holding needed at the start than I built in. I kept telling myself it was obvious. It was obvious to me, which is different.
  • I delayed a decision about the deep scan pricing. I had a sense that the current pricing was slightly off, but I kept deferring the decision because I did not want to deal with the complexity of changing it mid-launch. That cost me feedback I could have had earlier. Decisions I am avoiding are usually decisions I know I need to make but am not ready to commit to. The avoidance does not make them easier.
  • I shipped something I knew was half-baked. There was a specific feature that I shipped at 70% because I was behind schedule and wanted to hit the milestone. I told myself I would finish it properly in the next iteration. I did not finish it in February. I am carrying it into March. The lesson I keep not learning: shipping 70% of something often costs more than waiting to ship 100% of it, because fixing something live is slower than building it right the first time.
  • I wrote product copy without testing it. The descriptions of each scan check on the results page – I wrote those in a single afternoon, published them, and moved on. When I sat with users in February, two of those descriptions caused confusion every time. I had not tested the copy the way I tested the code. Copy that users misread is a bug, just a different kind.
  • I conflated busyness with velocity. There were days in February where I was working hard all day and had almost nothing to show for it in terms of product progress. Fixing things I had introduced, revisiting decisions I had made without enough information, cleaning up code I had written in a hurry. Activity is not the same as movement. I need to track meaningful output, not just hours spent.

What I got right

In the interest of balance, not just self-critique.

  • The user sessions. Sitting with three people while they used the product was one of the most valuable hours I spent in February. I almost did not do it – too busy, too much other work, easy to deprioritise. Doing it changed product decisions in ways that weeks of thinking would not have. I should do this every month.
  • The backlog pruning. Deleting 30% of the list felt like giving up at first. A week later, it felt like removing dead weight. The remaining items are things I actually believe in building. That is a better starting point for any given work session.
  • Having the hard conversations early. There were two situations in February where I had to tell people things they did not want to hear – about timelines, about scope, about what I was willing to commit to. Both conversations went better than I expected and resolved faster than they would have if I had let them drift.
  • Making the roadmap call. Committing to the premium tier model rather than continuing to circle the question was the right move. Even if the decision turns out to be wrong, having it made is better than having it open. I stopped making downstream decisions in a fog.
  • Reading the usage data seriously. Instead of glancing at summary metrics, I spent time in February actually understanding what users were doing. The things I learned from that – the letter grade engagement, the re-scan behavior, the B/C share rate – changed concrete product decisions. Data is not useful if you do not read it carefully enough for it to challenge your assumptions.

Going into March

Three things I am carrying from February into March.

First, the trust-before-adoption frame. Every decision I make about WP Vanguard in March gets filtered through that question: does this build trust or does it complicate it? Simpler, more reliable, more honest output builds trust. Feature accumulation and complexity erode it.

Second, the blocked calendar. Product work gets protected time. Client work goes in the slots I have designated for client work. This is not a new idea and I have tried it before with mixed results. But February showed me clearly what happens when I do not protect the time: the product gets half-attention during its most critical formation period. That is a cost I am not willing to keep paying.

Third, the user sessions as a habit. Once a month minimum, I sit with someone who is not me and watch them use the product. Not to collect feature requests. To see where their experience diverges from the experience I think they are having. Those divergences are where the real product work lives.

There is a fourth thing, smaller but specific: I am going back to finish the half-baked feature before building anything new. Carrying it into March was already a mistake. Carrying it into April would be worse. The cost of technical debt compounds, and I know from experience that the features I leave unfinished at 70% become the ones that cause the most embarrassing bugs later. I would rather end March with one thing working completely than three things working partially.

February was a hard month in some ways. It pushed back on a few things I thought I knew. That is usually how the best months feel in retrospect – not pleasant while they are happening, but useful in what they leave behind.

If you are building something

I write about what I am building and what I am learning from it. If you are working on a product – especially in the WordPress space – some of this is probably familiar. If you have had a month that looked like this and came out with something useful, I would genuinely like to hear about it.

And if you run a WordPress site, take 60 seconds and check its security at WP Vanguard. It is free. You will know something you did not know before.

Varun Dubey

We specialize in web design & development, search engine optimization and web marketing, eCommerce, multimedia solutions, content writing, graphic and logo design. We build web solutions, which evolve with the changing needs of your business.