Feature parity is not what’s holding back your SaaS business
The fundamental trap that so many folks fall into with feature parity is pursuing it at all costs as a principle, not as a practical response to user feedback and demand.
As a startup, your product will never have all the features your space's incumbents have. Good luck disrupting Microsoft Office by building every single Excel feature. You'll never be done, and even if you somehow are, why would anyone use your product if it's basically Excel?
You’re leading a product team at a fast-growing start-up. You’ve got a killer product with a great suite of offerings. Your customers are loving it, and engagement is through the roof. Obviously, they still have some concerns, feature requests, and dreams for your product. They're voicing that in your feedback channels, which you diligently collect and have your customer support team address.
Suddenly, you open Slack to see a message from your CEO with a screenshot from LinkedIn:
Your main competitor has not only raised $50 million, but they've also released three brand-new fancy AI-powered product features which you currently don't offer.
Your CEO is obsessed with competing head-on with these new features. Plus, they’ve heard that feature parity is essential to success, and that letting your competitors get ahead of your technology can be a death blow. So, you reorient your engineering, product, and marketing teams around these new product ideas.
You start building with your team, and as you construct these new offerings, you realize you need to change some of the functionality, naming, and structure of your existing products, and that these changes will make some features desktop-only.
But since you know these new features are going to crush it, you're not too concerned about that, and you go ahead with the renames and redesigns.
Six months later, you're stoked to release your new offerings. You know they're going to be successful, since your well-funded competitor released the same features and did a bunch of PR around them.
On launch day, the numbers are a little more muted than expected - but that's OK.
A week in, activation and engagement are well below expectations.
And a month later, the team is doing a post-mortem on how on earth this launch failed so miserably, all while dealing with a mountain of user feedback complaining about inconsistent UX and user flows caused by the recent changes. 💀
What happened?
The many meanings of feature parity
Before we dive into what happened here, let's quickly zoom out and review the two types of feature parity.
Internal feature parity
Internal feature parity is important because it gives your users a consistent experience.
No matter when or where your users are engaging with your product, they should experience the same stuff.
When they don’t, they get frustrated, like this Airtable evangelist who had a hard time convincing his colleagues to adopt the tool because it lacked feature parity.
Most users expect feature parity, and running into a lack of it creates friction, and friction can be a huge source of churn.
This means that your features should be consistently named, designed, and executed:
- Across different platforms
- Across devices
- Across different operating systems
- After product updates
That’s great in general, but you have to be careful with how you pursue this.
Take Microsoft, for example. They demanded feature parity between their Series X and Series S consoles from both their development teams and game makers, because they wanted to keep the experience the same and match competitors' offerings. Ironically, that has slowed game development, frustrated potential customers, and actually helped their main competitor, Sony.
External feature parity
External feature parity focuses on a company's feature set as compared to its competitors'. Achieving feature parity means your product offers users the same set of features the other software does. External feature parity is generally seen as essential to staying ahead of the market and ensuring users see you as a leader in the space.
The feature parity trap
Let's jump back into our example: what happened?
This company got caught in the feature parity trap. They got so focused on maintaining or achieving feature parity with their competitors that they lost sight of what's really important, and at the heart of great product design: users.
A user-first product mindset means putting users' needs, concerns, and desires at the heart of your product strategy, not treating them as just one component of it.
They pursued external feature parity without validation.
They lost internal feature parity by renaming, redesigning, and limiting access to some products to desktop only.
This isn't to say that analyzing the competitive landscape, or brainstorming new ideas from your competitors' launches, is a bad idea. Not at all! In fact, the best product teams absolutely have their finger on the pulse and keep close track of these new features.
Nor am I saying that feature parity is not an ideal state.
But the fundamental trap that so many folks fall into with feature parity is pursuing it at all costs as a principle, not as a practical response to user feedback and demand.
This leads to misallocated resources and time wasted building things your users don't actually want or need.
So, what’s a poor product manager to do?
So: you want external feature parity where possible and appropriate, but you need to validate it against user demand and feedback.
You also want to keep your product at internal feature parity, with consistent naming, design, and user experience no matter how users access the product.
It’s a lot to keep track of, especially when there's pressure from your CEO and leadership team pulling you in every direction.
Don't worry, we've got you covered.
Learn how to build a consistent external and internal feature parity strategy with these simple best practices.
How to gauge user interest in pursuit of external feature parity
At the risk of stating the obvious, it is still important that you and your team are:
- Following your competitors' growth: Regularly evaluating competitors' products to identify new features being offered in the market.
- Incorporating the latest features into your discussions: Discussing and analyzing those new features, and working across your organization to think about how they might fit into your strategy.
As you can see, your competitors' external features do shape what users might expect from software in your niche. But you still need to validate that expectation and understand how it might play out in your specific product.
Most folks think that this process is:
- Users use it in a competitor's product
- All users thus expect it in our product
- Let's go build it
But when you refresh this with a user-first mentality, you can approach it like this:
- Users use it in a competitor's product
- Users might expect and appreciate it in our product
- Run feedback and testing to gauge demand
- They want it: let's go build it
- They don't want it (now): let's hold off and reassess in the future
It seems obvious, especially because most product teams do run feedback and testing for their own net-new products.
But when it comes to responding to external releases, so many folks fall into the trap of assuming that because others have built it, the due diligence has already been done for them (why would the competitor build it if they didn't think users in our segment would appreciate it?).
It's like if you're a VC in 2021 and FTX is pitching you — no DD needed!
Putting feature parity discovery on autopilot
So you obviously want to run these feedback and testing experiments quickly. You should definitely be doing traditional feedback collection through surveys and user research (you can build these basic workflows with Command AI).
The problem with the traditional user research approach is:
- It takes time. Interviews take time, but so does crafting surveys and analyzing the results.
- You can only bombard users with so many surveys. We've actually encountered this problem with customers. A customer will get set up on Command AI, get excited by the potential of asking users questions, and spam them with a million surveys (we have some mechanisms built into the product to discourage this, but it's still possible to hack Command AI to be spammy). Plus, it's a big enough issue that even the federal government is concerned about it!
- Sometimes users answer surveys in a way that doesn’t line up with their real usage patterns. The “survey taker” effect.
That's why one of the most powerful ways to get consistent, well-timed, accurate feedback is to run microsurveys.
These are very simple, straightforward, in-context surveys that ask users how a very specific feature or interaction is going.
For example, after a user interacts with a new feature, you can run a quick customer effort survey and unintrusively ask how it went.
Microsurveys can be triggered by the completion of a specific event or by a specific user behavior, like confusion or frustration.
This smarter, smaller targeting can drive better engagement and more accurate data because it's in the moment and easy to respond to.
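To make that concrete, here's a minimal TypeScript sketch of what event-based triggering can look like. The names here (onFeatureCompleted, showMicrosurvey) are illustrative assumptions, not Command AI's actual API:

```typescript
// Hypothetical sketch of an event-triggered microsurvey.

const alreadyAsked = new Set<string>();

// Call this when your analytics layer reports that a user finished a feature flow.
function onFeatureCompleted(userId: string, featureId: string): void {
  const key = `${userId}:${featureId}`;
  // Ask each user about each feature at most once, to avoid survey fatigue.
  if (alreadyAsked.has(key)) return;
  alreadyAsked.add(key);
  showMicrosurvey(featureId);
}

function showMicrosurvey(featureId: string): void {
  // A real app would render a one-question in-context widget here.
  console.log(`Quick question: how easy was "${featureId}" to use? (1 = hard, 5 = easy)`);
}

// Example: the user just built their first report.
onFeatureCompleted("user-42", "report-builder");
```

The key design choice is the in-the-moment trigger plus the "ask once" guard: you catch users right after the interaction, without ever re-spamming them.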
Cheating with deadends
One of the key things Command AI was built to do was to capture user intent. We have various interfaces — a search interface called Spotlight and a chat interface called Copilot — that let users type what they're trying to do and get routed to the right place.
There are lots of benefits to this. Mainly, users can uncover next steps that are most relevant to them. But, there is one sneaky benefit that is relevant to this conversation: Deadends.
When users search or ask about something for which there is no good answer, Command AI classifies those queries as deadends. Maybe they ask about a feature that doesn't exist. Maybe they ask about a feature that doesn't exist in your product but does exist in a competitor's. If lots of users are asking about a feature like that, feature parity (in your case!) might actually matter.
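For intuition, here's a rough TypeScript sketch of the idea. The handler names and relevance threshold are assumptions for illustration, not how Command AI actually implements deadends:

```typescript
// Hypothetical sketch of deadend detection.

type Deadend = { query: string; userId: string; at: Date };

const deadends: Deadend[] = [];
const RELEVANCE_THRESHOLD = 0.5; // assumed tuning knob

// If no search result clears the relevance bar, keep the raw query
// as a deadend instead of silently dropping it.
function handleSearch(userId: string, query: string, results: { score: number }[]): void {
  const best = Math.max(0, ...results.map((r) => r.score));
  if (best < RELEVANCE_THRESHOLD) {
    deadends.push({ query, userId, at: new Date() });
  }
}

// Group deadends by normalized query to surface the most-asked-for missing features.
function topDeadends(n: number): [string, number][] {
  const counts = new Map<string, number>();
  for (const d of deadends) {
    const key = d.query.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}

// Example: three users ask for an export feature you don't have yet.
handleSearch("u1", "export to CSV", []);
handleSearch("u2", "Export to csv", [{ score: 0.2 }]);
handleSearch("u3", "export to csv", []);
console.log(topDeadends(1)); // [["export to csv", 3]]
```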
In addition to insight into what features users care about, deadends can reveal:
- What problems users are facing
- How they describe those problems or the features that they're using
- How they respond to the answers or tooling that you nudge them towards
This solves the three problems above:
- Once you set up an interface like this, it collects deadends forever.
- Because users are initiating the question, you’re not at risk of bombarding them.
- Because users are asking questions “in the moment”, in the heat of their session, they aren’t in “survey mode”. You can be confident they have some intent related to the thing they asked about.
Then you can see all the gathered deadends in one place, the deadends dashboard in Command AI, and quickly get a detailed view of where your users are running into problems.
Pretty cool, these deadends!
Using user assistance tooling to go beyond feature parity
When you apply user assistance technologies to your feature parity pursuits, you unlock far more insight and much faster iteration. Suddenly, you can see, in natural language, how your users talk about your product, what they're hoping to achieve, and whether or not they like where you're directing them.
You might notice that users call one tool something completely different than what you currently have it labeled. Or, perhaps users are constantly clamoring for an additional feature.
What used to take three engineers, two product designers, and three months' worth of pop-ups to create and gather in a traditional DAP (digital adoption platform) can now be gleaned directly from your users' conversations and searches in these user assistance tools.