In the following exchange, Uizard’s co-founder and CEO, Tony Beltramelli (@tbeltramelli), shares how rigorous research surfaced tangible landmarks in pricing’s otherwise confusing landscape and how expert pricing guidance is only as good as a founder’s own grasp of things.
— Being data-driven even when pricing is really an art
— Measuring perceived vs realized value (and pricing’s PMF signal)
— Going beyond standard segmentation
— Avoiding competition-based pricing without adequate research
— Going down the usage-based pricing rabbit hole (and deciding against it)
— The (limited) value of working with consultants
— Why changing prices often is hard and what they’ve done instead
Being data-driven even when pricing is really an art
Of the 4 co-founders at Uizard, 3 of us are technical.
And all of us are nerds.
We love making data-backed decisions, so how we approached Uizard’s pricing was no different. Our immediate thought when deciding how much to charge was: “we need data to solve this problem.”
We had been told that we could “just interview people, collect some noisy insights” or “base the pricing off of what competitors were doing,” but we really felt that we had to validate our assumptions.
That said, we completely agree that pricing is not an exact science (something that was hard to appreciate as engineers early on); it is as much psychology and art. And it is among the most challenging things we’ve taken up as founders.
We knew that having some concrete, data-informed insights would ensure that we weren’t overcharging/undercharging when we went to market.
As a first step, we ran the Van Westendorp price sensitivity exercise and collected data points via both email-based surveys and 1-1 conversations with our beta sign-ups and users. Then we centralized all those responses into a single spreadsheet for analysis.
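The Van Westendorp exercise asks each respondent for four price thresholds and locates an “optimal price point” where the “too cheap” and “too expensive” curves cross. A minimal sketch of that calculation in Python, using entirely made-up survey numbers (Uizard’s actual data and tooling aren’t public):

```python
# Made-up Van Westendorp answers (USD/month) from 8 hypothetical respondents.
# The full exercise also collects "cheap" and "expensive" thresholds; the two
# extreme curves below are enough to locate the optimal price point.
too_cheap     = [3, 5, 5, 8, 10, 10, 12, 15]      # "at this price I'd doubt the quality"
too_expensive = [25, 29, 30, 39, 40, 45, 49, 60]  # "at this price I'd never buy"

prices = range(1, 81)  # $1..$80 evaluation grid

def share(answers, pred):
    """Fraction of respondents whose answer satisfies the predicate."""
    return sum(pred(a) for a in answers) / len(answers)

# Share of respondents who consider each grid price too cheap / too expensive.
pct_too_cheap     = [share(too_cheap, lambda a, p=p: a >= p) for p in prices]
pct_too_expensive = [share(too_expensive, lambda a, p=p: a <= p) for p in prices]

# Optimal price point: where the two curves cross (here, the first price that
# minimizes the gap; larger, real samples usually give one clean crossing).
gaps = [abs(a - b) for a, b in zip(pct_too_cheap, pct_too_expensive)]
opp = list(prices)[gaps.index(min(gaps))]
print(f"Optimal price point: ${opp}/month")
```

With real survey data, the remaining two curves also yield the acceptable price range (points of marginal cheapness and expensiveness), which is why the exercise is usually run on all four questions.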
With any such data gathering effort, though, there comes a lot of noise.
We needed to be very particular in understanding segments.
Measuring perceived vs realized value (and pricing’s PMF signal)
What I mean by noise are all the unfiltered inputs that can blur meaningful conclusions.
For example, a lot of the survey participants weren’t educated enough about the product. Some hadn’t even used it yet and only had our messaging (the value proposition) to price us against. Our interpretations had to acknowledge these factors.
There can be a major mismatch between what people think they’re signing up/paying for and what the actual product does. If that gap is too big, any pricing feedback we’d collect would be useless.
That’s why, with the pre-product surveys where people hadn’t yet experienced the product, we had to set up conversations to make them understand what we were attempting and what they’d get out of it.
For beta users, we could look at how many minutes/hours/weeks they had spent actively using the product and plot that usage against their takes on pricing.
This allowed us to get a sense of how perceived value (estimated pre-product) fared when compared to realized value (after they had used the product).
That, in turn, gave us a handle on whether we were overpromising/underdelivering (perceived value >> realized value) or failing to communicate/position the product’s core proposition (perceived value << realized value).
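As a toy illustration of the comparison described above (the field names and numbers here are invented, not Uizard’s data), one could tabulate each user’s stated willingness-to-pay before and after real usage and look at the gap:

```python
# Hypothetical data: each user's willingness-to-pay (USD/month) before trying
# the product vs after some weeks of active use.
users = [
    {"hours_used": 0.5,  "wtp_before": 20, "wtp_after": 12},
    {"hours_used": 3.0,  "wtp_before": 15, "wtp_after": 18},
    {"hours_used": 12.0, "wtp_before": 10, "wtp_after": 25},
    {"hours_used": 25.0, "wtp_before": 12, "wtp_after": 30},
]

perceived = sum(u["wtp_before"] for u in users) / len(users)
realized  = sum(u["wtp_after"] for u in users) / len(users)

# perceived >> realized would suggest overpromising/underdelivering;
# perceived << realized suggests the messaging undersells the product.
gap = realized - perceived
print(f"avg perceived ${perceived:.2f} vs realized ${realized:.2f} (gap {gap:+.2f})")
```

In this made-up sample the gap is positive, i.e. users value the product more after using it, which mirrors the strong signal described in the next paragraphs.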
This also yielded a strong product-market fit signal.
As for PMF, we definitely followed the standard Sean Ellis question (“How disappointed would you be if you could no longer use the product?”) and tracked short-, mid-, and long-term retention rates over 12/18/24 months.
But when we had a lot of active customers telling us that our price points were too low, that was a resounding sign that we had something sticky in the making.
Essentially, having used the product, a bunch of them were concluding that the actual value of the product was much higher and, more importantly, they wanted to pay more to access it.
Going beyond standard segmentation
Of course, knowing who takes these surveys has to be the first step of segmentation. If we weren’t focused on a target market, we’d have so much irrelevant information on our hands.
Which is interesting, because Uizard is a sort of vertical-agnostic product. We had to bucket customers across different sizes and types: freelancers, agencies, startups, SMBs, and enterprises.
This impacted questions like: Would someone pay monthly or yearly? Would they prefer being charged per seat or per team? And, in the case of students, would they pay at all? Among other such considerations.
To understand pricing concerns beyond these standard questions, we dug deeper and started looking at use cases.
Two customers with the same team size, in the same industry, can still use the same product very differently. Each use case has a specific level of pain associated with it and a specific set of reasons why people would choose your product to solve that pain.
I’ll admit that it’s really hard to segment for use cases. But learning that solving a particular pain is worth $10 to someone while fixing another pain is worth 10x more is incredibly useful.
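A sketch of what grouping pricing answers by use case might look like in practice. The use cases and dollar figures here are purely hypothetical, chosen only to echo the “$10 vs 10x more” point above:

```python
from collections import defaultdict
from statistics import median

# Hypothetical survey rows: (use case, willingness-to-pay in USD/month).
responses = [
    ("wireframing", 10), ("wireframing", 8), ("wireframing", 12),
    ("client-mockups", 90), ("client-mockups", 110), ("client-mockups", 100),
]

# Bucket willingness-to-pay answers by use case.
by_use_case = defaultdict(list)
for use_case, wtp in responses:
    by_use_case[use_case].append(wtp)

# Median is more robust than mean for small, noisy survey samples.
for use_case, wtps in by_use_case.items():
    print(f"{use_case}: median WTP ${median(wtps)}/month")
```

The hard part, as the text notes, is not this arithmetic but reliably labeling each respondent with a use case in the first place.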
Avoiding competition-based pricing without adequate research
When we were interviewing some of our beta users, it was quite tempting to get started by simply pricing Uizard with respect to the competition that we were drawing comparisons from. But we weren’t so sure.
Because alternatives can be slightly different in multiple ways as well. Perhaps they serve a distinctive set of use cases or are meant for a unique group of users within a wider audience. Or maybe their value is perceived differently.
Thus one of our motivations behind the price sensitivity research was also to find out, “okay, people don’t just compare us with tools A, B, and C, they also want our pricing to be on a similar plane.”
And, surprise, surprise!
The analysis made plain that the comparisons were indeed happening.
We didn’t just have anecdotal accounts of 5 people comparing us with competitors, we had arrived at an overall trend across hundreds of users. And as this wasn’t surface-level pattern matching, we could then do some strategic thinking:
“Now that we know that we especially get compared to a particular alternative by a given segment, what can we do to make our offering more attractive? Should we offer more for a similarly-priced tier? Or should we deliberately price less?”
We could tackle such questions with great confidence.
Going down the usage-based pricing rabbit hole (and deciding against it)
We’ve gone deep into the rabbit hole of usage-based pricing.
It is a rabbit hole because there are so many ways to frame usage.
We even ended up working with a pricing expert to understand what this model might do for us. But their proposal was so complex from the user’s perspective that we, ultimately, decided not to go ahead with it.
It’s true that inputs-based pricing, say per-seat pricing, doesn’t always have the best proxies for value, and you definitely leave money on the table.
But pricing based on usage can also be so intangible and overwhelming. Most people find it hard to commit to an n/month or n/week sort of price.
That model makes far better sense for some companies. If you’re building an API-first startup, charging for usage can be a great fit. The customers in those markets are used to paying that way.
Plus there are clear monetary gains too. The sheer volume of API calls will certainly dwarf the number of team members making those calls.
On the other hand, if you’re building a SaaS application like ours, where customers naturally think in terms of seats/users, it can be hard to convince them on usage-based tiers.
Imagine if Netflix priced based on usage. They’d have months with no revenue, right? Because most people stream sporadically.
The (limited) value of working with consultants
We’ve worked with pricing consultants on two occasions.
Once with a consultant that we recruited from our network, and another time with someone who was working directly with our investors.
The latter engagement was much more fruitful, probably because we were working with someone who had been in SaaS for much longer and also deeply understood our business.
In general, though, I tend to be very skeptical of consultants.
I would say, in both cases, it was great to educate ourselves but ultimately we needed to do the work as founders as we were the ones who knew our company — our customers, our product’s value — the best.
Put another way, the value of working with consultants is much like paying for a university class. You get to learn from someone who really gets a topic. But then it’s your job to learn the right things and then apply them to your unique domain.
So yeah, don’t just blindly pay a consultant and call it a day.
You’re going to have to do the work yourself as a founder.
That’s for sure.
Why changing prices often is hard and what they’ve done instead
“Do it often.”
That’s likely the most shared piece of pricing advice. Ever.
And yet, to me, it’s one of those things where there’s a huge chasm between what people say and what people actually do.
We really like ProfitWell’s Patrick Campbell. He has some amazing tips and tricks on pricing, and he usually provides data to back his claims, which we love.
But he says that if your NPS is above 20, you should raise prices at least once a year. OpenView’s Kyle Poyar says that one should increase prices once a quarter.
There are many other experts insisting that founders should change pricing all the time. And we know this! But it’s just so hard to actually do it with active paying customers, because pricing is such a tricky subject.
Instead of trying to change pricing that often, what we’ve been able to do is revisit the structure of our packaging. That translates to trying out experiments along the lines of:
“What happens if we take a bit of this feature from a tier and put it in another tier?”
“What changes if we increase the feature limits in a given tier?”
“What if we kept the same feature set on a plan and tweaked the feature limits?”
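Packaging experiments like the ones quoted above can be thought of as small, reversible edits to a plan definition. A hypothetical sketch (the tier names, prices, and limits are illustrative only, not Uizard’s actual plans):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Tier:
    name: str
    price_per_month: int  # USD
    projects_limit: int
    has_export: bool

current = [
    Tier("Free", 0, 2, False),
    Tier("Pro", 19, 100, True),
]

# Experiment: keep the feature set, tweak only the free tier's project limit.
variant = [replace(t, projects_limit=5) if t.name == "Free" else t
           for t in current]
print(variant[0])
```

Expressing each experiment as a one-line diff against the current packaging makes it easy to compare variants and roll back, which fits the “revisit the structure, not the price point” approach described here.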
It’s honestly not something we do as often as we should, but we made a price point change just a few weeks ago, moving one of the tiers from $15/month to $19/month, as we had added a lot of features and customers had been saying that we were charging too little.
It’s quite fresh. So let’s see what happens there.
Not changing price points as often doesn’t mean that pricing is low on the priority list; it is actually a company-wide discipline. We have multiple teammates looking at it from different angles.
Packaging. Plan limits. Free-to-paid conversions (exposing the right paid plan at the right point of their journey). Everything gets analyzed on a regular basis.
All that said, we’re still learning and have barely scratched the surface of what’s possible with pricing. It’s just so hard!
Related reading from the Relay archives:
— Tability’s co-founder, Sten Pittet, on the importance of having a system for understanding the impact of pricing decisions
— Rodeo’s co-founder, Ben Fisher, on starting with the price