In the following exchange, Thematic’s co-founder and CEO, Alyona Medelyan (@zelandiya), recounts the unique challenges of building an AI-first product, deftly paring away hyped notions and reminding us that customers evaluate AI solutions just like any other B2B software.
— Taking on the problem-solution paradox of AI-first products
— A playbook for approaching an AI-first MVP
— How to scale well with just a seed round
— Notes on Thematic’s first hires
— Sustaining competitive advantages
— Balancing a product roadmap while tending to AI advances
When developing an AI-powered solution, experts and enthusiasts of AI have a unique angle: they are seeking a problem where AI can make a difference. And if they choose wisely, they can build a deep competitive advantage.
Meanwhile, founders who slap AI onto an existing solution usually don’t succeed. Other priorities take over.
But… starting with AI also means starting with technology instead of starting with a problem.
This is, of course, problematic in startup land! First you have to brainstorm possible problems, then validate them by speaking to potential customers. It can be a time sink, especially if you get excited about something that isn’t worth solving.
When starting Thematic, it really helped me to position myself as an expert. Then, I didn’t need to seek and validate problems worth solving, they came to me! Here’s what I did:
- During my PhD, I published my projects as open-source code and wrote about them.
- I jumped on the Twitter bandwagon early and started using the #nlproc hashtag to tweet about Natural Language Processing (to distinguish it from Neuro-Linguistic Programming). This hashtag is now widely used on other social media as well.
- I also started an AI meet-up in my city of 2 million people (Auckland, New Zealand) and offered my services as a contractor.
- I created a website and made it easy for people to contact me.
Gradually, I could see a pattern emerge around a recurring problem. People from three different companies came to me with the same need: analyzing open-ended responses to Net Promoter Score surveys. This became the first use case for Thematic.
It really depends on the problem you are solving. Oftentimes, to validate the idea you likely don’t even need a UI. Or perhaps, you just need a minimal version.
If the problem can only be solved using AI, people will be prepared to pay money for intermediate output such as enriched data or clean data, or something generated by the AI.
For Thematic, our MVP was a Python script and a PowerPoint deck that presented its output. I created this deck manually. It was a classic example of building things that don’t scale!
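The actual script isn’t shown here, so purely as a hedged illustration: a first prototype in that spirit could be as small as a keyword count over survey responses. The stopword list, sample comments, and function names below are my own invention, not Thematic’s method:

```python
from collections import Counter
import re

# Tiny stopword list for illustration; a real script would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "are", "it", "was", "to", "and", "i",
             "of", "my", "very", "but", "so", "for", "not", "too"}

def candidate_themes(responses, top_n=3):
    """Count frequent non-stopword terms as rough theme candidates."""
    words = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(top_n)]

def responses_per_theme(responses, themes):
    """Map each candidate theme to the responses that mention it."""
    return {t: [r for r in responses if t in r.lower()] for t in themes}

# Hypothetical open-ended NPS comments.
nps_comments = [
    "Support was slow to respond",
    "Love the product, but support response times are slow",
    "Pricing is too high for small teams",
    "Great product overall",
]

for theme, matched in responses_per_theme(
        nps_comments, candidate_themes(nps_comments)).items():
    print(f"{theme}: {len(matched)} mention(s)")
```

Crude as it is, output like this, pasted into a deck by hand, is enough to show a prospect whether the themes resonate before investing in a real pipeline.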
We technologists often overthink. So it’s important to remind yourself to keep things simple.
I am not sure I would agree with the statement that AI products take longer to take shape. In AI there are plenty of open-source solutions and cheap APIs you can deploy and test with. Think of all the solutions built on GPT-3! Or libraries like Hugging Face, spaCy, and OpenNLP.
The tricky part for technologists is to find people who will pay money for the solution. There are a tonne of “interesting” AI applications that people are willing to talk about but not pay for.
Here is my playbook for what it might look like for a B2B, AI-first SaaS startup:
First, as you are evaluating possible problems, ask potential customers: When would you like to have this solved by? This will give you the best idea of urgency.
Only work on problems that are urgent and therefore painful enough for people to pay for.
Second, explain how AI might be able to help.
Ask them if they’d like to see an example run on their own input or data. Depending on how complex this is, you might suggest a paid proof-of-concept. If the solution is complex but they decide against paying, that’s an indicator of a lack of urgency.
Finally, when you show your solution to potential customers, ask them if they think it might be a good solution to their problem.
If they say yes, ask them if they’d like to see a proposal. Try to sell first, and then collaborate with your early buyers (early adopters) on improving the solution.
We have only raised $1.2M in seed funding. And when we did it, we were already doing $500K in ARR. We have not raised a round since.
There are a few things that we did that allowed us to scale with minimum funding:
- Take advantage of government grants.
- Be capital efficient before you prove product-market fit (which is more than just having a first paying customer!) The biggest expense comes from hiring too many people too soon. To keep the team lean, we use contractors and try to automate tasks where we can.
- Get upfront payments and multi-year contracts whenever possible. For any contract over $10K, quote it as an upfront payment by default and state the discount provided (usually 20%).
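To make the mechanics of that last bullet concrete (the $10K threshold and 20% figure come from the text above; the `upfront_quote` helper itself is purely illustrative):

```python
def upfront_quote(annual_price, threshold=10_000, discount=0.20):
    """Quote contracts above the threshold as a discounted upfront payment."""
    if annual_price > threshold:
        # Upfront payment in exchange for the stated discount.
        return round(annual_price * (1 - discount), 2)
    return annual_price  # below the threshold, bill at full price

print(upfront_quote(24_000))  # a $24K contract billed upfront -> 19200.0
```

The 20% you give up is often cheaper than the dilution of raising that cash as equity.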
Our AI approach doesn’t require manual training data in order to create custom models for each customer. So we did not need a lot of investment here either.
My cofounder and I are technical. So our first hires were in the areas of Customer Success, Marketing (content) and R&D (to free up my time so that I could continue to focus on sales).
Two months later we started hiring sales people.
This has been incredibly difficult, given that our own background isn’t in sales.
We tried to hire two AEs at the same time, but neither worked out. Then we tried not hiring anyone, with the founders and the CS team supporting sales, and that didn’t help either. We now have a small team of SDRs and Account Executives. But these roles still remain tricky to hire for.
Our AI approach continues to be hard to copy. We can see competitors copying our data visualizations and launching them one to three years after we did. But they struggle to copy our specific method for analyzing feedback.
The common approaches here are supervised categorization and topic modeling. We use neither, favoring a self-supervised AI that discovers specific, actionable themes in text and then lets users customize them.
I think that the secret is in both understanding what customers want and then leveraging the technology to help them achieve that.
With the rapidly changing advances in AI, we always have to ask ourselves:
Do we try to squeeze more out of our current approach by tweaking parameters and data? Or do we scrap it and replace it with the SOTA (state-of-the-art, i.e. best currently performing) approach?
Our researchers always have to be on top of the latest advances while staying pragmatic. They cannot get distracted by the shiny new thing; there are so many shiny new things in AI research that chasing them all would mean getting nothing done!
So it’s a fine balance between being aware of the latest research and knowing when to deploy it!
It helps to constantly be answering: what’s in it for the customer?
An accuracy improvement of 5% or even 10% might get you published and cited, but it’s meaningless for a customer. On the other hand, small wins with a short turnaround are worthwhile. Each quarter, we try to have a mix of big bets, smaller fixes, library/model updates, and improvements to our tooling.