AI-Native PMF #1: Question Base — AI's “Very Real” Cold Start Problem, Managing High-Stakes Hallucinations, and Balancing Adoption and Costs with a Freemium Strategy


Editor’s note: This new Relay by Chargebee series features field notes from AI-native SaaS founders as they tinker, build, and monetize their paths towards a new kind of product-market fit playbook. Stay tuned.

What we’re building at Question Base
The challenges around building and selling an AI-first product
Why we don’t force-fit our product interface
Managing AI hallucinations (and trust) in high-stakes contexts
Thinking through platform risks and potential threats from Slack AI
Why we are sticking with seat-based pricing, for now
Offering a free plan as an AI solution: tested benefits and peculiarly new guardrails

What we’re building at Question Base

Question Base is a documentation tool for the age of AI. We convert conversational data into organized, accessible knowledge for the team, building the most accurate and up-to-date layer of information in a company.

We integrate deeply with Slack. Not just because we love Slack, but also because we see immense potential in it. We see it morphing into a significant backbone for businesses, going far beyond how it is used today.

There are two kinds of companies that love us.

Larger enterprises with 500+ employees that have existing documentation, but whose teams have lost faith in its quality and simply don’t refer to it.

For these companies, our value proposition is to help bring the answers from these often extensive, unused resources directly into Slack (via the Question Base bot).

And, as a second benefit, we enable each user to enrich the documentation with up-to-date info, all via Slack, as they chat through their usual day.


The other type of companies that love us are small organizations with 20 to 100 people. They don’t necessarily have documentation and are heavily dependent on their day-to-day communication for questions and information.

They operate in remote teams, exchanging a lot of valuable know-how in Slack, but all that ends up getting lost.

For these organizations, enabling knowledge capture has been our primary benefit — something they find incredibly valuable because it distributes the responsibility and diversifies the sources of documentation.

This seems to be a problem people are eager to be rid of because documentation has historically been a pain to maintain.

LLMs don’t solve the problem by themselves, as the underlying documentation they reference is often riddled with duplicates and outdated information.

The challenges around building and selling an AI-first product

While the race in AI is on, so are the speculations. We have been building in this space for a few years now and have seen some very interesting patterns emerge.

One of them, not new but still true, is the power that data and distribution will hold in the long run. The second is increasing pressure to deliver “immediate gains.”

This can be a difficult challenge to tackle.

I shared my thinking on the former in a past Relay contribution:

"One strategy that can be used to gain a defensible position is to own your own data. Startups that leverage [OpenAI’s] and other models to apply to someone else’s data are in a very precarious situation.

This model may be beneficial in the short term, as it was for us when we used generative AI to analyze Slack history channels. But for companies to remain competitive in the long run, they need to have their own system of record to collect data in and drive value from it.

There are various ways of building a new system of record out of existing data. You need to choose what is your edge and the business case you’re solving.

It is similar to the analytics space where the same user events can be sent to different platforms for different purposes: visualization, analysis, re-targeting etc. With AI we can now achieve that for company documentation.

Some tools will be best at giving summaries, others at giving answers, others at extracting correlations and data etc. We choose to focus on turning company information into Questions & Answers and being the best at scaling employee know-how across the organization."

The latter is an urgent challenge for good reasons, too:

First, the budgets for new tools have significantly shrunk; remaining budgets are being allocated strictly based on what can create immediate value.

Second, the cold start problem is very real.

For a product like ours, that means building an asset people can see and experience as fast as possible. As a layer of data on top of a platform, we are dependent on getting access to either Slack history or some existing documentation.

With a few clicks people can convert a chat channel history into a neatly organized FAQ of useful knowledge.

They can export it into their Notion or Confluence and now have an auto-updating, self-building knowledge base while their team simply chats.

If we don’t get access to any prior knowledge, then the bot can only build that knowledge base going forward.

This creates a delay in experiencing the product value, so we continuously optimize the onboarding to drive as many companies as possible through the success loop.
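As a rough illustration of that conversion step, here is a minimal sketch of how chat history might be distilled into a draft FAQ. This is not Question Base’s actual pipeline; it assumes the slack_sdk and openai packages, and the prompt, model name, and helper are placeholders.

```python
# Illustrative only: pull one channel's history and ask an LLM to distill it into Q&A.
# Assumes SLACK_BOT_TOKEN and OPENAI_API_KEY are set in the environment.
import os
from slack_sdk import WebClient
from openai import OpenAI

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def channel_history_to_faq(channel_id: str, max_messages: int = 500) -> str:
    """Fetch recent messages from one channel and return a drafted FAQ as Markdown."""
    history = slack.conversations_history(channel=channel_id, limit=max_messages)
    transcript = "\n".join(
        m.get("text", "") for m in history["messages"] if m.get("text")
    )
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Extract recurring questions and their best answers from this "
                        "Slack transcript. Return them as a Markdown FAQ."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The resulting Markdown could then be exported to Notion or Confluence through their own APIs, which is the hand-off described above.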

We still believe that being THE database of verified know-how is going to be very valuable in the long-term, and we are still building towards that goal.

This means that when we extract knowledge from a conversation, we mark it as an “AI-generated” answer and drive the experts on the team to verify it. Last year, we went to the extreme of not showing AI-generated answers to employees when they asked questions.

But now we are prioritizing “immediate gains.”

Beyond training the bot on Slack history, we are also integrating our product with conversational knowledge shared through calls, and with existing documentation via Notion, Box, Confluence, Intercom, and others.

This helps users interact with their existing knowledge using the simplest medium of chat, and in doing so, also helps orgs keep their documentation up to date.

We make it darn easy for experts to resolve duplicates and verify answers so a rich verified knowledge base develops in parallel.
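To make the capture-and-verify flow concrete, here is a hypothetical data model, not Question Base’s actual schema, in which every captured answer carries provenance metadata and a status, so AI-generated drafts stay clearly separated from expert-verified knowledge.

```python
# Hypothetical sketch of an answer record with provenance and verification status.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class AnswerStatus(Enum):
    AI_GENERATED = "ai_generated"   # extracted from a conversation, not yet reviewed
    VERIFIED = "verified"           # confirmed by an expert on the team
    DUPLICATE = "duplicate"         # merged into another answer during cleanup

@dataclass
class Answer:
    question: str
    answer: str
    author: str                                  # who originally shared the knowledge
    status: AnswerStatus = AnswerStatus.AI_GENERATED
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    verified_by: str | None = None

    def verify(self, expert: str) -> None:
        """An expert confirms the answer, promoting it to verified knowledge."""
        self.status = AnswerStatus.VERIFIED
        self.verified_by = expert
```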

Why we don’t force-fit our product interface

People are tired of new workflows.

This is why, instead of force-fitting our platform into users’ workflows, we decided to deepen our integration with Slack and utilize it as our primary interface.

This wasn’t an easy call. I see founders racing to get users and unique datasets on their own platforms. I get it. That is how things have been.

Things are different this time, though, and I have a hunch that people’s workflows with most AI solutions are going to be more and more centralized around their current communication flow.

You’re probably going to have to integrate and be adjacent to an already-central tool that your ICP uses. For us, this already-central tool is Slack. Of course, this has evolved through a number of iterations and experiments.

Managing AI hallucinations (and trust) in high-stakes contexts

I think founders building in AI should pay keen attention to AI hallucination — even in low-stakes contexts, but especially in high-stakes contexts.

Most of us, as users, have experienced blatantly inaccurate responses with our go-to, everyday AI companions.

If there are multiple conflicting answers within the same set of data, your AI might not know which one to prioritize, but it will still send a very believable answer.

This is something we are extremely careful about as AI-native builders.

We work by flipping things a little bit. Imagine that you share an update about the product today; our AI asks whether you’d like to save this information and enriches it with your name, a timestamp, and other metadata.

The next time somebody asks a related question, our product will bubble up a response that’s verifiable instead of letting generative AI loose to “figure out” an answer.

We also ensure that, for a company in a high-stakes space like insurance, we only enable AI answers when they have well-documented product know-how within their help center; otherwise, we only serve human-verified answers.
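A minimal sketch of that serving policy, reusing the Answer and AnswerStatus types from the sketch above: verified answers always win, and purely AI-generated drafts are only returned when the workspace has opted in. The function name and the opt-in flag are assumptions for illustration.

```python
# Illustrative answer-serving policy: verified first, AI drafts only when allowed.
from typing import Optional

def pick_answer(candidates: list[Answer], allow_ai_answers: bool) -> Optional[Answer]:
    """Prefer human-verified answers; fall back to AI-generated ones only if allowed."""
    verified = [a for a in candidates if a.status is AnswerStatus.VERIFIED]
    if verified:
        # Most recently created verified answer wins, assuming it is the freshest.
        return max(verified, key=lambda a: a.created_at)
    if allow_ai_answers:
        drafts = [a for a in candidates if a.status is AnswerStatus.AI_GENERATED]
        if drafts:
            return max(drafts, key=lambda a: a.created_at)
    return None  # better to stay silent than to guess in a high-stakes context
```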

Interestingly enough, this has also been a great positioning angle in our conversations with operators in similarly high-stakes sectors such as finance, legal, medical, etc.

Products in these sectors simply cannot afford to pass on the wrong information because the consequences are huge. It can hurt customer experience, a company’s reputation, and, at worst, can even be a threat to somebody’s wellbeing.

For instance, we had an insurance company come to us after having used a competitor solution and running into a thorny situation where the AI had surfaced an incorrect answer to a customer question on an insurance policy.

The repercussion was that the company entered a policy where they might have to endure many thousands of dollars in losses. Cancelling the policy and explaining the AI’s mistake to the customer can lead to losing market trust, given how strong word of mouth can be.

The bigger, scarier issue here is: who do you hold accountable?

If it was a human agent that made the mistake and shared the wrong number, you could go talk to them, train them, and take the loss as part of your onboarding or training cost.

But if it’s an invisible AI engine that generated that mistake, nobody’s accountable and you don’t even know how to fix it going forward.

Thinking through platform risks and potential threats from Slack AI

I think the market is huge right now, and coupled with the existing and future offerings from Slack AI, you are definitely only going to solve a portion of the problem.

We are helping turn unstructured knowledge into a searchable, verified knowledge base, whereas Slack AI is focusing on better enterprise search, recaps, and tasks.

The hypothetical platform threat remains.

Today, we are selling to a different customer base than Slack AI. Whether we would grow up-market towards the customers they’re going after or they’ll decide to go down-market to eat into our pie is yet to be seen.

I can foresee a future where things will merge in some way, but it’s just too hypothetical given how versatile AI is.

If the incumbents play their cards right, there is a possibility that everything converges, because the cost of implementing a solution or an additional feature/use case is so low.

In some ways, you can already see ClickUp, Slack, Notion, and others working towards becoming the command center of work.

It’s so hypothetical that I think the only way to even get to that future is by working on a current workflow that adds value to people. Down the line we could all expand and change in a billion ways.

The winners will be the ones who manage to gain a UX advantage and build the habit of employees working from their platform.

Why we are sticking with seat-based pricing, for now

Right now, we use seat-based pricing.

It’s a familiar model that customers are used to, especially since platforms like Slack AI and Notion AI use it as well. This pricing positions our bot as a natural add-on to a team’s conversational experience in Slack.

Our offering is priced at about half of Slack AI, with slightly different capabilities, making it easy for users to understand its value and fit it alongside their existing Slack subscription.

In larger companies, we’ve noticed deployments often start with a single use case — like answering product-related questions. In these cases, we experiment with custom pricing tailored to the specific channels or teams using the bot.

As adoption grows and expands into other functions like HR or IT, we transition toward company-wide seat-based pricing.

This evolution allows us to align our pricing with the increasing value we deliver across the organization while scaling naturally.

Offering a free plan as an AI solution: tested benefits and peculiarly new guardrails

Offering a free plan as part of an AI tool has its challenges, particularly in knowledge management, where we connect to databases (often external ones) hosting vast amounts of data.

There are inherent limitations when balancing the costs of delivering value for free and maintaining a sustainable business model.

We’ve experimented extensively to find the right balance.

Our goal is to lower the barrier to entry, allowing teams to experience the product without upfront commitments, while ensuring we don’t overextend ourselves financially.

For example, in some cases, companies onboard and integrate tools like Confluence, and the AI crunches decades-old data, leading to integration costs exceeding $600. If they don’t convert to a paid plan due to external factors, we’re left footing the bill.

To mitigate this, we’ve designed a free plan that offers users a meaningful experience without incurring unpredictable costs.

The free plan includes saving up to 100 answers in their database and crunching their Slack history for a limited period. However, integrations and other services that can result in high, variable costs are now reserved for the paid plan.
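One way to think about those guardrails is as plan-level limits that keep variable-cost features behind the paid tier. The sketch below is purely illustrative: the class, the field names, and all numbers except the 100-answer cap mentioned above are assumptions, not Question Base’s actual limits.

```python
# Hypothetical free-plan guardrail config; values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanLimits:
    max_saved_answers: int       # hard cap on answers stored in the knowledge base
    slack_history_days: int      # how far back the bot may crunch Slack history
    external_integrations: bool  # Notion/Confluence/etc. syncs can incur large costs

FREE_PLAN = PlanLimits(max_saved_answers=100, slack_history_days=30,
                       external_integrations=False)
PAID_PLAN = PlanLimits(max_saved_answers=10_000, slack_history_days=365,
                       external_integrations=True)

def can_save_answer(plan: PlanLimits, saved_so_far: int) -> bool:
    """Gate knowledge capture against the plan's answer cap."""
    return saved_so_far < plan.max_saved_answers
```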

We are continuously looking into how to bring the most possible value upfront within a limited experience.

Overall, working on the problem of turning conversational data into knowledge is a fascinating and hard challenge. We spent 2024 running multiple experiments on how to drive the most value from human and AI-assisted collaboration.

Now, we have that UX working very neatly.

So in 2025, we are 100% focused on expanding to new types of conversational data (calls, emails, and audio messages) and growing the value to more customers.
