Pricing and Packaging When You Don't Have Perfect Data
Every pricing conversation at an early-stage company starts the same way: someone says "we need more data before we can make this decision." And they're right, technically. You always need more data. But if you wait for perfect data, you'll never change your pricing, and your current pricing is almost certainly wrong.
I've built pricing models at companies where the data was incomplete, the sample sizes were small, and the pressure to get it right was enormous because every dollar of mispricing showed up directly in unit economics. Here's what I've learned about making pricing decisions when your data is imperfect, which is always.
Start with unit economics, not competitor benchmarks
The instinct is to look at what competitors charge and position yourself somewhere in the range. This is backwards. Competitor pricing tells you what the market will tolerate. It tells you nothing about whether your business can survive at that price point.
Start with your costs. What does it actually cost to acquire, onboard, serve, and retain one customer? Include everything: the sales team's time, the onboarding specialist's hours, the support tickets, the infrastructure costs, the account management overhead. Most early-stage companies have never done this math honestly.
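To make that math concrete, here is a minimal sketch of fully loaded per-customer cost accounting. Every number and cost bucket below is hypothetical, invented for illustration; the point is that each bucket gets counted, then the total is divided by the customer count.

```python
def cost_to_serve(customers: int, costs: dict) -> float:
    """Fully loaded monthly cost per customer: every cost bucket
    summed, then divided by the number of customers served."""
    return sum(costs.values()) / customers

# Hypothetical monthly figures -- not real data.
monthly_costs = {
    "sales_time": 12_000,     # prorated sales-team hours
    "onboarding": 6_000,      # onboarding specialist
    "support": 4_500,         # support tickets
    "infrastructure": 3_000,  # hosting and tooling
    "account_mgmt": 5_500,    # account management overhead
}

per_customer = cost_to_serve(customers=40, costs=monthly_costs)
revenue_per_customer = 600    # hypothetical monthly revenue

print(f"cost to serve: ${per_customer:,.2f}/customer/month")
print(f"unit margin:   ${revenue_per_customer - per_customer:,.2f}")
```

With these placeholder numbers the margin comes out negative, which is exactly the kind of finding the honest aggregation is meant to surface.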
At one company, I ran this analysis and discovered that every single customer was operating at negative unit economics. The cost to serve exceeded the revenue per customer. Leadership didn't know because nobody had aggregated the full cost picture. That single finding changed the entire pricing strategy and potentially saved the company from scaling itself into bankruptcy.
The point isn't that your unit economics will be positive on day one. At early-stage companies, they often aren't. The point is that you need to know the gap so you can price with intention rather than guessing.
Build a sensitivity model, not a pricing model
A pricing model gives you one answer. A sensitivity model gives you a range of answers and shows you which assumptions matter most. At an early-stage company, the sensitivity model is far more valuable because your assumptions are uncertain.
The structure is straightforward. Build a spreadsheet where the inputs are your key assumptions: customer acquisition cost, average contract value, cost to serve, expected retention rate, expansion rate. Then model what happens to your margins when each assumption changes by 10%, 20%, 50%.
What you'll find is that some assumptions matter enormously and others barely move the needle. Maybe your pricing is highly sensitive to retention rate but barely sensitive to acquisition cost. That tells you where to focus: invest in retention infrastructure before you optimize your sales funnel.
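The spreadsheet described above can just as easily be a short script. This is a sketch with made-up base assumptions (note it models churn rather than retention, so a 50% perturbation stays well-defined); the structure, perturb each input and watch the output move, is the point.

```python
def contribution(a: dict) -> float:
    """Lifetime contribution per customer: monthly margin times
    expected lifetime in months (1 / churn), minus one-time CAC."""
    lifetime = 1.0 / a["churn"]
    return (a["acv_monthly"] - a["cost_to_serve"]) * lifetime - a["cac"]

# Illustrative base assumptions, not real data.
BASE = {"acv_monthly": 800.0, "cost_to_serve": 400.0,
        "churn": 0.05, "cac": 3_000.0}

base_value = contribution(BASE)
for key in BASE:
    for pct in (0.10, 0.20, 0.50):
        # Perturb one assumption at a time, holding the rest fixed.
        perturbed = dict(BASE, **{key: BASE[key] * (1 + pct)})
        delta = contribution(perturbed) - base_value
        print(f"{key:>14} +{pct:.0%}: contribution moves {delta:+,.0f}")
```

Running this on the placeholder numbers shows the asymmetry the article describes: a 10% swing in churn moves the result far more than a 10% swing in CAC.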
Use engagement data as a proxy for willingness to pay
When you don't have enough customers to run a proper willingness-to-pay study, engagement data is your best proxy. Customers who use your product heavily are telling you, through their behavior, that they're getting value. Customers who log in once a month are telling you they're not.
Segment your customers by engagement level and look at the patterns. What features do your most engaged customers use? What outcomes are they achieving? What would they lose if they stopped using your product tomorrow?
This gives you the foundation for value-based packaging. Your most engaged customers are likely underpriced. Your least engaged customers might be overpriced, or more likely, they're on the wrong plan entirely. The solution isn't always to charge more. Sometimes it's to restructure your packages so that customers self-select into the tier that matches how they actually use the product.
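A minimal version of this segmentation fits in a few lines. The customer records and tier thresholds below are invented for illustration; what the output surfaces is the flat-price smell, heavy and light users paying the same amount.

```python
from collections import defaultdict

# Hypothetical records: (name, monthly_logins, monthly_revenue).
customers = [
    ("acme", 42, 500), ("globex", 38, 500), ("stark", 25, 500),
    ("hooli", 8, 500), ("initech", 3, 500), ("wayne", 2, 500),
]

def tier(logins: int) -> str:
    """Bucket customers by engagement; thresholds are assumptions."""
    if logins >= 20:
        return "high"
    if logins >= 5:
        return "medium"
    return "low"

buckets = defaultdict(list)
for name, logins, revenue in customers:
    buckets[tier(logins)].append(revenue)

for t, revenues in sorted(buckets.items()):
    avg = sum(revenues) / len(revenues)
    print(f"{t:>6}: {len(revenues)} customers, avg ${avg:,.0f}/mo")
```

If every tier shows the same average revenue, as it does here, engagement and price are disconnected, which is the signal to restructure packages so customers self-select into the right tier.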
Price for the customer you want, not the customer you have
Early-stage companies often price low to win their first customers. That's a reasonable strategy for getting initial traction. The problem is that those early prices become anchors. Your first 20 customers are paying $500 a month, so your 21st customer expects the same, even though you've tripled the value of the product since then.
The fix is to separate your pricing strategy for new customers from your pricing for existing customers. New customers should be priced based on the current value of the product. Existing customers can be grandfathered or migrated gradually. Trying to maintain a single price point that works for both groups is how you end up with a pricing structure that serves nobody well.
Test with new customers, not surveys
Surveys about pricing are unreliable. People will tell you they'd pay $200 a month for something, then balk when you actually charge them $150. The only reliable signal is whether someone actually pays.
If you're considering a price increase, test it on new customers first. Change the price on your website or in your sales conversations and see what happens to conversion rates. You need a smaller sample size than you think. If you close 10 deals at the new price with roughly the same conversion rate, that's a strong enough signal to proceed.
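The decision rule behind "roughly the same conversion rate" is worth writing down before the test starts. This sketch uses placeholder deal counts and an assumed tolerance band; the numbers are illustrative, the pre-committed threshold is the idea.

```python
def conversion(closed: int, leads: int) -> float:
    """Simple close rate over a cohort of new leads."""
    return closed / leads

# Hypothetical cohorts: new leads only, never existing customers.
old = conversion(closed=12, leads=60)   # close rate at the old price
new = conversion(closed=10, leads=55)   # close rate at the new price

# Pick the tolerance before looking at results, e.g. "proceed if the
# new close rate is at least 80% of the old one." 0.80 is an assumption.
TOLERANCE = 0.80
proceed = new >= old * TOLERANCE
print(f"old {old:.1%}, new {new:.1%} -> proceed: {proceed}")
```

Committing to the threshold up front keeps you from rationalizing a bad result after the fact, in either direction.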
The fear of testing higher prices is almost always worse than the reality. In my experience, the most common outcome of a price increase is that conversion barely moves. The customers who were going to buy still buy. The ones who weren't still don't. And your revenue per customer goes up.
The framework
When I need to make a pricing decision without perfect data, I follow this sequence: calculate true unit economics first, build a sensitivity model to identify which assumptions matter, use engagement data to understand where value is concentrated, set new prices based on current value delivery, and test with new customers before rolling out broadly.
None of this requires a data science team or a six-month research project. It requires a spreadsheet, honest cost accounting, and the willingness to make a decision that might be wrong. Because the only pricing decision that's guaranteed to be wrong is the one you never revisit.