

5 Product metrics that actually drive growth (not vanity metrics)

As the Head of Product at Hex, I manage the amazing people who manage all of our product areas. We've got a big product surface: data analysis, visualization, collaboration features, and more.

Most product teams ship feature after feature without knowing if anything improves. They track vanity metrics and treat shipping itself as the key outcome. They build what the loudest customers request. They prioritize on gut feel. Then executives ask for results, and it takes a week to get an answer.

I used to do this too. Until I realized I was running a feature factory, not a strategic product organization.

The product metrics that actually matter at Hex

Working with the team, we've identified several core metrics that tell me if our investment portfolio needs to change.

1. Weekly active users

Our KPI as a product team is weekly active users on aggregate and across seat types to ensure that usage is growing. This ties directly to our users seeing value in their Hex access and to the way that we monetize our product.
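Computing WAU from a raw event log is straightforward. Here's a minimal sketch in pandas; the column names (`user_id`, `seat_type`, `event_ts`) are illustrative, not Hex's actual schema.

```python
# Sketch: weekly active users overall and by seat type, from a raw
# event log. Column names are illustrative, not Hex's actual schema.
import pandas as pd

events = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 3, 4],
    "seat_type": ["editor", "editor", "viewer", "editor", "editor", "viewer"],
    "event_ts":  pd.to_datetime([
        "2024-01-02", "2024-01-03", "2024-01-04",
        "2024-01-09", "2024-01-10", "2024-01-11",
    ]),
})

# Bucket each event into its calendar week
events["week"] = events["event_ts"].dt.to_period("W")

# Aggregate WAU: distinct users per week
wau = events.groupby("week")["user_id"].nunique()

# WAU split by seat type
wau_by_seat = events.groupby(["week", "seat_type"])["user_id"].nunique()

print(wau)
print(wau_by_seat)
```

In practice this would run against the centralized warehouse rather than an in-memory frame, but the aggregation is the same.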

2. Feature adoption and stickiness — together

We track adoption rate and the weekly-to-monthly active user ratio as one metric. Adoption without stickiness means nothing. If adoption were high but weekly usage low, we'd have an onboarding problem. If adoption were low but stickiness high, that would indicate a discoverability issue.

This directly changes where we invest. Features with high stickiness lead to expansion revenue and lower churn. Features with poor stickiness consume engineering time without business impact and indicate that we need to evaluate how to improve the experience or retire the feature.
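The two signals combine into one investment decision, which a small sketch makes concrete. The thresholds and the WAU/MAU definition of stickiness here are hypothetical, not Hex's exact internal definitions.

```python
# Sketch: reading adoption and stickiness together as one signal.
# Thresholds (0.4, 0.5) are hypothetical; "stickiness" here is the
# common WAU/MAU ratio.

def classify_feature(adoption_rate: float, wau: int, mau: int) -> str:
    """Combine adoption and stickiness into a single investment signal."""
    stickiness = wau / mau if mau else 0.0
    if adoption_rate >= 0.4 and stickiness >= 0.5:
        return "healthy: keep investing"
    if adoption_rate >= 0.4 and stickiness < 0.5:
        return "onboarding problem: users try it but don't return weekly"
    if adoption_rate < 0.4 and stickiness >= 0.5:
        return "discoverability problem: those who find it love it"
    return "re-evaluate: improve the experience or retire the feature"

print(classify_feature(0.6, 800, 1000))  # high adoption, high stickiness
print(classify_feature(0.2, 450, 500))   # low adoption, high stickiness
```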

3. Uptime trends, not just numbers

Hex sits at the core of many of our users' daily stacks and hosts business-critical data, so uptime is critical. "Over the last 3 months, how has our uptime evolved?" That's the question I ask, not "what's our uptime?"

If uptime is degrading over time, we need to invest more to get back to our target. That might mean investing less in new functionality and making sure we have the right processes in place to reduce downtime.

Enterprise customers expect 99.9 percent uptime. That's less than 9 hours of downtime annually. When you frame it that way, every incident matters.
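The arithmetic behind that framing is worth making explicit:

```python
# 99.9% uptime translates to roughly 8.76 hours of allowed downtime
# per year (0.1% of 8760 hours).
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(uptime_pct: float) -> float:
    """Annual downtime budget for a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(round(allowed_downtime_hours(99.9), 2))  # 8.76
```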

4. Revenue by product line

At the end of the day, understanding how much money your products make and how those products are sold is critical. We track revenue by product line and then go deeper to understand exactly where new product expansion is successful in the sales and customer adoption cycles. This helps us partner more effectively with the sales team to maximize revenue and adoption across customers.

5. Trial-to-paid patterns

Trial conversion is your product-market-fit report card. But it's not just the rate; it's understanding what drives conversion.

We once thought that better data connections would boost trial conversions, so we invested heavily in simplifying that flow, but conversion didn't budge.

Failed experiments are valuable because they teach you where the value actually is. That experiment taught us that users initially valued our demo data experience more than connecting their own data. We shifted focus and saw immediate impact. By tracking this top-line metric, we could make a measurable, meaningful impact on our growth.
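Finding what drives conversion usually means segmenting trials by behavior, for example, trials that used demo data versus trials that connected their own data. A minimal sketch with illustrative data:

```python
# Sketch: comparing trial conversion across behavioral segments.
# The data and segment names are illustrative, not real Hex numbers.
trials = [
    {"used_demo_data": True,  "converted": True},
    {"used_demo_data": True,  "converted": True},
    {"used_demo_data": True,  "converted": False},
    {"used_demo_data": False, "converted": False},
    {"used_demo_data": False, "converted": True},
    {"used_demo_data": False, "converted": False},
]

def conversion_rate(rows):
    """Fraction of trials in this segment that converted to paid."""
    return sum(r["converted"] for r in rows) / len(rows)

demo = [t for t in trials if t["used_demo_data"]]
own = [t for t in trials if not t["used_demo_data"]]
print(f"demo-data trials: {conversion_rate(demo):.0%}")  # 67%
print(f"own-data trials:  {conversion_rate(own):.0%}")   # 33%
```

If one segment consistently converts better, that's a signal about where to invest in the trial experience.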


See our template for tracking feature success.

Making this work without a massive data team

You need three things: a cloud data warehouse, event and sales tracking, and an analytics layer (like Hex) that your PMs can actually use (learn more about how you can use Hex for product analytics). We use Hex to explore our centralized data without waiting for the data team.

Once you set up your tool, start with one dashboard for your most common decisions. Don't overbuild. Then, push updates from that central dashboard to Slack on a monthly basis to keep all eyes on how you’re doing. Nothing fancy, just consistent accountability.
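The monthly Slack update can be as simple as a formatted summary string. The metric names and values below are hypothetical; delivery could go through a Slack incoming webhook or a scheduled Hex run.

```python
# Sketch: formatting a monthly metrics summary for Slack. Metric names
# and values are hypothetical. Delivery (e.g. posting to a Slack
# incoming webhook URL) is left out of this sketch.

def monthly_summary(metrics: dict) -> str:
    """Render metrics as a Slack-style message with month-over-month deltas."""
    lines = ["*Monthly product metrics*"]
    for name, (value, delta) in metrics.items():
        arrow = "up" if delta >= 0 else "down"
        lines.append(f"- {name}: {value} ({arrow} {abs(delta):.1%} MoM)")
    return "\n".join(lines)

msg = monthly_summary({
    "Weekly active users": (12400, 0.031),
    "Trial-to-paid rate": ("14.2%", -0.004),
})
print(msg)
```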

When you start tracking and sharing metrics, different stakeholders will want different views of product performance. Executives care about trade-offs. Engineers need technical metrics. Sales wants to know what helps them close deals. With Hex, we can easily present the same data through different lenses for these different teams.

Every metric should tell you if your solution fits the market

Every project you can measure is an opportunity to learn whether your hypothesis and solution fit the market. Most teams document successes; we document failures too. When something doesn't work, we share why. It saves us from repeating expensive mistakes.

We try to connect everything back to business outcomes. Uptime isn't just a percentage — it's retention. Feature adoption isn't just a number — it's revenue potential.

Before adding any metric, ask if this would change your decisions. If the answer is no, skip it. Dashboards full of metrics that seem interesting don't drive action.

From feature factory to strategic product organization

Hex is a critical management tool for maximizing our product investments, but tools are just tools. What really matters is the mindset shift: from shipping features to driving outcomes, from trying to make everyone happy to making strategic trade-offs, from waiting for perfect data tomorrow to acting on good-enough data today.

Product teams that systematically learn what works—and what doesn't—win in the long run. Everything else is just factory work.

Product moves faster when data is in the room. Learn why teams use Hex for product analytics.