
How to use data analytics and AI to grow your business

You can build all the dashboards and models you want. But if they're not moving the needle on revenue, retention, and margin, you're just shipping compute invoices.


Most data teams can point to a dashboard that nobody opens anymore. A model that took weeks to build, used once, then forgotten. A Slack thread where someone asked "can we trust this number?" and the answer took three days. The work got done, but did it help with any tangible business outcomes?

According to McKinsey, companies that intensively use customer analytics are 23 times more likely to outperform competitors in customer acquisition and almost 19 times more likely to achieve above-average profitability. Meanwhile, according to Hex's State of Data Teams report, AI went from 4% to 27% as a top goal for data teams in just one year — a 575% increase as teams shift from cautious experimentation to active implementation.

This guide is for data leaders, analytics engineers, and PMs who know analytics should move the needle but struggle to prove it. It shows how other teams design workflows, models, and governance so that analytics and AI actually move the metrics that matter.

Why is it hard to connect analytics to business results?

Traditional analytics workflows create distance between insight and action. The tools, team structures, and incentives that worked for reporting don't work for decision-making.

Speed doesn't match the pace of decisions. Most analytics orgs are staffed for throughput, not latency. A PM has a question Monday morning. They file a ticket or ping Slack. An analyst picks it up when they can. By the time the answer arrives, the meeting has already happened. The decision was made without the data. There's no way to offload the simple questions, so the backlog grows.

Definitions live in people's heads, not systems. Two people run similar queries and get different numbers. "I'm showing 847 accounts." "Weird, I got 912." The problem isn't the query — it's that "account" means something different depending on who's asking and which table they hit. Every analysis requires institutional knowledge. That doesn't scale.
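To make the "847 vs. 912" problem concrete, here is a minimal sketch using an in-memory SQLite database. The `accounts` table, its columns, and both queries are hypothetical, but the point is general: both analysts write correct SQL and still disagree, because they carry different definitions of "account" in their heads.

```python
import sqlite3

# Hypothetical schema: rows can be active, trials, churned, or soft-deleted.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id INTEGER PRIMARY KEY,
        status TEXT,      -- 'active', 'trial', 'churned'
        deleted_at TEXT   -- NULL unless soft-deleted
    )
""")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)", [
    (1, "active", None),
    (2, "active", None),
    (3, "trial", None),
    (4, "churned", None),
    (5, "active", "2024-03-01"),  # soft-deleted, but still in the table
])

# Analyst A's definition: "an account is any non-deleted row"
a = conn.execute(
    "SELECT COUNT(*) FROM accounts WHERE deleted_at IS NULL"
).fetchone()[0]

# Analyst B's definition: "an account is an active, non-deleted customer"
b = conn.execute(
    "SELECT COUNT(*) FROM accounts "
    "WHERE status = 'active' AND deleted_at IS NULL"
).fetchone()[0]

print(a, b)  # 4 vs 2 -- both queries are right; the definitions differ
```

Neither query is buggy, which is exactly why the discrepancy takes three days to resolve: the fix is a shared definition, not better SQL.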

Dashboards show what happened, not why. Conversion dropped 15% last month. Okay — but why? The dashboard can't tell you. It was built to monitor, not investigate. So the analyst digs in, slices by channel, then by segment, then by cohort. Each follow-up takes another round trip. The moment you need to go deeper, you're starting over in a different tool.

These problems aren't going away on their own. They're structural mismatches between how analytics teams work and how decisions actually get made.

What changes when AI enters the workflow?

AI analytics compresses the loop between question and answer. Instead of waiting for an analyst to translate a business question into SQL, you type the question directly. Instead of guessing which table contains the data you need, the AI queries a context layer that already defines what "revenue" and "active customer" mean.

Here's what that looks like:

|  | Traditional analytics | AI analytics |
| --- | --- | --- |
| Process | Analyst writes SQL to filter by region, then product, then segment. Each follow-up spawns a new query. | You type the question. The system queries the data, identifies anomalies, and surfaces contributing factors. |
| Schema knowledge | Analyst needs to know table names, joins, and column meanings. | AI queries semantic layer definitions; analyst reviews generated SQL. |
| Time to answer | Hours to days, depending on analyst availability. | Minutes, with SQL visible for verification. |
| Who can initiate | SQL-fluent analysts. | Anyone with access; analysts get better results by reviewing generated queries. |

Speed matters, but it's not the main thing. The deeper shift is in how data teams spend their time. Instead of processing "can you pull this?" requests, they define the metrics and governance that make AI answers trustworthy. The context layer becomes the product; the AI becomes the interface.

AI doesn't replace data teams. It gives them more leverage to deliver insights to the business. When a PM can type "why did trial conversions drop last week?" and get a starting point in two minutes, the data team's backlog shrinks. When the same PM can inspect the SQL and see exactly which tables were queried, trust builds faster than with black-box dashboards.

Where do teams go wrong?

AI queries faster but can't fix bad data. If "revenue" means different things in different tables, AI confidently gives wrong answers. Clean foundations matter more, not less, when AI makes data more accessible.

The most common mistake is skipping the context work. Teams that don't govern their definitions find AI makes their problems worse, not better. Inconsistent definitions lead to inconsistent answers. AI just delivers them faster. You don't need a full semantic model on day one. Start by endorsing the tables your team queries most and adding descriptions that give agents and analysts useful context. Add workspace rules for organizational standards. Then formalize with semantic models as your definitions mature. The point is to have the five to ten metrics that matter most defined precisely before opening AI access broadly.
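"Defined precisely" can be as lightweight as a registry that every query path, human or AI, has to go through. The sketch below assumes nothing about any particular semantic-layer product; the metric names, SQL fragments, and table names are illustrative.

```python
# A minimal metric registry: one canonical definition per critical metric,
# with an owner and a human-readable description that doubles as AI context.
METRICS = {
    "revenue": {
        "sql": "SUM(amount)",
        "table": "orders",
        "where": "status = 'closed_won'",
        "owner": "finance",
        "description": "Recognized revenue from closed-won orders only.",
    },
    "active_customers": {
        "sql": "COUNT(DISTINCT customer_id)",
        "table": "events",
        "where": "event_date >= date('now', '-30 day')",
        "owner": "product",
        "description": "Customers with activity in the trailing 30 days.",
    },
}

def metric_query(name: str) -> str:
    """Build the one blessed query for a metric, or fail loudly."""
    if name not in METRICS:
        raise KeyError(
            f"'{name}' is not a governed metric -- define it before querying"
        )
    m = METRICS[name]
    return (
        f"SELECT {m['sql']} AS {name} "
        f"FROM {m['table']} WHERE {m['where']}"
    )

print(metric_query("revenue"))
```

The failure mode it prevents is the important part: an ungoverned metric name raises an error instead of silently compiling into whatever the column names suggest.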

The second mistake is self-service without guardrails. When anyone can query anything, you risk metric proliferation, security violations, and flawed analyses. Data teams should curate which tables and metrics AI can access, and implement review flows for queries touching sensitive data.

And the third is treating AI output as decisions. AI surfaces patterns but lacks business context. A recommendation that optimizes one metric might damage customer relationships. The best AI tools make their reasoning inspectable so users can apply judgment before acting.

One more: expecting tools to change culture. AI-assisted analytics lowers barriers to insight, but organizational factors matter just as much. Data literacy, leadership support, and incentive structures determine whether faster answers actually translate to better decisions. A team with great tools but no habit of using data will underperform a team with basic tools and strong analytical culture.

What does this look like in practice?

Unblocking self-serve at PandaDoc

PandaDoc's team builds software for the full document lifecycle, from creation to e-signature. Before Hex, 80% of analyst time went to repetitive data requests. Business users who couldn't wait turned to AI chatbots — and got inconsistent numbers.

After connecting Hex Threads to their Cube semantic models via Semantic Sync, business users could ask questions in plain language and get SQL-backed answers they could trust. Analysts could inspect every query. Click-through rate data that took 20 minutes to pull manually now takes five.

What made it work: the semantic layer was already doing the definitional work — Threads just made it accessible. Without that foundation, natural language queries return whatever the column names suggest, not what the business actually means.

Operationalizing churn predictions at ClickUp

ClickUp's data team had the opposite problem. They'd already built a churn prediction model combining product usage signals with support data to score accounts by risk. The model worked. Getting it into the hands of people who could act on it? That was the hard part.

Before: predictions lived in BI dashboards too high-level for customer success to use. After: the team published the model as an interactive data app in Hex. CS managers could filter by segment, drill into individual accounts, and see which factors drove each risk score.

The result, according to Hex's case study: ClickUp saved more than $1M in churn, with CS focusing outreach on accounts with the highest revenue return potential. Data science built the model, but the app — not the model — became the interface between prediction and action.

Embedding insights in the workflow at Ramp

"Who are our customers?"

At Ramp, a finance automation platform, even that basic question was hard to answer. No single source of truth. After building data foundations in Hex, an analytics engineer built a Customer Feedback 360 app with sentiment analysis entirely in SQL, pulling from support tickets, NPS surveys, and product feedback.
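Ramp's actual implementation isn't public, but "sentiment analysis entirely in SQL" can be as simple as keyword scoring over a unified feedback table. The sketch below is a hypothetical, simplified version run through SQLite: the `feedback` table and keyword lists are assumptions for illustration.

```python
import sqlite3

# Hypothetical table unioning support tickets, NPS verbatims, and product notes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (account TEXT, text TEXT)")
conn.executemany("INSERT INTO feedback VALUES (?, ?)", [
    ("acme", "Love the product, export works great"),
    ("acme", "Billing page is confusing and slow"),
    ("globex", "Support was slow to respond, frustrating"),
])

# Per-account sentiment: +1 for each row with a positive keyword,
# -1 for each row with a negative keyword. Pure SQL, no ML dependency.
scores = conn.execute("""
    SELECT account,
           SUM(CASE WHEN lower(text) LIKE '%love%'
                     OR lower(text) LIKE '%great%' THEN 1 ELSE 0 END)
         - SUM(CASE WHEN lower(text) LIKE '%slow%'
                     OR lower(text) LIKE '%confusing%'
                     OR lower(text) LIKE '%frustrating%' THEN 1 ELSE 0 END)
           AS sentiment
    FROM feedback
    GROUP BY account
    ORDER BY account
""").fetchall()
print(scores)
```

A keyword approach is crude compared to a language model, but it is transparent, cheap to run in the warehouse, and easy for an account manager to interrogate, which matters more when the score is embedded in a CRM.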

The app embeds directly in Salesforce and Notion. Account managers see sentiment scores without leaving their workflow. Feedback loops that once took days now take hours.

The lesson: an app that lives only in the data team's tooling doesn't change behavior. An app embedded in the CRM, the CS platform, or the weekly review deck becomes part of the operating rhythm.

What separates tools that work from tools that don't?

Not all "AI-assisted" tools deliver on the promise. Some bolt a ChatGPT wrapper onto dashboards and call it a day.

The first question to ask: can you see the SQL? When AI generates a query, you should see exactly what it wrote — the tables accessed, the filters applied, the joins used. Black-box answers erode trust over time. Technical users need to verify; stakeholders need to explain how numbers were calculated. If you can't inspect it, you can't trust it.

The second: does it use your definitions? The AI should query your semantic layer, not guess from column names. When your semantic layer specifies "monthly recurring revenue," that definition should apply everywhere — whether a data scientist queries it in Python or a PM asks in plain English. Tools that force you to maintain a separate set of AI-specific definitions create more problems than they solve.

And the third: do your access controls still apply? A sales rep asking about their territory should see their territory's data and nothing else. If your governance model breaks the moment someone asks a natural language question, you don't have governance — you have a security gap.
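One way to keep that guarantee is to enforce the filter outside the AI layer entirely: whatever SQL the model generates gets wrapped in a territory-scoped query before execution. The sketch below is an assumption-laden illustration, with a hypothetical `USER_TERRITORY` mapping and a `deals` table, and it requires the generated query to expose a `territory` column.

```python
import sqlite3

USER_TERRITORY = {"sam": "EMEA", "ana": "AMER"}  # hypothetical mapping

def run_as(user: str, generated_sql: str, conn: sqlite3.Connection):
    """Execute AI-generated SQL with a row-level territory filter enforced
    by the platform, not left to the model's goodwill."""
    territory = USER_TERRITORY[user]
    scoped = f"SELECT * FROM ({generated_sql}) WHERE territory = ?"
    return conn.execute(scoped, (territory,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deals (territory TEXT, amount INTEGER)")
conn.executemany("INSERT INTO deals VALUES (?, ?)",
                 [("EMEA", 100), ("AMER", 250), ("EMEA", 40)])

# The "generated" query asks for everything; the wrapper narrows it.
print(run_as("sam", "SELECT territory, amount FROM deals", conn))
```

The design point: even a prompt-injected or hallucinated query can't widen a rep's view, because the scoping happens after generation and before execution.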

Hex is built around these principles: SQL, Python, and natural language exploration in a unified workspace with governance that makes analytics trustworthy. Notebook Agent helps data teams go from question to analysis faster. Threads lets business users ask questions conversationally while data teams maintain control through semantic models and endorsed data sources.

How can you get started?

Start with questions, not platforms. What decisions would you make differently with better data? Which metrics tell you whether you're winning or losing? The teams that succeed pick two or three high-value analyses and get those working well before expanding. CRM and financial data first, then product analytics, then everything else. Many teams try to build comprehensive data platforms on day one. Most of those projects stall.

Once you know which questions matter, invest in governed context before opening AI access. Conflicting numbers destroy trust. Start by endorsing trusted tables and adding warehouse descriptions so analysts and AI agents know which data to use. Establish ownership and create single definitions for critical metrics like "revenue" and "active customer." As those definitions stabilize, formalize them in semantic models — authored directly or synced from dbt, Cube, or Snowflake. In Hex's State of Data Teams report, trust is the top concern around AI adoption, cited nearly twice as much as any other barrier. The investment feels tedious, but every hour spent on context governance saves many hours of debugging confidently wrong AI answers later.

Then build data into your operating rhythm. A weekly growth review that starts with five key metrics, surfaces three questions worth investigating, and commits to one experiment creates more value than a monthly all-hands where executives present charts nobody acts on. The data team's role isn't to present findings — it's to have findings ready when questions arise.

Finally, measure adoption, not implementation. Track whether more people access data, whether time-to-insight decreases, whether decisions cite evidence. If your data warehouse is perfectly configured but nobody uses it, you haven't succeeded.

Moving from "we track metrics" to "analytics shapes decisions" takes sustained investment, especially in larger organizations. Teams that succeed start with specific questions, invest in semantic quality early, build capabilities progressively, and treat analytics as organizational change, not a technology project.

Hex is an AI-assisted platform where data teams and business users work side-by-side. Try it free or request a demo.

Get "The Data Leader’s Guide to Agentic Analytics"  — a practical roadmap for understanding and implementing AI to accelerate your data team.