
AI analytics is here. I said it.

Stop waiting for AI analytics to prove itself. It's waiting for you to make it work.

There’s a genre of post that does very well in the data community right now. Someone tried an AI analytics tool, it gave a wrong number, a VP acted on it for three months, and now the author is here to tell you AI in data is overhyped, not ready, and possibly dangerous.

These posts get a lot of traction. I understand why. The anxiety is real, the stakes are real, and there are genuinely bad tools out there. But I’ve started reading them differently.

What I see in almost every one of these stories isn’t a failure of AI. It’s a failure of setup — the same failure I’ve watched play out with data warehouses, with BI tools, with every piece of infrastructure the data world has ever introduced. When we skip the architecture work and blame the technology, we cut ourselves off at the knees.

The AI just makes it easier to blame. It’s newer, it’s confidently wrong, and it’s not one of us.

Reverse happy ears

There’s a phrase we use when someone is so bought in on an idea that they only hear the good news: happy ears. What I’m experiencing in the data community right now is the opposite. We are curating the failure stories. Forwarding them, commenting on them, feeling briefly relieved by them.

Because if AI analytics doesn’t work, we don’t have to figure out what it means for us.

For a senior analyst or head of data who has spent years being the person with answers — the one who knows the stack, knows the data, knows which numbers to trust — admitting uncertainty about a technology everyone else seems to understand feels like exposure. So the skepticism becomes cover. Not cynicism, exactly. Protection.

And the industry hasn’t helped. The content explaining AI analytics largely assumes the audience already understands how LLMs work, what they need to produce reliable outputs, how warehouse descriptions get read, what belongs in a rules file versus a semantic model. That stuff gets assumed, not explained. When you don’t understand the mechanism, you can’t troubleshoot the failure. You just conclude it doesn’t work — and that conclusion feels a lot safer than raising your hand.

So when I see a team waiting for better models, waiting for clearer ROI, waiting to see who wins the infrastructure race — I don’t think they’re being reckless. I think they tried, something didn’t work, nobody told them why, and now the skepticism feels confirmed, even though the actual problem was never diagnosed.

What the setup actually requires

AI analytics works when it has context. Not just a semantic model — real context. What this metric means, where this data gets weird, and what question the business is actually asking when they ask that question. Warehouse descriptions that tell the model what it’s looking at. Documented curation choices. The institutional knowledge that used to live in a senior analyst’s head or a Slack thread from 2021.
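To make "warehouse descriptions that tell the model what it's looking at" concrete: in most warehouses, those descriptions are ordinary comments attached to tables and columns. Here's a rough sketch of what that work looks like in practice — the table names, column names, and wording are invented for illustration, and nothing here is specific to any one tool:

```python
# Hypothetical sketch: turn plain-English descriptions into standard SQL
# COMMENT ON statements, so the context lives in the warehouse where an
# AI analytics tool can read it. All names and wording are illustrative.

DESCRIPTIONS = {
    "analytics.orders": {
        "_table": "One row per completed order. Excludes test orders and refunds before 2022.",
        "amount_usd": "Order total in USD at checkout; refunds are NOT netted out here.",
        "channel": "Acquisition channel; 'unknown' means the tracking pixel failed, common pre-2021.",
    },
}

def comment_ddl(descriptions: dict) -> list[str]:
    """Emit COMMENT ON TABLE / COMMENT ON COLUMN statements."""
    stmts = []
    for table, cols in descriptions.items():
        for col, text in cols.items():
            safe = text.replace("'", "''")  # escape single quotes for SQL
            if col == "_table":
                stmts.append(f"COMMENT ON TABLE {table} IS '{safe}';")
            else:
                stmts.append(f"COMMENT ON COLUMN {table}.{col} IS '{safe}';")
    return stmts

for stmt in comment_ddl(DESCRIPTIONS):
    print(stmt)
```

Notice that the valuable part isn't the script — it's the sentences. "Refunds are NOT netted out here" is exactly the institutional knowledge that otherwise lives in a senior analyst's head.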

We’ve written about what AI analytics actually is as a category — what it demands, what distinguishes it from BI with a chat box bolted on. But regardless of the tool, what makes it work is context.

That’s not an AI problem. That’s a data team problem. And it’s exactly the work that gets skipped because the tool was sold as easy to implement, and the hard parts were buried in complex documentation.

Here’s what I tell data professionals: you don’t need to do this for your entire warehouse. Most of what’s in there — the raw ingestion layers, the staging tables, the abandoned explorations — doesn’t need to be AI-ready. You’re actually working with a small, modeled, business-facing slice. The scope of this work is smaller than the narrative suggests.

Pick 3 tables and the 10 use cases they support. Document the context for those. Connect the tool. See what happens.
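One way to keep yourself honest about that scope is to write the inventory down before connecting anything. A sketch, with every table, use case, and note invented for illustration — the only real claim is that the list stays small:

```python
# Hypothetical starting scope: three tables, the use cases they support,
# and the context notes an AI analytics tool would need. All names invented.

SCOPE = {
    "analytics.orders": {
        "use_cases": ["weekly revenue", "refund rate", "average order value"],
        "notes": "Revenue questions mean net revenue; refunds land as negative rows.",
    },
    "analytics.customers": {
        "use_cases": ["new vs. returning split", "churn by cohort", "LTV by channel"],
        "notes": "'Active' means an order in the last 90 days, not a login.",
    },
    "analytics.sessions": {
        "use_cases": ["conversion rate", "traffic by channel", "campaign attribution", "signup funnel"],
        "notes": "Pre-2021 channel data is unreliable; say so when asked about it.",
    },
}

n_tables = len(SCOPE)
n_use_cases = sum(len(t["use_cases"]) for t in SCOPE.values())
print(f"{n_tables} tables, {n_use_cases} use cases")
```

If the inventory won't fit in a file this size, the scope is too big for a first attempt.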

When you set it up that way, something shifts. The volume of questions goes up — not because the data team is answering more of them, but because people who never felt comfortable asking are now asking: hypotheses that lived in a spreadsheet for months because there was never bandwidth to explore them, questions that never made it into the queue because someone didn’t want to admit they didn’t know something.

The same AI, pointed at an uncontextualized data source, gives you the horror stories. Pointed at a well-curated one, it gives you something closer to what self-service has always promised.

Get a practical roadmap for setting up your context. Download The Data Leader’s Guide to AI Analytics.

Just try it

Stop reading the failure posts and go spend genuine time with one of these tools. Not to evaluate model performance. Not to benchmark against six other options. To actually configure one properly — with real context, scoped to the part of your data that’s actually ready — and see what happens.

If it doesn’t work, diagnose why before you conclude anything. Was the context there? Were the warehouse descriptions populated? Were you asking questions that the data could actually answer?

The marketing for these tools is excellent. The work to make them excellent is real. But the work is incremental and quieter than the failure posts would have you believe.

Hex was built around this definition of AI analytics — the context layer, the conversational surface, the depth when you need it. If you’re already a customer, the pieces are there. If you’re not, the point still stands: find a tool that was built for this posture, not retrofitted to it. Set it up with real context. Start with three tables.

AI analytics is here. The only way to make it work is by doing.
