AI and data literacy: how to close the gap between insight and action
Your organization has dashboards, a warehouse, and now AI generating answers. People still go with their gut. Here's how to build the kind of literacy that actually changes decisions, especially when AI is doing more of the analytical work.

You've probably seen this play out more than once. Your company invests in a data warehouse, stands up dashboards, maybe even hires a few more analysts. And yet, when it comes time to make a decision — which campaign to double down on, where to cut costs, which customer segment is quietly churning — people still default to gut instinct or ask for "just one more report."
Now add AI to the mix. A new layer appears: tools that generate SQL, surface insights in plain language, and produce charts without anyone writing code. The promise is that more people can work with data independently. The risk is that more people are now acting on answers they can't evaluate.
That tension between AI-generated accessibility and the judgment needed to use it well is where data literacy matters most right now. And it's where most organizations are still falling short.
Why data investments keep stalling at the decision layer
Most organizations have invested heavily in data infrastructure. Warehouses, dashboards, headcount: the pieces are in place. And yet decisions still happen on gut instinct, tribal knowledge, or whoever has the most convincing spreadsheet. AI tools only make this gap more visible: they make it easier to surface answers, but they don't fix the reasons those answers go unused.
Three problems keep showing up.
Definitions aren't consistent across business units
When "revenue" means one thing to Finance, something slightly different to Sales, and something else entirely to Product, every downstream analysis is already compromised. Teams don't distrust the data itself; they distrust that the "right" answer they spend time extracting won't conflict with another team's "right" answer. A NewVantage Partners survey of C-suite executives across Fortune 1000 companies bears this out: fewer than 40% described their organization as data-driven, despite years of investment. Inconsistent definitions are a big reason why.
AI amplifies this. A conversational analytics tool will confidently return whichever definition of "churn" it encounters first. Without shared metric definitions, AI doesn't resolve confusion. It scales it.
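To make the stakes concrete, here's a minimal sketch of how two perfectly reasonable definitions of "churn" diverge on the same data. The customer records and both definitions are hypothetical, invented for illustration; the point is that neither is wrong, and an AI tool has no way to know which one you meant.

```python
# Hypothetical data: (customer_id, active_last_month, active_this_month, canceled_contract)
customers = [
    ("a", True, False, False),  # went quiet but never canceled
    ("b", True, False, True),   # formally canceled
    ("c", True, True, False),
    ("d", True, True, False),
]

# Marketing's definition: churned = was active last month, inactive this month
churn_by_activity = sum(1 for _, last, this, _ in customers if last and not this)

# Finance's definition: churned = contract formally canceled
churn_by_contract = sum(1 for *_, canceled in customers if canceled)

print(churn_by_activity / len(customers))  # 0.5  -> "churn is 50%!"
print(churn_by_contract / len(customers))  # 0.25 -> "churn is 25%!"
```

Same customers, same month, a 2x disagreement. Whichever definition the AI encounters first is the one it will confidently report.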
Leadership doesn't enforce data-informed decisions
Most analytics programs start backwards. Researchers Bart de Langhe and Stefano Puntoni describe the pattern in MIT Sloan Management Review: teams look for a purpose for the data they already have instead of starting with the decisions that drive the business. The result is dashboards that answer questions nobody's asking while actual decisions go unsupported.
This is a leadership problem as much as a tooling problem. When executives don't visibly use data to make decisions, nobody else does either. Forrester research confirms that decision-makers still base roughly half their decisions on quantitative information, a figure that has barely budged in recent years. You can deploy the most sophisticated AI analytics stack in the world, but if leadership treats data as validation rather than input, adoption stalls.
Literacy programs aren't connected to business outcomes
Most data literacy efforts teach skills in the abstract: here's how to read a chart, here's what a p-value means, here's how to write a basic query. The skills that matter most are contextual. Can someone in marketing evaluate whether an AI-surfaced trend reflects a real campaign change or a seasonal pattern? Can a finance lead push back on a forecast that ignores a known risk?
When training isn't tied to the decisions people actually make in their roles, it doesn't stick. And when self-serve tools get deployed without the literacy to support them, the result is more confusion, not less. Gartner research confirms that organizations with high self-service adoption often face quality, security, and governance problems, exactly what you'd expect when access outpaces understanding.
Shared definitions: the foundation AI analytics depends on
Even strong data literacy won't help if different departments define the same metric differently. Metric inconsistency paralyzes decision-making, and when AI tools generate answers on top of inconsistent definitions, the problem multiplies: an AI interface will confidently return answers built on whichever definition of "revenue" it happens to encounter first.
When teams stop debating whose numbers are right, they start actually making decisions. A few things have to work together to get there.
Start with a centralized data dictionary that documents calculation logic, not just labels. A single source of truth means one shared definition, one calculation logic, and one place where metrics live, not just one database. This is what makes AI-produced answers trustworthy: the AI works from the same definitions everyone else uses.
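As a rough sketch of the idea, here's what a centralized metric registry can look like: each metric lives in one place with its calculation logic and owner documented, and every consumer resolves metrics through the registry instead of re-deriving them. The metric names, SQL expressions, and `resolve` helper are all illustrative assumptions, not any specific product's API.

```python
# One shared home for metric definitions: logic and ownership, not just labels.
METRICS = {
    "net_revenue": {
        "description": "Gross bookings minus refunds and discounts.",
        "sql": "SUM(gross_amount) - SUM(refunds) - SUM(discounts)",
        "owner": "finance",
    },
    "active_users": {
        "description": "Distinct users with at least one session in the last 28 days.",
        "sql": "COUNT(DISTINCT user_id) FILTER (WHERE last_seen >= CURRENT_DATE - 28)",
        "owner": "product",
    },
}

def resolve(metric_name: str) -> str:
    """Every surface asks the registry; nobody hand-writes the definition."""
    metric = METRICS.get(metric_name)
    if metric is None:
        raise KeyError(f"Undefined metric: {metric_name!r} -- add it to the registry first")
    return metric["sql"]

# A dashboard and a conversational AI tool both get the identical expression:
print(resolve("net_revenue"))
```

The design choice that matters is the failure mode: an undefined metric raises an error instead of letting each tool improvise its own calculation.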
Metric definitions also can't be owned by one team. Modern Data 101 describes how separating concerns, where data teams focus on technical infrastructure and domain experts own the meaning and context of their metrics, creates governance that works because it's rooted in how teams already operate.
How semantic layers and other context enforce AI consistency
Tooling needs to enforce consistency automatically. Semantic layers, table endorsements, and workspace guides give your most important metrics reliable meaning so "active users" or "churn" calculations don't vary by who's asking or which tool they're using.
When those definitions power every surface, from conversational analytics to notebooks to dashboards to data apps, everyone has confidence in the shared metrics they're analyzing. The definitions scale out from the data rather than living in a document nobody reads.
This played out at Calendly, where the analytics team built a Standardized Metric Library that gave cross-functional teams a single source of truth for key metrics. It cut down metric debates and let business users self-serve routine questions without filing analyst tickets. Once that shared vocabulary exists, AI-produced insights become reliable by default, because the AI is working from the same definitions everyone else trusts.
Building AI-ready skills without overloading the data team
With shared definitions in place, the next challenge is building the skills your people need to work effectively with AI analytics. Data literacy in an AI era doesn't mean everyone becomes an analyst. It means people can question and apply what AI produces: push back on a forecast that ignores a seasonal pattern, ask "what assumptions are built into this recommendation?" That judgment matters more, not less, when AI handles the SQL.
In Hex's State of Data Teams 2025 survey, 70% of data professionals identified self-serve analytics as a worthy goal. Endorsing the goal is the easy part; closing the gap between aspiration and practice takes structure.
Programs that stick: fellowships over classrooms
Classroom-style training on chart-reading and SQL basics rarely sticks because the skills that matter are contextual. UNH research confirms this: what a finance professional needs looks different from what a product manager needs. Fellowship-style programs work better. Dedicate a few hours each week for business users to investigate a question from their own work using AI tools, with a data team member available for coaching. A finance analyst might spend two hours a week exploring revenue trends in Threads, with an analyst reviewing the AI-generated queries and pointing out where context is missing. A product manager might dig into feature adoption patterns, learning to distinguish meaningful changes from seasonal effects. Celebrate the breakthroughs that come out of these exercises: people imitate what gets recognized, so visible wins spread data literacy across the rest of the organization.
Not everyone needs to read SQL, and pretending otherwise is how you get programs that collapse under their own weight. But everyone using AI analytics should understand that the AI is writing queries against your data, that those queries can be reviewed, and that the answer isn't the end of the conversation. Every answer the Notebook Agent produces shows the underlying query and reasoning, so an analyst can verify the logic while a business user still gets a clear answer.
From gatekeepers to architects
This ownership model works because it redefines what the data team is for. When business users can answer routine questions themselves by asking in plain language through tools like Threads, the data team's time opens up for strategic work: building better models, improving data quality, and defining the governance that makes self-serve trustworthy. Pair conversational analytics with analyst verification of the underlying queries: autonomy for business users, quality control for the data team.
Leadership has a role here too. If the data team is responsible for foundations and coaching, and business users are responsible for building skills in their domain, leadership is responsible for creating the conditions: protecting time for learning, tying literacy goals to business outcomes, and visibly using data in their own decisions. Without that, fellowship programs quietly die when the next quarter gets busy.
How tooling builds literacy (or quietly kills it)
Your AI analytics tooling either reinforces these literacy gains or creates permanent dependency. The difference comes down to design choices that matter more than feature lists.
Simple questions should have simple paths to answers, through natural language or visual interfaces, while deeper exploration is available in notebooks when the work calls for it. Tools that force everyone into SQL-only workflows exclude business users. But tools that hide all the mechanics create a different problem: people can ask questions but can't dig deeper into answers.
The best design makes the analysis behind every answer visible. A business user asks a question in plain language; an analyst can click through to the actual SQL and logic behind the response. When both paths run on the same data and the same metric definitions, trust builds naturally. That's what separates tools that teach people to fish from tools that just hand them a fish.
Governance built in, not bolted on
When your team explores data within structured frameworks using metrics defined once and applied consistently, they build understanding. When they're free to define their own metrics in isolated spreadsheets, they build confusion.
Without guardrails like consistent tagging, discoverability, and security, self-serve tools erode the trust that literacy depends on.
Proximity matters too: the closer data is to the decision, the more likely people are to use it. Hex's connected workflow reflects this. Someone can start with a question in Threads, dig deeper in a notebook if the answer needs more context, then share the whole thing as a data app. The work stays connected instead of scattering across tools.
At Mercor, non-technical team members used Hex's agentic analytics to build their own dashboards and analyses without data team intervention. They reached 100% self-service on operational analytics. Analysts shifted to strategic modeling work. The tooling made literacy achievable: consistent definitions, transparent AI, and a path from question to answer that didn't need a ticket in the backlog.
That kind of independence is what data literacy looks like in practice, and it's measurable.
Measuring whether AI literacy investments change decisions
The metrics that matter for literacy programs aren't completion rates or dashboard logins. They're behavioral. But most of them are also hard to measure quantitatively, and pretending otherwise leads to reporting theater.
Understand how people work, then build feedback loops
The most reliable signal comes from actually watching people work. Have your data team sit with business stakeholders while they use AI tools. Set up regular sessions to review agent responses together, troubleshoot misunderstandings, and explain the reasoning behind a query. These conversations reveal what no dashboard can: whether someone is blindly trusting an AI-generated answer or genuinely evaluating it.
That kind of coaching builds literacy faster than any training module. It also builds trust between data teams and the business users who depend on them. When an analyst walks a product manager through why a particular Threads response pulled from one table instead of another, both sides learn something. The analyst sees where context gaps cause confusion. The product manager starts to understand what "the AI said so" actually means.
Pair that qualitative layer with lightweight feedback mechanisms: quarterly surveys on confidence levels, regular retrospectives on data-informed decisions, or simply asking stakeholders whether they're getting useful answers. You'll pick up patterns that usage metrics miss entirely.
Close the loop with observability
For the aggregate view, Context Studio shows you what's happening when you're not in the room: which questions come up most often, where answers fall short, and where governance gaps are creating confusion. That observability layer closes the loop between the coaching conversations and the platform-level patterns.
You should also watch your data team's ticket queue. If routine requests are declining while strategic work is increasing, literacy is taking hold. That shift is one of the clearest signals available, and most teams already have the data to track it.
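If you want to make that signal explicit, a quick sketch like the one below works with data most teams already have. The quarterly counts here are invented for illustration; the pattern to look for is the routine share of the queue trending down while strategic work grows.

```python
# Hypothetical ticket counts per quarter, split into routine requests
# (one-off pulls, dashboard tweaks) and strategic work (modeling, governance).
quarterly_tickets = {
    "Q1": {"routine": 120, "strategic": 30},
    "Q2": {"routine": 95, "strategic": 42},
    "Q3": {"routine": 60, "strategic": 55},
}

def routine_share(counts: dict) -> float:
    """Fraction of the queue that is routine request work."""
    return counts["routine"] / (counts["routine"] + counts["strategic"])

shares = [routine_share(q) for q in quarterly_tickets.values()]
print([round(s, 2) for s in shares])  # [0.8, 0.69, 0.52] -- literacy taking hold
```

A steadily declining routine share, quarter over quarter, is the behavioral evidence that self-serve is absorbing the questions the data team used to field by hand.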
The investment case won't come from a single metric. It comes from stakeholders asking better questions, data teams spending less time on repeat requests, and fewer meetings derailed by "whose numbers are right?" You'll feel it before you can put a number on it.
Closing the gap
AI makes the literacy problem both more urgent and more solvable. More urgent because AI-produced answers spread faster than human-generated ones, and wrong answers at speed are worse than slow answers. More solvable because AI analytics can meet people where they are, in natural language, with transparent logic, on trusted data, rather than asking everyone to learn SQL first.
The organizations getting this right aren't running more training programs. They're building environments where business users and data practitioners share the same metric definitions, the same data, and the same workspace for asking and answering questions. When that foundation is in place, the gap between having an insight and acting on it shrinks fast.
If you want to see what that looks like, you can sign up for Hex or request a demo.