Focusing your data team on what matters most

How do you go from reactive ticket-answering to strategic, decision-impacting data work?


Every data leader is familiar with the common trope: you want to move your data team away from reactive ticket-answering and toward more strategic, decision-impacting work. But how do you actually get your data team to focus on the work that matters? How do you build and run an efficient data team? Across our customers at Hex, we’ve seen that reactivity vs. proactivity is only a small part of the problem.

This guide will break down some of the common problems we’re seeing, and offer a practical framework for really focusing your data team on what matters.

Shareability and collaboration

The first place to start when it comes to improving the efficiency of your data team? The work you’re already doing. Today, large swaths of the hard work that data teams do die in the vacuum of screenshots, uncollaborative tools, and feedback loops that never get closed. To help your data team be more impactful, they need to become more collaborative – but that’s a lot easier said than done.


Despite how much of a meme it has become, many data teams today are still sharing static artifacts from their analysis. A main culprit is screenshots that quickly become out of date. But even dashboards in popular BI tools – with regularly updating data on the backend – only tell part of the story: they leave no jumping-off points for stakeholders to ask further questions of the data. How data teams share their work is as important as the work itself, and we’ve seen many a team do incredible analysis, only to see it ignored by stakeholders, or worse, misinterpreted.

To make sharing a strength for your data team, focus on how work gets shared. Is there a better process you could design? How are stakeholders integrated into the analysis process? And do your existing tools support sophisticated sharing that actually leads to decision making?

Collaboration and closing feedback loops

Finding meaningful insights is a collaborative process between your data team and your stakeholders, but many teams get stuck in a kind of “show and wait” process. A project often starts with a question that someone on the data team starts to investigate, and then they share a “result” with the stakeholder. In previous jobs, I’ve seen this literally happen in JIRA, with an “answer” meaning a resolved ticket. But that’s not how collaboration works! Investigation and analysis, especially when it’s exploratory, needs to happen as a feedback loop between the data team and the stakeholder. Think of it more as a process of collective discovery than a counter at a restaurant fulfilling an order.

To make data teams more efficient, leaders need to focus on closing that feedback loop as early as possible – getting stakeholders and data teams working together throughout a project. Showing early results, getting more context on the product, and not being too wedded to a particular output are great places to start.

Empowering functional teams to do their own explorations

Another important facet of helping your team focus on what matters actually has nothing to do with their work at all: it’s about what they can do to help other teams at the company explore their own intuitions and ask questions of the data. No matter how good your data team is, they can’t go it alone. And “offloading” work to functional teams isn’t just about getting your data team more time to focus on more strategic work – it’s going to lead to more nuanced insights for the business overall.

Product and business context matters

In a sense, the goal of a more efficient data team is only a piece of the puzzle; what we’re really after is data-driven organizations, making better decisions and moving the business forward. Though your data team are the experts in the data itself (and methods of learning from it), much of the business context, product intuition, and institutional knowledge belongs to your product, engineering, and marketing teams, among others. Close collaboration can help bring that to bear on the data team’s work, but it’s never going to fully replicate what’s in the head of these teams.

Realistic goals instead of the self-serve holy grail

Data teams have been chasing the holy grail of self-serve analytics for years. Until we can figure that out, a more realistic place to start is setting up “explore from here” types of playgrounds for your functional teams. 

There’s a sweet spot here, starting from something like a regular reporting dashboard. Say you’ve seen an unexpected change in a metric, like your activation rates increasing by 10% suddenly. Allowing your product and engineering teams to do some basic investigations – cutting by user segments, filtering dates, pivoting on acquisition channel – can help empower them to combine their domain expertise with a real ability to explore. A great example is Notion, who were surprised to find that their most active users of internal analytics dashboards were actually their GTM teams.
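As a sketch of what that kind of “explore from here” slicing can look like – the dataset, column names, and numbers below are invented for illustration – a PM comfortable with a notebook could cut activation by acquisition channel in a few lines of pandas:

```python
import pandas as pd

# Hypothetical user-level data; columns and values are invented for illustration.
users = pd.DataFrame({
    "signup_date": pd.to_datetime(["2023-01-03", "2023-01-05", "2023-02-10", "2023-02-12"]),
    "acquisition_channel": ["organic", "paid", "organic", "paid"],
    "activated": [True, False, True, True],
})

# Cut activation rate by acquisition channel (mean of a boolean column = rate)...
by_channel = users.groupby("acquisition_channel")["activated"].mean()

# ...or filter to a date range before digging further.
feb = users[users["signup_date"] >= "2023-02-01"]
feb_rate = feb["activated"].mean()
```

The point isn’t the specific code – it’s that a trusted, well-named starting dataset makes these cuts a five-minute exercise for a functional team instead of a ticket for yours.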

Creating shared definitions of metrics and business concepts

Part of a data team’s job is making sure functional teams are on the same page about what business concepts mean in practice. Readers have likely seen many an organization with completely disorganized definitions – the product usage dashboard shows a top line user number that’s 10% higher than the marketing dashboard, because the latter filters out users who haven’t signed in over the past month. Without a shared understanding of your core business concepts – users, revenue, churn – your functional teams will produce scattered, inconsistent analysis. With it, the whole organization can get on the same page, and teams like product and engineering can get started on their own investigations.
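To make that kind of discrepancy concrete, here’s a small invented sketch of how two dashboards can report different top line user numbers from the same table, just because one silently applies a recency filter:

```python
import pandas as pd

# Invented example: the same users table feeding two different dashboards.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "last_sign_in_days_ago": [2, 10, 45, 90, 5],
})

# The "product" dashboard counts every user row.
product_user_count = len(users)

# The "marketing" dashboard quietly drops users inactive for more than 30 days.
marketing_user_count = len(users[users["last_sign_in_days_ago"] <= 30])

# Same table, two unstated definitions of "user", two different top line numbers.
```

Neither number is wrong – the problem is that neither dashboard states which definition it uses.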

Shared definitions and a metric layer

There are a bunch of different ways to create these shared definitions in practice. The simplest – and most common – is data documentation. But the problem with documentation is that for it to be useful, people actually have to read it, and at fast-moving organizations (actually, really anywhere), that’s not going to happen often.

Over the past few years, we’ve seen the emergence of the “metric layer” – in its most basic form, an API layer on top of your data that outputs standardized definitions of business concepts. The most mature of these is dbt’s Semantic Layer, which sits on top of your dbt project and gives you BI-like options for transformation and querying.

What often gets lost in the focus on implementation is the core of what you’re trying to do here: really understand your business concepts, identify how they map to your actual data, and codify that for your organization.
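One lightweight way to codify a definition – a hand-rolled sketch, not a replacement for a real semantic layer, with an invented “active user” rule for illustration – is to keep each metric’s logic in one shared module that every analysis imports, instead of re-deriving it per dashboard:

```python
import pandas as pd

# A minimal, hand-rolled "metric registry": one agreed-upon definition per
# business concept, imported everywhere instead of rewritten per dashboard.
METRICS = {
    # "Active user" = signed in within the last 30 days (an invented definition).
    "active_users": lambda df: int((df["last_sign_in_days_ago"] <= 30).sum()),
}

def compute(metric_name: str, df: pd.DataFrame) -> int:
    """Look up the shared definition and apply it, so every team gets the same number."""
    return METRICS[metric_name](df)

# Every dashboard and notebook calls compute() rather than re-filtering by hand.
users = pd.DataFrame({"last_sign_in_days_ago": [2, 10, 45, 90, 5]})
active = compute("active_users", users)
```

A real metric layer adds governance, caching, and query generation on top of this idea, but the core move is the same: the definition lives in exactly one place.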

Building a knowledge library

It’s useful to think of one of the goals of a data team as building a knowledge library for your business. Much of what a data team does is learn – about how your products work, what levers matter, where your customers are coming from. And the question for leaders is: is that knowledge being recorded, organized, and most importantly, used? Or do the insights that your team gathers find ephemeral usage for particular projects, and then get filed away in a proverbial GitHub filing cabinet, never to reach the mind of a PM or marketer again?

Empowering your functional teams to explore on their own

With a robust set of commonly defined metrics – and standardized ways of interpreting and understanding them – your data team can actually help other teams start to run their own investigations. A big part of what holds back these teams is the general sprawl and confusion of approaching a data warehouse full of tables. Which users table is the right one? What filters should I apply to get a meaningful result? How do I calculate revenue for this segment? A shared set of definitions – often implemented through a metric or semantic layer – gives teams a standard starting point for analysis. 

Picking a flexible, efficient data stack and investing in it

Finally, the piece you’ve all been waiting for. The tools your team uses – and how the rest of your organization interacts with them – are a major piece of how effective your data team can be. The most talented team stuck with neanderthal tooling is going to fail to deliver insights to the business at a scale and speed that matters. But with so many new tools in the market over the past few years, it’s getting harder to cut through the noise and figure out what will actually empower your data team.

Letting your team use the languages they’re comfortable with 

SQL, Python, or a secret third thing – everyone has different backgrounds and is comfortable with different languages. And perhaps even more importantly, the type of analysis you’re doing will often call for a specific language (ever tried analyzing geospatial data in MySQL?). If your tools force your team to operate primarily in one language, they’ll likely end up spending time on workarounds instead of the work that matters.

A more sinister, and more difficult to identify, version of this is when your team doesn’t even realize that there are better ways to do what they’re doing. For many data teams out there, the idea that you don’t need to be configuring Snowflake JDBC connections in your notebook isn’t obvious. As a leader, identifying those gaps can help your team become that much more efficient.

Shareability and collaboration (again)

We’ve already talked about the critical importance of how your data team shares their work and closes feedback loops with other teams. But does your data stack allow, or even empower, your team to do that? The reason that many teams are still exporting data to Excel and screenshotting graphs is that, well, that’s the best their current stack allows them to do.

Fundamentally, a data team’s job is the production of knowledge. Your team can be doing the most sophisticated, detailed analysis on the planet, but if it doesn’t impact a stakeholder decision, it’s essentially useless. Does your current stack – and the way that you use that stack – empower your team to create, save, and share these artifacts of knowledge? 

Investing in your stack after you’ve chosen it

Too many teams get sold great software by great sales teams, but end up getting only a fraction of the value they could. Most tools in a data stack aren’t plug-and-play SaaS solutions – they require a meaningful, continuing investment. And it’s not just getting your data integrated and your team onboarded: it’s about using the tool the way it was designed to be used, following best practices, taking the time to write documentation, and so on.

A great example is Looker – many teams overuse persistent derived tables and model their data poorly, leading to model sprawl and a completely unusable project two years down the road. That isn’t the software’s fault; it’s the team’s. In the tools you’re using, are your projects organized? Is everything named well? If external teams need to use them, is there clear documentation to point them to?

To invest in the stack you’ve chosen, your team needs to be focused on continual learning. We’ve found that designating a single team member as an “intellectual owner” for a particular tool works in some cases – they stay up to date on best practices and maintain contact with the vendor’s engineering team. If your company has a continued education program, this is a good place to spend it.

This is something we think a lot about at Hex, where we're creating a platform that makes it easy to build and share interactive data products which can help teams be more impactful.

If this is interesting, click below to get started, or check out opportunities to join our team.