Most data teams believe in self-serve, but few are satisfied with the current state. We asked leaders at Settle, Doximity, and Kaplan what comes next.
We surveyed over 2,000 data leaders in our State of Data Teams 2025 report, and a major finding on self-serve stood out: 70% of respondents think self-serve is a worthy goal, but 53% are unhappy with their BI platforms.
The gap between goal and current status isn’t just an interesting data point. In our qualitative responses, respondents shared charged thoughts:
“Our environment is a huge mess, we are cleaning it up this year.”
“We’re unhappy, internal users complain about it all the time.”
“I hate it, it's a waste of time.”
Self-serve is already beset by conflicting definitions (not everyone agrees on what self-serve really means), and current tools aren’t cutting it. More and more teams are experiencing dashboard fatigue and are looking for better, more advanced approaches.
To find answers, we brought together three data leaders from organizations of varying sizes:
Nicklas Ankarstad, Sr. Director of Analytics at Settle, an inventory, procurement, and payments management platform with a three-person data team.
Grace Finnan, Sr. Data Analyst at Doximity, the country's largest network for medical professionals, with a mature, decentralized data organization and a data team for every product function.
Balin Michael, Sr. Manager of Data Science at Kaplan, a provider of educational programs with separate data teams across divisions, some as small as a single person; Michael himself is a team of one supporting 30-40 employees.
Despite the different sizes and structures of their organizations, one theme emerged: The dashboard is important, and will remain important, but self-serve has to do more to truly drive business value.
Our participants' answers have been lightly edited for clarity.
Ten years ago, there might not have been such a consensus on self-serve. Nowadays, as our survey showed, 70% of data leaders believe self-serve is a worthy goal. But what is self-serve? Definitions still vary, and some data teams treat dashboards as a means to an end, while others treat them as the ultimate goal.
Looking ahead to 2025, are there any aspects of our approach to dashboards and self-service that we're missing?
Nicklas Ankarstad, Settle: "I think they’re a part of a product portfolio. Dashboards have a purpose. I think they serve a specific need. There are people who like to log in and see the numbers, and that's part of their routine. It’s similar to how some people like to read the newspaper in the morning. They come in to check the numbers.
I've been pushing toward meeting the stakeholders where they are. Instead of having a dashboard that you have to log into so you can see what the numbers are for today, for example, can I just send the numbers to you in a Slack message automatically and have them show up in the same spot?
We also do a lot of moving data from one place to another. So, reverse ETL-ing, and those types of things. I think that is also a big component of driving self-serve, of meeting the customer or stakeholder where they are. If you want to see it in Salesforce, if you don't want to log into my BI tool, then okay, I'll send it to Salesforce. I just make sure that I govern the definitions and run through all those things."
Grace Finnan, Doximity: "For us, at least on the team that I'm on, we're not a client-facing team. We’re in the background, and our dashboards are really just for the product manager. We don't really have any sort of external stakeholders. But for those teams, certainly dashboarding is a big part of what they do."
The functionality of self-serve often leads to the concept of democratization, but democratization is as much a people problem as a technology problem. What freedom are you going to give to your business stakeholders? What level of control do you want to have as a data team? And most importantly, how do you strike the right balance?
Grace Finnan, Doximity: "We have a pretty extensive dimensional modeling system at Doximity. That's our source of truth for reporting metrics, I tend to find that if the question is really complex, then the stakeholders tend to ask a data analyst to get the answer for them, even if the data is available for them to use, because it’s just a little bit intimidating.
Even if the data is there and it's correct and reliable, and we all know that it's reliable, it's still a barrier to entry sometimes, because they may feel like it's too much to take on. But a lot of the time, if it's a more straightforward kind of question, we have a lot of trust in our dimensional models. They are the reliable source of truth for data, and that answer is going to be correct if they go there for it."
Nicklas Ankarstad, Settle: "I think from the reverse side, a little bit. Startups tend to try to be product-led, and a lot of things that impact my world happen upstream. If we launch a new product or add a new feature, and there are tables created, I need to do something with those. There are new objectives we're trying to hit. A lot of that stakeholder interaction comes from a desire to do this with our product and evolve it and revise it.
I need to work with PMs. I need to work with engineers and developers to get the information that they're building and the knowledge that they have, so that I can pretend like I know what they're talking about to a stakeholder who's asking a question.
I'm a big believer in developer-driven documentation. A nice way to do it is to have the developer-led documentation happen, but make that easy for the developers by having suggestions and AI as a part of that. That way, if you have to write a definition, I can just adjust that definition.
My table's documented, and I don't have to do a bunch of stuff. I just know what it is, and then I can surface that back up to my stakeholders downstream. We use some fancy AI tooling for that. That's a big piece for us, whether I can make that experience easier for them. There's not this big handoff where someone dumps a bunch of stuff. I click a button, ingest it, get a little bit more information, and then I can pass that information along to my stakeholders."
Semantic models go a long way toward ensuring accuracy, but effective data governance requires a multi-pronged approach that enables self-serve, democratized access without sacrificing trust in your data as an asset. Organizations must ensure they’re not compromising data security or granting unauthorized access to sensitive information, while also enabling users to access the data, derive insights, and build consensus based on the data they’re using. It’s a thorny problem and a difficult tension – how should teams strike the best balance?
Balin Michael, Kaplan: "I'm really the only one building data models and querying data in our warehouse. That's fairly self-contained, and the only way that gets exposed is through either an ad hoc report that I would run or building dashboards in Hex or Tableau.
We have a centralized team that manages our whole enterprise. It’s up to them to make sure, with manager approval, that the folks who are requesting access get it. Usually, we do. I can always set specific permissions on individual dashboards.
Sometimes, there are dashboards that we only want a couple of people to see. And that has worked out pretty well for us. It's tougher to see what people do once they take that information out of a dashboard. I don't think I would have a good answer for making sure that that information is used as the data producer intended."
Nicklas Ankarstad, Settle: "When we talk about getting everybody to speak the same language and the data governance component of things, I do think that that's an important piece. But, to me, there are steps in the process where it's okay to have divergent opinions.
Usually, that happens when we're going through how we count something, for example. We're going through the process, and it is okay that we're not all speaking the same language for the time being. It's an iterative process of refinement because if I just came in and said, “This is the way we're going to do things,” then I’d lose all those people, and they won't use my stuff, and they’ll create their own silos and do their own things. Letting all the ideas flow freely – that needs to happen first.
In the definition process, there's a point where we drive toward convergence, and that's where we go, as a data team, and say, “Hi, we're the data team. We all agreed that this is the definition we want, and we're going to put this in our governed model, and we're going to use that. It'll take me X amount of time to get this finished, but then I want everybody using that definition. You can use your pseudo-ones for now, but then I want everybody using that. Are we all in agreement? I will do this work if you all commit to that.”
Usually, people say, “Absolutely. Can you also create this cool dashboard?” And I say, “Absolutely. We'll throw that in for free.” And that’s the iterative process, and that’s how we drive that divergent thinking towards convergence. It’s a big piece of what we do and why this isn't a technical challenge – it's a people challenge. And that’s why I still feel good about that. I will have a job even after all the AI stuff happens."
Grace Finnan, Doximity: "For our dimensional models, we have YAML schemas that are the documentation, and we help document not only what's in this table, but also how it relates to the other table. The structure and the relationships are all documented. We have code owners for the repository that holds all of the dimensional models.
Anytime a PR comes out for a new change, it gets reviewed by one of us to make sure that everything stays high quality, that everything's documented. We have a team that is in charge of all of the universal metrics tables that a bunch of the teams use.
That includes email engagement, events, and engagement in general. It includes anything that's an upstream table that other teams will use. There is one team dedicated to that, and all they do is make sure that our reporting metrics are accurate and that there are no tracking issues.
Each individual team has its own tables for its own uses, but we also really try to make sure that those are as accurate as possible, too. You can't really control what people do once they get hold of your data, and sometimes, some other team is using a table we made, and we're like, “Wait, what? What are you doing with that?”
The best you can do is ensure that it's well-documented. That way, everyone knows what it is supposed to be. That's all you can do from there."
In the State of Data Teams survey, we asked data leaders about the future of the front end of the data stack. Some respondents said they were tired of dashboards; others wanted more integrated systems; and still others appreciated dashboards but wanted them to be a smaller part of a bigger puzzle. AI is, of course, at the top of everyone’s mind, and the future seems exciting but uncertain. If we project even just two years into the future, what do you think the ideal versions of BI and self-service will be?
Nicklas Ankarstad, Settle: "Two years isn't a long time to move an entire industry, so I'm just going to go with where I think more advanced stuff will be.
This idea – can I do less of the reporting and analysis, and have a more automated process write up reports, say the variance was X, Y, and Z, look at a chart, decipher what it means, and spit that out into words that a layperson can read – I think this is something that is relatively easy. I've seen LLMs that can already do that.
I think that would be a continuous push. Instead of just seeing a chart and a dashboard, you get a report that actually speaks to you and shows you what actions you should take. We've been exploring some things with that and seeing that it's actually pretty decent.
It's a junior-level analyst at this point. It is not a senior-level analyst, but it can do some stuff, and it improves through continuous iteration, with an analyst sharing feedback through an LLM bot. I think that's a big piece. The other piece I think is going to continue is that data permeates more of the business, and you ship data to other platforms. That trend definitely continues."
Grace Finnan, Doximity: "We’re actively exploring ways that we can start leveraging AI for this. We have some teams doing deep dives into different tools and tech that we want to use.
One thing that we want to try doing is to model the dimensional modeling ecosystem semantically, so that it can support frequently asked questions that, right now, would require a DA to explain. We also want to try exploring self-serve tooling to enable some non-technical people to experiment with LLMs for an MVP or a prototype.
Generally, for AI, we want to consider the tools and the functionality that they bring with as much intention as we have with our current modeling infrastructure before AI. “Garbage in, garbage out” still applies. Data and data analysts bring value by asking the right questions and being able to communicate those actionable insights. An LLM might be pretty good at it now, but it's always nice to have the human touch for that, too."
Balin Michael, Kaplan: "Generally, with leveraging AI, the way I've started thinking about it is for more personalized or ad hoc work. Maybe that work gets offloaded to a certain number of dashboards that are currently built in a flexible way to try and serve multiple purposes.
Those might not exist as much in the future, because you might have a more static executive level thing that everyone agrees on, including things like the top metrics for the quarter or the year. That would still be in a similar state as today, although hopefully pushed to where folks need it rather than having to have everyone log in.
Hopefully, too, giving more context to the data that business users are using in their other tools, whether that's a CRM or an email marketing platform, and enabling them to have that data closer to where they're doing their work."
At the end of our discussion, an audience member asked a question that was almost worthy of its own webinar: How should data teams think about centralization versus decentralization? The audience member in question was trying to build a hybrid structure that included embedded analysts in specific parts of the business while maintaining a central data team. Our panel had varying viewpoints, shaped by their experiences working in organizations of different structures and sizes.
Grace Finnan, Doximity: "We have an infra team, and they support every data team on all of the teams as far as the tooling that we use and making sure that we're all consistently using the same tools. That definitely helps. I think that being able to have consistency in the tooling between teams is really important.
With the dimensional model, having a ground truth definition of “This is what this thing is,” which everyone can get behind and know, is very helpful. That way, when you expand out to other teams, you're on the same page and you’re all using the same definition of whatever it is that you're doing.
Having a team that handles the engineering functions of the tools that the analysts are going to use helps. I think having a central team to keep those consistent across the other teams is really important."
Balin Michael, Kaplan: "In a similar vein with tooling, best practices for documentation, testing, and code review – those things would be very helpful to have standardized across a broken-up data team. Similarly, owning some of the infrastructure or having access to governance on the warehouse itself, that might be nice to have centralized in a central team. I think it's still very important to have folks who are embedded and closer to the functions they're supporting to get as much of the context as they can."
Nicklas Ankarstad, Settle: "I've lived through all of those models in some way, shape, or form in prior jobs, and I think the answer is always somewhere in the middle. If you go full decentralization, then you lose some of that network effect, the effect of coming together and having people actually talk about the work. Being a solo data person is a little tricky because you don't have all that to bounce ideas off of.
There are drawbacks to the fully decentralized model and the fully centralized model. You can end up being so far away from the business that you're just talking to the data people all the time. And that's a little tricky. You need to find the happy medium where you're connected enough on the business side to have the necessary context – you need those conversations – but you also need to be connected enough on the data side.
With a lot of data engineering tasks, especially the types of things that are a little more behind the scenes, the answer is probably to centralize all of that. The other work is going to depend a little on your company culture and how far along the spectrum you go. But I don't think you have to be fully centralized or fully decentralized. I think you have some power users in the business, and some analysts on the data team."
At Hex, we think that BI has become too much of a stand-alone toolset. The way organizations work with data – whether that’s the dashboard or the report – is an important part of the puzzle, but, as this discussion proved, BI can’t stand alone.
Every organization encounters an analytics cycle, regardless of the tools it uses, where the data team answers hard questions, publishes their insights, and turns them into a library of answers people can explore on their own. But eventually, those users run into a wall and have questions the self-serve tooling can’t answer. Because this is a cycle, there needs to be a mechanism for people to volley those questions back to the data team.
This can be a vicious cycle or a virtuous cycle. When it’s virtuous, you don’t end up with an overwhelming number of dashboards with one-off answers. You end up with models that compound in value, which offers a better way to build trust and consensus around data.
And watch the rest of our State of Data Teams live event series here!