Data leaders from HubSpot, Wellthy, and Harvest Hosts weigh in on AI
AI is everywhere, but there's a curious disconnect: while 77% of data leaders in our recent State of Data Teams Report express excitement about AI's potential, only 3% list it among their team's top priorities. What explains this paradox?
To find answers, we brought together three data leaders from organizations of varying sizes:
Stephen Moseley from HubSpot, a CRM platform with a 150-person centralized analytics team
Kelly Burdine from Wellthy, a digital caregiving platform with a mid-sized central data team
Brent Driscoll from Harvest Hosts, an RV camping membership company with a two-person analytics team
Despite their different organizational structures, one thing became clear: AI is already deeply integrated into their development workflows, though more behind the scenes than as a headline initiative.
Get the recap below! And register for our upcoming State of Data Teams webinars on self-serve and data governance.
Our participants' answers have been lightly edited for clarity.
Stephen Moseley, HubSpot: "In terms of how our teams are using AI today, there are two areas where we're very opinionated about what people should be doing. The first is that our developers — our analytics engineers, MLEs, and data scientists — should be using a tool like Cursor or Claude Code to accelerate their development work.
The second is for our analyst community to use a copilot. We are a Hex shop, so that means using a tool like Hex Magic to accelerate how code is generated for data exploration and advanced analyses.
And then the frontier we're pushing on, though I don't think the technology is quite there yet, is that our business users should be using AI in a chat-like interface for discovery and eventually self-service.”
Kelly Burdine, Wellthy: “We are using Cursor for all of our development, so using a lot of AI throughout that process. For a while now, we've been using AI to generate all of our YAML files for any type of new model that we're creating.
We are also starting to use AI within our BI tool. We haven't released it in the wild yet to our business users, but have been testing it, using it ourselves, and are pretty impressed so far with what it's doing.
We've been using AI a lot to explore our codebase. If you're not a software engineer, it can sometimes be difficult to understand how that data is getting generated, how it works, what a null value might mean. Being able to ask questions and get back pretty reliable, accurate answers has been a game changer for us and has really sped up development.”
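Kelly mentions using AI to generate the YAML files that accompany each new model. As a rough illustration of the kind of boilerplate being automated, here is a minimal sketch that renders a dbt-style `schema.yml` entry from a model name and column descriptions; the model and column names are made up, not Wellthy's actual code.

```python
# Sketch: rendering a dbt-style schema.yml block for a new model.
# Model and column names below are illustrative assumptions.

def generate_model_yaml(model_name: str, columns: dict[str, str]) -> str:
    """Render a schema.yml entry from a model name and column descriptions."""
    lines = [
        "version: 2",
        "",
        "models:",
        f"  - name: {model_name}",
        "    columns:",
    ]
    for col, description in columns.items():
        lines.append(f"      - name: {col}")
        lines.append(f'        description: "{description}"')
    return "\n".join(lines)

yaml_text = generate_model_yaml(
    "fct_care_requests",  # hypothetical model name
    {"request_id": "Primary key", "created_at": "UTC timestamp of submission"},
)
print(yaml_text)
```

In practice an AI assistant writes this file directly from the model's SQL, but the shape of the output is the same: one repetitive block per column, which is exactly why it is a good target for automation.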
Brent Driscoll, Harvest Hosts: “We've been doing a lot of the stuff that Stephen and Kelly mentioned. The only addition I would make is that we've been using AI for classification and extraction tasks, and it's been super useful for us that way.”
Stephen Moseley, HubSpot: “We have a long-term roadmap for how we hope for it to support business users. We're starting by investing in basic question deflection to help users find data and do basic manipulation of core business metrics.
To determine which basic questions to start with, we're using the certified dashboards that sit within each function and reverse engineering those into the conversational interface. It's anything that would sit on a core KPI dashboard for one of our functional teams.

We found that AI can do really well if you're building it on a narrowly scoped semantic model. But once the semantic model expands, the AI has to make assumptions about interpretation, which reduces the determinism of the output. So that's what we're focusing on right now.”
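Stephen's point about narrow scope can be sketched concretely: the fewer certified metric definitions you expose to the model, the fewer interpretive assumptions it has to make. The metric names and definitions below are hypothetical, not HubSpot's actual semantic model.

```python
# Sketch: packing a narrowly scoped semantic model into an LLM prompt.
# Metric names and SQL definitions are illustrative assumptions.

SEMANTIC_MODEL = {
    "weekly_active_users": "COUNT(DISTINCT user_id) over a 7-day window",
    "mrr": "SUM(subscription_amount) for active subscriptions, monthly",
}

def build_context(question: str, model: dict[str, str]) -> str:
    """Build a prompt that restricts the AI to certified definitions.

    Keeping this dictionary small is the 'narrow scope' idea: every
    metric added is another interpretation the model might get wrong.
    """
    defs = "\n".join(f"- {name}: {sql}" for name, sql in model.items())
    return (
        "Answer using ONLY these certified metric definitions:\n"
        f"{defs}\n\n"
        f"Question: {question}"
    )

print(build_context("What was MRR last month?", SEMANTIC_MODEL))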
Stephen Moseley, HubSpot: “In our experience, it's necessary but not sufficient. We're still working through exactly what our strategy is for building the semantic model and exposing it to the AI so it can bring the proper context into the prompt that it's executing.
Context engineering, I think, is the hot new term from the last two weeks, and that's actually a lot of what we're spending our time doing. It's a really nice term: it describes what we're attempting to do with the semantic model, which is using it to bring the appropriate context into the model so it can generate the right SQL and build the right response.”
Kelly Burdine, Wellthy: "It's tricky. Like most other data teams, you probably feel like you're sprinting all the time, so anytime you try to add in something else — like building out something with AI, or even just implementing AI within your current workflow — you take a temporary step back because you have to learn something new.
There were some updates to the AI features in the BI tool we use and so we spent a day doing a hackathon. I told my leadership team, ‘We're heads down on this,’ and we threw everything at it to train it, see how good it is and evaluate it. We were pretty floored at how far it's come.
At this point, it almost feels foolish not to prioritize AI. Even though it's going to take a bit of time to figure out a new workflow, it's going to save us so much time down the road.
Every day, we're looking at our tasks and asking: Can I use AI for this? Can I make this more efficient? Can I spend a little extra time learning this so that the next time I do it, it's significantly easier?"
Stephen Moseley, HubSpot: "I totally agree with you, Kelly. The mental model that we're using is exploration versus exploitation. I see a lot of these AI tools as little experiments, and at some point it flips over to having enough confidence to want to exploit one. It helps us think through: when have I learned enough about an AI tool to either reject it or move it into the exploitation phase?"
Kelly Burdine, Wellthy: "I think it always helps to have at least one advocate or leadership approval. You do need to make the trade-offs really clear. It might be, ‘We also need to invest in some AI exploration, and that might push back the date for this other thing. Are we okay with that? Here's what we're going to gain.’

I also think showing what you're seeing, like results, helps, even if it's really little. For example, we do product demos every couple of weeks, and when we see interesting things, even if they're not ready for prime time, we say, ‘Hey, here's something we're looking at.’ It gets people excited, and that helps get buy-in."
Stephen Moseley, HubSpot: “We just kicked off a project internally to investigate building a capability for real-time exploration of unstructured data, particularly Gong calls. Over the last couple of years, we've built a corpus of 500,000 Gong calls. Each call is between a half hour and an hour long, and they're in multiple languages.
So it's a really rich set of data, but really hard to use, especially when the context window of Claude can only handle three or four calls at a time. There's a lot of work to be done on how you collect the right sample of Gong calls, how you feed them to the model, and how you prompt-engineer with the model to actually get a synthesized result that answers your question.
It's a deceptively hard problem, because with the rise of AI tools everyone has a lot of confidence that they can do it on their own, but there are a lot of interesting pitfalls. So we're just diving into that problem. I think it's tantalizing and really interesting, and there's a lot of pent-up demand from really every function at HubSpot to be able to explore this set of unstructured data in a deeper, more specific, and much more interactive way than we have in the past."
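The sampling problem Stephen describes, fitting a useful subset of long transcripts into a fixed context window, can be sketched with a simple greedy budget. The 4-characters-per-token heuristic and the token budget here are illustrative assumptions, not HubSpot's actual pipeline.

```python
import random

# Sketch: sampling call transcripts under a context-window token budget.
# The 4-chars-per-token heuristic and the budget are assumptions.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: ~4 characters per token

def sample_calls(transcripts: list[str], token_budget: int, seed: int = 0) -> list[str]:
    """Randomly sample transcripts, keeping only those that fit the budget."""
    rng = random.Random(seed)
    shuffled = rng.sample(transcripts, k=len(transcripts))
    chosen, used = [], 0
    for call in shuffled:
        cost = estimate_tokens(call)
        if used + cost > token_budget:
            continue  # skip calls that would overflow the window
        chosen.append(call)
        used += cost
    return chosen

# Fake half-hour calls, just to exercise the function.
calls = [f"transcript {i} " * 500 for i in range(100)]
batch = sample_calls(calls, token_budget=8000)
```

A real pipeline would sample by relevance rather than at random, and summarize or chunk each call before packing, but the budget constraint is the same: the selection step has to reason about token cost, not just row count.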
Kelly Burdine, Wellthy: "I think what excites me the most is analytics engineering development and being able to automate so much more of that.

For example, when a request comes in, we usually submit a form in Slack, and then we generate a Linear issue. And now we're seeing integrations where the Linear issue connects to GitHub, and with the right requirements, it can generate that first PR within minutes. That's a game changer.
In the last few months, we've been seeing a lot more integrations between AI and tools like Cursor. I think there's one that came out of Y Combinator called NAO Labs — it's another IDE built specifically for data work, with AI built in."
Brent Driscoll, Harvest Hosts: “The time to value has changed so dramatically recently with Codex and Claude Code and all those tools. I'm excited about self-serve analytics driven by an agent that can recursively ask scientific-method-style questions, like: What do I expect to see? Does this match my expectations? And then going through the rest of the list.
One thing that I have tried and failed a few times to build at Harvest Hosts is a better memory stack, whether it's RAG or graph RAG or whatever. I'm not an engineer, and it's really hard to build that, so having some out-of-the-box tools to help there would also be a game changer for me. Then I could build an agent that can do data analytics at Harvest Hosts, or somebody else could do that same thing.”
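The "memory stack" Brent wants can be sketched at its simplest: a store of past findings plus a retrieval step that surfaces the most relevant ones for a new question. This toy version scores entries by keyword overlap instead of embeddings, which is exactly the part an out-of-the-box RAG or graph-RAG tool would replace; all the example entries are made up.

```python
# Sketch: a toy retrieval "memory" using keyword overlap instead of
# embeddings. A real RAG stack swaps the scoring for vector similarity.
# All memory entries below are invented for illustration.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, memory: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k entries sharing the most words with the query."""
    q = tokenize(query)
    scored = sorted(memory, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:top_k]

memory = [
    "membership churn spiked in March after the price change",
    "host signups grow fastest in summer months",
    "support tickets mention app login issues",
]
print(retrieve("why did membership churn increase", memory, top_k=1))
```

An analytics agent would write its intermediate findings into `memory` and call `retrieve` before each new question, so earlier answers inform later ones, which is the recursive, hypothesis-driven loop Brent describes.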
And sign up for the rest of our State of Data Teams live event series here!