
What is business logic? Definition and examples for data teams

Revenue means three different things to three different teams. That disagreement is a business logic problem, not a data quality one.

Monday morning. The CEO asks, "What's our revenue?" Finance says €4.2M. Marketing says €5.1M. The data warehouse says €3.8M.

Your organization has rules for how it measures itself: what counts as revenue, who qualifies as an active user, when a customer is considered churned. These rules seem simple until a dozen different people have implemented them independently, in a dozen different places, with differences nobody agreed on.

This article explains what business logic is, why it keeps ending up scattered, and how to centralize it before metric drift erodes confidence in your data.

What is business logic?

Business logic is the set of rules, calculations, and transformation criteria that translate raw data into the numbers your organization uses to make decisions. It encodes how your organization measures itself, not just what the data says but what it means.

In software architecture, "business logic" typically refers to the middle tier of a three-tier application, sitting between the user interface and the data access layer. For data teams, the concept is the same, but the implementation lives in SQL, dbt models, and YAML configs rather than application code.

You'll find it in a few consistent places.

Revenue calculations. Revenue usually requires filters and exclusions beyond a simple SUM(amount). A working definition might look like SUM(order_total) WHERE status = 'completed' AND refunded = FALSE. In a semantic model, that logic lives behind a single named metric, and derived metrics like profit (revenue - cost) build on it: explicit, reusable, and available to every report that references them.
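
As a concrete sketch, here's that definition as runnable SQL, assuming the orders table and column names used above (dialects vary):

    -- Revenue as the business defines it, not a bare SUM(amount)
    SELECT
        SUM(order_total) AS revenue
    FROM orders
    WHERE status = 'completed'
      AND refunded = FALSE;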

Active user criteria. Defining an active user means deciding which actions qualify, over what time window, and with what deduplication logic. "Someone who logged in" isn't a definition. The encoded version is.
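
Here's one hypothetical encoding, assuming an events table and a 30-day window; the qualifying actions, the window, and the deduplication key are all choices your team has to make explicitly:

    -- Active users: distinct users with at least one qualifying
    -- action in the last 30 days (event names are illustrative;
    -- interval syntax varies by dialect)
    SELECT
        COUNT(DISTINCT user_id) AS active_users
    FROM events
    WHERE event_type IN ('login', 'ran_query', 'shared_report')
      AND event_timestamp >= CURRENT_DATE - INTERVAL '30 days';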

Churn definitions. A churn definition starts as a business policy, then gets written into SQL. Does a customer churn after 30, 60, or 90 days of inactivity? Does reactivation cancel a prior churn event? These aren't obvious, and how you answer them has to be written down somewhere.
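
A minimal sketch of one possible answer, assuming a hypothetical customer_activity table, a 90-day policy, and a dialect where date arithmetic with integers works:

    -- Churned: no recorded activity in the last 90 days.
    -- Whether reactivation cancels a prior churn event is a
    -- separate policy decision that also has to be encoded.
    SELECT
        customer_id,
        MAX(activity_date) AS last_active_date,
        MAX(activity_date) < CURRENT_DATE - 90 AS is_churned
    FROM customer_activity
    GROUP BY customer_id;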

When teams encode these definitions once, in one place, everyone from product to finance pulls the same number.

Business logic vs. business rules

You'll hear these used interchangeably, but the distinction is useful in practice.

Business rules are the organizational policies: the what. "Revenue is only recognized when payment is received and goods have shipped." Finance leadership decides that. It describes intent.

Business logic is the technical implementation: the how. SUM(order_total) WHERE payment_status = 'paid' AND fulfillment_status = 'shipped' is what turns that intent into something a database can execute.

That split is where collaboration gets cleaner. Business stakeholders can own and validate the rules without touching SQL. Analytics engineers own the implementation. When a business rule changes (moving the churn window from 60 to 90 days, say), a centralized implementation means you update it once, and every report using that metric reflects the change automatically.
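
In dbt, for example, that single update point can be a project variable. This is a sketch with illustrative names, not a prescribed layout:

    -- models/customer_churn.sql
    -- The churn window lives in one dbt variable; changing it in
    -- dbt_project.yml propagates to every report built on this model
    SELECT
        customer_id,
        MAX(activity_date) < CURRENT_DATE - {{ var('churn_window_days', 90) }}
            AS is_churned
    FROM {{ ref('customer_activity') }}
    GROUP BY customer_id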

Why business logic keeps ending up in the wrong place

Without that centralization, you're hunting down the same change in five BI tools and three analyst notebooks. And that's exactly what happens at most organizations: every new tool creates another place where someone has to answer "How do we calculate revenue here?" Each answer drifts slightly from the last, whether the difference comes from rounding, transaction filters, or a different lookback window. Hex's State of Data Teams research puts it plainly: trust is the single biggest barrier to AI adoption, cited nearly twice as often as any other concern. Metric inconsistency is a core reason why, and it's hard to build trust in self-serve when the same question returns different answers depending on who's asking.

Most teams end up with business logic scattered across:

  • BI tools and dashboards, where each platform needs its own metric implementation. Change one definition and you're touching five tools.
  • Analyst notebooks and scripts, where each analyst has written their own version of LTV, CAC, or churn. The dashboards look similar. The numbers don't match.
  • Spreadsheets, where logic gets buried in nested formulas across tabs that nobody fully understands anymore, including the person who built them.
  • People's heads, where metric knowledge lives until those people leave. Centralization is also documentation.

The result is predictable: cross-functional meetings derail into debates about which number is right, and trust in the data team erodes. That's how Finance, Marketing, and the warehouse all report different revenue figures for the same period: each number valid within its own system, none of them reconciled with the others.

Most organizations underestimate what metric drift actually costs them, because the downstream consequences — stalled projects, repeated analysis, broken forecasts — rarely get attributed back to the original definitional disagreement. They just feel like friction.

The AI complication

Scattered logic also creates a specific problem when you bring AI into the workflow. Without centralized metric definitions, an AI assistant infers your formulas from raw schema. It might decide Revenue = SUM(amount) from the transactions table, while your actual formula is SUM(order_total) WHERE status = 'completed' AND refunded = FALSE from the orders table. The output looks plausible. It can still be materially wrong, and if nobody checks the generated SQL, that error moves straight into a dashboard. Centralized definitions give AI agents a stable reference point instead of a best guess.
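
Side by side, using the formulas above:

    -- What an AI assistant might plausibly infer from raw schema:
    SELECT SUM(amount) AS revenue FROM transactions;

    -- What the business actually means:
    SELECT SUM(order_total) AS revenue
    FROM orders
    WHERE status = 'completed' AND refunded = FALSE;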

How teams centralize business logic progressively

Most teams don't get here in one move. The practical path runs from lightweight signals to formal semantic models, and you can stop at whatever level the pain justifies.

Step 1: Document and endorse

Add descriptions to the 10–20 most-queried tables and columns. Tag trusted datasets as "endorsed" or "certified." Write metric definitions in a README, wiki, or data catalog. Label deprecated tables so analysts stop accidentally using them. These signals give both humans and AI agents clearer direction: which tables to trust, which to avoid, and where your team has already reached agreement. No new tooling needed.
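
In a dbt project, those signals can be as light as a schema.yml entry. The endorsed flag here is a team convention surfaced through meta, not a dbt built-in:

    # models/schema.yml -- trust signals, no new tooling required
    version: 2
    models:
      - name: fct_orders
        description: >
          One row per completed order. Source of truth for revenue;
          refunded orders are excluded.
        meta:
          endorsed: true   # team convention, readable by catalogs and agents
      - name: orders_legacy
        description: "Deprecated. Use fct_orders instead."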

Step 2: Move logic into version-controlled transformations

Pull business logic out of BI tools and into tested, version-controlled dbt models. You get consistent transformation logic, code review for business rule changes, and regression tests that catch metric drift before it reaches a dashboard. SQL models help, but they don't fully solve semantic consistency on their own: teams still need a clear way to identify the right models and right joins across a complex schema.
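
A minimal sketch of what that move looks like, reusing the revenue definition from earlier (model and source names are illustrative):

    -- models/fct_revenue.sql
    -- The business rule lives here, under version control and code
    -- review, instead of being re-implemented inside each BI tool
    SELECT
        order_id,
        order_total,
        ordered_at
    FROM {{ ref('stg_orders') }}
    WHERE status = 'completed'
      AND refunded = FALSE

Pair the model with schema tests such as not_null or accepted_values, so a change that breaks the metric fails in CI rather than in a dashboard.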

Step 3: Formalize with semantic models when you're ready

Semantic models define metrics and dimensions in YAML that a query engine translates into SQL. The result is "define once, use everywhere" across dashboards, notebooks, and AI query interfaces. When a finance stakeholder asks for revenue in natural language and an analyst queries it in a collaborative notebook, they get the same number, because both read from the same definition.
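
The YAML typically looks something like this. The sketch loosely follows dbt MetricFlow's shape; exact field names vary by product:

    semantic_models:
      - name: orders
        model: ref('fct_orders')
        defaults:
          agg_time_dimension: ordered_at
        entities:
          - name: order_id
            type: primary
        dimensions:
          - name: ordered_at
            type: time
            type_params:
              time_granularity: day
        measures:
          - name: revenue
            agg: sum
            expr: order_total

    metrics:
      - name: revenue
        label: Revenue
        type: simple
        type_params:
          measure: revenue

Every surface queries revenue by name; none of them re-implements the filter logic.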

If you're running one BI tool for 90%+ of your analytics, a BI-native semantic layer is often enough. If you have multiple tools or are rolling out AI broadly, a unified workspace with a shared semantic layer becomes more important.

That's where Hex fits in. Your team can define metrics directly in Hex or use semantic model sync to bring in existing definitions from dbt MetricFlow, Cube, or Snowflake, whichever fits where you are.

If you want to accelerate that work, Hex’s Semantic Model Agent builds and edits semantic model definitions without you having to start from scratch. And Context Studio closes the loop, surfacing where user questions to agents hit governance gaps and which definitions need attention as usage scales.

Teams that centralize definitions tend to stop having the Monday morning revenue argument. The underlying data didn't change. The agreement did.

Who owns business logic, and why that question matters

Unclear ownership is the primary driver of logic drift. When nobody explicitly owns "revenue," multiple teams redefine it in parallel and the definitions diverge.

The confusion usually comes from mixing up two different things: who owns the metric's meaning, and who owns its implementation. Most organizations need to keep these distinct:

  • Business stakeholders own the meaning: the policy the metric encodes, and whether the definition still reflects it.
  • Analytics engineers own the implementation: the SQL or semantic model that turns the policy into a number.

When this is explicit, business stakeholders validate that the churn definition reflects policy, analytics engineers make sure the SQL is right, and downstream reports update through a controlled process. Fuzzy ownership is where drift starts.

Calendly's approach illustrates this directly. Their go-to-market analytics team was stretched thin: fast on stakeholder requests, but without the bandwidth to build reproducible, trusted data assets. They built a Standardized Metric Library in Hex to document company KPIs, and that shared reference became the tiebreaker for conflicting reports across the organization. The ownership decision mattered as much as the tooling. Leaders gained confidence in the numbers only once someone was clearly responsible for maintaining them.

The hardest part of this work is usually organizational. You have to sit with business teams and agree on the rules before any tool enforces them.

Getting your business logic in order

Business logic is how your organization turns raw data into decisions. Scattered logic produces conflicting numbers, stalls decision-making, and gives AI systems enough ambiguity to guess wrong. Shared definitions give analysts, engineers, and business teams a common foundation for self-serve analytics, reporting, and the kind of analysis that actually moves things.

You don't need to solve this all at once. Start by documenting and endorsing your most important tables. Move logic into version control. Add semantic models when the pain justifies the investment. Make ownership explicit: who defines each metric, who implements it, who keeps it current.

Do that, and the Monday morning conversation stops being about which number is right and starts being about what to do with it.

Request a demo to see how Hex works with your data.

Frequently Asked Questions

How does centralizing business logic affect self-serve analytics adoption?

Self-serve tends to break down when different teams pull different numbers for the same metric. Once that happens, people stop trusting the tool and go back to filing requests with the data team, which defeats the whole point. Consistent business logic is what makes self-serve reliable, because the metric definition holds across dashboards, notebooks, and AI queries. According to Hex's State of Data Teams research, trust is now the top concern for data leaders adopting AI — cited nearly twice as often as any other barrier — and metric inconsistency is a primary driver of that trust gap.

What's the difference between a semantic layer and a metrics layer?

A metrics layer focuses on calculations: formulas, filters, and aggregations. A semantic layer covers more ground: table relationships, valid joins, dimension hierarchies, access controls. Think of the metrics layer as one component within a broader semantic layer. In practice, tools like dbt MetricFlow, Cube, and Hex's Semantic Modeling cover both metric definitions and broader semantic context, though the scope varies by product. Teams running multiple BI tools or deploying AI broadly typically need the fuller semantic layer sooner than they'd expect.

How do you get business stakeholders to agree on metric definitions?

Start with a metric they're already arguing about. Revenue, churn, and active users are the usual suspects because the disagreement is already visible at the executive level. Frame the conversation around a specific conflict (Finance and Sales reporting different revenue figures, for example), rather than making the case for data governance in the abstract. Keep the scope tight: agree on 5–10 critical metrics, document the definitions in plain language, and have the analytics engineering team implement them in a single place. Once leaders see a number stop moving between reports, they're usually willing to do it again for the next batch.

Get "The Data Leader’s Guide to Agentic Analytics"  — a practical roadmap for understanding and implementing AI to accelerate your data team.