The Data Leaders Digest (October 2025)

Published on October 31, 2025

This Month’s Focus: Killing What Doesn’t Work Before Building What Does

October brought a shift in our conversations. After months exploring ETL costs, migration strategy, and transformation tooling, we turned our attention to the output layer, where data teams often burn the most credibility.

Season 3 of the Data Matas podcast launched with three conversations that challenge conventional wisdom: Sami Rahman explaining how Hypebeast reached 97% AI adoption by deliberately withholding access, Ollie Hughes arguing that BI delivers the worst ROI in the modern stack, and Phil Thirlwell showing what data leaders should kill before building another dashboard.

Our LinkedIn Live session complemented these insights by asking whether DuckDB genuinely solves warehouse cost problems or just adds another layer to an already complex stack.

What emerged across all four conversations was a common thread: the problem isn’t a lack of tools or speed; it’s a lack of clarity, focus, and trust.

Teams achieving real impact aren’t building more. They’re building less, but building it better.

Watch the full LinkedIn Live session featuring Kyle Cheung, Bill Wallis, and Aaron Phethean discussing DuckDB’s role in cutting warehouse costs

When Adding Another Tool Isn’t The Answer

Our October LinkedIn Live tackled a question many data leaders are asking: “Do we really need to run everything in Snowflake?”

Three practitioners shared their perspectives:

  • Kyle Cheung from Greybeam explained how teams use DuckDB to eliminate development and testing costs from warehouse bills
  • Bill Wallis from Tasman Analytics brought community insights on where DuckDB delivers genuine productivity gains
  • Aaron Phethean from Matatika connected these tactical wins to broader strategic questions about vendor lock-in and workload placement

The conversation revealed something important: teams achieving 30-70% cost reductions through DuckDB aren’t necessarily more sophisticated. They’ve recognised that not every workload belongs in an expensive cloud warehouse.

Four Signals You’re Burning Warehouse Credits Unnecessarily

1. Development Workflows Hit Production Compute

Bill Wallis described his daily experience: “The main way I use DuckDB is to enable my developer workflow. Where the data isn’t sensitive, dump it locally into a parquet file, do all my analytics and development locally with DuckDB.”

The productivity gains extend beyond cost. Local execution means instant feedback loops—queries that took 30 seconds in the warehouse run in under 2 seconds locally.
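
To make that concrete, here’s a minimal sketch of the workflow Bill describes, assuming a non-sensitive table already exported from the warehouse. The file, table, and column names are illustrative, not his actual setup.

```python
# Minimal sketch of a local DuckDB development loop.
# Assumes a non-sensitive extract; all names are illustrative.
import duckdb

con = duckdb.connect()  # in-memory database; nothing leaves your machine

# One-off export step (run once against a warehouse extract or sample):
# con.sql("COPY (SELECT * FROM raw_orders) TO 'orders.parquet' (FORMAT PARQUET)")

# Day-to-day development: query the parquet file directly.
# No warehouse credits consumed, and feedback is near-instant.
con.sql("""
    SELECT customer_id,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_spend
    FROM read_parquet('orders.parquet')
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""").show()
```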

Watch for these signs:

  • Engineers waiting for warehouse scheduling during development
  • CI/CD pipelines triggering full warehouse refreshes for minor changes
  • Data scientists burning credits exploring datasets that could run locally

Do this week: Track your development-related warehouse spend separately from production workloads. If it’s more than 20% of your total bill, you have optimisation opportunities.

2. You’re Treating DuckDB as Warehouse Competition, Not Complement

Kyle Cheung emphasised understanding design constraints: “It’s incredible for what it does, but it’s designed for a single node. That’s where the limits appear.”

Teams achieving the best results use DuckDB where it excels (local analytics, validation, and caching) whilst keeping governed data and large-scale processing in cloud warehouses.

Look for these problems:

  • Attempting to run multi-user production workloads in DuckDB
  • Missing the governance and audit trails that warehouses provide
  • Forcing DuckDB into use cases where scale matters more than speed

Do this week: Map your workloads by governance requirements and scale needs. Identify which truly need warehouse capabilities versus which could run locally.

3. No Visibility Into Unit Costs

Aaron Phethean highlighted the missing link: “We don’t need to rip out good systems. We just need to give teams the flexibility to run smarter.”

Finance sees total warehouse bills without understanding which workloads generate business value versus which burn credits unnecessarily.

Measure these ratios:

  • Warehouse credit consumption by workflow type (development, production, ad-hoc)
  • Cost per pipeline run comparing warehouse versus local execution
  • Time-to-validation improvements from faster feedback loops

Do this week: Implement query tagging to track warehouse spend by team, project, or use case. You can’t optimise what you can’t measure.
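
If you’re on Snowflake, tagging can be as simple as setting a session parameter before each workload and aggregating the account-usage view later. The sketch below uses the official Python connector; the connection details and the team:project:use-case tag convention are our assumptions, not a standard.

```python
# Sketch of query tagging in Snowflake via the official Python connector.
# Credentials and the tag convention are illustrative assumptions.
import snowflake.connector

con = snowflake.connector.connect(
    account="your_account",  # hypothetical connection details
    user="your_user",
    password="...",
)
cur = con.cursor()

# Tag every query in this session so spend can be attributed later.
cur.execute("ALTER SESSION SET QUERY_TAG = 'data-eng:orders-pipeline:development'")
cur.execute("SELECT CURRENT_DATE()")  # stand-in for your actual workload

# Later: attribute warehouse time by tag. Note that ACCOUNT_USAGE views
# lag by a few hours and require elevated privileges.
cur.execute("""
    SELECT query_tag,
           COUNT(*)                       AS queries,
           SUM(total_elapsed_time) / 1000 AS total_seconds
    FROM snowflake.account_usage.query_history
    WHERE start_time > DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY query_tag
    ORDER BY total_seconds DESC
""")
for row in cur.fetchall():
    print(row)
```

Elapsed time is a proxy for credits rather than a direct cost figure, but it’s usually enough to show which tags dominate the bill.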

4. You’re Adding Tools Without Removing Complexity

The panel agreed: hybrid execution only works when environments behave consistently.

Kyle noted: “You need your local environment to behave like production, or you’re just creating different problems.”

Implementation steps:

  • Integrate DuckDB with dbt or SQLMesh to maintain identical transformation logic
  • Use validation tools to compare results before committing to architectural changes (see the sketch after this list)
  • Establish clear promotion criteria: when local validation passes, automated deployment pushes to the warehouse
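
As referenced above, here’s a minimal sketch of that validation step, assuming a Snowflake warehouse and a local parquet extract. The query, file name, and connection details are illustrative.

```python
# Sketch: compare a local DuckDB run against the warehouse before
# promoting a change. All names and credentials are illustrative.
import duckdb
import pandas as pd
import snowflake.connector

QUERY = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM {source}
    GROUP BY customer_id
"""

# Local run against the parquet extract.
local_df = duckdb.sql(QUERY.format(source="read_parquet('orders.parquet')")).df()

# Warehouse run against the governed table.
con = snowflake.connector.connect(account="your_account", user="your_user", password="...")
warehouse_df = pd.read_sql(QUERY.format(source="analytics.orders"), con)

# Normalise column case and row order, then compare.
warehouse_df.columns = [c.lower() for c in warehouse_df.columns]
local_df = local_df.sort_values("customer_id").reset_index(drop=True)
warehouse_df = warehouse_df.sort_values("customer_id").reset_index(drop=True)

pd.testing.assert_frame_equal(local_df, warehouse_df, check_dtype=False)
print("Local and warehouse results match: safe to promote.")
```

If the frames differ, assert_frame_equal raises with the mismatching rows, which is exactly the signal you want before automated deployment pushes anything to the warehouse.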

Do this week: Document your promotion workflow. If it involves manual steps or unclear handoffs, you’re creating technical debt.

Season 3 Launch: Three Conversations That Challenge Everything

The Data Matas podcast returned this month with conversations that fundamentally question how data teams operate. Rather than focusing on tools or platforms, these discussions explore behaviour, culture, and what actually drives trust in data.

Episode 1: Want 97% AI Adoption? Start By Saying “Not Yet”

Sami Rahman, Director of Data & AI at Hypebeast, shared how his psychology background shaped an unconventional adoption strategy: deliberately withholding AI access for 10 weeks whilst building curiosity through teasers and demos.

The result? 97% adoption within three weeks of launch.

Sami explained the core insight: “The reason people are fearful around AI isn’t the tech. It’s because they don’t trust governments or institutions to look after them if jobs disappear.”

By framing AI as a force multiplier rather than a replacement (automating boring tasks like trend scanning and system updates whilst keeping creative decisions human), Hypebeast turned potential resistance into demand.

Key implementation lessons:

  • Focus on “unsexy” use cases that free up time for valuable work
  • Create curiosity gaps rather than forcing adoption
  • Treat AI agents as disposable products with clear lifecycle benchmarks
  • Delete underperforming agents without sentimentality

What Hypebeast gained: Staff spending less time on drudgery, faster decision-making through consistent information packaging, and a lean toolkit that only includes high-value agents.

Read the full article | Listen to the episode

Episode 2: BI Has the Worst ROI in the Modern Data Stack

Ollie Hughes, CEO of Count, delivered an uncomfortable truth: whilst every other part of the data stack has evolved, BI tools remain stuck in 2005 paradigms.

“You read a dashboard, you discuss it somewhere else, and you try to work out what’s going on. That paradigm is still the same as 2005.”

The conversation explored why more dashboards don’t equal better decisions, how data teams get trapped in “service desk” mode, and why accuracy alone doesn’t build trust.

Ollie’s prescription: stop treating BI as a broadcast medium and start enabling collaboration where analysts and stakeholders work through problems together.

Core insights:

  • The service trap caps your value by limiting impact to the quality of questions being asked
  • Trust comes from methodology transparency, not just being right
  • Prioritisation matters more than speed: solving the most important problem beats answering every request
  • Operational clarity (showing how the business fits together) delivers more value than dashboard proliferation

What successful teams do differently: They filter requests ruthlessly, co-create analysis with stakeholders, and make “showing their working” standard practice.

Read the full article | Listen to the episode

Episode 3: Three Things Every Data Leader Should Kill Before Building Another Dashboard

Phil Thirlwell, former data leader at Worldpay and FIS, brought nearly a decade of experience wrestling with dashboard sprawl, KPI overload, and the “can you just…” request culture.

His perspective is refreshingly direct: at one point he inherited 600 Power BI dashboards, most outdated, some contradictory, none fully trusted.

“It feels good to be answering those questions because it feels like you’re giving immediate value, right? But you’re not really producing scalable solutions that are maybe getting to the heart of the problem.”

Phil’s prescription: kill dashboard sprawl, kill service-desk mentality, and kill KPI overload, in that order.

Implementation framework:

  • Audit existing dashboards and archive anything with zero usage in 90 days
  • Replace ad-hoc request queues with structured intake linked to business outcomes
  • Limit executive dashboards to fewer than 10 KPIs
  • Co-develop metrics with the people who generate the data to build ownership
  • Simplify before you automate: AI multiplies whatever’s already there

The real-world impact: Phil’s team at FIS cut hundreds of reports into a handful of trusted products. Engagement jumped, decision-making accelerated, and confusion dropped.

Read the full article | Listen to the episode

The Pattern Across All Four Conversations

Whether discussing DuckDB adoption, AI rollout strategy, BI effectiveness, or dashboard governance, the same lesson emerged: subtraction often delivers more value than addition.

Teams achieving sustainable results aren’t chasing the latest tools. They’re:

  • Running workloads where they belong, not where vendors want them
  • Creating demand through scarcity rather than forced adoption
  • Focusing engineering capacity on problems that matter
  • Killing what doesn’t work before building more

The data leaders gaining credibility with their organisations share a common approach: they measure impact by clarity and decisions enabled, not dashboards shipped or credits consumed.

Your Next Step

If your team is hitting limits (warehouse costs that scale with volume rather than value, AI adoption that’s stalled despite executive pressure, or dashboard sprawl that’s eroding trust), the path forward starts with honest assessment.

Are your costs scaling with business value? Is your team spending more time on maintenance than strategy? Are stakeholders asking for more reports because they don’t trust the ones they have?

We’ll help you identify where optimisation reaches its ceiling and map the most strategic path forward. If changes make sense, we’ll show you how to validate alternatives without traditional risks.

Book Your ETL Escape Audit

You’ll get concrete benchmarks of your current setup and clear visibility into improvements you can present to leadership with confidence.

Best,
Aaron
