 
October brought a shift in our conversations. After months exploring ETL costs, migration strategy, and transformation tooling, we turned our attention to the output layer, where data teams often burn the most credibility.
Season 3 of the Data Matas podcast launched with three conversations that challenge conventional wisdom: Sami Rahman explaining how Hypebeast reached 97% AI adoption by deliberately withholding access, Ollie Hughes arguing that BI delivers the worst ROI in the modern stack, and Phil Thirlwell showing what data leaders should kill before building another dashboard.
Our LinkedIn Live session complemented these insights by asking whether DuckDB genuinely solves warehouse cost problems or just adds another layer to an already complex stack.
What emerged across all four conversations was a common thread: the problem isn’t a lack of tools or speed, it’s a lack of clarity, focus, and trust.
Teams achieving real impact aren’t building more. They’re building less, but building it better.
Watch the full LinkedIn Live session featuring Kyle Cheung, Bill Wallis, and Aaron Phethean discussing DuckDB’s role in cutting warehouse costs.
Our October LinkedIn Live tackled a question many data leaders are asking: “Do we really need to run everything in Snowflake?”
Three practitioners shared their perspectives:
The conversation revealed something important: teams achieving 30-70% cost reductions through DuckDB aren’t necessarily more sophisticated. They’ve recognised that not every workload belongs in an expensive cloud warehouse.
Bill Wallis described his daily experience: “The main way I use DuckDB is to enable my developer workflow. Where the data isn’t sensitive, dump it locally into a parquet file, do all my analytics and development locally with DuckDB.”
The productivity gains extend beyond cost. Local execution means instant feedback loops—queries that took 30 seconds in the warehouse run in under 2 seconds locally.
Watch for these signs:
Do this week: Track your development-related warehouse spend separately from production workloads. If it’s more than 20% of your total bill, you have optimisation opportunities.
Kyle Cheung emphasised understanding design constraints: “It’s incredible for what it does, but it’s designed for a single node. That’s where the limits appear.”
Teams achieving the best results use DuckDB where it excels (local analytics, validation, and caching) whilst keeping governed data and large-scale processing in cloud warehouses.
Look for these problems:
Do this week: Map your workloads by governance requirements and scale needs. Identify which truly need warehouse capabilities versus which could run locally.
Aaron Phethean highlighted the missing link: “We don’t need to rip out good systems. We just need to give teams the flexibility to run smarter.”
Finance sees total warehouse bills without understanding which workloads generate business value versus which burn credits unnecessarily.
Measure these ratios:
Do this week: Implement query tagging to track warehouse spend by team, project, or use case. You can’t optimise what you can’t measure.
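In Snowflake, for example, query tagging works by setting `QUERY_TAG` on the session, after which spend can be grouped by tag in query history. A small sketch of building a structured tag, assuming a JSON tag convention of our own invention (the keys are hypothetical; adapt them to however your organisation slices spend):

```python
import json

def query_tag_statement(team: str, project: str, use_case: str) -> str:
    """Build an ALTER SESSION statement that sets a structured query tag.

    A JSON tag keeps the attribution machine-readable, so spend can later
    be grouped by team, project, or use case from query history.
    """
    tag = json.dumps({"team": team, "project": project, "use_case": use_case})
    # Escape single quotes so the tag is a valid SQL string literal.
    escaped = tag.replace("'", "''")
    return f"ALTER SESSION SET QUERY_TAG = '{escaped}'"

stmt = query_tag_statement("analytics", "churn-model", "development")
```

Run the returned statement at the start of each session (or bake it into your connection setup) and every subsequent query carries the attribution.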
The panel agreed: hybrid execution only works when environments behave consistently.
Kyle noted: “You need your local environment to behave like production, or you’re just creating different problems.”
Implementation steps:
Do this week: Document your promotion workflow. If it involves manual steps or unclear handoffs, you’re creating technical debt.
The Data Matas podcast returned this month with conversations that fundamentally question how data teams operate. Rather than focusing on tools or platforms, these discussions explore behaviour, culture, and what actually drives trust in data.
Sami Rahman, Director of Data & AI at Hypebeast, shared how his psychology background shaped an unconventional adoption strategy: deliberately withholding AI access for 10 weeks whilst building curiosity through teasers and demos.
The result? 97% adoption within three weeks of launch.
Sami explained the core insight: “The reason people are fearful around AI isn’t the tech. It’s because they don’t trust governments or institutions to look after them if jobs disappear.”
By framing AI as a force multiplier rather than replacement, automating boring tasks like trend scanning and system updates whilst keeping creative decisions human, Hypebeast turned potential resistance into demand.
Key implementation lessons:
What Hypebeast gained: Staff spending less time on drudgery, faster decision-making through consistent information packaging, and a lean toolkit that only includes high-value agents.
Read the full article | Listen to the episode
Ollie Hughes, CEO of Count, delivered an uncomfortable truth: whilst every other part of the data stack has evolved, BI tools remain stuck in 2005 paradigms.
“You read a dashboard, you discuss it somewhere else, and you try to work out what’s going on. That paradigm is still the same as 2005.”
The conversation explored why more dashboards don’t equal better decisions, how data teams get trapped in “service desk” mode, and why accuracy alone doesn’t build trust.
Ollie’s prescription: stop treating BI as a broadcast medium and start enabling collaboration where analysts and stakeholders work through problems together.
Core insights:
What successful teams do differently: They filter requests ruthlessly, co-create analysis with stakeholders, and make “showing their working” standard practice.
Read the full article | Listen to the episode
Phil Thirlwell, former data leader at Worldpay and FIS, brought nearly a decade of experience wrestling with dashboard sprawl, KPI overload, and the “can you just…” request culture.
His perspective is refreshingly direct: at one point he inherited 600 Power BI dashboards, most outdated, some contradictory, none fully trusted.
“It feels good to be answering those questions because it feels like you’re giving immediate value, right? But you’re not really producing scalable solutions that are maybe getting to the heart of the problem.”
Phil’s prescription: kill dashboard sprawl, kill service-desk mentality, and kill KPI overload, in that order.
Implementation framework:
The real-world impact: Phil’s team at FIS cut hundreds of reports into a handful of trusted products. Engagement jumped, decision-making accelerated, and confusion dropped.
Read the full article | Listen to the episode
Whether discussing DuckDB adoption, AI rollout strategy, BI effectiveness, or dashboard governance, the same lesson emerged: subtraction often delivers more value than addition.
Teams achieving sustainable results aren’t chasing the latest tools. They’re:
The data leaders gaining credibility with their organisations share a common approach: they measure impact by clarity and decisions enabled, not dashboards shipped or credits consumed.
If your team is hitting limits, whether that’s warehouse costs scaling with volume rather than value, AI adoption that has stalled despite executive pressure, or dashboard sprawl that is eroding trust, the path forward starts with honest assessment.
Are your costs scaling with business value? Is your team spending more time on maintenance than strategy? Are stakeholders asking for more reports because they don’t trust the ones they have?
We’ll help you identify where optimisation reaches its ceiling and map the most strategic path forward. If changes make sense, we’ll show you how to validate alternatives without traditional risks.
You’ll get concrete benchmarks of your current setup and clear visibility into improvements you can present to leadership with confidence.
Best,
Aaron
You’re receiving this because you’re part of our data community.
Stay up to date with the latest news and insights for data leaders.