
How to Do Even More With Mixpanel Data

Mixpanel gives you brilliant product analytics. Funnels, retention, user journeys. You can see exactly what users do in your application. But as teams mature, they start asking questions Mixpanel can't answer on its own. Which users generate the most revenue? Which marketing campaigns drive engaged customers? How does feature usage correlate with support tickets or churn risk? The most valuable insights come from connecting Mixpanel data to the rest of your business. That's what becomes possible when you extend Mixpanel with a warehouse-first approach.

How to Scale Mixpanel Data Efficiently Without Spiralling Costs

Your product's growing. More users, more events, more insights flowing through Mixpanel. That's exactly what you want. The problem? As your Mixpanel event volume increases, your data infrastructure costs often grow faster than your revenue. High-volume, append-only event data breaks traditional ETL pricing models. Every duplicate event costs money. Every unchanged property gets billed. Growth becomes a financial penalty. There's a smarter way to handle event data at scale.

Three Things Every Data Leader Should Kill Before Building Another Dashboard

Most organisations are drowning in dashboards that no one trusts. In this Data Matas episode, former Worldpay and FIS data leader Phil Thirlwell explains why the key to better decisions isn’t building more; it’s stopping first. He breaks down how dashboard sprawl, KPI overload, and service-desk habits create chaos, and how treating dashboards like products can rebuild trust. Phil shares practical ways to simplify metrics, prioritise outcomes, and run data teams with purpose. The takeaway: fewer dashboards, clearer decisions, stronger alignment between data teams and the business.

How to kill dashboard sprawl, service-desk habits, and KPI overload to rebuild trust in data.

BI Has the Worst ROI in the Modern Data Stack – How to Escape the Service Trap and Drive Real Decisions

Business intelligence is broken. Too many dashboards, not enough decisions. Learn from Count CEO Ollie Hughes how to escape the BI service trap, rebuild trust, and drive real impact through operational clarity and prioritisation.

Introduction

Business intelligence (BI) was meant to be the jewel in the crown of the data stack, the place where numbers become insights and insights become decisions. Yet if you ask most data leaders today, BI delivers the worst ROI in the stack. Teams are drowning in dashboards, executives don’t trust the numbers, and tools that haven’t evolved in 20 years are still eating up budgets.

Ollie Hughes, CEO of Count, argues that the industry has got it wrong. More dashboards aren’t the answer. AI won’t fix reporting chaos. And data teams need to stop behaving like service desks.

This article is based on Ollie’s appearance on the Data Matas podcast and translates his perspective into actionable lessons you can apply right now.

What you’ll learn:

  • Why BI tools are stuck in the past and what that means for ROI
  • How to spot the “service trap” in your team before it kills your value
  • Why accuracy alone doesn’t build trust in data
  • How ruthless prioritisation changes the credibility of a data team
  • Practical steps to create operational clarity instead of dashboard noise

Meet Ollie Hughes

Ollie Hughes is the co-founder and CEO of Count, a canvas-based BI platform built around collaboration, not dashboards. His mission is to reframe how organisations use data: not as a firehose of metrics, but as a tool for genuine decision-making.

He has spent years inside the industry, from building data teams to leading a new wave of BI innovation, and has become one of its sharpest critics. His critique isn’t abstract. It’s grounded in the frustrations most practitioners feel every day: wasted reports, confused stakeholders, and tools that don’t fit how people actually work.

“BI tools are still the same ones we were using 15, 20 years ago. They’re expensive, and all you’re really paying for is to see sales numbers sent around the company. It’s not driving decisions.”

That willingness to say the quiet part out loud, and back it with solutions, makes Ollie an important voice for data leaders rethinking their approach.


Why This Challenge Matters Now

Cloud data platforms, pipelines, and governance tools have all seen dramatic innovation. BI hasn’t. It’s still read-only dashboards that require endless interpretation elsewhere. Meanwhile, every other tool we use, from marketing platforms to document editors, has become collaborative, flexible, and iterative.

The result? Data teams are under pressure to deliver value but stuck with outdated paradigms. Many end up in what Ollie calls the “report factory”: churning out dashboards that confuse more than they clarify.

A common misconception is that more speed solves the problem. Leaders throw AI at reporting in the hope that faster answers mean better decisions. In reality, Ollie argues, the bottleneck isn’t producing numbers; it’s helping humans interpret them and agree on what action to take.

This is why now is the moment to rethink BI. As companies adopt AI at pace, trust, clarity, and prioritisation become the real levers of success.


How to Escape the Service Trap and Drive Real Decisions

1. Recognise That BI Hasn’t Evolved

Most of the modern stack has innovated; BI hasn’t. Dashboards may be prettier, and integrations with tools like dbt smoother, but the core interaction hasn’t changed.

“You read a dashboard, you discuss it somewhere else, and you try to work out what’s going on. That paradigm is still the same as 2005.”

Implementation guidance:

  • What to do first: Audit your BI output. How many dashboards exist? How many are actively used?
  • Tools/structures: Usage analytics inside your BI tool can show adoption and engagement.
  • Watch-outs: Don’t assume integration features equal innovation; the form factor is what matters.
  • Expected benefit: Clear view of which reports genuinely support decision-making.
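As a rough illustration of that first step, a dashboard audit can start as a few lines of scripting. The sketch below assumes you can export each dashboard's last-viewed date from your BI tool's usage analytics; the field names and the 90-day staleness threshold are illustrative, not from any specific tool.

```python
from datetime import date, timedelta

# Hypothetical export from your BI tool's usage analytics: one record
# per dashboard with the date it was last viewed. Field names and the
# 90-day threshold are illustrative, not from any specific tool.
dashboards = [
    {"name": "Weekly Sales", "last_viewed": date(2025, 10, 1)},
    {"name": "Churn Deep Dive", "last_viewed": date(2024, 11, 12)},
    {"name": "Exec KPIs", "last_viewed": date(2025, 9, 28)},
]

def stale_dashboards(dashboards, today, max_age_days=90):
    """Return names of dashboards with no views in the last max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return [d["name"] for d in dashboards if d["last_viewed"] < cutoff]

print(stale_dashboards(dashboards, today=date(2025, 10, 15)))
```

Even a list this crude forces the useful conversation: who, if anyone, would miss each stale dashboard if it disappeared?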

2. Avoid the Service Trap

Many data teams confuse activity with impact. They answer every request, build dashboards for every stakeholder, and believe they’re adding value. In reality, they’re generating information overload.

“Just doing what the business asks of you floods the company with chaos. If the business is asking stupid questions, the data team is going to be producing stupid answers.”

Implementation guidance:

  • What to do first: Count the number of dashboards per employee. If it’s high, you’re in the trap.
  • Tools/structures: Introduce a request filter or impact sizing model.
  • Watch-outs: Saying “yes” to everything positions your team as a service desk, not a strategic partner.
  • Expected benefit: Fewer but higher-value outputs, stronger alignment with the business.

3. Build Trust Beyond Accuracy

Most data leaders obsess over accuracy. But Ollie warns: accuracy alone doesn’t create trust.

Imagine telling the CEO: “Our regression says move all marketing spend from Channel A to Channel B.” Even if it’s correct, they won’t act on it unless they understand how you got there.

“Trust comes from methodology, transparency, and track record, not just from being right.”

Implementation guidance:

  • What to do first: Make “show your working” a standard practice for every analysis.
  • Tools/structures: Adopt lightweight documentation or visual lineage tools to explain methodology.
  • Watch-outs: Overcomplicating explanations can backfire; clarity beats detail.
  • Expected benefit: Executives who feel confident enough in the process to act on insights.

4. Prioritise What Really Matters

Ollie’s strongest advice: not all requests are equal. Some will change the trajectory of the business; most won’t.

“If you’ve solved the most important problem the business has today and the CEO recognises that, you’ll be remembered for it. That’s what matters.”

Implementation guidance:

  • What to do first: Track where your team’s time goes: maintenance vs problem-solving.
  • Tools/structures: Create a simple payroll allocation dashboard – % time on top 3 business priorities.
  • Watch-outs: Saying “no” is hard, but without it your team’s value will always be capped.
  • Expected benefit: Senior leaders see the team as solving the biggest problems, not just keeping the lights on.
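The allocation itself is simple arithmetic. A minimal sketch of the "% time on top priorities" figure, with hypothetical initiative names and hours:

```python
# Hypothetical time-tracking export: hours logged per initiative over a
# month, plus the priorities agreed with the business. All names and
# numbers are invented for illustration.
hours = {
    "Pricing model revamp": 120,
    "Churn early warning": 80,
    "Ad-hoc report requests": 150,
    "Dashboard maintenance": 90,
}
top_priorities = {"Pricing model revamp", "Churn early warning", "Self-serve rollout"}

priority_hours = sum(h for name, h in hours.items() if name in top_priorities)
share = priority_hours / sum(hours.values())
print(f"{share:.0%} of team time went to top priorities")
```

Note that one agreed priority has no hours logged against it at all, which is exactly the kind of gap this view is meant to expose.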

5. Create Operational Clarity, Not More Dashboards

In a world where every SaaS product spits out metrics, the role of the data team is to simplify, not add noise. Ollie calls this “operational clarity.”

“The job is to show the forest, not just the branches. Visualise the business, make it feel simple, align everyone on what matters.”

Implementation guidance:

  • What to do first: Build a single-page growth model that shows how key metrics relate.
  • Tools/structures: Collaborative BI tools (like canvas-based environments) that allow business and data teams to work together.
  • Watch-outs: Don’t replicate existing reports; focus on connections, not duplication.
  • Expected benefit: A business that understands itself better, asks better questions, and makes clearer decisions.
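To make the idea concrete, here is a toy single-page growth model in a few lines of Python. The metrics and numbers are invented; the point is only that each headline number is explicitly derived from the ones above it, so everyone can see how the metrics relate.

```python
# A toy single-page growth model: a few input metrics and the explicit
# relationships between them. Numbers and structure are illustrative.
inputs = {
    "visitors": 50_000,
    "signup_rate": 0.04,     # visitors -> signups
    "activation_rate": 0.5,  # signups -> active users
    "arpu": 30.0,            # monthly revenue per active user
}

signups = inputs["visitors"] * inputs["signup_rate"]
active_users = signups * inputs["activation_rate"]
mrr = active_users * inputs["arpu"]

print(f"signups={signups:.0f} active={active_users:.0f} mrr={mrr:,.0f}")
```

A model like this, however simple, shows the forest: change one input and the downstream effect on revenue is immediately visible.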

Putting it All Together

The path forward isn’t more dashboards or faster charts. It’s about shifting from outputs to outcomes.

A realistic sequence:

  1. Audit existing dashboards and usage
  2. Filter requests through impact sizing
  3. Make transparency part of delivery
  4. Redirect at least 50% of team capacity to the biggest problems
  5. Replace scattered reporting with a unifying model of the business

Signals of success: fewer but more impactful outputs, leaders asking sharper questions, and a measurable increase in trust in data-driven decisions.


Real-World Impact

Count’s customers have already applied this model. By shifting away from dashboard churn, they’ve reduced noise, improved decision-making speed, and redefined how business and data teams work together.

The result isn’t just cost savings. It’s a cultural shift: data teams that no longer see themselves as report writers, but as strategic partners shaping the direction of the business.


Your Next Move

The lesson from Ollie Hughes is simple: stop measuring success by the number of dashboards you ship. Measure it by the clarity and decisions you enable.

Focus your team’s time on the most important problems, show your working, and embrace operational clarity. That’s how data leaders can turn BI from the lowest ROI into one of the highest.

🎙️ Listen to the full conversation with Ollie Hughes on the Data Matas podcast for more actionable insights.


Dig Deeper

 

How Hypebeast Reached 97% AI Adoption Without Fear or Layoffs

At Hypebeast, 97% of staff now use AI daily, not out of fear but by choice. Director of Data & AI, Sami Rahman, reframed AI as a creative ally, not a threat. By focusing on practical wins, like speeding up research and cutting drudgery, he built trust and curiosity. Instead of pushing tools, he created demand through scarcity, measured impact rigorously, and deleted underused agents without sentiment. The result: adoption that stuck, creativity that flourished, and teams that saw AI as empowerment, not replacement. A playbook for leaders who want AI adoption to last, built on trust, not hype.

AI adoption is at the top of every data leader’s agenda. Yet most attempts stall. Leaders flood their teams with new tools, staff get overwhelmed, and adoption drops. In some cases, AI is seen as a threat rather than an enabler.

At Hypebeast, it’s different. 97% of staff now use AI agents in their daily work. Not because they were forced, but because they wanted to.

In this episode of Data Matas, I spoke with Sami Rahman, Director of Data & AI at Hypebeast, about how he made AI adoption stick. His story offers grounded lessons for any data leader trying to balance hype with reality.


Meet Sami Rahman

Sami leads Data & AI at Hypebeast, the global fashion and lifestyle brand. His career spans psychology, counter-terrorism, and data science — giving him a rare perspective on how people, systems, and trust interact.

That unusual career path means he doesn’t just see AI as technology. He sees it as part of a wider human system — where behaviour, incentives, and culture matter just as much as code.

“We’re a creative company. We don’t want to replace journalists or designers. But we can use AI as a force multiplier — speeding up research, consolidating information, and helping people make decisions faster.”

That human-first but pragmatic outlook shaped every decision in Hypebeast’s adoption journey.


Why Most AI Adoption Efforts Fail

AI is no longer optional. Boards and executives expect adoption. But many teams fail to deliver value. Why?

Sami points to fear — not of the technology, but of being abandoned.

“The reason people are fearful around AI isn’t the tech. It’s because they don’t trust governments or institutions to look after them if jobs disappear.”

This distinction matters. If AI is framed as replacement, staff feel threatened. If it’s framed as empowerment, they engage.

Psychology backs this up. People resist change when they fear loss of control or status. The solution isn’t just better tech — it’s better framing. Data leaders need to talk about AI as a tool that supports their teams’ value, not one that makes them redundant.


Lessons from Hypebeast’s Adoption Journey

1. Frame AI as a Force Multiplier

At Hypebeast, AI is not a substitute for creativity. It’s an assistant. Tasks like research, summarisation, and trend monitoring were made faster and easier, while final judgement stayed human.

“We’re not trying to replace jobs. We might automate manual tasks, but we won’t remove the human side.”

This framing reassured staff that their value remained central — and made AI a welcome tool, not a competitor.

Implementation guidance:

  • What to do first: Communicate clearly that AI enhances, not replaces.
  • Tools: Introduce AI where speed and consolidation matter (e.g. research, summarisation).
  • Watch-outs: Don’t oversell — focus on real, modest gains.
  • Benefit: Higher trust and willingness to experiment.

2. Focus on “Unsexy” Use Cases

Flashy AI demos rarely translate into real value. Sami leaned into the unglamorous but high-impact tasks: scanning social feeds, packaging intelligence, automating logistics and finance.

“We leaned into use cases that aren’t super sexy but free up time.”

By cutting drudgery, staff had more time for meaningful creative work.

Implementation guidance:

  • What to do first: Audit manual processes that drain time.
  • Tools: Simple AI agents for monitoring, reporting, logistics.
  • Watch-outs: Avoid over-investing in “showcase” projects.
  • Benefit: Faster results, more trust in AI.

3. Create a Curiosity Gap

Perhaps the boldest move was delaying access. For 10 weeks, Sami drip-fed teasers: short demos showing what AI could do.

“It was 10 weeks before we gave anyone access — on purpose… By the launch, adoption went from 3% to 95% in three weeks.”

Scarcity created FOMO. Instead of pushing adoption, Hypebeast created pull.

This contrasts with most change management programmes, which over-prepare staff with slide decks, training, and handholding. Sami flipped the playbook — and in his industry, it worked.

He’s quick to note it wouldn’t fit everywhere. In banking or pharma, where regulation and compliance demand rigour, leaders may need a heavier hand. But in fast-moving creative industries, curiosity was the lever.

Implementation guidance:

  • What to do first: Hold back, and drip-feed examples.
  • Tools: Short demo videos or snippets to spark curiosity.
  • Watch-outs: Don’t launch before demand builds.
  • Benefit: Rapid, voluntary adoption.

4. Kill Zombie Agents Without Sentimentality

Hypebeast set strict benchmarks: daily or weekly agents had to hit 80% usage. If they weren’t used, they were deleted.

“If an agent isn’t used, we delete it. No sentimentality. It’s not failure — it’s iteration.”

That unsentimental approach kept adoption high and avoided wasted energy.

This is another overlooked lesson. Too many teams keep “zombie tools” alive because someone invested time or money. Sami’s product mindset — measure, test, delete — freed his team to focus only on what added value.

Implementation guidance:

  • What to do first: Define usage thresholds per agent type.
  • Tools: Usage dashboards, adoption metrics.
  • Watch-outs: Don’t cling to underused tools.
  • Benefit: Consistently high adoption, leaner portfolio.
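That benchmark rule is easy to automate. A minimal sketch, with invented agent names and usage counts, of flagging agents that fall below the 80% usage threshold described above:

```python
def flag_for_deletion(agents, threshold=0.80):
    """Flag agents whose usage rate falls below the benchmark.

    `agents` maps agent name -> (days_used, possible_days). The 80%
    default mirrors the benchmark described in the article; the data
    below is invented for illustration.
    """
    return sorted(
        name for name, (used, possible) in agents.items()
        if used / possible < threshold
    )

# Hypothetical four weeks of usage logs (20 working days) for daily agents.
agents = {
    "trend-monitor": (19, 20),   # 95% -> keep
    "logistics-eta": (17, 20),   # 85% -> keep
    "press-summary": (9, 20),    # 45% -> candidate for deletion
}
print(flag_for_deletion(agents))
```

The value isn't the code; it's that the threshold is written down, so deletion is a routine check rather than a debate.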

Measuring Adoption: Beyond Usage

Usage was Sami’s primary metric, but adoption measurement can and should go deeper. Data leaders can track:

  • Time saved on manual tasks
  • User satisfaction with AI support
  • Error reduction in workflows
  • Frequency of repeat use across teams

By triangulating usage with impact, leaders can see not just whether AI is being used — but whether it’s making a difference.
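A sketch of that triangulation, using made-up team-level figures, might look like this; the survey fields and numbers are hypothetical.

```python
# Hypothetical per-team figures for one month: headcount, staff using
# AI agents, and self-reported hours saved. All numbers are invented.
teams = [
    {"team": "Editorial", "staff": 40, "ai_users": 39, "hours_saved": 160},
    {"team": "Retail",    "staff": 25, "ai_users": 24, "hours_saved": 60},
    {"team": "Logistics", "staff": 15, "ai_users": 14, "hours_saved": 95},
]

total_staff = sum(t["staff"] for t in teams)
adoption = sum(t["ai_users"] for t in teams) / total_staff
hours_saved = sum(t["hours_saved"] for t in teams)
print(f"Adoption: {adoption:.0%}, hours saved this month: {hours_saved}")
```

Pairing the adoption percentage with a tangible measure like hours saved is what turns "people use it" into "it's making a difference".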

This level of measurement is critical for building trust with executives and avoiding the “we bought AI, but what did it achieve?” backlash.


Real-World Impact

Hypebeast reached 97% adoption within three weeks of launch. Staff now use AI agents daily across journalism, retail, logistics, and design.

The before state was one of downsizing, high pressure, and manual workloads. The after state is a team with more time for creativity, backed by systems that take care of drudgery.

Instead of creating fear or resistance, the approach built curiosity and trust. AI is now embedded in workflows, freeing staff to focus on creative and strategic tasks.


Putting It All Together

Hypebeast’s success was not built on hype or heavy-handed change management. It came from reframing AI as support, solving practical problems, creating demand through scarcity, and cutting what didn’t work.

The results speak for themselves: adoption consistently above 90%, and a team that sees AI as an enabler, not a threat.

For data leaders, the playbook is clear:

  • Start with framing — AI enhances, it doesn’t replace
  • Automate boring tasks first
  • Create demand with curiosity, not forced adoption
  • Be ruthless with underperforming tools

Your Next Move: A Leader’s Checklist

Before your next AI rollout, ask yourself:

  1. Have I made it clear AI is here to support, not replace?
  2. Am I solving everyday pain points first, not chasing flashy demos?
  3. Can I create demand by showing value before rolling out access?
  4. Do I have clear benchmarks for adoption and usage?
  5. Am I willing to cut what doesn’t deliver?

Tick those five boxes, and you’ll be far closer to adoption that actually sticks.


Dig Deeper

This article is based on Data Matas Episode [X] with Sami Rahman, Director of Data & AI at Hypebeast.

📺 Watch the full conversation: https://www.youtube.com/@matatika
🎙️ Listen to the podcast: https://www.matatika.com/podcasts/

 

How DuckDB Cuts Development Costs Without Touching Production

Rising warehouse costs are pushing data teams to rethink how and where they run workloads. At our October LinkedIn Live, experts from Greybeam, Tasman Analytics, and Matatika unpacked how DuckDB helps teams cut unnecessary warehouse spend by shifting development, testing, and ad-hoc analysis to fast, local environments. The takeaway: DuckDB isn’t a warehouse replacement. It’s a cost-control companion. Successful teams use hybrid execution to pair local speed with cloud scale, measure true unit costs, and build flexible, future-proof stacks. With Matatika’s Mirror Mode, teams can validate savings before committing, achieving sustainable efficiency without disrupting production.

Rising warehouse bills are forcing data leaders to rethink where workloads belong. One analytics leader recently described the frustration: “We’re spending thousands monthly on Snowflake, but half those credits go to engineers just testing queries.”

Even with careful query tuning, most teams eventually hit the same problem: optimisation stops paying off whilst credits keep climbing. Engineers burn budget on development iterations. Data scientists trigger expensive scans during exploration. BI tools repeatedly query the same datasets. Analytics workloads that could run locally hit expensive warehouse compute instead.

That’s where DuckDB enters the conversation. It’s fast, lightweight, and local, offering teams a way to eliminate unnecessary warehouse costs by running workloads closer to where they’re actually needed, without touching production systems.

At our October LinkedIn Live, we brought together three experts to unpack where DuckDB really delivers savings, where it doesn’t, and how to make those gains sustainable across your entire stack.

You’ll discover:

  • Why warehouse optimisation eventually hits architectural limits
  • How DuckDB eliminates unnecessary costs across development, testing, and query workloads
  • Where hybrid execution delivers the best balance of speed, compliance, and cost
  • How to prove savings before making irreversible infrastructure changes

This article captures insights from our discussion featuring Kyle Cheung (Greybeam), Bill Wallis (Tasman Analytics), and Aaron Phethean (Matatika).

Meet the Experts

Kyle Cheung — Founder, Greybeam

Kyle helps teams adopt open-source analytical engines like DuckDB and connect them with enterprise infrastructure. He guides clients through practical integration challenges as they shift from monolithic warehouse dependency to modular hybrid systems.

Bill Wallis — Founder, Tasman Analytics

Bill advises analytics teams experimenting with local-first data approaches. His daily work with DuckDB provides ground-truth perspective on what actually works when moving from side project to production workflow.

Aaron Phethean — Founder, Matatika

Aaron leads Matatika’s platform strategy, helping data teams eliminate lock-in and cut processing costs with performance-based pricing. His focus is on enabling seamless, low-risk transitions between tools using Matatika’s Mirror Mode validation approach.

The Problem You’re Solving

Data teams are burning warehouse credits in ways that don’t show up as “bad queries”:

Development workflows hitting production compute – engineers testing transformations burn credits every iteration, turning simple debugging into expensive operations.

Ad-hoc analysis eating budget – data scientists exploring datasets trigger expensive scans that could run locally for free.

CI/CD pipelines duplicating costs – every pull request runs full warehouse refreshes when most changes affect only small portions of data.

No visibility into unit costs – finance sees the total bill but can’t connect warehouse spend to actual business value delivered.

Meanwhile, finance demands cost cuts whilst business stakeholders expect faster insights. Teams feel trapped between warehouse bills that scale with every query and the fear of disrupting production systems that already work.

The pressure mounts when traditional optimisation delivers diminishing returns. You’ve tuned queries, implemented incremental models, and optimised scheduling. Yet costs keep climbing because your workload patterns fundamentally conflict with warehouse pricing models.

What Successful Teams Do Differently

1. They Use DuckDB for Development Without Disrupting Production

The insight: The fastest ROI comes from moving development and testing workloads off the warehouse — not migrating production systems.

Bill Wallis shared his approach at Tasman Analytics: “The main way I use DuckDB is to enable my developer workflow. Where the data isn’t sensitive, dump it locally into a parquet file, do all my analytics and development locally with DuckDB.”

This eliminates the cost pattern that hits most data teams. As Bill explained: “I’m not spending money every single time I’m running one of my development queries in Snowflake.”

The productivity gains extend beyond just cost. Local execution means instant feedback loops — queries that took 30 seconds in the warehouse run in under 2 seconds locally.

Kyle Cheung sees this pattern emerging across his client base: “Some of our customers are interested in running their CI or dev pipelines using DuckDB and not having to call Snowflake compute for that.”

How teams implement it:

  • Identify non-sensitive datasets suitable for local development and export production schema snapshots to Parquet format
  • Configure dbt or SQL Mesh to run tests against DuckDB locally before promoting to warehouse deployment
  • Set up CI/CD gates that validate transformations locally first, only calling warehouse compute once models pass all checks

Expected outcome: Teams eliminate the majority of development-related warehouse costs whilst accelerating feedback loops. Engineers stop waiting for warehouse scheduling and can iterate freely without budget concerns.

2. They Treat DuckDB as a Complement, Not a Competitor

The insight: DuckDB is powerful for specific use cases, but it’s not a warehouse replacement — it’s a cost-control companion.

As Bill Wallis put it: “Governance, scale, and collaboration are where the warehouse still wins.”

Kyle Cheung emphasised understanding DuckDB’s design constraints: “It’s incredible for what it does, but it’s designed for a single node. That’s where the limits appear.”

Teams achieve the best results by using DuckDB where it excels — for local analytics, validation, and caching — whilst keeping governed data and large-scale processing in cloud warehouses.

How teams implement it:

  • Use DuckDB for fast prototyping, data exploration, and notebook analysis where datasets fit comfortably in memory
  • Cache frequently accessed tables in DuckDB to avoid repeated warehouse hits – think of this as a smarter read-only cache
  • Maintain the warehouse as the system of record for governed data, audit trails, and multi-user collaboration

Expected outcome: Predictable governance, faster experimentation, and reduced risk of data drift. Teams gain cost savings through smarter workload placement without sacrificing the warehouse capabilities that matter for production systems.

3. They Combine Local Speed with Cloud Scale

The insight: The future isn’t “warehouse versus DuckDB” — it’s hybrid execution where you run small workloads locally and reserve cloud compute for where it matters most.

Aaron Phethean connected this to broader infrastructure trends: “We’re seeing the same pattern as DevOps — push more development closer to the engineer, automate what’s repeatable, and reserve the heavy lifting for where it matters most.”

This mirrors how modern software engineering works. Developers run tests locally, then promote to staging and production. Data teams can apply the same principles.

The challenge is maintaining consistency. Kyle noted: “You need your local environment to behave like production, or you’re just creating different problems.”

How teams implement it:

  • Integrate DuckDB with dbt or SQL Mesh to maintain identical transformation logic across local and cloud environments
  • Use Matatika’s Mirror Mode to run both environments side-by-side, comparing results before committing to architectural changes
  • Establish clear promotion criteria — when local validation passes, automated deployment pushes to warehouse without manual intervention

Expected outcome: Stable hybrid pipelines that combine DuckDB’s speed and zero-cost iteration with cloud resilience and governance. Engineering velocity increases because local testing removes warehouse scheduling as a bottleneck.

4. They Focus on Measuring True Unit Cost

The insight: Real efficiency isn’t about cutting tools; it’s about measuring cost per unit of value delivered and optimising from there.

Aaron Phethean highlighted that cost visibility is often the missing link: “We don’t need to rip out good systems. We just need to give teams the flexibility to run smarter.”

Most finance teams see total warehouse bills without understanding which workloads generate business value versus which burn credits unnecessarily. Without attribution, you can’t optimise effectively.

How teams implement it:

  • Track warehouse credit consumption by workflow type (development, production, ad-hoc analysis) using query tagging
  • Use Matatika’s performance-based pricing model to measure cost per unit of business value delivered, not per connector or user
  • Create monthly cost attribution reports showing which teams, projects, or use cases drive warehouse spend

Expected outcome: Data teams gain control of their budget narrative. You can show leadership exactly where money goes, prove ROI for infrastructure changes, and make confident decisions about workload placement based on actual cost data rather than assumptions.
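A minimal sketch of the attribution step, assuming you have already exported query history with tags attached (Snowflake's `QUERY_TAG` session parameter is one real way to set them; the records below are invented):

```python
from collections import defaultdict

# Hypothetical query-history export: each query carries the tag your
# pipelines set on the session before running it.
queries = [
    {"tag": "development", "credits": 0.8},
    {"tag": "production", "credits": 2.5},
    {"tag": "development", "credits": 1.1},
    {"tag": "ad_hoc", "credits": 0.6},
    {"tag": "production", "credits": 3.0},
]

# Sum credit consumption per workload type.
spend = defaultdict(float)
for q in queries:
    spend[q["tag"]] += q["credits"]

for tag, credits in sorted(spend.items()):
    print(f"{tag}: {credits:.1f} credits")
```

Even this crude grouping answers the question finance actually asks: how much of the bill is production value, and how much is iteration overhead that could run locally?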

5. They Build Optionality into Their Stack

The insight: Cost control isn’t a one-off exercise; it’s a mindset. Teams that stay flexible can adopt new approaches like DuckDB without painful migrations later.

Kyle Cheung shared how his clients avoid lock-in: “You don’t need to change everything at once. Start small, see what actually saves money, then scale that.”

Aaron Phethean emphasised long-term thinking: “If a new engine outperforms your current stack, you should be able to test it without disruption.”

How teams implement it:

  • Adopt open data formats (Parquet, Iceberg) and standard SQL rather than vendor-specific features
  • Use compatibility validation tools like Matatika’s Mirror Mode to test new approaches in parallel with existing systems
  • Schedule infrastructure renewal reviews 3-6 months before contracts expire to avoid forced decisions under time pressure

Expected outcome: A modular, future-proof data stack that allows experimentation without downtime or double-payment. Leaders gain the freedom to choose the best performance per cost at any point in time rather than being locked into decisions made years ago.

What Teams Discover When They Implement This

Bill Wallis described the immediate productivity shift from his daily experience: “I’m not spending money every single time I’m running one of my development queries in Snowflake. The feedback loop is instant, queries that took 30 seconds now run in under 2.”

That speed advantage compounds over weeks and months. Engineers who previously waited for warehouse queries during development can now iterate freely, testing ideas without budget concerns or scheduling delays.

Kyle Cheung sees measurable results across his client implementations: “Some customers run their entire CI pipeline using DuckDB. They’re not hitting Snowflake compute at all until production deployment.”

The validation approach matters as much as the technology choice. Aaron emphasised: “Teams using Mirror Mode can prove DuckDB savings before changing production. You’re not asking leadership to trust you, you’re showing them side-by-side cost comparisons.”

This evidence-based approach removes the usual migration anxiety. Instead of big-bang changes that could disrupt production, teams validate incrementally and only commit once results are clear.

Making It Happen

Start with impact analysis rather than immediate implementation. Identify where your warehouse is being used for low-value workloads: development, testing, or ad-hoc analysis that doesn’t require governed production data.

Choose one workflow as a pilot. Move it to DuckDB and measure the cost and performance difference over two weeks. Track warehouse credit reduction, engineering productivity gains, and any friction points that emerge.

Then use Matatika’s Mirror Mode to replicate and validate production pipelines side-by-side, proving savings before making any irreversible changes. This parallel validation eliminates the traditional migration risk of “we won’t know if it works until we’ve fully switched.”

Key metrics to track:

  • Monthly warehouse credit reduction broken down by workload type
  • Engineering hours saved per release cycle from faster local iteration
  • Cost per pipeline run comparing warehouse versus DuckDB execution
  • Time-to-validation for new models showing feedback loop improvements

The goal is sustainable efficiency through hybrid execution that scales with business demands rather than hitting artificial limits imposed by pure warehouse or pure local approaches.

From Warehouse Lock-In to Hybrid Control

The teams achieving sustainable cost control aren’t choosing between warehouse and DuckDB; they’re building hybrid infrastructure that uses each where it excels.

DuckDB eliminates unnecessary warehouse spend on development and testing. Warehouses continue handling governed data, large-scale processing, and multi-user collaboration. The combination delivers better economics than either approach alone.

What successful teams do differently: they start with impact analysis, validate new approaches with Mirror Mode before committing, and build optionality into their stack so they can adapt as better tools emerge.

The goal isn’t change for change’s sake. It’s sustainable growth through infrastructure that enables rather than constrains business opportunities whilst keeping costs aligned with actual value delivered.

Teams that master hybrid execution gain competitive advantage through faster engineering velocity and transparent cost attribution that proves ROI to leadership.

Book Your ETL Escape Audit

Ready to identify where your warehouse credits are going and whether hybrid execution makes sense for your stack?

We’ll help you assess your current warehouse usage patterns, identify cost optimisation opportunities through smarter workload placement, and show you how Mirror Mode validation eliminates traditional migration risks. If DuckDB-style hybrid execution makes sense for your situation, we’ll map out a clear path forward.

You’ll get a concrete benchmark of your current cost-per-workload and visibility into realistic improvements you can present to leadership with confidence.

Book Your ETL Escape Audit →

Further Resources

 

How OLTP and OLAP Databases Differ and Why It Matters for Your Data Architecture

Most data teams misuse OLTP and OLAP systems by forcing mismatched workloads, leading to bottlenecks, high costs, and missed opportunities. Smart teams separate environments, optimise data flow with incremental syncing, and use safe migration tools like Mirror Mode to achieve both transactional efficiency and analytical power without disruption.

How to Optimise OLAP and OLTP Systems for Better Performance

Most data teams struggle because inefficient architectures force them to choose between fast transactions (OLTP) and powerful analytics (OLAP), creating delays, high costs, and frustrated users. Smart teams separate systems by purpose, use efficient syncing like Change Data Capture, and adopt performance-based pricing to achieve real-time insights, cost savings, and scalable architectures without disruption.

The FinOps Skills Every Data Engineer Needs in 2025

In 2025, data engineers are expected not only to deliver robust pipelines but also to integrate FinOps principles, ensuring systems scale economically as well as technically. Those who master cost attribution, pricing model evaluation, and cost-conscious architecture design are becoming business-critical, as financial awareness now defines engineering success.

Why Fivetran’s Tobiko Data Acquisition Signals Trouble for Data Teams

Fivetran’s acquisition of Tobiko Data signals a shift from open source innovation to commercial consolidation, creating what many see as a “platform prison” where Extract, Load, and Transform are locked into one vendor ecosystem. While this promises simplicity, the true cost emerges over time through rising fees, reduced flexibility, and strategic dependencies that make switching prohibitively expensive.