Stop Scaling What You Don’t Understand

Published on June 16, 2025

Why Monzo Chose to Rebuild Their Data Platform From First Principles

At Matatika, we frequently encounter teams who believe that scaling means moving faster—more models, more dashboards, more integrations. But through our work with data leaders and our conversations on the Data Matas podcast, we’ve learnt that growth without clarity only compounds risk.

In our latest episode, we explored this challenge with John Napoleon-Kuofie from Monzo, who faced the daunting task of rebuilding a data platform with over 1,000 inherited dbt models and inconsistent definitions across their stack. His team’s bold decision? Stop scaling and start simplifying.


Why Data Leaders Are Hitting Pause Before Scaling

In today’s complex data environment, this challenge is more common than many teams admit. As generative AI tools enter the mix, the pressure to “do more” with data is higher than ever. But as we’ve seen repeatedly in our client work at Matatika, without solid foundations, more volume just means more noise.

Through our conversation with John Napoleon-Kuofie, Analytics Engineer at Monzo, we uncovered practical strategies that any data team can implement:

  • Rebuild your data platform around real-world concepts, not legacy definitions
  • Cut down alert fatigue by focusing tests on action, not coverage
  • Prepare your models for AI by simplifying structure and improving trust
  • Create a culture where individual contributions scale company-wide
  • Design systems that future engineers can understand and extend

Meet John Napoleon-Kuofie

John is an Analytics Engineer at Monzo, one of the UK’s leading digital banks. With a background in both software engineering and data analytics, he bridges technical depth with user-first thinking. At Monzo, he’s part of the team responsible for rebuilding core payment models, rewriting testing strategies, and restoring confidence in how data drives decisions.

What makes John’s perspective unique is his hands-on experience navigating deep model complexity in a regulated, high-growth environment. Rather than layering tools on top of the existing mess, he’s helping the team start again with clarity.

“We’re starting again, from first principles: What is a payment? What are the building blocks?”

At Matatika, we’ve observed this pattern across numerous client engagements—the most successful data transformations often begin with fundamental questions about business logic rather than technical architecture.


The Hidden Cost of Inherited Complexity

For most data teams, technical debt isn’t code—it’s logic. Over time, teams inherit models, definitions, and tests that no one remembers writing. Monzo was no different. John walked into a platform with over 1,000 dbt models, each layered with assumptions, inconsistencies, and unclear ownership.

This creates two critical problems that we see repeatedly in our client work:

  1. It’s hard to trust what the data means
  2. It’s even harder to change anything safely

Rather than endlessly refactoring, John’s team made a bold decision that mirrors our approach at Matatika: pause and rebuild from first principles.

“I didn’t build these systems. So I don’t just know how everything joins together—I have to trace it manually.”

This resonates deeply with our experience helping teams migrate from complex, inherited ETL systems to cleaner, more maintainable architectures through our Mirror Mode approach.


What you can do:

  • Identify critical domains (like payments, customers, revenue) that need redefinition
  • Hold stakeholder sessions to align on real-world meaning before writing a single line of SQL
  • Start building from the bottom up, with clear naming, ownership, and model lineage (see the sketch after this list)

Expected result: Fewer edge cases, cleaner joins, and a data layer your team actually understands—and trusts.
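
To make the last point concrete, here is a minimal sketch of a bottom-up, clearly named dbt staging model for a payments domain. Every name here (the raw.payments source, the columns, the status mapping) is our own placeholder for illustration, not Monzo’s actual schema:

```sql
-- models/staging/payments/stg_payments.sql
-- One model per real-world concept: "a payment", nothing more.

with source as (

    -- Reference the raw table via source() so lineage shows up in the dbt DAG
    select * from {{ source('raw', 'payments') }}

),

renamed as (

    select
        -- Keys first, clearly named
        id            as payment_id,
        account_id,

        -- Business attributes, with units made explicit in the name
        amount_pence,
        currency_code,

        -- Normalise status here, once, so downstream models never
        -- re-implement this mapping
        lower(status) as payment_status,

        created_at

    from source

)

select * from renamed
```

Pairing each model like this with a schema.yml entry that records a description and an owner gives you the naming, ownership, and lineage in one place.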


Test What You’re Willing to Fix

Most teams inherit tests like “NOT NULL” or “value in list,” but never ask: what happens when the test fails?

At Monzo, John noticed an overload of alerts without action. So they flipped the model: don’t write tests unless you’re committed to doing something about the failure.

“If a null value comes in and you’re not going to do anything about it—why write a test for it?”

This philosophy aligns perfectly with our performance-based approach at Matatika—paying only for what actually adds value to your data operations.

To implement this:

  • Audit your current test suite: remove or flag tests with no clear owner
  • For each new test, include:
    • Why it matters
    • What it protects
    • What to do if it fails
  • Create a test ownership rota to reduce fatigue and increase accountability
  • Adopt a “test template” that includes escalation paths (a sketch follows below)

Expected result: Fewer false positives, clearer error context, and tests that improve confidence—not anxiety.
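
As one possible shape for such a template, here is a minimal sketch in a dbt schema.yml, assuming a hypothetical stg_payments model. The why/what/escalation notes live as comments beside each test, and severity (a standard dbt test config) encodes whether a failure blocks or merely warns:

```yaml
# models/staging/payments/schema.yml
version: 2

models:
  - name: stg_payments
    columns:
      - name: payment_id
        tests:
          # Why it matters: every downstream join keys on payment_id.
          # What it protects: reconciliation and revenue reporting.
          # If it fails: page the payments owner on the rota and hold
          # downstream runs until the gap is explained.
          - not_null:
              config:
                severity: error   # we will act on a failure, so it blocks

      - name: currency_code
        tests:
          # Why it matters: we only settle in these currencies.
          # If it fails: warn only; triage at the weekly test review.
          - accepted_values:
              values: ['GBP', 'EUR', 'USD']
              config:
                severity: warn    # reviewed on the rota, does not block
```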


Don’t Build AI on a Broken Foundation

There’s huge pressure on data teams to “do something” with AI. But John’s view aligns with our philosophy at Matatika: if your models don’t make sense, no AI layer can save you.

“I don’t think I could stick AI on top of what we have now and produce a good answer.”

The path to AI-readiness isn’t hype—it’s hygiene. This mirrors what we tell clients considering ETL transformations: clean foundations enable innovation.

Are your models ready?

  • Are your models readable?
  • Are your joins documented?
  • Do stakeholders trust the numbers?

What to do first:

  • Identify your highest-value use case for AI (e.g. natural language query, smart alerts)
  • Trace the lineage of the data it would rely on (see the exposure example below)
  • Simplify and re-document those models before building anything generative

Expected result: A smoother, faster path to trustworthy AI features that don’t collapse under ambiguity.
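
One lightweight way to handle the lineage step in dbt is to declare the AI use case as an exposure, so its dependencies are explicit in the DAG before any generative work starts. A minimal sketch, where the exposure name, owner, and model refs are all invented for illustration:

```yaml
# models/exposures.yml
version: 2

exposures:
  - name: nl_query_assistant      # hypothetical AI use case
    type: ml
    maturity: low
    description: >
      Natural-language querying over core reporting models. Declared as
      an exposure so we can see exactly what it depends on before
      building anything generative.
    owner:
      name: Data Platform
      email: data-platform@example.com
    depends_on:
      - ref('fct_payments')
      - ref('dim_customers')
```

Once declared, dbt ls --select +exposure:nl_query_assistant lists every upstream model the feature would rely on, which is precisely the set to simplify and re-document first.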


Let Individuals Drive Innovation

Innovation at Monzo doesn’t always come from the top. In fact, many of their best ideas start with one person seeing something broken—and fixing it.

“Monzo is a place where an idea from one person can scale across the company.”

From Slack alerts to Undo Payments, small internal projects become adopted features because the culture supports it. At Matatika, we’ve seen how this approach transforms not just individual teams but entire data cultures.

How to encourage this:

  • Empower engineers to own small experiments
  • Recognise fixes and quality-of-life wins, not just major releases
  • Share learnings in open channels to spark adoption
  • Use tools like feature flags, hack weeks, and internal dashboards of “things we fixed”

Expected result: Faster iteration, stronger engagement, and a more resilient team culture.


Build for the Next Person

John doesn’t just build models—he leaves behind a trail of thinking. That’s because today’s fast-moving decisions become tomorrow’s legacy.

“I want the next person to read what my head was saying at the time.”

Instead of writing perfect code, John focuses on writing understandable logic. This philosophy underpins our approach to ETL transformations at Matatika—building systems that teams can actually maintain and evolve.

To apply this in your team:

  • Add commentary to key joins and filters explaining the business reasoning (as in the snippet below)
  • Keep documentation close to the code (in-line or same repo)
  • Review docs like code—clarity is a deliverable

Expected result: Onboarding gets easier. Debugging gets faster. And the team spends less time asking, “Why is this like that?”
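
As a hypothetical illustration of the first point, commentary on a join should capture the business reasoning, not just the mechanics. A sketch in dbt SQL, with invented model and column names:

```sql
-- models/marts/payments/fct_payments.sql

select
    p.payment_id,
    p.amount_pence,
    a.account_type

from {{ ref('stg_payments') }} as p

-- LEFT join, not INNER: a payment can legitimately arrive before the
-- account record syncs (the upstream service is eventually consistent),
-- and finance still needs those rows for end-of-day reconciliation.
left join {{ ref('stg_accounts') }} as a
    on p.account_id = a.account_id

-- Exclude internal test payments: these are created by the QA harness
-- and must never reach revenue reporting.
where p.payment_status != 'test'
```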


Putting It All Into Practice

If you’re inheriting complexity or under pressure to scale, John’s advice aligns with our experience at Matatika: pause, simplify, and rebuild on purpose.

Where to start:

  1. Redefine one core domain with clear logic and ownership
  2. Remove 20% of tests that no longer serve a purpose
  3. Choose one area where clean models will unlock AI or automation
  4. Encourage a team member to solve one internal problem without permission

Timeline: Within 6–8 weeks, most teams can clean one core model set and re-align testing strategy.

Metrics to track:

  • Reduction in test failures
  • Model rework frequency
  • Time-to-understand for new joiners
  • Stakeholder trust (via qualitative surveys)


When Small Fixes Become Scalable Wins

One of John’s favourite examples? A Slack alert originally built by one person now runs across multiple systems. Another: Undo Payments started as a hackathon project and became a product.

These examples aren’t anomalies—they’re the result of a culture that rewards initiative. The lesson? Empowerment scales faster than processes.

This mirrors what we see when helping teams transform their ETL processes through Mirror Mode—small, validated changes compound into significant operational improvements.


Your Next Move

If you take one thing from our conversation with John, it’s this: clarity scales. The fastest teams are the ones who understand what they’re building.

As the team behind the Data Matas podcast, we’re committed to sharing these perspectives to help the community grow. Want more? Listen to the full conversation with John Napoleon-Kuofie to hear how his approach can inform your own data platform strategy.


Frequently Asked Questions

How do you decide which legacy models to rebuild versus refactor?

Start with your most critical business domains—the data that directly impacts customer experience or regulatory compliance. At Monzo, John’s team prioritised payment models because they’re foundational to everything else. Focus on models that are frequently queried, often joined to other tables, or causing repeated issues. If a model requires extensive documentation just to understand it, that’s usually a sign it needs rebuilding rather than refactoring.

What’s the best way to manage stakeholder expectations during a platform rebuild?

Transparency and small wins are crucial. Involve stakeholders in defining what each business concept actually means before you start coding. At Matatika, we use our Mirror Mode approach to show parallel progress—stakeholders can see new systems working alongside existing ones before any cutover happens. Set clear milestones and communicate what will improve for end users, not just what’s changing technically.

How can data teams prepare their foundations for AI without over-engineering?

Focus on clarity and trust before complexity. As John noted, AI can’t fix ambiguous data—it only amplifies existing problems. Start by ensuring your core models are readable, well-documented, and trusted by stakeholders. Identify one specific AI use case (like natural language queries for executive dashboards) and trace the data lineage it would require. Clean and simplify those pathways first, rather than trying to make everything “AI-ready” at once.

How do you balance individual innovation with team standards and governance?

Create guardrails, not gates. Encourage experimentation within defined boundaries—clear coding standards, documented decision-making processes, and regular showcases where individuals can demonstrate improvements. At Monzo, successful internal tools become adopted because they solve real problems, not because they were mandated from above. The key is making it easy for good ideas to scale while maintaining quality standards.

Dig Deeper

🎙️ Listen to the full episode
🔗 Connect with John Napoleon-Kuofie on LinkedIn
🌍 Visit Matatika’s Website
📺 Subscribe to Data Matas on YouTube

