At Matatika, we frequently encounter teams who believe that scaling means moving faster—more models, more dashboards, more integrations. But through our work with data leaders and our conversations on the Data Matas podcast, we’ve learnt that growth without clarity only compounds risk.
In our latest episode, we explored this challenge with John Napoleon-Kuofie from Monzo, who faced the daunting task of rebuilding a data platform with over 1,000 inherited dbt models and inconsistent definitions across their stack. His team’s bold decision? Stop scaling and start simplifying.
In today’s complex data environment, this challenge is more common than many teams admit. As generative AI tools enter the mix, the pressure to “do more” with data is higher than ever. But as we’ve seen repeatedly in our client work at Matatika, without solid foundations, more volume just means more noise.
Through our conversation with John Napoleon-Kuofie, Analytics Engineer at Monzo, we uncovered practical strategies that any data team can implement, which we walk through below.
Meet John Napoleon-Kuofie
John is an Analytics Engineer at Monzo, one of the UK’s leading digital banks. With a background in both software engineering and data analytics, he bridges technical depth with user-first thinking. At Monzo, he’s part of the team responsible for rebuilding core payment models, rewriting testing strategies, and restoring confidence in how data drives decisions.
What makes John’s perspective unique is his hands-on experience navigating deep model complexity in a regulated, high-growth environment. Rather than layering tools on top of existing mess, he’s helping the team start again with clarity.
“We’re starting again, from first principles: What is a payment? What are the building blocks?”
At Matatika, we’ve observed this pattern across numerous client engagements—the most successful data transformations often begin with fundamental questions about business logic rather than technical architecture.
For most data teams, technical debt isn’t code—it’s logic. Over time, teams inherit models, definitions, and tests that no one remembers writing. Monzo was no different. John walked into a platform with over 1,000 dbt models, each layered with assumptions, inconsistencies, and unclear ownership.
This creates two critical problems that we see repeatedly in our client work: logic that no one fully understands or owns, and dependencies that can only be traced by hand before anything can safely change.
Rather than endlessly refactoring, John’s team made a bold decision that mirrors our approach at Matatika: pause and rebuild from first principles.
“I didn’t build these systems. So I don’t just know how everything joins together—I have to trace it manually.”
This resonates deeply with our experience helping teams migrate from complex, inherited ETL systems to cleaner, more maintainable architectures through our Mirror Mode approach.
Expected result: Fewer edge cases, cleaner joins, and a data layer your team actually understands—and trusts.
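To make the first-principles approach concrete, here is a minimal sketch of what a rebuilt payment staging model might look like in dbt. The source, model, and column names are our own illustrative assumptions, not Monzo's actual schema; the point is that the answer to "what is a payment?" lives in one readable place.

```sql
-- models/staging/stg_payments.sql
-- Hypothetical first-principles staging model: one row per payment, with the
-- business definition of "a payment" made explicit in a single place.
-- Source, model, and column names are illustrative, not Monzo's real schema.

with source as (

    select * from {{ source('core_banking', 'raw_payments') }}

),

renamed as (

    select
        payment_id,
        account_id,
        amount_minor_units / 100.0 as amount,    -- money arrives in minor units upstream
        currency,
        status,                                  -- e.g. 'settled', 'pending', 'failed'
        created_at
    from source
    where payment_id is not null                 -- a payment must be identifiable

)

select * from renamed
```

Keeping the definition in a single staging model means every downstream join starts from the same building block, rather than re-deriving it with slightly different assumptions each time.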
Most teams inherit generic tests like "not null" or "value in this list" (dbt's not_null and accepted_values), but never ask: what happens when the test fails?
At Monzo, John noticed an overload of alerts without action. So they flipped the model: don’t write tests unless you’re committed to doing something about the failure.
“If a null value comes in and you’re not going to do anything about it—why write a test for it?”
This philosophy aligns perfectly with our performance-based approach at Matatika—paying only for what actually adds value to your data operations.
To implement this, review each existing test and ask what you would actually do if it failed; remove or downgrade to a warning any test whose failure no one will act on, and make sure every remaining test has a clear owner and response.
Expected result: Fewer false positives, clearer error context, and tests that improve confidence—not anxiety.
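As a sketch of this philosophy in dbt, a singular test can encode exactly the failure someone has agreed to act on, with severity used to separate "investigate now" from "worth a look". The model and column names below are illustrative assumptions, not Monzo's actual tests.

```sql
-- tests/assert_settled_payments_have_amounts.sql
-- Hypothetical singular dbt test: it exists only because someone has agreed to
-- investigate any row it returns. Failing rows are settled payments with no
-- amount, which downstream reporting cannot tolerate. Names are illustrative.

{{ config(severity = 'error') }}   -- drop to 'warn' (or delete the test) if nobody will act on it

select
    payment_id,
    status,
    amount
from {{ ref('stg_payments') }}
where status = 'settled'
  and amount is null
```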
There’s huge pressure on data teams to “do something” with AI. But John’s view aligns with our philosophy at Matatika: if your models don’t make sense, no AI layer can save you.
“I don’t think I could stick AI on top of what we have now and produce a good answer.”
The path to AI-readiness isn’t hype—it’s hygiene. This mirrors what we tell clients considering ETL transformations: clean foundations enable innovation.
Are your models ready?
What to do first: make sure your core models are readable, well-documented, and trusted; pick one specific AI use case and trace the data lineage it would need; then clean and simplify those pathways before layering anything on top.
Expected result: A smoother, faster path to trustworthy AI features that don’t collapse under ambiguity.
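One way to picture an "AI-ready" pathway is a small, unambiguous mart that a natural-language or AI layer could query without guessing at definitions. This is a hypothetical sketch under our own assumptions; the model name, columns, and grain are illustrative rather than anything from the episode.

```sql
-- models/marts/payments_daily_summary.sql
-- Hypothetical mart: a narrow, unambiguous table an AI or natural-language
-- layer could query safely. Every column has exactly one business meaning.
-- Names and grain are illustrative assumptions.

select
    date_trunc('day', created_at) as payment_date,
    currency,
    count(*)    as payment_count,
    sum(amount) as total_amount,      -- settled payments only, by design
    avg(amount) as average_amount
from {{ ref('stg_payments') }}
where status = 'settled'
group by 1, 2
```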
Innovation at Monzo doesn’t always come from the top. In fact, many of their best ideas start with one person seeing something broken—and fixing it.
“Monzo is a place where an idea from one person can scale across the company.”
From Slack alerts to Undo Payments, small internal projects become adopted features because the culture supports it. At Matatika, we’ve seen how this approach transforms not just individual teams but entire data cultures.
How to encourage this: give people room to fix what they can see is broken, make it easy to share small internal projects across teams, and let the tools that solve real problems spread rather than mandating them from above.
Expected result: Faster iteration, stronger engagement, and a more resilient team culture.
John doesn’t just build models—he leaves behind a trail of thinking. That’s because today’s fast-moving decisions become tomorrow’s legacy.
“I want the next person to read what my head was saying at the time.”
Instead of writing perfect code, John focuses on writing understandable logic. This philosophy underpins our approach to ETL transformations at Matatika—building systems that teams can actually maintain and evolve.
To apply this in your team, record the reasoning behind each modelling decision alongside the code it affects, and favour logic the next person can read over logic that is merely clever.
Expected result: Onboarding gets easier. Debugging gets faster. And the team spends less time asking, “Why is this like that?”
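Here is a hypothetical example of what "leaving a trail of thinking" can look like inside a dbt model: comments that record why each rule exists, not just what the SQL does. The model, source, and business rules are illustrative assumptions.

```sql
-- models/staging/stg_refunds.sql
-- Hypothetical example of documenting decisions where the next person will
-- actually read them. Model, source, and rules are illustrative assumptions.

select
    refund_id,
    payment_id,
    amount,
    created_at
from {{ source('core_banking', 'raw_refunds') }}
-- Decision: partial refunds stay as separate rows rather than being netted
-- against the original payment, because finance reconciles them individually.
-- Revisit if that reconciliation process changes.
where amount > 0   -- zero-amount refunds are upstream test records, excluded on purpose
```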
If you’re inheriting complexity or under pressure to scale, John’s advice aligns with our experience at Matatika: pause, simplify, and rebuild on purpose.
Where to start: pick one critical business domain, agree what its core concepts actually mean, and rebuild that model set before touching anything else.
Timeline: Within 6–8 weeks, most teams can clean one core model set and re-align testing strategy.
Metrics to track: the size of your core model set, the share of test failures that actually get actioned, and how quickly a new team member can understand and debug a model.
One of John’s favourite examples? A Slack alert originally built by one person now runs across multiple systems. Another: Undo Payments started as a hackathon project and became a product.
These examples aren’t anomalies—they’re the result of a culture that rewards initiative. The lesson? Empowerment scales faster than processes.
This mirrors what we see when helping teams transform their ETL processes through Mirror Mode—small, validated changes compound into significant operational improvements.
If you take one thing from our conversation with John, it’s this: clarity scales. The fastest teams are the ones who understand what they’re building.
As the team behind the Data Matas podcast, we’re committed to sharing these perspectives to help the community grow. Want more? Listen to the full conversation with John Napoleon-Kuofie to hear how his approach can inform your own data platform strategy.
How do you decide which legacy models to rebuild versus refactor?
Start with your most critical business domains—the data that directly impacts customer experience or regulatory compliance. At Monzo, John’s team prioritised payment models because they’re foundational to everything else. Focus on models that are frequently queried, often joined to other tables, or causing repeated issues. If a model requires extensive documentation just to understand it, that’s usually a sign it needs rebuilding rather than refactoring.
What’s the best way to manage stakeholder expectations during a platform rebuild?
Transparency and small wins are crucial. Involve stakeholders in defining what each business concept actually means before you start coding. At Matatika, we use our Mirror Mode approach to show parallel progress—stakeholders can see new systems working alongside existing ones before any cutover happens. Set clear milestones and communicate what will improve for end users, not just what’s changing technically.
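A parallel run of this kind can be sketched as a simple reconciliation query that compares the legacy model with its rebuilt replacement on the same grain and surfaces only the rows where they disagree. The model names here are illustrative assumptions, not part of Mirror Mode itself.

```sql
-- analyses/compare_legacy_vs_rebuilt_payments.sql
-- Hypothetical parallel-run check: compare the legacy payments model with its
-- rebuilt replacement at a daily grain and return only the days that differ.
-- Model names are illustrative assumptions.

with legacy as (
    select
        date_trunc('day', created_at) as payment_date,
        count(*)    as payment_count,
        sum(amount) as total_amount
    from {{ ref('legacy_payments') }}
    group by 1
),

rebuilt as (
    select
        date_trunc('day', created_at) as payment_date,
        count(*)    as payment_count,
        sum(amount) as total_amount
    from {{ ref('stg_payments') }}
    group by 1
)

select
    coalesce(l.payment_date, r.payment_date) as payment_date,
    l.payment_count as legacy_count,
    r.payment_count as rebuilt_count,
    l.total_amount  as legacy_total,
    r.total_amount  as rebuilt_total
from legacy l
full outer join rebuilt r
    on l.payment_date = r.payment_date
where l.payment_count is distinct from r.payment_count
   or l.total_amount  is distinct from r.total_amount
```

An empty result is exactly the kind of small, demonstrable win that keeps stakeholders comfortable while the rebuild is in flight.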
How can data teams prepare their foundations for AI without over-engineering?
Focus on clarity and trust before complexity. As John noted, AI can’t fix ambiguous data—it only amplifies existing problems. Start by ensuring your core models are readable, well-documented, and trusted by stakeholders. Identify one specific AI use case (like natural language queries for executive dashboards) and trace the data lineage it would require. Clean and simplify those pathways first, rather than trying to make everything “AI-ready” at once.
How do you balance individual innovation with team standards and governance?
Create guardrails, not gates. Encourage experimentation within defined boundaries—clear coding standards, documented decision-making processes, and regular showcases where individuals can demonstrate improvements. At Monzo, successful internal tools become adopted because they solve real problems, not because they were mandated from above. The key is making it easy for good ideas to scale while maintaining quality standards.
Dig Deeper
🎙️ Listen to the full episode
🔗 Connect with John Napoleon-Kuofie on LinkedIn
🌍 Visit Matatika’s Website
📺 Subscribe to Data Matas on YouTube