Engineering time disappears into trivial updates. Every data change request is a context switch. Even five-minute tasks fragment your day. You never get into deep focus because you’re constantly interrupted by “quick questions” that aren’t actually quick.
Data goes stale while waiting in the queue. The mapping table that needs updating isn't blocking just one report; it's blocking every downstream decision that depends on accurate categorisation. Revenue analysis. Budget forecasting. Strategic planning. All delayed because an update is sitting in someone's backlog.
Friction kills initiative. Analysts stop asking for improvements because the overhead isn’t worth it. They know product categories don’t map correctly, but requesting the fix means bothering someone, waiting indefinitely, and probably having to follow up multiple times. Easier to just work around it.
This isn’t a productivity problem. It’s an organisational design problem masquerading as a workflow issue.
When Resident Advisor rebuilt their data infrastructure, they faced exactly this challenge.
Their business analysts needed to maintain several critical datasets:
Product-to-general-ledger mappings. Finance understands how revenue should be categorised for accounting purposes. Engineering doesn’t. Yet every time a new product launched or a category changed, analysts had to request an update.
Currency exchange rates. Finance needed to update rates at least daily, sometimes more frequently during volatile periods. Waiting for engineering to process updates meant revenue reports went out with stale data.
These weren’t edge cases. They were business-critical datasets that changed frequently and required domain expertise to maintain correctly.
RA gave their analysts direct ownership of the data through Google Sheets.
Finance maintains a Sheet with current exchange rates. Operations maintains a Sheet with product mappings. Each Sheet has a clear owner who understands the business context.
Matatika syncs those Sheets into their PostgreSQL warehouse automatically. Daily for exchange rates. Weekly for mappings that change less frequently.
dbt models sit on top of the synced data, providing the transformation layer and data quality tests. If a mapping looks wrong or a rate seems suspicious, tests fail and alert the team.
Analysts update the Sheets whenever they need to. Engineering maintains the pipeline. Everyone works in their domain of expertise.
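The sync step itself is simple in shape. As a minimal sketch, assuming Sheet rows arrive as a list of dicts (Matatika's connector handles authentication, scheduling, and loading in practice; the function and table names here are illustrative), the core of it is turning rows into parameterised inserts for a staging table:

```python
def rows_to_inserts(table, rows):
    """Convert Sheet rows (a list of dicts) into parameterised INSERT
    statements for a staging table. Column order comes from the first row."""
    if not rows:
        return []
    columns = list(rows[0].keys())
    placeholders = ", ".join(["%s"] * len(columns))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return [(sql, tuple(row[c] for c in columns)) for row in rows]

# Two exchange-rate rows as they might arrive from the finance team's Sheet.
rates = [
    {"currency": "EUR", "rate": 1.08, "as_of": "2024-01-02"},
    {"currency": "GBP", "rate": 1.27, "as_of": "2024-01-02"},
]
statements = rows_to_inserts("staging.exchange_rates", rates)
```

Keeping the raw landing step this dumb is deliberate: no business logic happens during sync, so everything interesting stays in the transformation layer where it can be tested.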
Analysts gained autonomy. Finance updates exchange rates directly without creating tickets or waiting for availability. Product mappings change when the business needs them to change, not when engineering has time.
Engineering reclaimed their time. No more context switching for trivial data updates. No more interrupt-driven days spent processing requests that shouldn’t require engineering involvement.
Data stayed current. Revenue reports reflect the latest exchange rates. Product categorisation stays accurate as the business evolves. Dashboards show reality instead of data that’s waiting in someone’s queue.
Knowledge stayed with domain experts. Finance understands currency risk. They control the rates. Operations understands product taxonomy. They control the mappings. Engineering maintains reliable infrastructure. Separation of concerns, properly implemented.
This isn’t about “empowering” analysts as some aspirational goal. It’s about putting data maintenance where it belongs and getting engineering out of the critical path for updates they shouldn’t be involved in.
The objection we hear constantly: “If we give analysts direct database access, they’ll break things.”
Fair concern. The answer isn’t to give analysts direct database access. It’s to give them appropriate tools that engineering can safely integrate.
Here’s the architecture that works:
Business users maintain Google Sheets in your organisation’s workspace. Proper access controls mean only designated owners can edit. Everyone else gets read-only access.
Sheets have stable structures. Column names are clear. New rows represent data changes; that's expected. New columns trigger schema changes that get handled explicitly.
This is the interface layer. Analysts work in a tool they already understand, without needing SQL knowledge or database credentials.
ETL connectors sync specified Sheets into the warehouse on defined schedules. Authentication is handled properly with OAuth and credential rotation. Schema changes are detected automatically.
If a Sheet adds a column, the warehouse table updates to match. If a column gets renamed, downstream dbt models catch the breaking change through testing before it reaches production.
Engineering maintains the infrastructure. They don’t maintain the data.
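Schema-change handling can be sketched in a few lines. This is an illustration of the mechanism, not a real connector's implementation; the function names and the TEXT column type are assumptions (production connectors infer types from the Sheet's values):

```python
def schema_changes(sheet_columns, table_columns):
    """Compare the Sheet's header row against the warehouse table's columns.
    Added columns can be applied automatically; missing (removed or renamed)
    columns are surfaced so downstream tests can catch the breaking change."""
    added = [c for c in sheet_columns if c not in table_columns]
    missing = [c for c in table_columns if c not in sheet_columns]
    return added, missing

def alter_statements(table, added, column_type="TEXT"):
    """Generate ALTER TABLE statements for newly added Sheet columns.
    Typing everything as TEXT in staging is a simplification."""
    return [f"ALTER TABLE {table} ADD COLUMN {c} {column_type}" for c in added]

added, missing = schema_changes(
    ["currency", "rate", "as_of", "source"],   # the Sheet gained a 'source' column
    ["currency", "rate", "as_of"],             # warehouse table lags behind
)
ddl = alter_statements("staging.exchange_rates", added)
```

Added columns are safe to apply automatically; anything in `missing` should alert rather than auto-apply, since a rename looks identical to a deletion.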
Raw Sheets data lands in staging schemas. dbt models apply transformations, business logic, and, critically, tests.
Tests validate data quality. Missing required values fail the build. Suspicious changes trigger alerts. Referential integrity gets checked. Data doesn’t reach production dashboards until it passes validation.
This is how you give analysts autonomy without sacrificing reliability. The data flows through the same quality gates as any other source.
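In dbt these checks are expressed as schema and custom tests; a rough Python equivalent shows the kind of logic involved (the 10% threshold and the example values are illustrative, not recommendations):

```python
def check_required(rows, column):
    """Return rows where a required value is missing (empty or absent)."""
    return [r for r in rows if not r.get(column)]

def check_suspicious_change(old_value, new_value, threshold=0.10):
    """Flag a value that moved more than `threshold` since the last sync."""
    return abs(new_value - old_value) / old_value > threshold

mappings = [
    {"product": "ticket", "gl_code": "4000"},
    {"product": "promo", "gl_code": ""},   # missing required value
]

failures = [f"missing gl_code: {r['product']}"
            for r in check_required(mappings, "gl_code")]
if check_suspicious_change(1.08, 1.30):    # rate jumped ~20% overnight
    failures.append("EUR rate moved more than 10% since last sync")
# In the real pipeline, a non-empty failure list fails the build and alerts
# the team before anything reaches production dashboards.
```

The essential property is that failures stop the build rather than merely logging a warning: bad data never silently flows through to reports.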
Schema changes trigger notifications. Sync failures alert on-call engineers. Data quality test failures show up in Slack before anyone sees incorrect dashboards.
Nothing breaks silently. If an analyst accidentally deletes critical data, the sync failure is immediately visible. If a mapping looks wrong, tests catch it before downstream models run.
The safety isn’t in preventing analysts from making changes. It’s in detecting problems quickly and failing explicitly rather than producing incorrect results.
This approach works for reference data that changes frequently (exchange rates, territory assignments, product mappings), configuration that controls warehouse behaviour (feature flags, processing parameters), and small datasets with high business context where correctness requires domain expertise that engineering doesn’t have.
It doesn’t work for high-volume transactional data, data requiring complex access control, critical business entities like customer records or financial transactions, or datasets that will definitely grow beyond spreadsheet scale.
The test is simple: if the dataset fits comfortably in a spreadsheet, changes frequently, requires business expertise to maintain correctly, and doesn’t need database-level features, Google Sheets is probably appropriate. If you’re constantly fighting tool limitations, use a different tool.
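That test can be encoded as a simple predicate. The criteria come straight from the text; the function and the 10,000-row ceiling are illustrative, not a hard rule:

```python
def sheets_appropriate(row_count, changes_frequently,
                       needs_domain_expertise, needs_db_features):
    """Suitability test: small, frequently changing, expertise-heavy
    reference data that doesn't need database-level features."""
    fits_in_spreadsheet = row_count < 10_000   # illustrative threshold
    return (fits_in_spreadsheet and changes_frequently
            and needs_domain_expertise and not needs_db_features)

# Exchange rates: small, daily updates, finance expertise, no row-level ACLs.
use_sheets = sheets_appropriate(200, True, True, False)
# Customer records: large and needing database-level access control.
use_database = not sheets_appropriate(2_000_000, True, True, True)
```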
You don’t flip a switch and suddenly analysts own all the data. Here’s how to transition incrementally.
Look at your recent tickets. Which data update requests come up repeatedly? Which datasets require business context that engineering doesn’t have?
Start with the most annoying bottleneck. The data update that generates the most tickets or the most follow-ups.
Before you hand over any data, establish who owns it. Get explicit agreement from someone that they’re responsible for accuracy and maintenance.
Every Google Sheet needs a designated owner. Not “the finance team.” A specific person who’s responsible for data accuracy. When something looks wrong, you know who to ask.
Set up automated syncing from Google Sheets to your warehouse. Create dbt models with appropriate tests. Document what the data is and how it’s used.
Start with conservative validation. You can always loosen rules later. You can’t easily add validation after people get used to working without it.
Keep the old process running while the new one proves itself. Both systems produce output. You compare them to verify consistency.
This is your validation period. If discrepancies appear, investigate why. Maybe the Sheet needs better structure. Maybe your tests need refinement.
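The comparison itself can be mechanical. A sketch of the kind of check involved, assuming both processes produce rows keyed by a shared identifier (the function name is illustrative):

```python
def compare_outputs(old_rows, new_rows, key):
    """Compare the legacy process's output against the automated pipeline's,
    keyed on `key`. Returns a list of human-readable discrepancies."""
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    diffs = []
    for k in old.keys() - new.keys():
        diffs.append(f"{k}: present in old output only")
    for k in new.keys() - old.keys():
        diffs.append(f"{k}: present in new output only")
    for k in old.keys() & new.keys():
        if old[k] != new[k]:
            diffs.append(f"{k}: values differ")
    return diffs

old = [{"currency": "EUR", "rate": 1.08}, {"currency": "GBP", "rate": 1.27}]
new = [{"currency": "EUR", "rate": 1.08}, {"currency": "GBP", "rate": 1.26}]
diffs = compare_outputs(old, new, "currency")
```

An empty diff list over a few consecutive cycles is a reasonable signal that the automated pipeline is ready to take over.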
Once the automated pipeline reliably produces correct results, stop doing manual updates. Document the new process. Train whoever needs training.
Make it clear that the old workflow is retired. If people revert to emailing CSVs because old habits die hard, politely redirect them to the Sheet.
The first dataset you migrate teaches you what works and what doesn’t. Maybe your monitoring needs adjustment. Maybe your documentation wasn’t clear enough.
Fix the issues, then migrate the next dataset. Each one gets easier because you’re refining the pattern.
Let’s be specific about what changes when you implement this properly.
Analysts update data when they need to. Not when engineering has availability. Not after waiting in a queue. Immediately.
Engineering stops being interrupt-driven. Data update requests disappear from Slack and Jira. Engineers can focus on actual engineering problems for hours at a time without context switching.
Data freshness improves dramatically. The gap between “this mapping is wrong” and “this mapping is fixed” shrinks from days to minutes.
Shadow systems disappear. When analysts can update data directly, they stop building Excel workarounds. Everything flows through the proper pipeline.
Confidence increases. Analysts trust the data because they control it. Engineering trusts the pipeline because tests validate correctness. Leadership trusts the reports because numbers stay current.
Organisational velocity improves. Decisions happen faster when data reflects reality in near real-time instead of reflecting last week’s reality after someone processes the update queue.
This is what Resident Advisor achieved. Their analysts maintain GL mappings and exchange rates directly. Engineering maintains reliable infrastructure. Everyone works on problems that match their expertise.
That’s not “empowerment” as an abstract concept. It’s operational efficiency through proper organisational design.
Won’t giving analysts direct data access create governance problems?
Only if you don’t implement proper governance. Google Sheets has access controls, you define who can edit. dbt has testing, you validate data quality. Your warehouse has audit logs, you track changes. This isn’t “no governance,” it’s appropriate governance at the right layers.
What if an analyst makes a mistake that breaks downstream reporting?
dbt tests catch data quality issues before they reach production dashboards. If a mapping looks wrong or values seem suspicious, the build fails and alerts fire. Mistakes get caught in staging, not in production. That’s the whole point of having a transformation layer with testing.
How do we train analysts to use this properly?
Most analysts already use Google Sheets daily. They don't need technical training; they need process documentation. What does each column mean? What's the expected update frequency? What happens downstream when data changes? Clear documentation solves this, not training programmes.
What if the analyst who owns a Sheet leaves the company?
The same thing that happens when any knowledge owner leaves: you need succession planning. Ownership should be documented in your data catalogue. Sheets should live in shared drives, not personal accounts. This isn't a Sheets problem; it's an organisational knowledge-management problem.
Won’t this just move the bottleneck from engineering to data quality issues?
Only if you skip implementing tests. Proper dbt testing catches quality issues before they cause problems. And catching quality issues early is vastly preferable to processing update requests late. You’re trading interrupt-driven work for proactive monitoring, which is a massive improvement.
Engineering shouldn’t be the mandatory intermediary for every data update.
When analysts maintain data that requires their domain expertise (product mappings, exchange rates, territory assignments) and engineering maintains the infrastructure that syncs, validates, and transforms that data, everyone works on problems that match their skills.
Resident Advisor implemented this pattern for their GL mappings and currency rates. Their analysts update Sheets directly. Their pipeline syncs automatically. Their reports stay accurate. Their engineering team focuses on engineering problems.
That's not revolutionary. It's basic operational efficiency, achieved by putting data ownership where it belongs.
The bottleneck isn’t technical. It’s organisational. And it’s entirely solvable.
Book a 30-minute discovery call. We’ll help you assess whether Google Sheets automation fits your use case and show you exactly how it would work for your data.
Previously in this series: