dbt and Snowflake teams often reach a point where further optimisation delivers diminishing returns, with costs rising and engineering velocity slowing due to architectural limitations. This recap of our LinkedIn Live shows how SQLMesh's incremental, state-aware processing enables 50–70% cost savings, greater productivity, and sustainable growth by replacing dbt's expensive full-rebuild approach.
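For readers unfamiliar with the distinction, here is a minimal Python sketch of the two execution strategies. It is a conceptual illustration only, not SQLMesh's or dbt's actual API; the table names and the `run_query` helper are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical warehouse helper for illustration only; not a real vendor API.
def rebuild_full_table(run_query):
    # Full-refresh style: every run reprocesses the entire source history,
    # so warehouse spend grows with total table size, not with new data.
    run_query("CREATE OR REPLACE TABLE analytics.orders AS SELECT * FROM raw.orders")

def process_new_intervals(run_query, processed_through: date, today: date) -> date:
    # State-aware incremental style: only intervals that have never been
    # processed are computed, so cost tracks the volume of new data.
    day = processed_through + timedelta(days=1)
    while day <= today:
        run_query(
            "INSERT INTO analytics.orders "
            f"SELECT * FROM raw.orders WHERE order_date = '{day.isoformat()}'"
        )
        day += timedelta(days=1)
    return today  # new high-water mark, persisted as state for the next run
```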
Cloud providers like AWS are introducing AI-powered cost transparency tools, while ETL vendors remain silent, continuing to profit from opaque, row-based pricing models that penalise efficiency and scale. By switching to performance-based pricing and auditing pipeline usage, data teams can cut ETL costs by up to 50% without sacrificing performance.
Row-based ETL pricing models conceal hidden costs such as duplicate processing, unchanged record syncing, and development retries, leading to inflated bills that often do not reflect actual data value. Shifting to performance-based pricing aligns costs with real infrastructure usage, enabling predictable budgeting, greater efficiency, and funding for innovation.
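To make that gap concrete, the back-of-the-envelope sketch below compares the two models; every volume and price in it is invented purely for illustration and is not taken from any vendor's rate card.

```python
# Invented daily volumes for a hypothetical pipeline.
ROWS_CHANGED_PER_DAY = 1_000_000        # rows that genuinely changed
UNCHANGED_ROWS_RESYNCED = 4_000_000     # untouched records resynced anyway
DUPLICATE_AND_RETRY_ROWS = 500_000      # backfills and development re-runs
PRICE_PER_MILLION_ROWS = 10.0           # hypothetical row-based rate

billable_rows = ROWS_CHANGED_PER_DAY + UNCHANGED_ROWS_RESYNCED + DUPLICATE_AND_RETRY_ROWS
row_based_bill = billable_rows / 1_000_000 * PRICE_PER_MILLION_ROWS

# Performance-based pricing keyed to the compute actually consumed moving
# the changed rows (again, a made-up rate purely for comparison).
COMPUTE_HOURS_USED = 2.0
PRICE_PER_COMPUTE_HOUR = 5.0
usage_based_bill = COMPUTE_HOURS_USED * PRICE_PER_COMPUTE_HOUR

print(f"Row-based bill:   ${row_based_bill:.2f}")   # $55.00, though only $10.00 of rows changed
print(f"Usage-based bill: ${usage_based_bill:.2f}")  # $10.00
```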
Astronomer’s PR mishap, responding to a kiss cam controversy by hiring a celebrity, spotlights a deeper issue in vendor culture: misplaced priorities and poor judgment under pressure. For data leaders, it raises critical concerns about whether vendors invest in engineering excellence or opt for brand theatrics when things go wrong.
The real value of Big Data LDN 2025 lies not in vendor pitches or keynote sessions, but in candid corridor conversations among data leaders grappling with vendor fatigue, renewal pressure, and cost consolidation. As budgets tighten and complexity rises, the smartest teams are shifting from reactive tool dependency to proactive strategies that prioritise flexibility, performance-based pricing, and long-term efficiency.
Many data teams waste budget through poor tool choices and by tying up senior engineering talent in firefighting rather than high-value, strategic work. High-performing teams prioritise experienced hires, measure business impact, reduce reactive work, and use AI and tools strategically to maximise ROI and team effectiveness.
Modern ETL pricing models often charge based on row counts, which fundamentally misaligns with how analytical systems actually process data—via columnar methods focused on compute efficiency and performance. This disconnect not only creates technical debt and unpredictable costs but also diverts engineering resources away from optimisation and innovation toward managing arbitrary billing constraints.
Row-based ETL pricing models create unpredictable, disproportionately high costs that penalise business growth, disrupt budgeting, and divert engineering resources from innovation to cost control. Performance-based pricing, aligned with actual infrastructure usage, offers a more predictable and strategic alternative that supports scalable data operations without financial volatility.
The June 12, 2025 Google Cloud outage revealed a harsh truth: modern data stacks often create more firefighting than innovation, as fragmented toolchains and so-called “managed” services increase maintenance burdens, costs, and risk. Matatika’s Mirror Mode offers a risk-free path out of this cycle by allowing teams to validate a more stable, antifragile infrastructure—enabling a shift from constant maintenance to strategic, high-impact data work.
Many data teams avoid proper data modelling due to its perceived complexity, often relying on ad-hoc structures that lead to performance issues and eroded trust in analytics. The most effective teams use flexible schema strategies, balancing star and snowflake designs, to align with their specific performance, storage, and maintenance needs.
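As a quick illustration of the trade-off the article discusses, the pandas sketch below builds the same small sales mart both ways; the tables and values are invented for the example.

```python
import pandas as pd

# Toy fact table, invented for illustration.
fact_sales = pd.DataFrame({"product_id": [1, 1, 2], "quantity": [3, 1, 5]})

# Star schema: one wide, denormalised dimension means a single join at query time.
dim_product_star = pd.DataFrame({
    "product_id":    [1, 2],
    "product_name":  ["Kettle", "Toaster"],
    "category_name": ["Kitchen", "Kitchen"],  # category attributes repeated per product
})
star = fact_sales.merge(dim_product_star, on="product_id")

# Snowflake schema: the dimension is normalised into product and category tables,
# which removes redundancy but adds a join and an extra table to maintain.
dim_product = pd.DataFrame({
    "product_id":   [1, 2],
    "product_name": ["Kettle", "Toaster"],
    "category_id":  [10, 10],
})
dim_category = pd.DataFrame({"category_id": [10], "category_name": ["Kitchen"]})
snowflake = fact_sales.merge(dim_product, on="product_id").merge(dim_category, on="category_id")

# Both yield the same analytical result; the trade-off is query simplicity and
# read performance (star) versus storage and update consistency (snowflake).
assert star["quantity"].sum() == snowflake["quantity"].sum()
```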