Data Engineering Weekly

Ananth Packkildurai

The Weekly Data Engineering Newsletter www.dataengineeringweekly.com

  1. 20 AUG

    Knowledge, Metrics, and AI: Rethinking the Semantic Layer with David Jayatillake

    Semantic layers have been with us for decades—sometimes buried inside BI tools, sometimes living in analysts’ heads. But as data complexity grows and AI pushes its way into the stack, the conversation is shifting. In a recent conversation with David Jayatillake, a long-time data leader with experience at Cube, Delphi Labs, and multiple startups, we explored how semantic layers move from BI lock-in to invisible, AI-driven infrastructure—and why that matters for the future of metrics and knowledge management.

    What Exactly Is a Semantic Layer?
    Every company already has a semantic layer. Sometimes it’s software; sometimes it’s in people’s heads. When an analyst translates a stakeholder’s question into SQL, they’re acting as a human semantic layer. A software semantic layer encodes this process so SQL is generated consistently and automatically. David’s definition is sharp: a semantic layer is a knowledge graph plus a compiler. The knowledge graph stores entities, metrics, and relationships; the compiler translates requests into SQL. (A minimal sketch of this idea appears below.)

    From BI Tools to Independent Layers
    BI tools were the first place semantic layers showed up: Business Objects, SSAS, Looker, and Power BI. This works fine for smaller orgs, but quickly creates vendor lock-in for enterprises juggling multiple BI tools and warehouses. Independent semantic layers emerged to solve this. By abstracting the logic outside BI, companies can ensure consistency across Tableau, Power BI, Excel, and even embedded analytics in customer-facing products. Tools like Cube and dbt metrics aim to play that role.

    Why Are They Hard to Maintain?
    The theory is elegant: define once, use everywhere. But two big issues keep surfacing:
    * Constant change. Business definitions evolve. A revenue formula that works today may be obsolete tomorrow.
    * Standardization. Each vendor proposes its own standard—dbt metrics, LookML, Malloy. History tells us one “universal” standard usually spawns another to unify the rest.
    Performance complicates things further—BI vendors optimize their compilers differently, making interoperability tricky.

    Culture and Team Ownership
    A semantic layer is useless without cultural buy-in. Product teams must emit clean events and define success metrics. Without that, the semantic layer starves. Ownership varies: sometimes product engineering owns it end-to-end with embedded data engineers; other times, central data teams or hybrid models step in. What matters is aligning metrics with product outcomes.

    Data Models vs. Semantic Layers
    Dimensional modeling (Kimball, Data Vault) makes data neat and joinable. But models alone don’t enforce consistent definitions. Without a semantic layer, organizations drift into “multiple versions of the truth.”

    Beyond Metrics: Metric Trees
    Semantic layers can also encode metric trees—hierarchies explaining why a metric changed. Example: revenue = ACV × deals. If revenue drops, metric trees help trace whether ACV or deal count is responsible. This goes beyond simple dimension slicing and powers real root cause analysis.

    Where AI Changes the Game
    Maintaining semantic layers has always been their weak point. AI changes that:
    * Dynamic extensions: AI can generate new metrics on demand.
    * Governance by design: Instead of hallucinating answers, AI can admit “I don’t know” or propose a new definition.
    * Invisible semantics: Users query in natural language, and AI maintains and optimizes the semantic layer behind the scenes.
    Executives demanding “AI access to data” are accelerating this shift. Text-to-SQL alone fails without semantic context. With a semantic layer, AI can deliver governed, consistent answers instantly.
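    To make David’s “knowledge graph plus a compiler” definition concrete, here is a minimal, hypothetical sketch in Python: a dictionary stands in for the knowledge graph (entities, metrics, and their mapping to tables), and a small compile function turns a metric request into SQL. The table, column, and metric names are invented for illustration and do not come from the episode or from any particular tool.

    ```python
    # A minimal sketch of "knowledge graph plus a compiler", per David's definition.
    # Metric names, tables, and columns are hypothetical, not from the episode,
    # and this resembles no vendor's actual implementation.

    SEMANTIC_MODEL = {
        "entities": {
            "orders": {"table": "analytics.orders", "time_column": "ordered_at"},
        },
        "metrics": {
            "revenue": {"entity": "orders", "expr": "SUM(amount_usd)"},
            "deals":   {"entity": "orders", "expr": "COUNT(DISTINCT order_id)"},
        },
    }

    def compile_metric(metric: str, group_by: list[str], since: str) -> str:
        """Translate a metric request into SQL using the semantic model."""
        m = SEMANTIC_MODEL["metrics"][metric]
        e = SEMANTIC_MODEL["entities"][m["entity"]]
        dims = ", ".join(group_by)
        select = f"{dims}, " if dims else ""
        group = f"\nGROUP BY {dims}" if dims else ""
        return (
            f"SELECT {select}{m['expr']} AS {metric}\n"
            f"FROM {e['table']}\n"
            f"WHERE {e['time_column']} >= '{since}'{group}"
        )

    if __name__ == "__main__":
        # Every caller gets the same definition of "revenue": one source of truth.
        print(compile_metric("revenue", ["country"], "2025-01-01"))
    ```

    An AI layer would sit in front of compile_metric, mapping a natural-language question to a metric name and dimensions instead of writing raw SQL, which is exactly why text-to-SQL works better with a semantic layer underneath.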
    Standardization Might Not Matter
    Will the industry settle on a single semantic standard? Maybe not—and that’s okay. Standards like the Model Context Protocol (MCP) allow AI to translate across formats. SQL remains the execution layer, while semantics bridge business logic. Cube, dbt, Malloy, or Databricks metric views can all coexist if AI smooths the edges.

    When Do You Need One?
    Two clear signals:
    * Inconsistency: Teams struggle to agree on fundamental metrics such as revenue or churn.
    * Speed: Stakeholders wait weeks for analyst queries that could be answered in seconds with semantics + AI.
    If either pain point resonates, it’s time to consider a semantic layer.

    Looking Ahead
    David sees three big shifts coming soon:
    * Iceberg as a universal storage format. True multi-engine querying across DuckDB, Databricks, and others.
    * Invisible semantics. Baked into tools, maintained by AI, no more “selling” semantic layers.
    * AI-native access. Semantic layers become the primary interface between humans, AI, and data.

    Final Thoughts
    Semantic layers aren’t new—they’ve quietly powered BI tools and lived in analysts’ heads for years. What’s new is the urgency: executives want AI to answer questions instantly, and that requires a consistent semantic foundation. As David Jayatillake reminds us, the journey is from BI lock-in to invisible semantics—semantic layers that are dynamic, governed, and maintained by AI. The question is no longer if your organization needs one, but when you’ll make the shift—and whether your semantic layer will keep pace with the AI-driven future of data.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    42 min
  2. 13 AUG

    Insights from Jacopo Tagliabue, CTO of Bauplan: Revolutionizing Data Pipelines with Functional Data Engineering

    Data Engineering Weekly recently hosted Jacopo Tagliabue, CTO of Bauplan, for an insightful podcast exploring innovative solutions in data engineering. Jacopo shared valuable perspectives drawn from his entrepreneurial journey, his experience building multiple companies, and his deep understanding of data engineering challenges. This extensive conversation spanned the complexities of data engineering and showcased Bauplan’s unique approach to tackling industry pain points.

    Entrepreneurial Journey and Problem Identification
    Jacopo opened the discussion by highlighting the personal and professional experiences that led him to create Bauplan. Previously, he built a company specializing in Natural Language Processing (NLP) at a time when NLP was still maturing as a technology. After selling this initial venture, Jacopo immersed himself deeply in data engineering, navigating complex infrastructures involving Apache Spark, Airflow, and Snowflake. He recounted the profound frustration of managing complicated and monolithic data stacks that, despite their capabilities, came with significant operational overhead. Transitioning from Apache Spark to Snowflake provided some relief, yet it introduced new limitations, particularly concerning Python integration and vendor lock-in. Recognizing the industry-wide need for simplicity, Bauplan was conceptualized to offer engineers a straightforward and efficient alternative.

    Bauplan’s Core Abstraction - Functions vs. Traditional ETL
    At Bauplan’s core is the decision to use functions as the foundational building block. Jacopo explained how traditional ETL methodologies typically demand extensive management of infrastructure and impose high cognitive overhead on engineers. By contrast, functions offer a much simpler, modular approach. They enable data engineers to focus purely on business logic without worrying about complex orchestration or infrastructure. The Bauplan approach distinctly separates responsibilities: engineers handle code and business logic, while Bauplan’s platform takes charge of data management, caching, versioning, and infrastructure provisioning. Jacopo emphasized that this separation significantly enhances productivity and allows engineers to operate efficiently, creating modular and easily maintainable pipelines. (A generic sketch of this functional style appears below.)

    Data Versioning, Immutability, and Reproducibility
    Jacopo firmly underscored the importance of immutability and reproducibility in data pipelines. He explained that data engineering historically struggles with precisely reproducing pipeline states, which is especially critical during debugging and auditing. Bauplan directly addresses these challenges by automatically creating immutable snapshots of both the code and data state for every job executed. Each pipeline execution receives a unique job ID, guaranteeing precise reproducibility of the pipeline’s state at any given moment. This method enables engineers to debug issues without impacting the production environment, thereby streamlining maintenance tasks and enhancing reliability. Jacopo highlighted this capability as central to achieving robust data governance and operational clarity.
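    The episode does not walk through Bauplan’s actual API, so the following is a generic, hypothetical sketch of the functional style described above: plain Python functions registered as pipeline steps, Arrow tables passed between them, and every run stamped with a job ID for reproducibility. All names are invented for illustration.

    ```python
    # A generic sketch of "functions as the building block" of a pipeline,
    # with every run stamped with a job ID. This is NOT Bauplan's API;
    # all names here are invented for illustration.
    import uuid
    import pyarrow as pa
    import pyarrow.compute as pc

    PIPELINE = []  # registered steps, in registration order

    def step(func):
        """Register a function as a pipeline step; the platform owns everything else."""
        PIPELINE.append(func)
        return func

    @step
    def raw_orders() -> pa.Table:
        # In a real system this would read from object storage; here it is inline data.
        return pa.table({"order_id": [1, 2, 3], "amount_usd": [10.0, 25.0, 40.0]})

    @step
    def big_orders(orders: pa.Table) -> pa.Table:
        # Pure business logic: keep orders above a threshold.
        return orders.filter(pc.greater(orders["amount_usd"], 20.0))

    def run_pipeline():
        job_id = uuid.uuid4().hex  # one immutable identifier per execution
        data = None
        for func in PIPELINE:
            data = func() if data is None else func(data)
        print(f"job {job_id}: produced {data.num_rows} rows")
        return job_id, data

    if __name__ == "__main__":
        run_pipeline()
    ```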
    Branching, Collaboration, and Conflict Resolution
    Bauplan integrates Git-like branching capabilities tailored explicitly for data engineering workflows. Jacopo detailed how this capability allows engineers to experiment, collaborate, and innovate safely in isolated environments. Branching provides an environment where engineers can iterate without fear of disrupting ongoing operations or production pipelines. Jacopo explained that Bauplan handles conflicts conservatively: if two engineers attempt to modify the same table or data concurrently, Bauplan requires explicit rebasing. While this strict conflict-resolution policy may appear cautious, it maintains data integrity and prevents unexpected race conditions. Bauplan ensures that each branch is appropriately isolated, promoting clean, structured collaboration.

    Apache Arrow and Efficient Data Shuffling
    Efficiency in data shuffling, especially between pipeline functions, was another critical topic. Jacopo praised Apache Arrow’s role as the backbone of Bauplan’s data interchange strategy. Apache Arrow’s zero-copy transfer capability significantly boosts data movement speed, removing traditional bottlenecks associated with serialization and data transfers. Jacopo illustrated how Bauplan leverages Apache Arrow to facilitate rapid data exchanges between functions, dramatically outperforming traditional systems like Airflow. By eliminating the need for intermediate serialization, Bauplan achieves significant performance improvements and streamlines data processing, enabling rapid, efficient pipelines.

    Vertical Scaling and System Performance
    Finally, the conversation shifted to the vertical scaling strategy employed by Bauplan. Unlike horizontally distributed systems like Apache Spark, Bauplan strategically focuses on vertical scaling, which simplifies infrastructure management and optimizes resource utilization. Jacopo explained that modern cloud infrastructures now offer large compute instances capable of handling substantial data volumes efficiently, negating the complexity typically associated with horizontally distributed systems. He clarified Bauplan’s current operational range as optimally suited for pipeline data volumes typically between 10GB and 100GB. This range covers the vast majority of standard enterprise use cases, making Bauplan a highly suitable and effective solution for most organizations. Jacopo stressed that although certain specialized scenarios still require distributed computing platforms, the majority of pipelines benefit immensely from Bauplan’s simplified, vertically scaled approach.

    In summary, Jacopo Tagliabue offered a compelling vision of Bauplan’s mission to simplify and enhance data engineering through functional abstractions, immutable data versioning, and efficient, vertically scaled operations. Bauplan presents an innovative solution designed explicitly around the real-world challenges data engineers face, promising significant improvements in reliability, performance, and productivity in managing modern data pipelines.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    45 min
  3. 25 APR

    AI and Data in Production: Insights from Avinash Narasimha [AI Solutions Leader at Koch Industries]

    In our latest episode of Data Engineering Weekly, co-hosted by Aswin, we explored the practical realities of AI deployment and data readiness with our distinguished guest, Avinash Narasimha, AI Solutions Leader at Koch Industries. This discussion shed significant light on the maturity, challenges, and potential that generative AI and data preparedness present in contemporary enterprises.

    Introducing Our Guest: Avinash Narasimha
    Avinash Narasimha is a seasoned professional with over two decades of experience in data analytics, machine learning, and artificial intelligence. His focus at Koch Industries involves deploying and scaling various AI solutions, with particular emphasis on operational AI and generative AI. His insights stem from firsthand experience in developing robust AI frameworks that are actively deployed in real-world applications.

    Generative AI in Production: Reality vs. Hype
    One key question often encountered in the industry revolves around the maturity of generative AI in actual business scenarios. Addressing this concern directly, Avinash confirmed that generative AI has indeed crossed the pilot threshold and is actively deployed in several production scenarios at Koch Industries. Highlighting their early adoption strategy, Avinash explained that they have been on this journey for over two years, emphasizing an established continuous feedback loop as a critical component in maintaining effective generative AI operations.

    Production Readiness and Deployment
    Deployment strategies for AI, particularly for generative models and agents, have undergone significant evolution. Avinash described a systematic approach based on his experience:
    * Beginning with rigorous experimentation
    * Transitioning smoothly into scalable production environments
    * Incorporating robust monitoring and feedback mechanisms
    The result is a successful deployment of multiple generative AI solutions, each carefully managed and continuously improved through iterative processes.

    The Centrality of Data Readiness
    During our conversation, we explored the significance of data readiness, a pivotal factor that influences the success of AI deployment. Avinash emphasized data readiness as a fundamental component that significantly impacts the timeline and effectiveness of integrating AI into production systems. He emphasized the following:
    - Data Quality: Consistent and high-quality data is crucial. Poor data quality frequently acts as a bottleneck, restricting the performance and reliability of AI models. (A minimal readiness-check sketch appears below.)
    - Data Infrastructure: A robust data infrastructure is necessary to support the volume, velocity, and variety of data required by sophisticated AI models.
    - Integration and Accessibility: The ease of integrating and accessing data within the organization significantly accelerates AI adoption and effectiveness.

    Challenges in Data Readiness
    Avinash openly discussed challenges that many enterprises face concerning data readiness, including fragmented data ecosystems, legacy systems, and inadequate data governance. He acknowledged that while the journey toward optimal data readiness can be arduous, organizations that systematically address these challenges see substantial improvements in their AI outcomes.
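    As a loose illustration of the data-quality point above, here is a minimal readiness-check sketch in plain Python. The thresholds, column names, and the definition of “ready” are invented for this example; they are not from the episode or from any Koch Industries system.

    ```python
    # A minimal data-readiness check sketch in plain Python.
    # Thresholds, column names, and the notion of "ready" are invented for
    # illustration; they are not from the episode or any real system.
    from datetime import datetime, timedelta, timezone

    def readiness_report(rows: list[dict], required: list[str], freshness_column: str,
                         max_null_rate: float = 0.05, max_staleness_hours: float = 24.0) -> dict:
        """Report null rates and staleness so poor data quality is caught before it
        becomes the bottleneck for a downstream AI workload."""
        report = {"row_count": len(rows), "ready": True, "issues": []}
        if not rows:
            return {**report, "ready": False, "issues": ["no rows"]}
        for col in required:
            null_rate = sum(1 for r in rows if r.get(col) is None) / len(rows)
            if null_rate > max_null_rate:
                report["ready"] = False
                report["issues"].append(f"{col}: null rate {null_rate:.1%} exceeds {max_null_rate:.0%}")
        newest = max(r[freshness_column] for r in rows if r.get(freshness_column))
        if datetime.now(timezone.utc) - newest > timedelta(hours=max_staleness_hours):
            report["ready"] = False
            report["issues"].append(f"data older than {max_staleness_hours}h")
        return report

    if __name__ == "__main__":
        sample = [{"sensor_id": 1, "reading": 0.7, "observed_at": datetime.now(timezone.utc)},
                  {"sensor_id": 2, "reading": None, "observed_at": datetime.now(timezone.utc)}]
        print(readiness_report(sample, ["sensor_id", "reading"], "observed_at"))
    ```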
    Strategies for Overcoming Data Challenges
    Avinash also offered actionable insights into overcoming common data-related obstacles:
    - Building Strong Data Governance: A robust governance framework ensures that data remains accurate, secure, and available when needed, directly enhancing AI effectiveness.
    - Leveraging Cloud Capabilities: He noted recent developments in cloud-based infrastructure as significant enablers, providing scalable and sophisticated tools for data management and model deployment.
    - Iterative Improvement: Regular feedback loops and iterative refinement of data processes help gradually enhance data readiness and AI performance.

    Future Outlook: Trends and Expectations
    Looking ahead, Avinash predicted increased adoption of advanced generative AI tools and emphasized ongoing improvements in model interpretability and accountability. He expects enterprises will increasingly prioritize explainable AI, balancing performance with transparency to maintain trust among stakeholders. Moreover, Avinash highlighted the anticipated evolution of data infrastructure to become more flexible and adaptive, catering specifically to the unique demands of generative AI applications. He believes this evolution will significantly streamline the adoption of AI across industries.

    Key Takeaways
    - Generative AI is Ready for Production: Organizations, particularly those that have been proactive in their adoption, have successfully integrated generative AI into production, highlighting its maturity beyond experimental stages.
    - Data Readiness is Crucial: Effective AI deployment is heavily dependent on the quality, accessibility, and governance of data within organizations.
    - Continuous Improvement: Iterative feedback and continuous improvements in data readiness and AI deployment strategies significantly enhance performance and outcomes.

    Closing Thoughts
    Our discussion with Avinash Narasimha provided practical insights into the real-world implementation of generative AI and the critical role of data readiness. His experience at Koch Industries illustrates not only the feasibility but also the immense potential generative AI holds for enterprises willing to address data challenges and deploy AI thoughtfully and systematically. Stay tuned for more insightful discussions on Data Engineering Weekly.

    All rights reserved, ProtoGrowth Inc., India. I have provided links for informational purposes and do not suggest endorsement. All views expressed in this newsletter are my own and do not represent current, former, or future employers’ opinions.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    37 min
  4. 6 MAR

    Is Apache Iceberg the New Hadoop? Navigating the Complexities of Modern Data Lakehouses

    The modern data stack constantly evolves, with new technologies promising to solve age-old problems like scalability, cost, and data silos. Apache Iceberg, an open table format, has recently generated significant buzz. But is it truly revolutionary, or is it destined to repeat the pitfalls of past solutions like Hadoop? In a recent episode of the Data Engineering Weekly podcast, we delved into this question with Daniel Palma, Head of Marketing at Estuary and a seasoned data engineer with over a decade of experience. Danny authored a thought-provoking article comparing Iceberg to Hadoop, not on a purely technical level, but in terms of their hype cycles, implementation challenges, and the surrounding ecosystems. This blog post expands on that insightful conversation, offering a critical look at Iceberg’s potential and the hurdles organizations face when adopting it.

    Hadoop: A Brief History Lesson
    For those unfamiliar with Hadoop’s trajectory, it’s crucial to understand the context. In the mid-2000s, Hadoop emerged as a groundbreaking solution for processing massive datasets. It promised to address key pain points:
    * Scaling: Handling ever-increasing data volumes.
    * Cost: Reducing storage and processing expenses.
    * Speed: Accelerating data insights.
    * Data Silos: Breaking down barriers between data sources.
    Hadoop achieved this through distributed processing and storage, using a framework called MapReduce and the Hadoop Distributed File System (HDFS). However, while the promise was alluring, the reality proved complex. Many organizations struggled with Hadoop’s operational overhead, leading to high failure rates (Gartner famously estimated that 80% of Hadoop projects failed). The complexity stemmed from managing distributed clusters, tuning configurations, and dealing with issues like the “small file problem.”

    Iceberg: The Modern Contender
    Apache Iceberg enters the scene as a modern table format designed for massive analytic datasets. Like Hadoop, it aims to tackle scalability, cost, speed, and data silos. However, Iceberg focuses specifically on the table format layer, offering features like:
    * Schema Evolution: Adapting to changing data structures without rewriting tables.
    * Time Travel: Querying data as it existed at a specific time.
    * ACID Transactions: Ensuring data consistency and reliability.
    * Partition Evolution: Changing data partitioning without breaking existing queries.
    Iceberg’s design addresses Hadoop’s shortcomings, particularly around data consistency and schema evolution. But, as Danny emphasizes, an open table format alone isn’t enough.

    The Ecosystem Challenge: Beyond the Table Format
    Iceberg, by itself, is not a complete solution. It requires a surrounding ecosystem to function effectively. This ecosystem includes:
    * Catalogs: Services that manage metadata about Iceberg tables (e.g., table schemas, partitions, and file locations).
    * Compute Engines: Tools that query and process data stored in Iceberg tables (e.g., Trino, Spark, Snowflake, DuckDB).
    * Maintenance Processes: Operations that optimize Iceberg tables, such as compacting small files and managing metadata.
    The ecosystem is where the comparison to Hadoop becomes particularly relevant. Hadoop also had a vast ecosystem (Hive, Pig, HBase, etc.), and managing this ecosystem was a significant source of complexity. Iceberg faces a similar challenge.
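    As a rough illustration of the catalog and compute-engine split above, here is a hedged sketch that asks a catalog for an Iceberg table and hands the scan to DuckDB for querying. It assumes pyiceberg and duckdb are installed, a REST catalog is running at a hypothetical endpoint, and the namespace and table names are invented; the library calls are the commonly documented entry points, so verify them against the versions you run.

    ```python
    # A sketch of the catalog / compute-engine split, assuming a REST catalog at an
    # invented URI and an invented table name. Verify library calls against the
    # pyiceberg and duckdb versions you actually use.
    import duckdb
    from pyiceberg.catalog import load_catalog

    # 1. The catalog answers "where is table X and what is its current metadata?"
    catalog = load_catalog(
        "demo",
        **{"type": "rest", "uri": "http://localhost:8181"},  # hypothetical endpoint
    )
    table = catalog.load_table("analytics.orders")  # hypothetical namespace.table

    # 2. A compute engine does the actual query work. Here the scan is materialized
    #    to Arrow and handed to DuckDB.
    orders = table.scan(selected_fields=("order_id", "amount_usd")).to_arrow()
    print(duckdb.sql("SELECT count(*) AS n, sum(amount_usd) AS revenue FROM orders").fetchall())
    ```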
    Operational Complexity: The Elephant in the Room
    Danny highlights operational complexity as a major hurdle for Iceberg adoption. While Iceberg itself simplifies some aspects of data management, the surrounding ecosystem introduces new challenges:
    * Small File Problem (Revisited): Like Hadoop, Iceberg can suffer from small file problems. Data ingestion tools often create numerous small files, which can degrade performance during query execution. Iceberg addresses this through table maintenance, specifically compaction (merging small files into larger ones). However, many data ingestion tools don’t natively support compaction, requiring manual intervention or dedicated Spark clusters. (A simple compaction-planning sketch appears below.)
    * Metadata Overhead: Iceberg relies heavily on metadata to track table changes and enable features like time travel. If not handled correctly, managing this metadata can become a bottleneck. Organizations need automated processes for metadata cleanup and compaction.
    * Catalog Wars: The catalog choice is critical, and the market is fragmented. Major data warehouse providers (Snowflake, Databricks) have released their own flavors of REST catalogs, leading to compatibility issues and potential vendor lock-in. The dream of a truly interoperable catalog layer, where you can seamlessly switch between providers, remains elusive.
    * Infrastructure Management: Setting up and maintaining an Iceberg-based data lakehouse requires expertise in infrastructure-as-code, monitoring, observability, and data governance. This maintenance demands a level of operational maturity that many organizations lack.

    Key Considerations for Iceberg Adoption
    If your organization is considering Iceberg, Danny stresses the importance of careful planning and evaluation:
    * Define Your Use Case: Clearly articulate your specific needs. Are you prioritizing performance, cost, or both? What are your data governance and security requirements? Your answers will influence your choices for storage, compute, and cataloging.
    * Evaluate Compatibility: Ensure your existing infrastructure and tools (query engines, data ingestion pipelines) are compatible with Iceberg and your chosen catalog.
    * Consider Cloud Vendor Lock-in: Be mindful of potential lock-in, especially with catalogs. While Iceberg is open, cloud providers have tightly coupled implementations specific to their ecosystems.
    * Build vs. Buy: Decide whether you have the resources to build and maintain your own Iceberg infrastructure or whether a managed service is a better fit. Many organizations prefer to outsource table maintenance and catalog management to avoid operational overhead.
    * Talent and Expertise: Do you have the in-house expertise to manage Spark clusters (for compaction), configure query engines, and manage metadata? If not, consider partnering with consultants or investing in training.
    * Start the Data Governance Process: Don’t wait until the last minute to build the data governance framework. Create the framework and processes before jumping into adoption.

    The Catalog Conundrum: Beyond Structured Data
    The role of the catalog is evolving. Initially, catalogs focused on managing metadata for structured data in Iceberg tables. However, the vision is expanding to encompass unstructured data (images, videos, audio) and AI models. This “catalog of catalogs” or “uber catalog” approach aims to provide a unified interface for accessing all data types. The benefits of a unified catalog are clear: simplified data access, consistent semantics, and easier integration across different systems. However, building such a catalog is complex, and the industry is still grappling with the best approach.
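    To illustrate what compaction planning looks like at its simplest, here is a toy sketch in plain Python that groups small files into batches approaching a target size, which an engine-side maintenance job would then rewrite as one larger file per batch. The 128 MB target and the file sizes are invented; real Iceberg compaction is typically run through engine procedures rather than hand-rolled code like this.

    ```python
    # A toy compaction planner: group small files into batches that approach a
    # target size; an engine job would then rewrite each batch as one larger file.
    # The 128 MB target and file sizes are invented for illustration.
    TARGET_BYTES = 128 * 1024 * 1024  # hypothetical target file size

    def plan_compaction(file_sizes: dict[str, int], small_threshold: int = 32 * 1024 * 1024):
        """Return batches of small files whose combined size approaches the target."""
        small = sorted((p for p, s in file_sizes.items() if s < small_threshold),
                       key=file_sizes.get)
        batches, current, current_bytes = [], [], 0
        for path in small:
            if current and current_bytes + file_sizes[path] > TARGET_BYTES:
                batches.append(current)
                current, current_bytes = [], 0
            current.append(path)
            current_bytes += file_sizes[path]
        if current:
            batches.append(current)
        return batches

    if __name__ == "__main__":
        # e.g., a streaming ingester that lands one ~8 MB file per minute
        files = {f"data/part-{i:05d}.parquet": 8 * 1024 * 1024 for i in range(40)}
        for n, batch in enumerate(plan_compaction(files)):
            print(f"batch {n}: {len(batch)} files -> one rewritten file")
    ```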
    S3 Tables: A New Player?
    Amazon’s recent announcement of S3 Tables raised eyebrows. These tables combine object storage with a table format, offering a highly managed solution. However, they are currently limited in terms of interoperability. They don’t support external catalogs, making it difficult to integrate them into existing Iceberg-based data stacks. The jury is still out on whether S3 Tables will become a significant player in the open table format landscape.

    Query Engine Considerations
    Choosing the right query engine is crucial for performance and cost optimization. While some engines like Snowflake boast excellent performance with Iceberg tables (with minimal overhead compared to native tables), others may lag. Factors to consider include:
    * Performance: Benchmark different engines with your specific workloads.
    * Cost: Evaluate the cost of running queries on different engines.
    * Scalability: Ensure the engine can handle your anticipated data volumes and query complexity.
    * Compatibility: Verify compatibility with your chosen catalog and storage layer.
    * Use Case: Different engines excel at different tasks. Trino is popular for ad-hoc queries, while DuckDB is gaining traction for smaller-scale analytics.

    Is Iceberg Worth the Pain?
    The ultimate question is whether the benefits of Iceberg outweigh the complexities. For many organizations, especially those with limited engineering resources, fully managed solutions like Snowflake or Redshift might be a more practical starting point. These platforms handle the operational overhead, allowing teams to focus on data analysis rather than infrastructure management. However, Iceberg can be a compelling option for organizations with specific requirements (e.g., strict data residency rules, a need for a completely open-source stack, or a desire to avoid vendor lock-in). The key is approaching adoption strategically, with a clear understanding of the challenges and a plan to address them.

    The Future of Table Formats: Consolidation and Abstraction
    Danny predicts consolidation in the table format space. Managed service providers will likely bundle table maintenance and catalog management with their Iceberg offerings, simplifying the developer experience. The next step will be managing the compute layer, providing a fully end-to-end data lakehouse solution. Initiatives like Apache XTable aim to provide a standardized interface on top of different table formats (Iceberg, Hudi, Delta Lake). However, whether such abstraction layers will gain widespread adoption remains to be seen. Some argue that standardizing on a single table format is a simpler approach.

    Iceberg’s Role in Event-Driven Architectures and Machine Learning
    Beyond traditional analytics, Iceberg has the potential to contribute significantly to event-driven architectures and machine learning. Its features, such as time travel, ACID transactions, and data versioning, make it a suitable backend for streaming systems and change data capture.

    42 min
  5. 26 FEB

    The State of Lakehouse Architecture: A Conversation with Roy Hassan on Maturity, Challenges, and Future Trends

    Lakehouse architecture represents a major evolution in data engineering. It combines data lakes’ flexibility with data warehouses’ structured reliability, providing a unified platform for diverse data workloads ranging from traditional business intelligence to advanced analytics and machine learning. Roy Hassan, a product leader at Upsolver, now Qlik, offers a comprehensive reality check on Lakehouse implementations, shedding light on their maturity, challenges, and future directions.

    Defining Lakehouse Architecture
    A Lakehouse is not a specific product, tool, or service but an architectural framework. This distinction is critical because it allows organizations to tailor implementations to their needs and technological environments. For instance, Databricks users inherently adopt a Lakehouse approach by storing data in object storage, managing it with the Delta Lake format, and analyzing it directly on the data lake. (A small code sketch of this idea appears later in this summary.)

    Assessing the Maturity of Lakehouse Implementations
    The adoption and maturity of Lakehouse implementations vary across cloud platforms and ecosystems:
    • Databricks: Many organizations have built mature Lakehouse implementations using Databricks, leveraging its robust capabilities to handle diverse workloads.
    • Amazon Web Services (AWS): While AWS provides services like Athena, Glue, Redshift, and EMR to access and process data in object storage, many users still rely on traditional data lakes built on Parquet files. However, a growing number are adopting Lakehouse architectures with open table formats such as Iceberg, which has gained traction within the AWS ecosystem.
    • Azure Fabric: Built on the Delta Lake format, Azure Fabric offers a vertically integrated Lakehouse experience, seamlessly combining storage, cataloging, and computing resources.
    • Snowflake: Organizations increasingly use Snowflake in a Lakehouse-oriented manner, storing data in S3 and managing it with Iceberg. While new workloads favor Iceberg, most existing data remains within Snowflake’s internal storage.
    • Google BigQuery: The Lakehouse ecosystem in Google Cloud is still evolving. Many users prefer to keep their workloads within BigQuery due to its simplicity and integrated storage.
    Despite these differences in maturity, the industry-wide adoption of Lakehouse architectures continues to expand, and their implementation is becoming increasingly sophisticated.

    Navigating Open Table Formats: Iceberg, Delta Lake, and Hudi
    Discussions about open table formats often spark debate, but each format offers unique strengths and is backed by a dedicated engineering community. Iceberg and Delta Lake share many similarities, with ongoing discussions about potential standardization. Hudi specializes in streaming use cases and optimizing real-time data ingestion and processing. [Listen to The Future of Data Lakehouses: A Fireside Chat with Vinoth Chandar - Founder CEO Onehouse & PMC Chair of Apache Hudi] Most modern query engines support Delta Lake and Iceberg, reinforcing their prominence in the Lakehouse ecosystem. While Hudi and Paimon have smaller adoption, broader query engine support for all major formats is expected over time.

    Examining Apache XTable’s Role
    Apache XTable aims to improve interoperability between different table formats. While the concept is practical, its long-term relevance remains uncertain. As the industry consolidates around fewer preferred formats, converting between them may introduce unnecessary complexity, latency, and potential points of failure—especially at scale.
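    A small sketch of the shared-storage idea behind the Lakehouse definition above: one copy of data, read in place by two different engines. Plain Parquet in a local directory stands in for an open table format on object storage, which is a deliberate simplification, and it assumes pyarrow and duckdb are installed.

    ```python
    # One copy of data, read in place by more than one engine. A local directory
    # of Parquet files stands in for an open table format on object storage.
    import os
    import duckdb
    import pyarrow as pa
    import pyarrow.dataset as ds
    import pyarrow.parquet as pq

    # Write the data once.
    os.makedirs("lake", exist_ok=True)
    events = pa.table({"user_id": [1, 1, 2], "event": ["view", "buy", "view"]})
    pq.write_table(events, "lake/events.parquet")

    # Engine 1: PyArrow datasets read the files directly.
    arrow_view = ds.dataset("lake/").to_table(columns=["event"])
    print("pyarrow rows:", arrow_view.num_rows)

    # Engine 2: DuckDB queries the very same files, with no copy and no load step.
    print(duckdb.sql("SELECT event, count(*) FROM 'lake/*.parquet' GROUP BY event").fetchall())
    ```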
    Challenges and Criticisms of Lakehouse Architecture
    One common criticism of Lakehouse architecture is its lower level of abstraction compared to traditional databases. Developers often need to understand the underlying file system, whereas databases provide a more seamless experience by abstracting storage management. The challenge is to balance the flexibility of the Lakehouse with the ease of use of traditional databases.

    Best Practices for Lakehouse Adoption
    A successful Lakehouse implementation starts with a well-defined strategy that aligns with business objectives. Organizations should:
    • Establish a clear vision and end goals.
    • Design a scalable and efficient architecture from the outset.
    • Select the right open table format based on workload requirements.

    The Significance of Shared Storage
    Shared storage is a foundational principle of Lakehouse architecture. By storing data in a single location and transforming it once, organizations can analyze it using multiple tools and platforms. This approach reduces costs, simplifies data management, and enhances agility by allowing teams to choose the most suitable tool for each task.

    Catalogs: Essential Components of a Lakehouse
    Catalogs are crucial in Lakehouse implementations as metadata repositories describing data assets. These catalogs fall into two categories:
    • Technical catalogs, which focus on data management and organization.
    • Business catalogs, which provide a business-friendly view of the data landscape.
    A growing trend in the industry is the convergence of technical and business catalogs to offer a unified view of data across the organization. Innovations like the Iceberg REST catalog specification have advanced catalog management by enabling a decoupled and standardized approach.

    The Future of Catalogs: AI and Machine Learning Integration
    In the coming years, AI and machine learning will drive the evolution of data catalogs. Automated data discovery, governance, and optimization will become more prevalent, allowing organizations to unlock new AI-powered insights and streamline data management processes.

    The Changing Role of Data Engineers in the AI Era
    The rise of AI is transforming the role of data engineers. Traditional responsibilities like building data pipelines are shifting towards platform engineering and enabling AI-driven data capabilities. Moving forward, data engineers will focus on:
    • Designing and maintaining AI-ready data infrastructure.
    • Developing tools that empower software engineers to leverage data more effectively.

    Final Thoughts
    Lakehouse architecture is rapidly evolving, with growing adoption across cloud ecosystems and advancements in open table formats, cataloging, and AI integration. While challenges remain—particularly around abstraction and complexity—the benefits of flexibility, cost efficiency, and scalability make it a compelling approach for modern data workloads. Organizations investing in a Lakehouse strategy should prioritize best practices, stay informed about emerging trends, and build architectures that support current and future data needs.

    All rights reserved ProtoGrowth Inc, India. I have provided links for informational purposes and do not suggest endorsement. All views expressed in this newsletter are my own and do not represent current, former, or future employers’ opinions.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    1h 3m
  6. 19 FEB

    Beyond Kafka: Conversation with Jark Wu on Fluss - Streaming Storage for Real-Time Analytics

    Fluss is a compelling new project in the realm of real-time data processing. I spoke with Jark Wu, who leads the Fluss and Flink SQL team at Alibaba Cloud, to understand its origins and potential. Jark is a key figure in the Apache Flink community, known for his work in building Flink SQL from the ground up and creating Flink CDC and Fluss. You can read the Q&A version of the conversation here, and don’t forget to listen to the podcast.

    What is Fluss and its use cases?
    Fluss is a streaming storage system specifically designed for real-time analytics. It addresses many of Kafka’s challenges in analytical infrastructure. The combination of Kafka and Flink is not a perfect fit for real-time analytics, and the integration of Kafka and the Lakehouse is very shallow. Fluss is an analytical Kafka that builds on top of the Lakehouse and integrates seamlessly with Flink to reduce costs, achieve better performance, and unlock new use cases for real-time analytics.

    How do you compare Fluss with Apache Kafka?
    Fluss and Kafka differ fundamentally in design principles. Kafka is designed for streaming events, but Fluss is designed for streaming analytics.

    Architecture Difference
    The first difference is the Data Model. Kafka is designed to be a black box that collects all kinds of data, so Kafka doesn’t have built-in schema and schema enforcement; this is the biggest problem when integrating with schematized systems like a Lakehouse. In contrast, Fluss adopts a Lakehouse-native design with structured tables, explicit schemas, and support for all kinds of data types; it directly mirrors the Lakehouse paradigm. Instead of Kafka’s topics, Fluss organizes data into database tables with partitions and buckets. This Lakehouse-first approach eliminates the friction of using the Lakehouse as deep storage for Fluss.
    The second difference is the Storage Model. Fluss introduces Apache Arrow as its columnar log storage model for efficient analytical queries, whereas Kafka persists data as unstructured, row-oriented logs for efficient sequential scans. Analytics requires strong data-skipping ability in storage, so sequential scanning is not common; columnar pruning and filter pushdown are basic functionalities of analytical storage. Among the 20,000 Flink SQL jobs at Alibaba, only 49% of the columns of Kafka data are read on average. (A short PyArrow sketch of column pruning appears below.)
    The third difference is Data Mutability. Fluss natively supports real-time updates (e.g., row-level modifications) through LSM tree mechanisms and provides read-your-writes consistency with millisecond latency and high throughput. Kafka primarily handles append-only streams, and the Kafka compacted topic only provides a weak update semantic: compaction keeps at least one value per key, not necessarily only the latest.
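    Here is the column-pruning idea in a few lines of PyArrow. The event table and column names are invented, and this is only an illustration of why columnar layouts help analytical reads; it is not Fluss’s storage engine.

    ```python
    # Column pruning in a nutshell: an analytical read touches only the columns it
    # needs. The table and column names are invented for illustration.
    import pyarrow as pa
    import pyarrow.compute as pc

    # Imagine a wide event table landed in columnar (Arrow) form.
    events = pa.table({
        "user_id":     [1, 2, 1, 3],
        "event_type":  ["view", "buy", "buy", "view"],
        "amount_usd":  [0.0, 25.0, 40.0, 0.0],
        "raw_payload": ["...", "...", "...", "..."],  # never read by this query
    })

    # SELECT sum(amount_usd) FROM events WHERE event_type = 'buy'
    # touches 2 of the 4 columns; a columnar log can skip the rest entirely,
    # while a row-oriented log must deserialize every field of every record.
    pruned = events.select(["event_type", "amount_usd"])
    buys = pruned.filter(pc.equal(pruned["event_type"], "buy"))
    print(pc.sum(buys["amount_usd"]).as_py())  # -> 65.0
    ```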
    The fourth difference is the Lakehouse Architecture. Fluss embraces the Lakehouse architecture: it uses the Lakehouse as tiered storage, data is converted and tiered into the data lake periodically, and Fluss retains only a small portion of recent data. You therefore need to store only one copy of data for your streaming and Lakehouse workloads, which reduces costs. The true power of this architecture is that it provides a union view of streaming and Lakehouse data, so whether the client is a Kafka consumer or a query engine on the Lakehouse, both can access the streaming data and the Lakehouse data as a single table. It brings powerful analytics to streaming data users and, on the other hand, second-level data insights to Lakehouse users. In contrast, Kafka’s tiered storage only stores Kafka log segments in remote storage; it is a storage cost optimization for Kafka and has nothing to do with the Lakehouse.
    The Lakehouse storage serves as the historical data layer for the streaming storage, optimized for storing long-term data with minute-level latencies. The streaming storage, in turn, serves as the real-time data layer for the Lakehouse storage, optimized for storing short-term data with millisecond-level latencies. The data is shared and exposed as a single table. For streaming queries on the table, Fluss first uses the Lakehouse storage as historical data for efficient catch-up read performance and then seamlessly transitions to the streaming storage for real-time data, ensuring no duplicate data is read. For batch queries on the table, the streaming storage supplements real-time data for the Lakehouse storage, enabling second-level freshness for Lakehouse analytics. This capability, termed Union Read, allows both layers to work in tandem for highly efficient and accurate data access. (A toy sketch of the union-read idea appears below.)
    Confluent Tableflow can bridge Kafka and Iceberg data, but that is just data movement, which data integration tools like Fivetran or Airbyte can also achieve. Tableflow is a Lambda architecture that uses two separate systems (streaming and batch), leading to challenges like data inconsistency, dual storage costs, and complex governance. Fluss, on the other hand, is a Kappa architecture: it stores one copy of data and presents it as a stream or a table, depending on the use case.
    Benefits:
    * Cost and time efficiency: no need to move data between systems.
    * Data consistency: reduces the occurrence of similar-yet-different datasets, leading to fewer data pipelines and simpler data management.
    * Analytics on the stream.
    * Freshness on the Lakehouse.

    When to use Kafka vs. Fluss?
    Kafka is a general-purpose distributed event streaming platform optimized for high-throughput messaging and event sourcing. It excels in event-driven architectures and data pipelines. Fluss is tailored for real-time analytics. It works with stream processing engines like Flink and Lakehouse formats like Iceberg and Paimon.
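    Here is a toy model of Union Read in plain Python: history is served from the lakehouse layer up to a tiering offset, and only newer rows come from the streaming layer, so nothing is read twice. The field names and offset scheme are invented for illustration and say nothing about Fluss’s actual internals.

    ```python
    # A toy model of Union Read: serve one logical table from two layers, reading
    # history from the lakehouse up to a snapshot offset and the remainder from
    # the streaming layer, so no row is read twice. Field names and the offset
    # scheme are invented; this is not Fluss's implementation.

    # Lakehouse layer: long-term history, tiered up to (and including) offset 102.
    LAKEHOUSE_ROWS = [
        {"offset": 100, "user_id": 1, "event": "view"},
        {"offset": 101, "user_id": 2, "event": "buy"},
        {"offset": 102, "user_id": 1, "event": "buy"},
    ]
    LAKEHOUSE_TIERED_UP_TO = 102

    # Streaming layer: recent data only, overlapping the tiering boundary.
    STREAM_ROWS = [
        {"offset": 102, "user_id": 1, "event": "buy"},   # already tiered, skip
        {"offset": 103, "user_id": 3, "event": "view"},  # fresh data
        {"offset": 104, "user_id": 2, "event": "view"},
    ]

    def union_read():
        """Yield the single-table view: history first, then only newer stream rows."""
        yield from LAKEHOUSE_ROWS
        for row in STREAM_ROWS:
            if row["offset"] > LAKEHOUSE_TIERED_UP_TO:
                yield row

    if __name__ == "__main__":
        for row in union_read():
            print(row)  # offsets 100..104, each exactly once
    ```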
    How do you compare Fluss with OLAP engines like Apache Pinot?
    Architecture: Pinot is an OLAP database that stores both offline and real-time data and supports low-latency analytical queries. In contrast, Fluss is a storage system for real-time streaming data and doesn’t provide OLAP abilities itself; it relies on external query engines to process and analyze data, such as Flink and StarRocks/Spark/Trino (on the roadmap). Therefore, Pinot has additional query servers for OLAP serving, and Fluss has fewer components. Pinot is a monolithic architecture that provides complete capabilities from storage to computation. Fluss is used in a composable architecture that can plug different engines into different scenarios. The rise of Iceberg and the Lakehouse has proven the power of composable architecture: users adopt Parquet as the file format, Iceberg as the table format, Fluss on top of Iceberg as the real-time data layer, Flink for stream processing, and StarRocks/Trino for OLAP queries. In this architecture, Fluss can augment the existing Lakehouse with millisecond-level fresh data insights.
    API: Fluss exposes RPC protocols, like Kafka, and provides an SDK library, while query engines like Flink provide the SQL API. Pinot provides SQL for OLAP queries and BI tool integrations.
    Streaming reads and writes: Fluss provides comprehensive streaming reads and writes like Kafka, but Pinot doesn’t natively support them. Pinot connects to external streaming systems to ingest data using a pull-based mechanism and doesn’t support a push-based mechanism.

    When to use Fluss vs. Apache Pinot?
    If you want to build streaming analytics pipelines, use Fluss (usually together with Flink). If you want to build OLAP systems for low-latency complex queries, use Pinot. If you want to augment your Lakehouse with streaming data, use Fluss.

    How is Fluss integrated with Apache Flink?
    Fluss focuses on storing streaming data and does not offer stream processing capabilities. Flink, on the other hand, is the de facto standard for stream processing. Fluss aims to be the best storage for Flink and real-time analytics. The vision behind the integration is to provide users with a seamless streaming warehouse or streaming database experience. This requires seamless integration and in-depth optimization from storage to computation. For instance, Fluss already supports all of Flink’s connector interfaces, including catalog, source, sink, lookup, and pushdown interfaces. In contrast, Kafka can only implement the source and sink interfaces. Our team is the community’s core contributor to Flink SQL; we have the most committers and PMC members, and we are committed to advancing the deep integration and optimization of Flink SQL and Fluss.

    Can you elaborate on Fluss’s internal architecture?
    A Fluss cluster consists of two main processes: the CoordinatorServer and the TabletServer. The CoordinatorServer is the central control and management component; it maintains metadata, manages tablet allocation, manages the list of nodes, and handles permissions. The TabletServer stores data and provides I/O services directly to users. The Fluss architecture is similar to the Kafka broker and uses the same durability and leader-based replication mechanism.
    Consistency: A table creation request goes to the CoordinatorServer, which creates the metadata and assigns replicas to TabletServers (three replicas by default), one of which is the leader. The replica leader writes the incoming logs, and replica followers fetch logs from the leader. Once all replicas have replicated the log, the write is acknowledged as successful.
    Fault Tolerance: If a TabletServer fails, the CoordinatorServer assigns a new leader from the replica list, which then accepts new read/write requests. Once the failed TabletServer comes back, it catches up on the logs from the new leader.
    Scalability: Fluss can scale out linearly by adding TabletServers.

    How did Fluss implement the columnar storage?
    Let’s start with why we need columnar storage for streaming data. Fluss is designed for real-time analytics. In analytical queries, it’s common that only a portion of the columns are read, and a filter condition can prune a significant amount of data.

    37 min
  7. 9 JAN

    The Future of Data Lakehouses: A Fireside Chat with Vinoth Chandar - Founder CEO Onehouse & PMC Chair of Apache Hudi

    Exploring the Evolution of Lakehouse Technology: A Conversation with Vinoth Chandar, Founder and CEO of Onehouse
    In this episode, Ananth, author of Data Engineering Weekly, talks with Vinoth Chandar, Founder and CEO of Onehouse, about the latest developments in the Lakehouse technology space, particularly focusing on Apache Hudi, Iceberg, and Delta Lake. They discuss the intricacies of building high-scale data ecosystems, the impact of table format standardization, and technical advances in incremental processing and indexing. The conversation delves into the role of open source in shaping the future of data engineering and addresses community questions about integrating various databases and improving operational efficiency.
    00:00 Introduction and New Year Greetings
    01:19 Introduction to Apache Hudi and Its Impact
    02:22 Challenges and Innovations in Data Engineering
    04:16 Technical Deep Dive: Hudi's Evolution and Features
    05:57 Comparing Hudi with Other Data Formats
    13:22 Hudi 1.0: New Features and Enhancements
    20:37 Industry Perception and the Future of Data Formats
    24:29 Technical Differentiators and Project Longevity
    26:05 Open Standards and Vendor Games
    26:41 Standardization and Data Platforms
    28:43 Competition and Collaboration in Data Formats
    33:38 Future of Open Source and Data Community
    36:14 Technical Questions from the Audience
    47:26 Closing Remarks and Future Outlook
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    48 min
  8. 29 DEC

    Agents of Change: Navigating 2025 with AI and Data Innovation

    In this episode of DEW, the hosts and guests discuss their predictions for 2025, focusing on the rise and impact of agentic AI. The conversation covers three main categories:
    1. The role of agentic AI
    2. The future workforce dynamic involving humans and AI agents
    3. Innovations in data platforms heading into 2025
    Highlights include insights from Ashwin and our special guest, Rajesh, on building robust agent systems, strategies for data engineers and AI engineers to remain relevant, data quality and observability, and the evolving landscape of Lakehouse architectures. The discussion also covers the challenges of integrating multi-agent systems and the economic implications of AI sovereignty and data privacy.
    00:00 Introduction and Predictions for 2025
    01:49 Exploring Agentic AI
    04:44 The Evolution of AI Models
    16:36 Enterprise Data and AI Integration
    25:06 Managing AI Agents
    36:37 Opportunities in AI and Agent Development
    38:02 The Evolving Role of AI and Data Engineers
    38:31 Managing AI Agents and Data Pipelines
    39:05 The Future of Data Scientists in AI
    40:03 Multi-Agent Systems and Interoperability
    44:09 Economic Viability of Multi-Agent Systems
    47:06 Data Platforms and Lakehouse Implementations
    53:14 Data Quality, Observability, and Governance
    01:02:20 The Rise of Multi-Cloud and Multi-Engine Systems
    01:06:21 Final Thoughts and Future Outlook
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dataengineeringweekly.com

    1h 11m

