Tech on the Rocks

Kostas, Nitay

Join Kostas and Nitay as they speak with amazingly smart people who are building the next generation of technology, from hardware to cloud compute. Tech on the Rocks is for people who are curious about the foundations of the tech industry. Recorded primarily from our offices and homes, but one day we hope to record in a bar somewhere. Cheers!

  1. AUG 18

    Email as a Knowledge Graph: Micro CEO Brett on Rebuilding CRM at the Inbox

    Summary

    Brett, founder and CEO of Micro, joins Nitay and Kostas to share how he’s turning email into a knowledge graph and rebuilding CRM right inside the inbox. He traces a path from Google’s M&A and Allo product team to Clearbit and Launch House, then digs into why most “inbox zero” workflows fail, how interoperability and AI agents shift power to the interface, and what it takes to design an email experience people actually live in.

    What you’ll learn

    - Why email is a system of record, and how Micro converts threads into people, companies, attachments, tasks, and “updates”
    - The wedge: founders’ real workflows (fundraising, hiring, sales) and why CRM belongs in the inbox
    - Product and UX lessons: skeuomorphic first, flexible theming (consumer vs. enterprise), and copying the UI before evolving it
    - M&A realities from Google: talent vs. tech vs. business acquisitions, and why culture kills most deals
    - Burnout and agency: why founders report less burnout than people in big-company roles
    - The next phase: cross-app “updates” (email, LinkedIn DMs, etc.), Salesforce/HubSpot read-write, and agentic automation

    Chapters

    00:00 Brett's Journey: From Consulting to Tech Innovator
    02:41 The Role of Strategy in Tech Companies
    05:16 Understanding M&A: Successes and Failures
    07:55 The Evolution of AI in Corporate Strategy
    10:26 Transitioning to Product Management
    13:19 Lessons from Clearbit: Culture and Growth
    15:50 The Impact of Burnout on Career Choices
    18:15 Finding Fulfillment in Entrepreneurship
    21:09 Navigating the B2B Landscape
    23:34 The Necessity of Products in a Crisis
    33:24 The Unexpected Layoff and New Beginnings
    34:39 The Launch House Experience
    37:16 Transforming Reality into an Accelerator
    39:17 The Evolution of Founders and Content Creation
    41:52 Introducing Micro: A New Email Experience
    47:02 Extracting Information for Better Workflows
    53:49 Integrating with Existing Ecosystems
    01:01:16 The Future of Email and AI

    1h 1m
  2. JUL 28

    Community, Compilers & the Rust Story with Steve Klabnik

    Summary

    Steve Klabnik has spent the last 15 years shaping how developers write code, from teaching Ruby on Rails to stewarding Rust’s explosive growth. In this wide-ranging conversation, Steve joins Kostas and Nitay to unpack the forces behind Rust’s rise and the blueprint for developer-first tooling.

    - From Rails to Rust: How a web-framework luminary fell for a brand-new systems language and helped turn it into today’s go-to for memory-safe, zero-cost abstractions.
    - Community as UX: The inside story of Cargo, humane compiler errors, and why welcoming IRC channels can matter more than benchmarks.
    - Standards vs. Shipping: What Rust borrowed from the web’s rapid-release model, and why six-week cadences beat three-year committee cycles.
    - Three tribes, one language: How dynamic-language devs, functional programmers, and C/C++ veterans each found a home in Rust, and what they contributed in return.
    - Looking ahead: Steve’s watch list of next-gen languages (Hylo, Zig, Odin) and the lessons Rust’s journey holds for anyone building tools, communities, or startups today.

    Whether you’re chasing segfault-free code, dreaming up a new PL, or just curious how open-source movements gain momentum, this episode is packed with insight and practical takeaways.

    Chapters

    00:00 Introduction and Personal Connection
    00:59 Journey from Ruby on Rails to Rust
    02:21 Early Programming Experiences and Interests
    07:20 Community Dynamics in Programming Languages
    13:59 The Importance of Community in Open Source
    14:37 How Ruby on Rails and Rust Built Their Communities
    21:44 Standardization vs. Unified Development Models
    30:55 Community Debt in Programming Languages
    36:24 Release Cadence vs. Feature Development
    37:36 Rust's Unique Selling Proposition
    43:30 Attracting Diverse Programming Communities
    52:31 The Future of Systems Programming Languages

    59 min
  3. MAY 8

    Business Physics: How Brand, Pricing, and Product Design Define Success with Erik Swan

    Summary

    In this episode, Erik reflects on his long and storied tech career, from the days of punch cards to founding multiple startups, including a stint at Splunk. At 61, he offers a unique perspective on how the industry has evolved and shares candid insights into what it takes to build a successful company. He discusses the evolution from building simple tools to creating comprehensive solutions and eventually platforms, emphasizing the importance of starting with a “hammer,” a focused, simple tool, before scaling to a broader offering.

    Erik introduces his concept of the “physics of business,” a framework for understanding go-to-market dynamics, pricing, and the critical role of brand in differentiating a product in a crowded market. He also touches on the challenges of product-led growth, the importance of achieving a strong “K value” (viral or network effects), and the pitfalls of allowing short-term quarterly pressures to derail long-term vision. Toward the end, he hints at his current project, Bestimer, which aims to apply lessons from his past ventures and leverage modern AI to tackle a massive, data-intensive problem.

    Chapters

    00:00 Erik's Journey Through Tech History
    04:06 The Philosophy of Designing for Success
    09:49 Understanding the Physics of Business
    14:29 Timing and Luck in Startups
    18:09 Lessons Learned from Splunk
    23:30 The Power of Brand in Business
    28:02 Leveraging AI for Brand Development
    32:04 The Resilience of Splunk
    36:45 Building a Competitive Edge
    37:28 From Tool to Solution
    40:59 The Importance of Onboarding
    44:32 Navigating Growth and Market Fit
    51:11 Innovating with AI: The Next Chapter

    1h 2m
  4. APR 24

    Incremental Materialization: Reinventing Database Views with Gilad Kleinman of Epsio

    Summary

    In this episode, Gilad Kleinman, co-founder of Epsio, shares his unique journey from PHP development to low-level kernel programming and how that evolution led him to build an innovative incremental views engine. Gilad explains that Epsio tackles a common challenge in databases: making heavy, complex queries faster and more efficient through incremental materialization. He describes how traditional materialized views fall short, often requiring full refreshes, and how Epsio integrates seamlessly with existing databases by consuming replication streams (CDC) and writing back to result tables without disrupting the core transactional system.

    The conversation dives into the technical trade-offs and optimizations involved, such as handling stateful versus stateless operators (like group-by and window functions), using Rust for performance, and the challenges of ensuring consistency. Gilad also contrasts Epsio’s approach with streaming systems like Flink, emphasizing that by maintaining tight integration with the native database, Epsio can offer immediate, up-to-date query results while minimizing disruption. Finally, he outlines his vision for the future of incremental stream processing and materialized views as a means to reduce compute costs and enhance overall system performance.

    Chapters

    00:00 From PHP to Kernel Development: A Journey
    07:30 Introducing Epsio: The Incremental Views Engine
    10:56 The Importance of Materialized Views
    15:07 Understanding Incremental Materialization
    19:21 Optimizing Query Performance with Epsio
    24:53 Integrating Epsio with Existing Databases
    27:02 The Shift from Theory to Practice in Data Processing
    29:42 Seamless Integration with Existing Databases
    32:02 Understanding Epsio's Incremental Processing Mechanism
    34:46 Challenges and Limitations of Incremental Views
    36:49 The Complexity of Implementing Operators
    39:56 Trade-offs in Incremental Computation
    41:21 User Interaction with Epsio
    43:01 Comparing Epsio with Streaming Systems
    45:09 Architectural Guarantees of Epsio
    50:33 The Future of Incremental Data Processing
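    The core idea behind incremental materialization, folding each change from the replication stream into a stored result instead of recomputing the whole query, can be illustrated with a toy group-by aggregate. This is a minimal sketch of the general technique, not Epsio’s implementation; all names are invented for illustration.

```python
from collections import defaultdict

def full_refresh(rows):
    """Recompute the whole aggregate, as a traditional materialized view would."""
    totals = defaultdict(int)
    for key, amount in rows:
        totals[key] += amount
    return dict(totals)

def apply_delta(view, change):
    """Fold one CDC-style change into the result table incrementally.

    change is (op, key, amount) with op in {"insert", "delete"}.
    """
    op, key, amount = change
    if op == "insert":
        view[key] = view.get(key, 0) + amount
    elif op == "delete":
        view[key] = view.get(key, 0) - amount
        if view[key] == 0:
            del view[key]  # drop groups whose total returns to zero
    return view

rows = [("eu", 10), ("us", 5), ("eu", 7)]
view = full_refresh(rows)               # {"eu": 17, "us": 5}
apply_delta(view, ("insert", "us", 3))  # new row arrives via the change stream
apply_delta(view, ("delete", "eu", 10)) # an existing row is deleted upstream
# view now equals a full refresh over the updated row set, at O(1) per change
```

    The point of the sketch: each change costs work proportional to the change, not to the table, which is why incremental maintenance pays off for heavy queries over large, mostly-stable data.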

    52 min
  5. MAR 21

    From Data Mesh to Lake House: Revolutionizing Metadata with Lakekeeper

    Summary

    In this episode, Viktor Kessler shares his journey and insights from his extensive experience in data management, from building risk management systems and data warehouses to working as a solutions architect at MongoDB and Dremio, and now co-founding a startup. Initially exploring data mesh concepts, Viktor explains how real-world challenges, such as the disconnect between technical data models and business needs, inconsistent definitions across departments, and the difficulty of managing actionable metadata, led him and his co-founder to pivot toward building a lake house solution. His startup is developing Lakekeeper, an open source REST catalog for Apache Iceberg, which aims to bridge the gap between decentralized data production and centralized metadata management.

    The conversation also delves into the evolution of data catalogs, the necessity of self-service analytics, and how creating consumption-ready data products can transform data functions from cost centers into profit centers. Finally, Viktor outlines ways for interested listeners to get involved with the Lakekeeper community through GitHub, upcoming meetups, and a dedicated Discord channel.

    Chapters

    00:00 Introduction to Viktor Kessler and His Journey
    04:57 Transitioning from Data Mesh to Lake House
    09:15 Understanding Data Mesh: Pain Points and Solutions
    13:47 The Role of Metadata in Data Management
    18:16 The Evolution of Catalogs and Metadata Management
    28:14 Stabilizing the Consumption Pipeline
    31:18 Centralizing Metadata for Decentralized Organizations
    37:09 Bridging the Gap: Technical and Business Perspectives
    43:17 Rethinking Data Products and Consumption
    50:45 Finding Balance: Control and Flexibility in Data Management

    57 min
  6. MAR 6

    Reinventing Stream Processing: From LinkedIn to Responsive with Apurva Mehta

    Summary

    In this episode, Apurva Mehta, co-founder and CEO of Responsive, recounts his extensive journey in stream processing, from his early work at LinkedIn and Confluent to his current venture at Responsive. He explains how stream processing evolved from simple event ingestion and graph indexing to powering complex, stateful applications such as search indexing, inventory management, and trade settlement. Apurva clarifies the often-misunderstood concept of “real time,” arguing that low latency (often in the one- to two-second range) is more accurate for many applications than the instantaneous response many assume.

    He delves into the challenges of state management, discussing the limitations of embedded state stores like RocksDB and traditional databases (e.g., Postgres) when faced with high update rates and complex transactional requirements. The conversation also covers the trade-offs between SQL-based streaming interfaces and more flexible APIs, and how Responsive is innovating by decoupling state from compute, leveraging remote state solutions built on object stores (like S3) with specialized systems such as SlateDB, to improve elasticity, cost efficiency, and operational simplicity in mission-critical applications.

    Chapters

    00:00 Introduction to Apurva Mehta and Streaming Background
    08:50 Defining Real-Time in Streaming Contexts
    14:18 Challenges of Stateful Stream Processing
    19:50 Comparing Streaming Processing with Traditional Databases
    26:38 Product Perspectives on Streaming vs Analytical Systems
    31:10 Operational Rigor and Business Opportunities
    38:31 Developers' Needs: Beyond SQL
    45:53 Simplifying Infrastructure: The Cost of Complexity
    51:03 The Future of Streaming Applications
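    Decoupling state from compute can be sketched at toy scale: the processing loop holds no local state and reads and writes everything through a store object standing in for a remote backend, so a worker can crash, restart, or be replaced and simply continue. This is a hedged illustration of the general pattern under invented names, not Responsive’s actual design; a real remote store would sit across the network with batching and caching.

```python
class RemoteStateStore:
    """Stand-in for a remote state backend (e.g. one built on an object store).
    In-memory here purely so the sketch is self-contained."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def process(events, store):
    """Stateless compute: all state lives in the store, so workers can be
    added, removed, or restarted without migrating local state."""
    for key in events:
        store.put(key, store.get(key) + 1)

store = RemoteStateStore()
process(["clicks", "views", "clicks"], store)
# A "restarted" worker picks up exactly where the old one left off:
process(["clicks"], store)
```

    The trade-off discussed in the episode is latency per state access versus elasticity: embedded stores like RocksDB are fast but weld state to a specific worker, while remote state makes scaling and recovery cheap.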

    58 min
  7. FEB 20

    Semantic Layers: The Missing Link Between AI and Data with David Jayatillake from Cube

    In this episode, we chat with David Jayatillake, VP of AI at Cube, about semantic layers and their crucial role in making AI work reliably with data. We explore how semantic layers act as a bridge between raw data and business meaning, and why they're more practical than pure knowledge graphs. David shares insights from his experience at Delphi Labs, where they achieved 100% accuracy in natural language data queries by combining semantic layers with AI, compared to just 16% accuracy with direct text-to-SQL approaches.

    We discuss the challenges of building and maintaining semantic layers, the importance of proper naming and documentation, and how AI can help automate their creation. Finally, we explore the future of semantic layers in the context of AI agents and enterprise data systems, and learn about Cube's upcoming AI-powered features for 2025.

    Chapters

    00:00 Introduction to AI and Semantic Layers
    05:09 The Evolution of Semantic Layers Before and After AI
    09:48 Challenges in Implementing Semantic Layers
    14:11 The Role of Semantic Layers in Data Access
    18:59 The Future of Semantic Layers with AI
    23:25 Comparing Text to SQL and Semantic Layer Approaches
    27:40 Limitations and Constraints of Semantic Layers
    30:08 Understanding LLMs and Semantic Errors
    35:03 The Importance of Naming in Semantic Layers
    37:07 Debugging Semantic Issues in LLMs
    38:07 The Future of LLMs as Agents
    41:53 Discovering Services for LLM Agents
    50:34 What's Next for Cube and AI Integration
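    The contrast between text-to-SQL and a semantic layer can be made concrete with a toy sketch: instead of letting a model free-form SQL against raw tables, the layer exposes a small vocabulary of vetted measures and dimensions and compiles queries only from those names. This is an invented illustration of the pattern, not Cube’s schema or API; every table and metric name below is hypothetical.

```python
# Vetted business vocabulary: each name maps to a reviewed SQL fragment.
MEASURES = {
    "revenue": "SUM(orders.amount)",
    "order_count": "COUNT(*)",
}
DIMENSIONS = {
    "country": "customers.country",
}

def compile_query(measure, dimension):
    """Build SQL from the semantic layer's vocabulary only; anything
    outside it is rejected rather than guessed at."""
    if measure not in MEASURES or dimension not in DIMENSIONS:
        raise KeyError("unknown metric: the layer exposes only vetted names")
    return (
        f"SELECT {DIMENSIONS[dimension]} AS {dimension}, "
        f"{MEASURES[measure]} AS {measure} "
        f"FROM orders JOIN customers ON orders.customer_id = customers.id "
        f"GROUP BY {DIMENSIONS[dimension]}"
    )

sql = compile_query("revenue", "country")
```

    An LLM asked “revenue by country” now only has to pick two names from a closed set, which is one intuition for the accuracy gap discussed in the episode: the hard mapping from business language to correct joins and aggregations is done once, in the layer, instead of improvised per query.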

    59 min

Ratings & Reviews

5 out of 5 (5 Ratings)

