Masters of Automation - A podcast about the future of work.

Alp Uguray

Masters of Automation is a podcast and article series about creative technologists, startup founders, and entrepreneurs who are changing the future of work and our lives through automation and artificial intelligence. We cover their personal stories and what led them to innovate and build new products and services. The automation ecosystem evolves every day, with new startups forming and new technologies being built, and it's best to hear the stories from the true #MastersofAutomation.

  1. 6D AGO

    Snowflake VP of AI Baris Gultekin: No AI Strategy Without a Data Strategy & Why Skills Are the New Apps

    The following is a conversation between Alp Uguray and Baris Gultekin.

    Guest Bio

    Baris Gultekin is Vice President of AI at Snowflake, where he leads the Cortex AI product portfolio and drives the company's AI product roadmap and strategy. He built Snowflake's AI product suite from the ground up after joining in 2023 through the acquisition of nxyz, the blockchain data infrastructure company he co-founded and served as CEO, backed by Paradigm, Sequoia, and Greylock. Previously, Baris spent 16 years at Google, where he co-created Google Now and served as Product Director for Google Assistant, growing it from 10M to 500M monthly users. He holds an M.S. in Electrical Engineering from Cornell University and an MBA from Stanford GSB.

    Takeaways

    The Google Assistant era taught a hard lesson: when AI feels natural but fails on most use cases, trust breaks down completely.
    "Bring AI to the data" isn't just a security play — it enables governance inheritance, semantic understanding, and dramatically simpler architectures.
    There is no AI strategy without a data strategy: break down silos, bring business semantics onto the platform, then build agents.
    Skills — modular instructions and scripts that agents load on-demand — are the most exciting frontier in enterprise AI, creating an exponentially expanding surface of capabilities.
    Enterprise platforms like Salesforce, SAP, and Workday won't disappear, but the majority of interactions will shift from humans to agents.
    Agent-to-agent protocols like MCP are unlocking real-time data access across silos — and natural language makes adoption faster than any prior protocol shift.
    Trust is the gating factor for autonomy: low-stakes automations already run without humans; high-stakes cases still need oversight, but what took weeks now takes minutes.
    We're in the terminal era of AI UX — visual, voice, and multi-modal interfaces are about to transform how we interact with intelligence.
    The economics of intelligence are democratizing: the pie is growing so fast it's lifting all boats rather than creating a tug of war between platforms.
    Startup advice from someone who's done it: the best founders solve problems they feel passionately about — people who build successful products are ones who cannot not do it.

    Chapters

    00:00 Walking away from Google after 16 years to start nxyz
    03:01 From Google Now to Google Assistant: why the technology wasn't ready
    05:24 Bring AI to the data: Snowflake's core architecture thesis
    08:34 The application layer evolves: agents, SaaS, and natural language interfaces
    11:27 Agent-to-agent protocols: MCP, commerce, and the agentic web
    13:25 The Matrix moment: skills as the breakthrough for enterprise agents
    15:48 Hyper-personalization and the rise of internal tools
    16:21 We're back to the terminal: why AI UX is about to change everything
    18:57 The oracle problem: source of truth when agents negotiate
    21:15 The economics of intelligence: NVIDIA, the AI stack, and a growing pie
    24:57 Jobs, education, healthcare: how intelligence reshapes every industry
    27:18 Sci-fi became real: how Baris's kids use AI to learn
    30:22 Aligning enterprise AI: accountability, guardrails, and scoped agents
    34:00 Healthcare data: regulation, fragmentation, and a weekend project with Claude computer use
    37:07 Reactive vs. autonomous agents: why trust is the bottleneck
    40:24 How Cortex AI changes the data layer: MCP, semantic models, and real-time insights
    44:14 No AI strategy without a data strategy: the ontology of enterprise knowledge
    46:50 Startup advice: passion, depth, and the burning desire to build
    49:19 Snowflake's partner ecosystem: why data access is the biggest startup unlock
    53:16 General intelligence: what we overestimate, underestimate, and why tools matter most

    Quotes

    "These agents, it's like in The Matrix — 'I want to know karate' and I know karate. Being able to tap into these skills expands the surface on which you can take action." — Baris Gultekin, VP of AI at Snowflake
    "There is no AI strategy without a data strategy." — Baris Gultekin, VP of AI at Snowflake
    "When you make something more natural, there are expectations of human-like intelligence. When it works in a couple of your use cases but not most, that expectation breaks down. That was the biggest challenge with the Google Assistant era." — Baris Gultekin, VP of AI at Snowflake
    "The pie is growing so much that it is not a fixed pie we need to protect. The tide is rising all boats." — Baris Gultekin, VP of AI at Snowflake
    "People who build successful products are ones that cannot not do it. There is this burning desire to go make it happen." — Baris Gultekin, VP of AI at Snowflake

  2. JAN 24

    Ramesh Raskar: The Internet of AI Agents: Why AGI Won’t Be One “God Model”

    The following is a conversation between Alp Uguray and Professor Ramesh Raskar.

    Summary

    In this episode of Masters of Automation, Alp Uguray sits down with MIT Professor Ramesh Raskar to explore a future where AI shifts from centralized “foundries” (massive cloud models) to a world of personal, edge-based agents that we own, customize, and connect—an Internet of AI Agents (see Project NANDA). Raskar traces his journey from a small town near Nashik—where curiosity and constraint shaped his mindset—to being inspired by Jurassic Park, then “waking up” to the power of storytelling and human realism through South Park, which ultimately pulled him from computer graphics into machine learning and systems thinking. From there, the conversation dives into his core thesis: the next frontier isn’t bigger models—it’s AI that’s closer to your data, your context, and your control. They unpack the idea of Agent Zero (a private AI agent for every person), how agents might evolve through foundations → commerce → societies, and why the next big economic layer may be agent teaming/orchestration rather than models or apps. They also confront the two diverging futures: a “quiet dystopia” dominated by a few agent stores versus a “green curve” world of billions of micro-AIs empowering global creativity and shared prosperity. The episode closes with Project NANDA—Raskar’s effort to build core infrastructure for trust, naming, certification, interoperability, and ongoing attestation in an open agentic ecosystem (open-source repo, research paper).

    Guest Bio

    Professor Ramesh Raskar is an MIT professor and researcher known for building ambitious, system-level technologies spanning computer vision, machine learning, human-computer interaction, and decentralized AI. Across academia and industry, he has worked on problems at the intersection of intelligence, networks, and real-world impact—from early work in research labs like Mitsubishi Electric Research Laboratories (MERL) to leading initiatives that connect AI with security, trust, and societal-scale coordination. In this episode, he shares his vision for Agent Zero and Project NANDA (Networked AI Agents in Decentralized Architecture)—a proposed foundation for a safer, open “agentic web.”

    Takeaways

    Your origin story matters: constraint + curiosity can become a superpower when you keep learning relentlessly.
    The frontier isn’t bigger—it’s closer: AI value shifts when models live near your data, context, and needs.
    Foundry → Garage → Bazaar: centralized AI is necessary early, but the natural evolution is toward edge ownership and then networked commerce.
    Agent Zero: the case for “an agent for every citizen” that’s private, authenticated, and usable even in low-connectivity environments.
    Centralization creates consumers; decentralization creates creators—but the future needs a balance, not extremism.
    Agent foundations matter: naming, identity, certification, interoperability, and attestation become essential infrastructure (think ICANN + DNS + certificate authorities).
    The next economic battleground: not just models/apps—agent teaming and orchestration may capture the most value.
    A new capitalism emerges: specialized agents priced by performance, reliability, and live reputation (think FICO scores for agents).
    Healthcare’s bottleneck is locked decentralization: silos kill network effects—agents could enable privacy-preserving markets for knowledge and outcomes (in a world shaped by HIPAA constraints).
    Two futures: a quiet dystopia of a few agent stores vs. a green curve with billions of micro-AIs and shared prosperity.

    Chapters

    00:00 Origins in Nashik: libraries, curiosity, and the “nature doesn’t negotiate” mindset
    02:20 Jurassic Park → computer graphics; South Park → functional realism → machine learning
    06:15 Choosing depth over hype: why meaningful problems beat money-first decisions
    07:55 Foundry vs. garage vs. bazaar: the long arc of compute, networks, and AI
    12:38 “We rent intelligence and give away our data” — why ownership flips the model
    16:59 Agent Zero: an agent for every person, regardless of wealth or bandwidth
    21:16 Agent foundations → commerce → societies (and why a “telephone exchange” for agents is inevitable)
    30:50 Knowledge pricing + agent markets: “FICO scores” for quality, reliability, and trust
    40:45 Why agent evals are harder than LLM evals (and what might replace “LLM-as-judge”)
    42:48 Healthcare: population AI, privacy, and unlocking global health intelligence
    54:49 The workforce fork: quiet dystopia (red curve) vs. creator economy (green curve)
    01:04:10 Robotics reality check: why manipulation is still the hard frontier
    01:20:59 Project NANDA: naming, certification, interoperability, attestation—and avoiding the “agentic app store trap”
    01:25:40 Closing

  3. 08/15/2025

    Maxime Labonne: Edge AI and the Future of Localized Intelligence with Private, Offline LLMs

    The following is a conversation between Alp Uguray and Maxime Labonne.

    Summary

    In this episode of the Masters of Automation podcast, host Alp Uguray interviews Maxime Labonne, discussing the challenges and innovations in running large language models (LLMs) on edge devices. They explore the importance of post-training techniques for enhancing small models, the future of local AI models, and the integration of AI into everyday applications. The conversation also touches on the role of context in AI performance, architectural considerations, and the dual paths of AI development. Maxime shares his journey from cybersecurity to AI, the use of AI in spam detection, and the potential of agent-to-agent communication. The episode concludes with insights on the future of AI in gaming and the importance of community in AI development.

    Takeaways

    Running LLMs on edge devices presents challenges like latency and model quality.
    Post-training techniques are crucial for enhancing small models' performance.
    Local AI models can provide privacy and customization for users.
    Agentic workflows can enhance AI's functionality in applications.
    Context windows are vital for AI reasoning and performance.
    Model architecture significantly impacts AI capabilities and efficiency.
    There are two paths in AI development: AGI and interpretable models.
    Maxime transitioned from cybersecurity to AI due to the open community.
    AI can be effectively used in cybersecurity for spam detection.
    Agent-to-agent communication in AI is still in its infancy.

  4. 07/15/2025

    025 - Stephen Wolfram: Computation, AGI, Language & the Future of Reasoning

    The following is a conversation between Alp Uguray and Stephen Wolfram.

    Summary

    In this conversation, Alp Uguray hosts Stephen Wolfram to discuss the intersection of computation, AI, and human intelligence. They explore the differences between large language models and formal computation, the concept of the Ruliad, and the limitations of AI in understanding complex mathematical proofs. The discussion also delves into the future of AI, the nature of communication and knowledge transfer among AI systems, and the implications of computational processes in the natural world. Wolfram then turns to the nature of sensory data in AI, the implications of quantum mechanics for human cognition, and the future of education with a focus on computational thinking. He emphasizes the importance of foundational understanding in entrepreneurship and the need for adaptability in business. The discussion highlights the evolving landscape of technology and education, advocating for a shift from specialized skills to a more generalized approach to learning and thinking.

    Takeaways

    Computation allows for a level of understanding beyond unaided human capabilities.
    Large language models (LLMs) mimic human-like reasoning but lack formal structure.
    The Ruliad encompasses all possible computations, but LLMs struggle to navigate it.
    Human mathematics is shaped by our sensory experiences and historical context.
    AI's ability to reason is fundamentally different from human reasoning.
    The efficiency of computation contrasts with the inefficiency of pure reasoning.
    AI could develop a richer language for communication beyond human languages.
    Understanding the computations in nature is a challenge for both humans and AI.
    The evolution of AI communication may lead to new forms of knowledge transfer.
    The future of AI may involve intelligences that are alien to human understanding.
    The sensory data we receive shapes our understanding of the world.
    AI's perception differs significantly from human sensory experiences.
    Quantum mechanics introduces the concept of multiple paths of history.
    Human cognition seeks definite answers, contrasting with quantum uncertainty.
    Education should focus on computational thinking rather than just programming skills.
    The future of programming may resemble the decline of hand trades.
    Generalized knowledge will be more valuable than specialized skills.
    Conviction in entrepreneurship stems from a solid foundational understanding.
    Successful entrepreneurs often pivot their plans based on real-time feedback.
    Computational thinking enhances our ability to understand and innovate.

  5. 07/09/2025

    023 - From MIT Researcher to YC Entrepreneur: Building Workflow Stack with AI w/ Bernard Aceituno

    The following is Part I of my conversation with Bernard Aceituno, Co-Founder of Stack AI (YC) and previously a PhD researcher at MIT. Part II will be released later, once we record again. Here is a snippet of our conversation at MIT CSAIL, where Bernard spent five years researching.

    Summary

    In this engaging conversation, Bernard shares his eclectic journey from Venezuela to becoming a co-founder of Stack AI, detailing his academic background, entrepreneurial spirit, and the challenges faced in the startup world. He discusses the importance of collaboration, the evolution of AI in industry, and the significance of understanding customer needs. The conversation also touches on the dynamics of building a team, the role of research in product development, and the future of AI in enterprise automation.

    Takeaways

    Bernard's journey began in Venezuela, where he pursued his passion for science and technology.
    He transitioned from academia to entrepreneurship, driven by a desire to impact the world with technology.
    Being in an entrepreneurial environment matters for staying motivated and focused.
    Collaboration with Tony led to the creation of Stack AI, focusing on solving real-world problems with AI.
    Y Combinator provided crucial support and validation for their startup idea.
    Understanding customer needs is essential for product development and success.
    The shift towards enterprise automation presents both challenges and opportunities for startups.
    Building a strong team with shared values is critical for growth and success.
    Transparency and explainability in AI are vital for building trust with customers.
    Immigrant founders often face unique challenges but also have access to valuable mentorship opportunities.

  6. 07/09/2025

    023 - The Future of Digital Biomarkers, Responsible AI and Wearables w/ Dr. Brinnae Bent

    Summary

    In this episode of the Masters of Automation Podcast, Dr. Brinnae Bent shares her journey from a childhood filled with diverse experiences to becoming a leader at the intersection of healthcare and artificial intelligence. She discusses her work on digital biomarkers, the evolution of wearable technology, and the importance of responsible AI in healthcare. Dr. Bent also delves into her experiences as an ultra-marathoner, the impact of stress on performance, and the challenges of predictive healthcare models. She then explores the complexities of AI in the context of healthcare and neuroscience, emphasizing the importance of explainability in AI models, especially large language models (LLMs), and how they can be made more interpretable. The discussion also covers the role of education in shaping future technologies, with a focus on student engagement and the integration of AI in teaching. Bent shares insights on how students are approaching problem-solving in AI and the significance of open-ended projects. The conversation concludes with rapid-fire questions that explore personal insights and future aspirations in the field of AI.

    Takeaways

    Dr. Bent's journey into healthcare and AI was influenced by her early experiences as a certified nurse assistant.
    The evolution of wearable technology has democratized health monitoring.
    Digital biomarkers can transform vast amounts of data into actionable health insights.
    Open source projects in technology foster collaboration and innovation.
    Understanding the brain's functioning is crucial for developing effective healthcare solutions.
    Wearable devices have the potential to predict health conditions before traditional methods.
    Personal health data can encourage better lifestyle choices and interventions.
    Stress impacts the body similarly, regardless of its source.
    Acute stress can enhance performance, while chronic stress can lead to burnout.
    Interpretable machine learning models are essential for responsible AI in healthcare.
    Explainability in AI is crucial for trust, especially in healthcare.
    Neuroscience and AI can inspire each other in understanding complex systems.
    Students are increasingly interested in responsible AI and its implications.
    Open-ended projects encourage creativity and innovation in students.
    AI can be leveraged to personalize education and enhance learning experiences.
    Understanding the human brain can inform the design of interpretable AI models.
    The rapid evolution of AI requires continuous adaptation in education.
    Students are eager to engage in deep discussions about AI ethics and safety.
    Learning to code is essential for non-technical individuals to engage with AI.
    Future generations will shape the role of AI in society.

5 out of 5 (10 Ratings)
