Barrchives

Barr Yaron

Founders are trying to figure out how to build their companies (and cultures) around AI. But the specifics, like how much (if at all) to invest in research, how to build the right team, and how to evaluate whether their systems work as intended, are still up in the air. Join Barrchives as we speak with the founder-CEOs who are pushing the boundaries of AI and share their decisions and their stories. Hosted by Barr Yaron, Partner at Amplify Partners.

  1. 1 DAY AGO

    Behind the scenes of Temporal’s explosive growth, with co-founders Max and Samar

    Temporal’s co-founders join Barr Yaron and Lenny Pruss to unpack how durable execution became the backbone of modern distributed apps and why it’s a perfect fit for AI agents. Samar and Max trace the path from Amazon SWF to Uber’s Cadence to founding Temporal, dig into developer experience choices, hard lessons with Cassandra, and what “code that can’t crash” really means in practice. This episode also covers open source strategy, multi-agent orchestration, Nexus RPC, how startups and enterprises are adopting Temporal, and what scaling the company taught them as leaders. If you ship backend systems, build AI agents, or care about reliability at scale, this one’s for you.

    This episode is broken down into the following chapters:
    00:00–00:06 — Origins: Samar and Max, Barr and Lenny, why this convo
    00:06–00:14 — Early systems: Amazon SWF → Azure Durable Task; DX lessons
    00:14–00:23 — Uber years: replacing Kafka, “Jeremy,” birthing Cadence, open-sourcing from day one
    00:23–00:31 — Durable execution, explained; code-first over DSLs; SDK ergonomics (Go/Java)
    00:31–00:36 — Hard tech war stories: Cassandra, queues on Cassandra, multi-region replication
    00:36–00:45 — AI agents ≈ dynamic workflows; why Temporal fits agents and tools
    00:45–00:59 — Roadmap: streaming, large payloads, data workflows, Nexus RPC for long-running calls
    00:59–01:13 — Adoption & GTM: digital natives, fintech, startups; greenfield AI vs brownfield; “Temporal is overkill?”
    01:13–01:22 — Case study: OpenAI Codex on Temporal; internal dogfooding
    01:22–01:40 — Scaling the org vs. scaling the tech; remote shift, hiring; mission & next five years

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    1 hr 28 min
  2. SEP 9

    How Factory builds agents that help across the entire SDLC with Matan Grinberg, Founder & CEO

    Factory co-founder and CEO Matan Grinberg joins Barr Yaron to talk about the future of agent-driven development, why enterprise migrations are the perfect wedge for AI adoption, and how software engineering is moving toward a world where humans orchestrate instead of implement. They dive into Factory’s origin story, the challenges of building AI systems for large organizations, and what the world might look like when millions of “droids” (AI agents) collaborate on software. Along the way, Matan shares surprising use cases, lessons from working with enterprises, and how his personal journey—from physics to burritos to building Factory—has shaped his leadership.

    This episode is broken down into the following chapters:
    00:00 – Intro and welcome
    01:06 – Founding Factory: from ChatGPT experiments to AI engineers in every tab
    04:05 – Early vision: autonomy for software engineering
    06:14 – Why focus on the enterprise vs. indie developers
    08:29 – Behavior change and technical challenges in large orgs
    10:25 – Using painful migrations as a wedge for adoption
    12:20 – The paradigm shift to agent-driven development
    15:59 – Ubiquity: making droids available across IDEs, Slack, Jira, and more
    17:16 – Why droids need the same context as human engineers
    20:15 – Memory, configurability, and organizational learning
    23:05 – How many droids? Specialization vs. general purpose agents
    25:34 – Bespoke vs. common workflows across enterprises
    27:06 – The hardest droid to build: coding itself
    28:26 – Testing, costs, and scaling agentic workflows
    30:29 – Why observability is essential for trustworthy agents
    31:28 – Surprising use cases: PM adoption and GDPR audits
    34:02 – Who Factory is building for: PMs, juniors, seniors, and beyond
    36:09 – Systems thinking as the core engineering skill
    38:09 – Building for enterprise trust: guardrails and governance
    40:35 – What’s missing at the model layer today
    42:43 – Migrations as a go-to wedge in go-to-market
    43:53 – The thought experiment: what if 1M engineers collaborated?
    46:07 – Scaling agent orgs: structure, monitoring, and observability
    48:46 – Why everything must be recorded for droids to succeed
    50:11 – Recruiting people obsessed with software development
    51:37 – Burritos, routines, and how Matan has changed as a leader
    53:41 – From coffee to Celsius, and why team culture matters most
    54:20 – Closing thoughts: the future when agents are truly ubiquitous

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    53 min
  3. AUG 5

    How to Build 10x Cheaper Object Storage, with Simon Eskildsen, Co-founder & CEO at Turbopuffer

    In this episode of Barrchives, Barr Yaron sits down with Simon Eskildsen, co-founder and CEO of turbopuffer, to explore how he went from infrastructure challenges at Shopify to launching a groundbreaking vector database company. Simon shares his journey from recognizing the inefficiencies of traditional vector storage solutions to creating TurboPuffer, a database designed specifically for AI-driven applications. He details key moments of insight—from working with startups struggling with prohibitive storage costs, to realizing the untapped potential of affordable object storage combined with modern vector indexing techniques.

    This episode is broken down into the following chapters:
    00:00 – Intro: Simon Eskildsen, Founder of TurboPuffer
    00:26 – The “aha” moment: Simon’s transition from Shopify and startup consulting to founding TurboPuffer
    03:13 – Turning “strings into things”: The power of vector search
    05:51 – Why vector databases? Economic drivers and technology shifts
    07:35 – Building TurboPuffer V1: Key architecture choices and early trade-offs
    10:44 – Challenges of indexing: Evaluating exhaustive search, HNSW, and clustering
    17:23 – Finding product-market fit with Cursor: TurboPuffer’s first major customer
    20:05 – Defining TurboPuffer’s ideal customer profile and market positioning
    23:43 – Gaining conviction: When Simon knew TurboPuffer would scale
    25:39 – TurboPuffer V2: Architectural evolution and incremental indexing improvements
    32:12 – How AI-native workloads fundamentally change database design
    35:41 – Key trade-offs in TurboPuffer’s database architecture (accuracy, latency, and cost)
    38:07 – Ensuring vector database accuracy: Production vs. academic benchmarks
    41:03 – Deciding when TurboPuffer was ready for General Availability (GA)
    42:27 – The future of vector search and storage needs for AI agents
    45:03 – Building customer-centric engineering teams at TurboPuffer
    47:12 – Common storage hygiene mistakes (or opportunities) in AI companies
    49:42 – Simon’s personal growth as a leader since founding TurboPuffer

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    50 min
  4. MAY 13

    How Abridge Uses AI to Help Doctors Spend More Time With Patients, with Zachary Lipton

    AI won’t fix healthcare unless it starts with the conversation. In this episode, Zachary Lipton—Chief Technology & Science Officer at Abridge and Raj Reddy Associate Professor of Machine Learning at Carnegie Mellon University—joins Barr Yaron for a deep, technical, and emotional dive into how AI can truly transform clinical care. From building a world-class ambient documentation system to tackling speech recognition in 28 languages, Zack shares what it takes to engineer trust into AI when the stakes are patient lives, not just clicks.

    We cover:
    - Why general-purpose models fail in clinical settings
    - How Abridge designs for accuracy, context, and trust
    - The tension between personalization and evaluation
    - Why ambient AI might be the most promising foundation for fixing healthcare

    This is one of the most in-depth looks at what it actually takes to build production-grade AI in medicine.

    This episode is broken down into the following chapters:
    00:00 – Intro
    00:34 – What Abridge actually does (hint: it’s not just notes)
    01:09 – Why documentation is killing the healthcare experience
    03:05 – How we got to the current burnout crisis
    04:16 – The key insight: healthcare is a conversation
    07:33 – Building a digital scribe: the original vision for Abridge
    09:15 – Why off-the-shelf models don’t cut it in clinical speech
    11:36 – 28 languages, noisy ERs, and overlapping conversations
    13:20 – Predicting what enters the medical lexicon next
    14:21 – How Abridge adapts models for edge-case medical speech
    15:18 – Beyond transcripts: the complexity of clinical note generation
    17:10 – Foundation models are tools, not solutions
    18:06 – The “Ship of Theseus” strategy of model orchestration
    20:32 – Style transfer for doctors, patients, and payers
    20:54 – Metrics: ASR evaluation vs. documentation quality
    23:43 – Stratifying ASR performance by setting, language, and jargon
    24:50 – Why eval is so hard when there’s no “gold note”
    25:45 – The tension between personalization and general eval
    28:05 – Lessons from machine translation: building robust eval pipelines
    30:32 – Abridge’s “look at the f*cking data” (LFD) internal review
    33:54 – Blinded clinical eval with linked evidence and audio
    36:50 – Why human fallibility is just as real as AI hallucination
    38:21 – What kind of CTO Zack actually is
    40:32 – Why AI product development is its own discipline
    42:44 – AI innovation now lies in the product-data-model loop
    44:25 – Closing the loop: how design drives modeling
    45:25 – How Abridge hires researchers who care about product
    47:29 – The mission filter: if you’d be equally happy at Microsoft, go
    49:35 – What’s next: the AI layer for healthcare, not point solutions
    52:57 – Closing thoughts

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    53 min
  5. MAY 6

    How AI21 Labs Builds Frontier Models For The Enterprise, With Ori Goshen, Co-Founder and Co-CEO at AI21 Labs

    What if deep learning isn’t the future of AI—but just part of it? In this episode, Ori Goshen, Co-founder and Co-CEO at AI21 Labs, shares why his team set out to build reliable, deterministic AI systems—long before ChatGPT made language models mainstream. We explore the launch of Wordtune, the development of Jamba, and the release of Maestro—AI21’s orchestration engine for enterprise agentic workflows. Ori opens up about what it takes to move beyond probabilistic systems, build trust with global enterprises, and balance research and product in one of the most competitive AI markets in the world. If you want a masterclass in enterprise AI, model training, architecture tradeoffs, and scaling innovation out of Israel—this is it.

    🔔 Subscribe for deep dives with the people shaping the future of AI.

    This episode is broken down into the following chapters:
    00:00 – Intro
    00:47 – Why AI21 started with “deep learning is necessary but not sufficient”
    02:34 – Building reliable AI systems from day one
    03:46 – The risk of neural-symbolic hybrids and early bets on NLP
    05:40 – Why Wordtune became the first product
    08:14 – From B2C success to a pivot back into enterprise
    09:43 – What AI21 learned from Wordtune for enterprise AI
    11:15 – Defining “product algo fit”
    12:27 – Training models before it was cool: Jurassic, Jamba, and beyond
    13:38 – How to hire model-training engineers with no playbook
    14:53 – Recruiting systems talent: what to look for
    16:29 – How to orient your models around real enterprise needs
    17:10 – Why Jamba was designed for long-context enterprise use cases
    19:52 – What’s special about the Mamba + Transformer hybrid architecture
    22:46 – Experimentation, ablations, and finding the right architecture
    25:27 – Bringing Jamba to market: what enterprises actually care about
    29:26 – The state of enterprise AI readiness in 2023 → 2025
    31:41 – The biggest challenge: evaluation systems
    32:10 – What most teams get wrong about evals
    33:45 – Architecting reliable, non-deterministic systems
    34:53 – What is Maestro and why build it now?
    36:02 – Replacing “prompt and pray” with AI for AI systems
    38:43 – Building interpretable and explicit agentic systems
    41:09 – Balancing control and flexibility in orchestration
    43:36 – What enterprise AI might actually look like in 5 years
    47:03 – Why Israel is a global powerhouse for AI
    49:44 – How Ori has evolved as a leader under extreme volatility
    52:26 – Staying true to your mission through chaos

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    52 min
  6. APR 22

    How to Build a Secure Browser for AI, With Ofer Ben Noon, Former Founder and CEO, Talon Security

    What does it take to reimagine the browser—one of the most commoditized technologies in the world—for the enterprise? In this episode, Ofer Ben Noon, founder of Talon and now part of Palo Alto Networks, shares the wild journey from exploring digital health to building the world’s first enterprise-grade secure browser.

    We dig into:
    - Why the browser became the new security perimeter
    - How Talon raised a $26M seed and scaled fast
    - What it takes to compile Chromium daily (and why it’s so hard)
    - Why Precision AI is essential to secure AI usage in the enterprise
    - How generative AI, SaaS sprawl, and autonomous agents are reshaping enterprise risk in real time

    If you care about AI x cybersecurity, endpoint security, or enterprise infrastructure—this is a deep, real, and tactical look behind the curtain.

    This episode is broken down into the following chapters:
    00:00 – Intro
    01:05 – Why Ofer originally wanted to build in digital health
    02:15 – The pandemic shift to SaaS, hybrid work, and browser-first
    04:44 – Why Chromium was the perfect technical unlock
    05:27 – The insane complexity of compiling Chromium
    07:10 – What makes an enterprise browser different from a consumer browser
    09:36 – Browser isolation, web security, and file security
    10:50 – Why Talon needed a massive seed round from day one
    11:53 – What an MVP looked like for Talon
    14:08 – Early skepticism from CISOs and how Talon earned trust
    16:50 – Discovering new enterprise use cases over time
    17:11 – How AI and Precision AI power Talon’s security engine
    19:21 – Why Ofer chose to sell to Palo Alto Networks
    21:06 – Petabytes of data, 30B+ attacks blocked daily
    23:44 – The risks of LLMs and generative AI in the browser
    24:24 – What Talon sees when users interact with AI tools
    25:05 – The #1 risk: privacy and user error
    26:43 – Why AI use must be governed like any other SaaS
    27:22 – How Talon built secure enterprise access to ChatGPT
    28:05 – Mapping 1,000+ GenAI tools and classifying risk
    29:43 – Real-time blocking, DLP, and prompt visibility
    31:25 – Why user mistakes are accelerating in the age of agents
    32:04 – How autonomous AI agents amplify risk across the enterprise
    33:55 – The browser as the new control layer for users and AI
    36:57 – What AI is unlocking in cybersecurity orgs
    39:36 – Why data volume will determine which security companies win
    40:28 – Ofer’s leadership philosophy and staying grounded post-acquisition
    42:40 – Closing reflections

    Subscribe to the Barrchives newsletter: https://www.barrchives.com/
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    43 min
  7. APR 8

    How Vanta Helps Customers Build Secure and Compliant AI Products, with Christina Cacioppo, Co-founder and CEO, and Iccha Sethi, VP of Engineering

    Vanta helped create the automated security and compliance category—and now, they’re redefining it with AI. In this episode, Christina Cacioppo (CEO & Co-Founder) and Iccha Sethi (VP of Engineering) join Barr Yaron to go deep on how AI is transforming the way Vanta builds products, evaluates models, and helps companies earn and demonstrate trust.

    They cover:
    - Why compliance is the perfect playground for AI
    - How Vanta balances reliability, explainability, and scale
    - What it takes to build golden datasets in high-stakes domains
    - The real-world AI infrastructure behind Vanta AI

    If you care about real AI product development—not just hype—this is a masterclass in doing it right.

    🔔 Subscribe for more deep dives with leading AI builders and thinkers.

    This episode is broken down into the following chapters:
    00:00 – Intro
    01:06 – Christina’s early entrepreneurial roots (Beanie Babies & all)
    02:51 – From venture to founder: why Christina started Vanta
    04:00 – What Vanta actually does
    05:32 – Iccha on why she joined as VP of Engineering
    07:09 – When Vanta started leaning into AI
    08:33 – AI’s growing role in Vanta’s product roadmap
    09:52 – How AI powers questionnaire automation
    12:25 – Using LLMs to map policy docs to cloud configs
    13:27 – Building trust: human-in-the-loop and explainability
    16:03 – Vanta’s evaluation system for AI features
    18:17 – How golden datasets are constructed (and maintained)
    20:59 – Feedback loops: online eval from user behavior
    22:43 – How model feedback informs product updates
    23:38 – What Vanta wants from foundation models (but isn’t getting yet)
    24:32 – Retrieval: how Vanta processes customer documents
    27:13 – The hardest technical challenges in AI integration
    29:41 – Internal adoption: how non-technical teams are using AI too
    31:52 – Vanta’s centralized AI team & how other teams plug in
    33:27 – Internal education: building AI intuition org-wide
    34:31 – From prototype to production: experimentation culture
    36:41 – Customer sentiment around AI in compliance workflows
    38:22 – Enterprise buyers & the AI “kill switch”
    39:06 – Personalized experiences as the future of trust
    40:21 – How enterprises are approaching AI risk assessments
    41:50 – What excites Iccha and Christina about the future of AI at Vanta

    Subscribe to the Barrchives newsletter: https://www.barrchives.com
    Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
    Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
    Twitter: https://x.com/barrnanas
    LinkedIn: https://www.linkedin.com/in/barryaron/

    42 min

Ratings & Reviews

5 out of 5
7 ratings
