How AI Is Built

Nicolay Gerold

Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production. Learn what the best engineers have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.

  1. Aug 13

    Embedding Intelligence: AI's Move to the Edge

    Nicolay here, while everyone races to cloud-scale LLMs, Pete Warden is solving AI problems by going completely offline. No network connectivity required. Today I have the chance to talk to Pete Warden, CEO of Useful Sensors and author of the TinyML book. His philosophy: if you can't explain to users exactly what happens to their data, your privacy model is broken.

    Key Insight: The Real World Action Gap
    LLMs excel at text-to-text transformations but fail catastrophically at connecting language to physical actions. There's nothing in the web corpus that teaches a model how "turn on the light" maps to sending a pin high on a microcontroller. This explains why every AI agent demo focuses on booking flights and API calls - those actions are documented in text. The moment you step off the web into real-world device control, even simple commands become impossible without custom training on action-to-outcome data. Pete's company builds speech-to-intent systems that skip text entirely, going directly from audio to device actions using embeddings trained on limited action sets.

    💡 Core Concepts
    Speech-to-Intent: Direct audio-to-action mapping that bypasses text conversion, preserving ambiguity until final classification
    ML Sensors: Self-contained circuit boards processing sensitive data locally, outputting only simple signals without exposing raw video/audio
    Embedding-Based Action Matching: Vector representations mapping natural language variations to canonical device actions within constrained domains (sketched below)

    ⏱ Important Moments
    Real World Action Problem: [06:27] LLMs discuss turning on lights but lack training data connecting text commands to device control
    Apple Intelligence Challenges: [04:07] Design-led culture clashes with AI accuracy limitations
    Speech-to-Intent vs Speech-to-Text: [12:01] Breaking audio into text loses critical ambiguity information
    Limited Action Set Strategy: [15:30] Smart speakers succeed by constraining to ~3 functions rather than infinite commands
    8-Bit Quantization: [33:12] Remains the deployment sweet spot - processor instruction support matters more than compression
    On-Device Privacy: [47:00] Complete local processing provides explainable guarantees vs confusing hybrid systems

    🛠 Tools & Tech
    Whisper: github.com/openai/whisper
    Moonshine: github.com/usefulsensors/moonshine
    TinyML Book: oreilly.com/library/view/tinyml/9781492052036
    Stanford Edge ML: github.com/petewarden/stanford-edge-ml

    📚 Resources
    Looking to Listen Paper: looking-to-listen.github.io
    Lottery Ticket Hypothesis: arxiv.org/abs/1803.03635

    Connect: pete@usefulsensors.com | petewarden.com | usefulsensors.com
    Beta Opportunity: Moonshine browser implementation for client-side speech processing in JavaScript
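    To make embedding-based action matching concrete, here is a minimal sketch. It is illustrative only, not Useful Sensors' code: the `embed()` stand-in (a hashed bag-of-words) and the canonical action set are invented for the example, and Pete's real system works from audio embeddings rather than text.

```python
# Illustrative sketch of embedding-based action matching over a limited
# action set. The embed() function is a hypothetical stand-in; a real
# speech-to-intent system would use audio embeddings from a trained encoder.
import numpy as np

CANONICAL_ACTIONS = {
    "light_on":  "turn on the light",
    "light_off": "turn off the light",
    "fan_on":    "turn on the fan",
}

def embed(phrase: str) -> np.ndarray:
    """Toy stand-in encoder (hashed bag-of-words); replace with a real model."""
    v = np.zeros(64)
    for word in phrase.lower().split():
        v[hash(word) % 64] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# Pre-compute one embedding per canonical action (the "limited action set").
ACTION_VECS = {name: embed(text) for name, text in CANONICAL_ACTIONS.items()}

def match_intent(utterance: str, threshold: float = 0.5):
    """Map a natural-language variation to the closest canonical action."""
    q = embed(utterance)
    scores = {name: float(q @ v) for name, v in ACTION_VECS.items()}
    best = max(scores, key=scores.get)
    # Reject out-of-domain requests instead of guessing.
    return best if scores[best] >= threshold else None

print(match_intent("could you switch the light on"))  # -> "light_on"
```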

    1h6min
  2. #054 Building Frankenstein Models with Model Merging and the Future of AI

    Jul 29

    #054 Building Frankenstein Models with Model Merging and the Future of AI

    Nicolay here, most AI conversations focus on training bigger models with more compute. This one explores the counterintuitive world where averaging weights from different models creates better performance than expensive post-training. Today I have the chance to talk to Maxime Labonne, a researcher at Liquid AI and the architect of some of the most popular open source models on Hugging Face. He went from researching neural networks for cybersecurity to building "Frankenstein models" through techniques that shouldn't work but consistently do.

    Key Insight: Model Merging as a Free Lunch
    The core breakthrough is deceptively simple: take two fine-tuned models, average their weights layer by layer, and often get better performance than either individual model. Maxime initially started writing an article to explain why this couldn't work, but his own experiments convinced him otherwise. The magic lies in knowledge compression and regularization. When you train a model multiple times on similar data, each run creates slightly different weight configurations due to training noise. Averaging these weights creates a smoother optimization path that avoids local minima. You can literally run model merging on a CPU - no GPUs required.

    In the podcast, we also touch on:
    Obliteration: removing safety refusal mechanisms without retraining
    Why synthetic data now comprises 90%+ of fine-tuning datasets
    The evaluation crisis and automated benchmarks missing real-world performance
    Chain-of-thought compression techniques for reasoning models

    💡 Core Concepts
    Model Merging: Averaging weights across layers from multiple fine-tuned models to create improved performance without additional training
    Obliteration: Training-free method to remove refusal directions from models by computing activation differences
    Linear Merging: The least opinionated merging technique that simply averages weights with optional scaling factors (sketched below)
    Refusal Direction: The activation pattern that indicates when a model will output a safety refusal

    📶 Connect with Maxime: X / Twitter: https://x.com/maximelabonne | LinkedIn: https://www.linkedin.com/in/maxime-labonne/ | Company: https://www.liquid.ai/
    📶 Connect with Nicolay: LinkedIn: https://www.linkedin.com/in/nicolay-gerold/ | X / Twitter: https://x.com/nicolaygerold | Website: https://www.nicolaygerold.com/

    ⏱ Important Moments
    Model Merging Discovery Process: [00:00:30] Maxime explains how he started writing an article to debunk model merging
    Two Main Merging Use Cases: [11:04] Clear distinction between merging checkpoints versus combining different task-specific capabilities
    Linear Merging as Best Practice: [21:00] Why simple weight averaging consistently outperforms more complex techniques
    Layer Importance Hierarchy: [21:18] First and last layers have the most influence on model behavior
    Obliteration Technique Explained: [36:07] How to compute and subtract refusal directions from model activations
    Synthetic Data Dominance: [50:00] Modern fine-tuning uses 90%+ synthetic data

    🛠 Tools & Tech Mentioned
    MergeKit: https://github.com/cg123/mergekit
    TransformerLens: https://github.com/TransformerLensOrg/TransformerLens
    Hugging Face Transformers: https://github.com/huggingface/transformers
    PyTorch: https://pytorch.org/

    📚 Recommended Resources
    Maxime's Model Merging Articles: https://huggingface.co/blog/merge
    Model Soups Paper: https://arxiv.org/abs/2203.05482
    Will Brown's Rubric Engineering: https://x.com/willccbb/status/1883611121577517092
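    As a concrete picture of linear merging, here is a minimal weight-averaging sketch in PyTorch. It assumes fine-tuned checkpoints that share the same architecture and averages parameters key by key; this is the generic model-soups idea rather than MergeKit's actual implementation, and the file paths are placeholders.

```python
# Minimal sketch of linear merging (weight averaging) across checkpoints.
# Runs entirely on CPU - no GPUs required, as discussed in the episode.
import torch

def linear_merge(state_dicts, weights=None):
    """Average parameters key-by-key across same-architecture checkpoints."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Usage (paths and model are placeholders):
# sd_a = torch.load("finetune_a.pt", map_location="cpu")
# sd_b = torch.load("finetune_b.pt", map_location="cpu")
# model.load_state_dict(linear_merge([sd_a, sd_b]), strict=False)
```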

    1h7min
  3. #053 AI in the Terminal: Enhancing Coding with Warp

    Jul 23

    #053 AI in the Terminal: Enhancing Coding with Warp

    Nicolay here, most AI coding tools obsess over automating everything. This conversation focuses on the right balance between human skill and AI assistance - where manual context beats web search every time. Today I have the chance to talk to Ben Holmes, a software engineer at Warp, where they're building the AI-first terminal. His core claim: manual context engineering trumps automated web search for getting accurate results from coding assistants.

    Key Insight, Expanded
    The breakthrough insight is brutally practical: manual context construction consistently outperforms automated web search when working with AI coding assistants. Instead of letting your AI tool search for documentation, find the right pages yourself and feed them directly into the model's context window. Ben demonstrated this with OpenAI's Realtime API documentation - after an hour of back-and-forth with web search, he manually found the correct API signatures and saved them as a reference file. When building new features, he attached this curated documentation directly, resulting in immediate success rather than repeated failures from outdated or incorrect search results. This approach works because you can verify documentation accuracy before feeding it to the AI, while web search often returns the first result regardless of quality or recency. (A minimal sketch of this workflow follows these notes.)

    In the podcast, we also touch on:
    Why React Native might become irrelevant as AI translation between native languages improves
    Model-specific strengths: Gemini excels at debugging while Claude dominates function calling
    The skill of working without AI assistance - "raw dogging" code for deep learning
    Warp's architecture using different models for planning (O1/O3) vs. coding (Claude/Gemini)

    💡 Core Concepts
    Manual Context Engineering: Curating documentation, diagrams, and reference materials directly rather than relying on automated web search
    Model-Specific Workflows: Matching AI models to their strengths - O1 for planning, Claude for function calling, Gemini for debugging
    Raw Dog Programming: Coding without AI assistance to build fundamental skills in codebase navigation and problem-solving
    Agent Mode Architecture: Multi-model system where Claude orchestrates task distribution to specialized agents through function calls

    📶 Connect with Ben: Twitter/X, YouTube, Discord (Warp Community), Website
    📶 Connect with Nicolay: LinkedIn, X/Twitter, Bluesky, Website, nicolay.gerold@gmail.com

    ⏱ Important Moments
    React Native's Potential Obsolescence: [08:42] AI translation between native languages could eliminate cross-platform frameworks
    Manual vs Automated Context: [51:42] Why manually curating documentation beats AI web search
    Raw Dog Programming Benefits: [12:00] Value of coding without AI assistance during Ben's first week at Warp
    Model-Specific Strengths: [26:00] Gemini's superior debugging vs Claude's speculative code fixes
    OpenAI Desktop App Advantage: [13:44] Outperforms Cursor for reading long files
    Warp's Multi-Model Architecture: [31:00] How Warp uses O1/O3 for planning, Claude for orchestration
    Function Calling Accuracy: [28:30] Claude outperforms other models at chaining function calls
    AI as Improv Partner: [56:06] Current AI says "yes and" to everything rather than pushing back

    🛠 Tools & Tech Mentioned
    Warp Terminal, OpenAI Desktop App, Cursor, Cline, Go by Example, OpenAI Realtime API, MCP

    📚 Recommended Resources
    Warp Discord Community, Ben's YouTube Channel, Go Programming Documentation

    🔮 What's Next
    Next week, we continue exploring production AI implementations with more insights into getting generative AI systems deployed effectively.

    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify. Discord coming soon!

    ♻ Building the platform for engineers to share production experience. Pay it forward by sharing with one engineer facing similar challenges. ♻
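    Ben's manual-context workflow is easy to reproduce: save the documentation pages you have verified into a reference file and inject them into the prompt yourself instead of letting the tool search. A minimal sketch, assuming the OpenAI Python SDK; the file name and model string are placeholders, not anything specific to Warp.

```python
# Minimal sketch of manual context engineering: feed curated, verified docs
# into the model yourself instead of relying on automated web search.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Docs you checked by hand (placeholder file name).
curated_docs = Path("realtime_api_notes.md").read_text()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {"role": "system", "content": "Use ONLY the reference material provided."},
        {"role": "user", "content": f"Reference material:\n{curated_docs}\n\n"
                                     "Task: implement a client against this API."},
    ],
)
print(response.choices[0].message.content)
```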

    1h5min
  4. #052 Don't Build Models, Build Systems That Build Models

    Jul 1

    #052 Don't Build Models, Build Systems That Build Models

    Nicolay here, today I have the chance to talk to Charles from Modal, who went from doing a PhD on neural network optimization in the 2010s - when ML engineers could build models with a soldering iron and some sticks - to architecting serverless infrastructure for AI models. Modal is about removing barriers so anyone can spin up a hundred GPUs in seconds.

    The critical insight that stuck with me: "Don't build models, build systems that build models." Organizations often make the mistake of celebrating a one-time fine-tuned model that matches GPT-4 performance, only to watch it become obsolete when the next foundation model arrives - typically three to six months down the road.

    Charles's approach to infrastructure is particularly unconventional. He argues that serverless isn't just about convenience - it fundamentally changes how ambitious you can be with scale: "There's so much that gets in the way of trying to spin up a hundred GPUs or a thousand CPU containers that people just don't think to do something big." (A rough sketch of that fan-out pattern follows these notes.) The winning approach involves automated data pipelines with feedback collection, continuous evaluation against new foundation models, A/B testing and canary deployments, and systematic error analysis and retraining.

    In the podcast, we also cover:
    Why inference, not training, is where the money is made
    How to rethink compute when moving from traditional cloud to serverless
    The economics of automated resource management
    Why task decomposition is the key ML engineering skill
    When to earn the right to fine-tune versus using foundation models

    📶 Connect with Charles:
    Twitter - https://twitter.com/charlesirl
    Modal Labs - https://modal.com
    Modal Slack Community - https://modal.com/slack

    📶 Connect with Nicolay:
    LinkedIn - https://linkedin.com/in/nicolay-gerold/
    X / Twitter - https://x.com/nicolaygerold
    Bluesky - https://bsky.app/profile/nicolaygerold.com
    Website - https://nicolaygerold.com/
    My Agency Aisbach - https://aisbach.com/ (for AI implementations / strategy)

    ⏱️ Important Moments
    From CUDA to Serverless: [00:01:38] Charles's journey from PhD neural network optimization to building Modal's serverless infrastructure
    Rethinking Scale Ambition: [00:01:38] "There's so much that gets in the way of trying to spin up a hundred GPUs that people just don't think to do something big."
    The Economics of Serverless: [00:04:09] How automated resource management changes the cattle-vs-pets paradigm for GPU workloads
    Lambda vs Modal Philosophy: [00:04:20] Why Modal was designed for tasks that take bytes and emit megabytes, unlike Lambda's middleware focus
    Inference Economics Reality: [00:10:16] "Almost nobody gets paid to make models - organizations get paid to make predictions."
    The Open Source Commoditization: [00:14:55] How foundation models are becoming undifferentiated capabilities like databases
    Task Decomposition as Core Skill: [00:22:00] Why breaking down problems is equivalent to recognizing API boundaries in software engineering
    Systems That Build Models: [00:33:31] The critical difference between delivering static weights versus repeatable model production systems
    Earning the Right to Fine-Tune: [00:34:06] The infrastructure prerequisites needed before attempting model customization
    Multi-Node Training Challenges: [00:52:24] How serverless platforms handle the contradiction of high-performance computing with spiky demand

    🛠️ Tools & Tech Mentioned
    Modal - https://modal.com (serverless GPU infrastructure)
    AWS Lambda - https://aws.amazon.com/lambda/ (traditional serverless)
    Kubernetes - https://kubernetes.io/ (container orchestration)
    Temporal - https://temporal.io/ (workflow orchestration)
    Weights & Biases - https://wandb.ai/ (experiment tracking)
    Hugging Face - https://huggingface.co/ (model repository)
    PyTorch Distributed - https://pytorch.org/tutorials/intermediate/ddp_tutorial.html (multi-GPU training)
    Redis - https://redis.io/ (caching and queues)

    📚 Recommended Resources
    Full Stack Deep Learning - https://fullstackdeeplearning.com/ (deployment best practices)
    Modal Documentation - https://modal.com/docs (getting started guide)
    DeepSeek Paper - https://arxiv.org/abs/2401.02954 (disaggregated inference patterns)
    AI Engineer Summit - https://ai.engineer/ (community events)
    MLOps Community - https://mlops.community/ (best practices)

    💬 Join The Conversation
    Follow How AI Is Built on YouTube - https://youtube.com/@howaiisbuilt, Bluesky - https://bsky.app/profile/howaiisbuilt.fm, or Spotify - https://open.spotify.com/show/3hhSTyHSgKPVC4sw3H0NUc
    If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn - https://linkedin.com/in/nicolay-gerold/, X - https://x.com/nicolaygerold, or Bluesky - https://bsky.app/profile/nicolaygerold.com. Or at nicolay.gerold@gmail.com. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.
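    To make the "spin up a hundred GPUs in seconds" point concrete, here is a rough Modal-style sketch of fanning a function out over many GPU containers. Treat it as illustrative: the app name, GPU string, and workload are invented for the example, and exact decorator arguments may differ between Modal versions, so check the Modal docs before relying on it.

```python
# Rough sketch of serverless GPU fan-out in the Modal style (illustrative;
# verify argument names against modal.com/docs for your Modal version).
import modal

app = modal.App("batch-embed")  # placeholder app name

@app.function(gpu="A10G", timeout=600)
def embed_shard(shard_id: int) -> int:
    # Each call runs in its own container with its own GPU.
    # ... load model, process shard `shard_id`, write results ...
    return shard_id

@app.local_entrypoint()
def main():
    # Fan out to ~100 containers; the platform handles scheduling and teardown.
    for result in embed_shard.map(range(100)):
        print("finished shard", result)
```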

    59min
  5. #051 Build systems that can be debugged at 4am by tired humans with no context

    Jun 17

    #051 Build systems that can be debugged at 4am by tired humans with no context

    Nicolay here, today I have the chance to talk to Charity Majors, CEO and co-founder of Honeycomb, who has recently been writing about the cost crisis in observability. "Your source of truth is production, not your IDE - and if you can't understand your code there, you're flying blind."

    The key insight is architecturally simple but operationally transformative: replace your 10-20 observability tools with wide structured events that capture everything about a request in one place. Most teams store the same request data across metrics, logs, traces, APM, and error tracking - creating a 20X cost multiplier while making debugging nearly impossible because you're reconstructing stories from fragments. Charity's approach flips this: instrument once with rich context, derive everything else from that single source. This isn't just about cost - it's about giving engineers the connective tissue to understand distributed systems. When you can correlate "all requests failing from Android version X in region Y using language pack Z," you find problems in minutes instead of days.

    The second insight is putting developers on call for their own code. This creates the tight feedback loop that makes engineers write more reliable software - because nobody wants to get paged at 3am for their own bugs.

    In the podcast, we also touch on:
    Why deploy time is the foundational feedback loop (15 minutes vs 15 hours changes everything)
    The controversial "developers on call" stance and why ops people rarely found companies
    How microservices made everything trace-shaped and killed traditional metrics approaches
    The "normal engineer" philosophy - building for 4am debugging, not peak performance
    AI making "code of unknown quality" the new normal
    Progressive deployment strategies (kibble → dogfood → production)
    and more

    💡 Core Concepts
    Wide Structured Events: Capturing all request context in one instrumentation event instead of scattered log lines - enables correlation analysis that's impossible with fragmented data (sketched below)
    Observability 2.0: Moving from metrics-as-workhorse to structured-data-as-workhorse, where you instrument once and derive metrics/alerts/dashboards from the same rich dataset
    SLO-based Alerting: Replacing symptom alerts (CPU, memory, disk) with customer-impact alerts that measure whether you're meeting promises to users
    Progressive Deployment: Gradual rollout through staged environments (kibble → dogfood → production) that builds confidence without requiring 2X infrastructure
    Trace-shaped Systems: Architecture pattern recognizing that distributed-systems problems are fundamentally about correlating events across time and services, not isolated metrics

    📶 Connect with Charity: LinkedIn, Bluesky, Personal Blog, Company
    📶 Connect with Nicolay: LinkedIn, X / Twitter, Website

    ⏱️ Important Moments
    Gateway Drug to Engineering: [01:04] How IRC and bash tab completion sparked Charity's fascination with Unix command-line possibilities
    ADHD and Incident Response: [01:54] Why high-pressure outages brought out her best work - getting "dead calm" when everything's broken
    Code vs. Production Reality: [02:56] Evolution from focusing on code beauty to understanding performance, behavior, and maintenance over time
    The Alexander's Horse Principle: [04:49] Auto-deployment as daily practice - if you grow up deploying constantly, it feels natural by the time you scale
    Production as Source of Truth: [06:32] Why your IDE output doesn't matter if you can't understand your code's intersection with infrastructure and users
    The Logging Evolution: [08:03] Moving from debugger-style spam logs to fewer, wider structured events oriented around units of work
    Bubble Up Anomaly Detection: [10:27] How correlating dimensions reveals that failures cluster around specific Android versions, regions, and feature combinations
    Everything is Trace-Shaped: [12:45] Why microservices complexity is about locating problems in distributed systems, not just identifying them
    AI as Acceleration of Automation: [15:57] Most AI panic could be replaced with "automation" - it's the same pattern, just faster feedback loops
    Non-determinism as Genuinely New: [16:51] The one aspect of AI that's actually novel in software systems, requiring new architectural patterns
    The Cost Crisis: [22:30] How 10-20 observability tools create unsustainable cost multipliers as businesses scale
    SLO Revolution: [28:40] Deleting 90% of alerts by focusing on customer impact instead of system symptoms
    Shrinking Feedback Loops: [34:28] Keeping deploy-to-validation under one hour so engineers can connect actions to outcomes
    Normal Engineer Design: [38:12] Building systems that work for tired humans at 4am, not just heroes during business hours
    The Instrumentation Habit: [23:15] Always looking at your code in production after deployment to build informed instincts about system behavior
    Progressive Deployment Strategy: [36:43] Kibble → dogfood → production pipeline for gradual confidence building
    Real Engineering Bar: [49:00] Discussion on what actually makes exceptional vs normal engineers

    🛠️ Tools & Tech Mentioned
    Honeycomb - Observability platform for structured events
    OpenTelemetry - Vendor-neutral instrumentation framework
    IRC - Early gateway to computing
    Parse - Mobile backend where Honeycomb's origin story began

    📚 Recommended Resources
    "In Praise of Normal Engineers" - Charity's blog post
    "How I Failed" by Tim O'Reilly
    "Looking at the Crux" by Richard Rumelt
    "Fluke" - Book about randomness in history
    "Engineering Management for the Rest of Us" by Sarah Drasner
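    A wide structured event is just one record per unit of work with every dimension you might later want to correlate. A minimal sketch follows; the field names are invented for illustration, and in practice you would emit the event through OpenTelemetry or Honeycomb's SDK rather than dumping JSON to stdout.

```python
# Minimal sketch of a "wide structured event": one record per request with
# every dimension attached, instead of many scattered log lines.
import json
import sys
import time

def do_work(req):
    """Stand-in for your actual business logic."""
    return {"ok": True}

def handle_request(req):
    event = {
        "timestamp": time.time(),
        "service": "checkout",
        "request_id": req["id"],
        "user_id": req.get("user_id"),
        "android_version": req.get("android_version"),
        "region": req.get("region"),
        "language_pack": req.get("language_pack"),
        "feature_flags": req.get("flags", []),
    }
    start = time.perf_counter()
    try:
        result = do_work(req)
        event["status"] = "ok"
        return result
    except Exception as exc:
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = (time.perf_counter() - start) * 1000
        # One wide event, emitted once per request.
        json.dump(event, sys.stdout)
        sys.stdout.write("\n")

handle_request({"id": "r-123", "android_version": "14", "region": "eu-west"})
```

    Because all the dimensions live on one event, a query like "failures from Android version X in region Y with language pack Z" is a single group-by instead of a cross-tool reconstruction.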

    1h6min
  6. #050 Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster

    May 27

    #050 Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster

    Nicolay here, most AI developers are drowning in frameworks and hype. This conversation is about cutting through the noise and actually getting something into production. Today I have the chance to talk to Paul Iusztin, who's spent 8 years in AI - from writing CUDA kernels in C++ to building modern LLM applications. He currently writes about production AI systems and is building his own AI writing assistant.

    His philosophy is refreshingly simple: stop overthinking, start building, and let patterns emerge through use. The key insight that stuck with me: "If you don't feel the algorithm - like have a strong intuition about how components should work together - you can't innovate, you just copy paste stuff." This hits hard because so much of current AI development is exactly that - copy-pasting from tutorials without understanding the why.

    Paul's approach to frameworks is particularly controversial. He uses LangChain and similar tools for quick prototyping - maybe an hour or two to validate an idea - then throws them away completely. "They're low-code tools," he says. "Not good frameworks to build on top of." Instead, he advocates for writing your own database layers and using industrial-grade orchestration tools. Yes, it's more work upfront. But when you need to debug or scale, you'll thank yourself.

    In the podcast, we also cover:
    Why fine-tuning is almost always the wrong choice
    The "just-in-time" learning approach for staying sane in AI
    Building writing assistants that actually preserve your voice
    Why robots, not chatbots, are the real endgame

    💡 Core Concepts
    Agentic Patterns: These patterns seem complex but are actually straightforward to implement once you understand the core loop. ReAct: agents that Reason, Act, and Observe in a loop (sketched below). Reflection: agents that review and improve their own outputs.
    Fine-tuning vs Base Model + Prompting: Fine-tuning involves taking a pre-trained model and training it further on your specific data. The alternative is using base models with careful prompting and context engineering. Paul's take: "Fine-tuning adds so much complexity... if you add fine-tuning to create a new feature, it's just from one day to one week."
    RAG: A technique where you retrieve relevant documents/information and include them in the LLM's context to generate better responses. Paul's approach: "In the beginning I also want to avoid RAG and just introduce a more guided research approach. Like I say, hey, these are the resources that I want to use in this article."

    📶 Connect with Paul: LinkedIn, X / Twitter, Newsletter, GitHub, Book
    📶 Connect with Nicolay: LinkedIn, X / Twitter, Bluesky, Website, My Agency Aisbach (for AI implementations / strategy)

    ⏱️ Important Moments
    From CUDA to LLMs: [02:20] Paul's journey from writing CUDA kernels and 3D object detection to modern AI applications
    AI Content Is Natural Evolution: [11:19] Why AI writing tools are like the internet transition for artists - tools change, creativity remains
    The Framework Trap: [36:41] "I see them as no code or low code tools... not good frameworks to build on top of."
    Fine-Tuning Complexity Bomb: [27:41] How fine-tuning turns 1-day features into 1-week experiments
    End-to-End First: [22:44] "I don't focus on accuracy, performance, or latency initially. I just want an end-to-end process that works."
    The Orchestration Solution: [40:04] Why Temporal, DBOS, and Restate beat LLM-specific orchestrators
    Hype Filtering System: [54:06] Paul's approach: read about new tools, wait 2-3 months, only adopt if still relevant
    Just-in-Time vs Just-in-Case: [57:50] The crucial difference between learning for potential needs vs immediate application
    Robot Vision: [50:29] Why LLMs are just stepping stones to embodied AI and the unsolved challenges ahead

    🛠️ Tools & Tech Mentioned
    LangGraph (for prototyping only)
    Temporal (durable execution)
    DBOS (simpler orchestration)
    Restate (developer-friendly orchestration)
    Ray (distributed compute)
    uv (Python packaging)
    Prefect (workflow orchestration)

    📚 Recommended Resources
    The Economist Style Guide (for writing)
    Brandon Sanderson's Writing Approach (worldbuilding first)
    LangGraph Academy (free, covers agent patterns)
    Ray Documentation (Paul's next deep dive)

    🔮 What's Next
    Next week, we take a detour into the networking behind voice AI with Russell D'Sa from LiveKit.

    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify. If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn, X, or Bluesky. Or at nicolay.gerold@gmail.com. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.

    ♻️ I am trying to build the new platform for engineers to share the experience they have earned building and deploying things into production. Pay it forward by sharing with one engineer who's facing similar challenges. That's the agreement - I deliver practical value, you help grow this resource for everyone. ♻️
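    The ReAct loop mentioned above is small enough to write out. Here is a bare-bones sketch: `llm()` and the toy tool registry are stand-ins for a real model and real tools, not any specific framework's API.

```python
# Bare-bones ReAct-style loop: the model reasons and picks an action, we run
# the tool, and the observation is appended to the context for the next turn.
def llm(prompt: str) -> str:
    """Placeholder for a real model call; returns 'tool: argument' or 'FINAL: ...'."""
    return "FINAL: done"

TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "lookup": lambda key: f"value for {key!r}",
}

def react(task: str, max_steps: int = 5) -> str:
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = llm(context)                          # Reason
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        name, _, arg = decision.partition(":")           # Act
        tool = TOOLS.get(name.strip(), lambda a: "unknown tool")
        observation = tool(arg.strip())
        context += f"Action: {decision}\nObservation: {observation}\n"  # Observe
    return "gave up after max_steps"

print(react("ship something to production"))
```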

    1h7min
  7. #050 TAKEAWAYS Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster

    May 27

    #050 TAKEAWAYS Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster

    Nicolay here, most AI developers are drowning in frameworks and hype. This conversation is about cutting through the noise and actually getting something into production. Today I have the chance to talk to Paul Iusztin, who's spent 8 years in AI - from writing CUDA kernels in C++ to building modern LLM applications. He currently writes about production AI systems and is building his own AI writing assistant.

    His philosophy is refreshingly simple: stop overthinking, start building, and let patterns emerge through use. The key insight that stuck with me: "If you don't feel the algorithm - like have a strong intuition about how components should work together - you can't innovate, you just copy paste stuff." This hits hard because so much of current AI development is exactly that - copy-pasting from tutorials without understanding the why.

    Paul's approach to frameworks is particularly controversial. He uses LangChain and similar tools for quick prototyping - maybe an hour or two to validate an idea - then throws them away completely. "They're low-code tools," he says. "Not good frameworks to build on top of." Instead, he advocates for writing your own database layers and using industrial-grade orchestration tools. Yes, it's more work upfront. But when you need to debug or scale, you'll thank yourself.

    In the podcast, we also cover:
    Why fine-tuning is almost always the wrong choice
    The "just-in-time" learning approach for staying sane in AI
    Building writing assistants that actually preserve your voice
    Why robots, not chatbots, are the real endgame

    💡 Core Concepts
    Agentic Patterns: These patterns seem complex but are actually straightforward to implement once you understand the core loop. ReAct: agents that Reason, Act, and Observe in a loop. Reflection: agents that review and improve their own outputs.
    Fine-tuning vs Base Model + Prompting: Fine-tuning involves taking a pre-trained model and training it further on your specific data. The alternative is using base models with careful prompting and context engineering. Paul's take: "Fine-tuning adds so much complexity... if you add fine-tuning to create a new feature, it's just from one day to one week."
    RAG: A technique where you retrieve relevant documents/information and include them in the LLM's context to generate better responses. Paul's approach: "In the beginning I also want to avoid RAG and just introduce a more guided research approach. Like I say, hey, these are the resources that I want to use in this article."

    📶 Connect with Paul: LinkedIn, X / Twitter, Newsletter, GitHub, Book
    📶 Connect with Nicolay: LinkedIn, X / Twitter, Bluesky, Website, My Agency Aisbach (for AI implementations / strategy)

    ⏱️ Important Moments
    From CUDA to LLMs: [02:20] Paul's journey from writing CUDA kernels and 3D object detection to modern AI applications
    AI Content Is Natural Evolution: [11:19] Why AI writing tools are like the internet transition for artists - tools change, creativity remains
    The Framework Trap: [36:41] "I see them as no code or low code tools... not good frameworks to build on top of."
    Fine-Tuning Complexity Bomb: [27:41] How fine-tuning turns 1-day features into 1-week experiments
    End-to-End First: [22:44] "I don't focus on accuracy, performance, or latency initially. I just want an end-to-end process that works."
    The Orchestration Solution: [40:04] Why Temporal, DBOS, and Restate beat LLM-specific orchestrators
    Hype Filtering System: [54:06] Paul's approach: read about new tools, wait 2-3 months, only adopt if still relevant
    Just-in-Time vs Just-in-Case: [57:50] The crucial difference between learning for potential needs vs immediate application
    Robot Vision: [50:29] Why LLMs are just stepping stones to embodied AI and the unsolved challenges ahead

    🛠️ Tools & Tech Mentioned
    LangGraph (for prototyping only)
    Temporal (durable execution)
    DBOS (simpler orchestration)
    Restate (developer-friendly orchestration)
    Ray (distributed compute)
    uv (Python packaging)
    Prefect (workflow orchestration)

    📚 Recommended Resources
    The Economist Style Guide (for writing)
    Brandon Sanderson's Writing Approach (worldbuilding first)
    LangGraph Academy (free, covers agent patterns)
    Ray Documentation (Paul's next deep dive)

    🔮 What's Next
    Next week, we take a detour into the networking behind voice AI with Russell D'Sa from LiveKit.

    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify. If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn, X, or Bluesky. Or at nicolay.gerold@gmail.com. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.

    ♻️ I am trying to build the new platform for engineers to share the experience they have earned building and deploying things into production. Pay it forward by sharing with one engineer who's facing similar challenges. That's the agreement - I deliver practical value, you help grow this resource for everyone. ♻️

    11min
  8. #049 BAML: The Programming Language That Turns LLMs into Predictable Functions

    May 20

    #049 BAML: The Programming Language That Turns LLMs into Predictable Functions

    Nicolay here, I think by now we are done marveling at the latest benchmark scores of the models. It doesn't tell us much anymore that the latest generation outscores the previous by a few basis points. If you don't know how the LLM performs on your task, you are just duct-taping LLMs into your systems. If your LLM-powered app can't survive a malformed emoji, you're shipping liability, not software.

    Today, I sat down with Vaibhav (co-founder of Boundary) to dissect BAML - a DSL that treats every LLM call as a typed function. It's like swapping duct-taped Python scripts for a purpose-built compiler. Vaibhav advocates for building first-principles-based primitives. One principle stood out: LLMs are just functions; build like that from day 1. Wrap them, test them, and bring in a human only where it counts. Once you adopt that frame, reliability patterns fall into place: fallback heuristics, model swaps, classifiers - the same playbook we already use for flaky APIs.

    We also cover:
    Why JSON constraints are the wrong hammer - and how Schema-Aligned Parsing fixes it
    Whether "durable" should be a first-class keyword (think async/await for crash-safety)
    Shipping multi-language AI pipelines without forcing a Python microservice
    Token-bloat surgery, symbol tuning, and the myth of magic prompts
    How to keep humans sharp when 98% of agent outputs are already correct

    💡 Core Concepts
    Schema-Aligned Parsing (SAP): Parse first, panic later. The model can hand back Markdown, half-baked YAML, or rogue quotes - SAP coerces it into your declared type or raises. No silent corruption. (A minimal typed-function sketch follows these notes.)
    Symbol Tuning: Labels eat up tokens and often don't help accuracy (in some cases they even hurt). Rename PasswordReset to C7, keep the description human-readable.
    Durable Execution: A computing paradigm where program execution state persists despite failures, interruptions, or crashes. It ensures operations resume exactly where they left off, maintaining progress even when systems go down.
    Prompt Compression: Every extra token is latency, cost, and entropy. Axe filler words until the prompt reads like assembly. If output degrades, you cut too deep - back off one line.

    📶 Connect with Vaibhav: LinkedIn, X / Twitter, BAML
    📶 Connect with Nicolay: Newsletter, LinkedIn, X / Twitter, Bluesky, Website, My Agency Aisbach (for AI implementations / strategy)

    ⏱️ Important Moments
    New DSL vs. Python Glue: [00:54] Why bolting yet another microservice onto your stack is cowardice; BAML compiles instead of copies
    Three-Nines on Flaky Models: [04:27] Designing retries, fallbacks, and human overrides when GPT eats dirt 5% of the time
    Native Go SDK & OpenAPI Fatigue: [06:32] Killing thousand-line generated clients; typing go get instead
    "LLM = Pure Function" Mental Model: [15:58] Replace mysticism with f(input) → output; unit-test like any other function
    Tool-Calling as a Switch Statement: [18:19] Multi-tool orchestration boils down to switch(action) {…} - no cosmic "agent" needed
    Sneak Peek - durable Keyword: [24:49] Crash-safe workflows without shoving state into S3 and praying
    Symbol Tuning Demo: [31:35] Swapping verbose labels for C0, C1 slashes token cost and bias in one shot
    Inside SAP Coercion Logic: [47:31] Int arrays to ints, scalars to lists, bad casts raise - deterministic, no LLM in the loop
    Frameworks vs. Primitives Rant: [52:32] Why BAML ships primitives and leaves the "batteries" to you - less magic, more control

    🛠️ Tools & Tech Mentioned
    BAML DSL & Playground
    Temporal • Prefect • DBOS
    outlines • Instructor • LangChain

    📚 Recommended Resources
    BAML Docs
    Schema-Aligned Parsing (SAP)

    🔮 What's Next
    Next week, we continue going deeper into getting generative AI into production, talking to Paul Iusztin.

    💬 Join The Conversation
    Follow How AI Is Built on YouTube, Bluesky, or Spotify. If you have any suggestions for future guests, feel free to leave them in the comments or write me (Nicolay) directly on LinkedIn, X, or Bluesky. Or at nicolay.gerold@gmail.com. I will be opening a Discord soon to get you guys more involved in the episodes! Stay tuned for that.

    ♻️ Here's the deal: I'm committed to bringing you detailed, practical insights about AI development and implementation. In return, I have two simple requests: hit subscribe right now to help me understand what content resonates with you, and if you found value in this episode, share it with one other developer or tech professional who's working with AI. That's our agreement - I deliver actionable AI insights, you help grow this. ♻️
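    The "LLM = pure function" frame is easy to show without BAML itself: wrap the call in a typed signature, parse the output into the declared type, and raise if it can't be coerced. A minimal Python sketch with Pydantic follows; it imitates the idea behind schema-aligned parsing, not BAML's actual coercion logic, and `call_llm()` is a placeholder.

```python
# Sketch of treating an LLM call as a typed function: declared output type,
# lenient extraction of the payload, hard failure if it can't be coerced.
# This mimics the spirit of schema-aligned parsing, not BAML's implementation.
import json
import re
from pydantic import BaseModel, ValidationError

class Ticket(BaseModel):
    category: str
    urgent: bool

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; may return Markdown, chatter, etc."""
    return 'Sure! ```json\n{"category": "password_reset", "urgent": true}\n```'

def classify_ticket(text: str) -> Ticket:
    raw = call_llm(f"Classify this support ticket as JSON: {text}")
    # Be lenient about the wrapper (code fences, prose), strict about the type.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError(f"no JSON object found in model output: {raw!r}")
    try:
        return Ticket(**json.loads(match.group()))
    except (json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(f"output did not match schema: {exc}") from exc

print(classify_ticket("I can't log in to my account"))
```

    Once the call is a typed function, the reliability playbook from flaky APIs applies directly: unit tests, retries, fallbacks to another model, and a human override only where it counts.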

    1h3min
5 out of 5
8 ratings

About

Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production. Learn what the best engineers have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.
