Generative AI in the Real World

O'Reilly

In 2023, ChatGPT put AI on everyone’s agenda. Now, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

  1. September 19

    Jay Alammar on Building AI for the Enterprise

    Jay Alammar, director and Engineering Fellow at Cohere, joins Ben Lorica to talk about building AI applications for the enterprise, using RAG effectively, and the evolution of RAG into agents. Listen in to find out what kinds of metadata you need when you’re onboarding a new model or agent; discover how an emphasis on evaluation helps an organization improve its processes; and learn how to take advantage of the latest code-generation tools.

    Timestamps

    0:00: Introduction to Jay Alammar, director at Cohere. He’s also the author of Hands-On Large Language Models.
    0:30: What has changed in how you think about teaching and building with LLMs?
    0:45: This is my fourth year with Cohere. I really love the opportunity because it was a chance to join the team early (around the time of GPT-3). Aidan Gomez, one of the cofounders, was one of the coauthors of the transformers paper. I’m a student of how this technology went out of the lab and into practice. Being able to work in a company that’s doing that has been very educational for me. That’s part of what I draw on to teach. I use my writing to learn in public.
    2:20: I assume there’s a big difference between learning in public and teaching teams within companies. What’s the big difference?
    2:36: If you’re learning on your own, you have to run through so much content and news, and you have to mute a lot of it as well. This industry moves extremely fast. Everyone is overwhelmed by the pace. For adoption, the important thing is to filter a lot of that and see what actually works, what patterns work across use cases and industries, and write about those.
    3:25: That’s why something like RAG proved itself as one application paradigm for how people should be able to use language models. A lot of it is helping people cut through the hype and get to what’s actually useful, and raising AI awareness. There’s a level of AI literacy that people need to come to grips with.
    4:10: People in companies want to learn things that are contextually relevant. For example, if you’re in finance, you want material that will help you deal with Bloomberg and those types of data sources, and material that’s aware of the regulatory environment.
    4:38: When people started to understand what this kind of technology was capable of doing, there were multiple lessons the industry needed to learn. Don’t think of chat as the first thing you should deploy. Think of simpler use cases, like summarization or extraction. Think about these as building blocks for an application.
    5:28: It’s unfortunate that the name “generative AI” came to be used, because the most important things AI can do aren’t generative: they’re the representations with embeddings that enable better categorization and better clustering, and that help companies make sense of large amounts of data. The next lesson was not to rely on a model’s internal knowledge. In the beginning of 2023, there were so many news stories about the models being a search engine. People expected the model to be truthful, and they were surprised when it wasn’t. One of the first solutions was RAG. RAG tries to retrieve the context that will hopefully contain the answer. The next question was data security and data privacy: Companies didn’t want data to leave their network. That’s where private deployment of models becomes a priority, where the model comes to the data. With that, they started to deploy their initial use cases.
    8:04: Then that system can answer questions up to a certain level of difficulty; for harder questions, the system needs to be more advanced. Maybe it needs to search for multiple queries or do things over multiple steps.
    8:31: One thing we learned about RAG was that just because something is in the context window doesn’t mean the machine won’t hallucinate. And people have developed more appreciation for applying even more context: GraphRAG, context engineering.
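
    The RAG pattern discussed here is simple to sketch. Below is a minimal, illustrative example of the basic loop: retrieve the passages most likely to contain the answer, then constrain the model to answer only from them. The lexical retriever and prompt wording are assumptions for illustration, not Cohere’s implementation.

    ```python
    # Minimal RAG sketch: retrieve supporting passages, then ground the model's answer in them.
    # The retrieval here is crude keyword overlap; a production system would use embeddings,
    # a reranker, and a hosted LLM (all assumptions, not any specific vendor's stack).
    from collections import Counter
    from typing import List

    def score(query: str, doc: str) -> int:
        """Crude lexical overlap between query and document."""
        q, d = Counter(query.lower().split()), Counter(doc.lower().split())
        return sum((q & d).values())

    def retrieve(query: str, docs: List[str], k: int = 3) -> List[str]:
        """Return the k passages most likely to contain the answer."""
        return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

    def build_prompt(query: str, passages: List[str]) -> str:
        """Ground the model in retrieved context. Note: grounding reduces,
        but does not eliminate, hallucination (see the 8:31 point above)."""
        context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        return (
            "Answer using ONLY the numbered passages below. "
            "If they do not contain the answer, say so.\n\n"
            f"{context}\n\nQuestion: {query}\nAnswer:"
        )

    # Toy corpus for illustration only.
    docs = [
        "Summarization condenses long reports into short briefs.",
        "Private deployment keeps customer data inside the company network.",
        "RAG retrieves passages so the model does not rely on memorized facts.",
    ]
    query = "Why use RAG instead of relying on the model's own knowledge?"
    print(build_prompt(query, retrieve(query, docs)))
    ```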

    43 minutes
  2. September 18

    Phillip Carter on Where Generative AI Meets Observability

    Phillip Carter, formerly of Honeycomb, and Ben Lorica talk about observability and AI—what observability means, how generative AI causes problems for observability, and how generative AI can be used as a tool to help SREs analyze telemetry data. There’s tremendous potential because AI is great at finding patterns in massive datasets, but it’s still a work in progress. About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

    Timestamps

    0:00: Introduction to Phillip Carter, a product manager at Salesforce. We’ll focus on observability, which he worked on at Honeycomb.
    0:35: Let’s have the elevator definition of observability first, then we’ll go into observability in the age of AI.
    0:44: If you google “What is observability?” you’re going to get 10 million answers. It’s an industry buzzword. There are a lot of tools in the same space.
    1:12: At a high level, I like to think of it in two pieces. The first is an acknowledgement that you have a system of some kind, and you do not have the capability to pull that system onto your local machine and inspect what is happening at a moment in time. When something gets large and complex enough, it’s impossible to keep in your head. The product I worked on at Honeycomb is actually a very sophisticated querying engine that’s tied to a lot of AWS services in a way that makes it impossible to debug on my laptop.
    2:40: So what can I do? I can have data, called telemetry, that I can aggregate and analyze. I can aggregate trillions of data points to say that this user was going through the system in this way under these conditions. I can pull from these different dimensions and hold something constant.
    3:20: Let’s look at how the values differ when I hold one thing constant. Let’s hold another thing constant. That gives me an overall picture of what is happening in the real world.
    3:37: That is the crux of observability. I’m debugging, but not by stepping through something on my local machine. I click a button, and I can see that it manifests in a database call. But there are potentially millions of users, and things go wrong somewhere else in the system. And I need to try to understand what paths lead to that, and what commonalities exist in those paths.
    4:14: This is my very high-level definition. It’s many operations, many tasks, almost a workflow as well, and a set of tools.
    4:32: Based on your description, observability people are sort of like security people. With AI, there are two aspects: observability problems introduced by AI, and the use of AI to help with observability. Let’s tackle each separately. Before AI, we had machine learning. Observability people had a handle on traditional machine learning. What specific challenges did generative AI introduce?
    5:36: In some respects, the problems have been constrained to big tech. LLMs are the first time that we got truly world-class machine learning support available behind an API call. Prior to that, it was in the hands of Google and Facebook and Netflix. They helped develop a lot of this stuff. They’ve been solving problems related to what everyone else has to solve now. They’re building recommendation systems that take in many signals. For a long time, Google has had natural language answers for search queries, prior to the AI Overviews feature. Those answers would be sourced from web documents. They had a box for follow-up questions. They developed this before Gemini. It’s kind of the same tech. They had to apply observability to make this stuff available at scale.
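
    As a rough illustration of the “hold one dimension constant” analysis described above, here is a toy sketch over span-like telemetry records. The field names (endpoint, region, duration_ms, error) are invented for the example; a real observability backend would run this as a query over very large volumes of events rather than a Python list.

    ```python
    # Toy version of the analysis described above: filter telemetry to hold one
    # dimension constant (endpoint), then compare another dimension (region)
    # on latency and error rate. Field names are illustrative only.
    from statistics import mean

    spans = [
        {"endpoint": "/checkout", "region": "us-east", "duration_ms": 820, "error": True},
        {"endpoint": "/checkout", "region": "eu-west", "duration_ms": 110, "error": False},
        {"endpoint": "/search", "region": "us-east", "duration_ms": 95, "error": False},
        {"endpoint": "/checkout", "region": "us-east", "duration_ms": 790, "error": True},
    ]

    def group_by(events, key):
        """Group event dicts by the value of one dimension."""
        groups = {}
        for event in events:
            groups.setdefault(event[key], []).append(event)
        return groups

    # Hold the endpoint constant, then see how the other dimension differs.
    checkout_spans = [s for s in spans if s["endpoint"] == "/checkout"]
    for region, group in group_by(checkout_spans, "region").items():
        avg_ms = round(mean(s["duration_ms"] for s in group))
        error_rate = sum(s["error"] for s in group) / len(group)
        print(f"{region}: avg {avg_ms} ms, error rate {error_rate:.0%}")
    ```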

    38 minutes
  3. September 17

    Raiza Martin on Building AI Applications for Audio

    Audio is being added to AI everywhere: both in multimodal models that can understand and generate audio and in applications that use audio for input. Now that we can work with spoken language, what does that mean for the applications that we can develop? How do we think about audio interfaces—how will people use them, and what will they want to do? Raiza Martin, who worked on Google’s groundbreaking NotebookLM, joins Ben Lorica to discuss how she thinks about audio and what you can build with it.

    Timestamps

    0:00: Introduction to Raiza Martin, who cofounded Huxe and formerly led Google’s NotebookLM team. What made you think this was the time to trade the comforts of big tech for a garage startup?
    1:01: It was a personal decision for all of us. It was a pleasure to take NotebookLM from an idea to something that resonated so widely. We realized that AI was really blowing up. We didn’t know what it would be like at a startup, but we wanted to try. Seven months down the road, we’re having a great time.
    1:54: For the 1% who aren’t familiar with NotebookLM, give a short description.
    2:06: It’s basically contextualized intelligence, where you give NotebookLM the sources you care about and NotebookLM stays grounded to those sources. One of our most common use cases was that students would create notebooks and upload their class materials, and it became an expert that you could talk with.
    2:43: Here’s a use case for homeowners: put all your user manuals in there.
    3:14: We have had a lot of people tell us that they use NotebookLM for Airbnbs. They put all the manuals and instructions in there, and users can talk to it.
    3:41: Why do people need a personal daily podcast?
    3:57: There are a lot of different ways that I think about building new products. On one hand, there are acute pain points. But Huxe comes from a different angle: What if we could try to build very delightful things? The inputs are a little different. We tried to imagine what the average person’s daily life is like. You wake up, you check your phone, you travel to work; we thought about opportunities to make something more delightful. I think a lot about TikTok. When do I use it? When I’m standing in line. We landed on transit time or commute time. We wanted to do something novel and interesting with that space in time. So one of the first things was creating really personalized audio content. That was the provocation: What do people want to listen to? Even in this short time, we’ve learned a lot about the amount of opportunity.
    6:04: Huxe is mobile first, audio first, right? Why audio?
    6:45: Coming from our learnings from NotebookLM, you learn fundamentally different things when you change the modality of something. When I go on walks with ChatGPT, I just talk about my day. I noticed that was a very different interaction from when I type things out to ChatGPT. The flip side is less about interaction and more about consumption. Something about the audio format made the types of sources different as well. The sources we uploaded to NotebookLM were different as a result of wanting audio output. By focusing on audio, I think we’ll learn different use cases than the chat use cases. Voice is still largely untapped.
    8:24: Even in text, people started exploring other form factors: long articles, bullet points. What kinds of things are available for voice?
    8:49: I think of two formats: one passive and one interactive. With passive formats, there are a lot of different things you can create for the user. The things you end up playing with are (1) what is the content about and (2) how flexible is the content? Is it short, long, malleable to user feedback? With interactive content, maybe I’m listening to audio, but I want to interact with it. Maybe I want to join in. Maybe I want my friends to join in. Both of those contexts are new. I think this is what’s going to emerge in the next few years.

    36 minutes
  4. September 16

    Stefania Druga on Designing for the Next Generation

    How do you teach kids to use and build with AI? That’s what Stefania Druga works on. It’s important to be sensitive to their creativity, sense of fun, and desire to learn. When designing for kids, it’s important to design with them, not just for them. That’s a lesson that has important implications for adults, too. Join Stefania Druga and Ben Lorica to hear about AI for kids and what that has to say about AI for adults.

    Timestamps

    0:27: You’ve built AI education tools for young people, and after that, worked on multimodal AI at DeepMind. What have kids taught you about AI design?
    0:48: It’s been quite a journey. I started working on AI education in 2015. I was on the Scratch team in the MIT Media Lab. I worked on Cognimates so kids could train custom models with images and text. Kids would do things I would have never thought of, like build a model to identify weird hairlines or to recognize and give you backhanded compliments. They did things that are weird and quirky and fun and not necessarily utilitarian.
    2:05: For young people, driving a car is fun. Having a self-driving car is not fun. They have lots of insights that could inspire adults.
    2:25: You’ve noticed that a lot of the users of AI are Gen Z, but most tools aren’t designed with them in mind. What is the biggest disconnect?
    2:47: We don’t have a knob for agency to control how much we delegate to the tools. Most of Gen Z use off-the-shelf AI products like ChatGPT, Gemini, and Claude. These tools have a baked-in assumption that they need to do the work rather than asking questions to help you do the work. I like a much more Socratic approach. A big part of learning is asking and being asked good questions. A huge role for generative AI is to use it as a tool that can teach you things and ask you questions; [it’s] something to brainstorm with, not a tool that you delegate work to.
    4:25: There’s this big elephant in the room where we don’t have conversations or best practices for how to use AI.
    4:42: You mentioned the Socratic approach. How do you implement the Socratic approach in the world of text interfaces?
    4:57: In Cognimates, I created a copilot for kids’ coding. This copilot doesn’t do the coding. It asks them questions. If a kid asks, “How do I make the dude move?” the copilot will ask questions rather than saying, “Use this block and then that block.”
    6:40: When I designed this, we started with a person behind the scenes, like the Wizard of Oz. Then we built the tool and realized that kids really want a system that can help them clarify their thinking. How do you break down a complex event into steps that are good computational units?
    8:06: The third discovery was affirmations—whenever they did something that was cool, the copilot would say something like “That’s awesome.” The kids would spend double the time coding because they had an infinitely patient copilot that would ask them questions, help them debug, and give them affirmations that would reinforce their creative identity.
    8:46: With those design directions, I built the tool. I’m presenting a paper at the ACM IDC (Interaction Design for Children) conference that describes this work in more detail. I hope this example gets replicated.
    9:26: Because these interactions and interfaces are evolving very fast, it’s important to understand what young people want, how they work and how they think, and design with them, not just for them.
    9:44: The typical developer now, when they interact with these things, overspecifies the prompt. They describe things so precisely. But what you’re describing is interesting because you’re learning, you’re building incrementally. We’ve gotten away from that as grown-ups.
    10:28: It’s all about tinkerability and having the right level of abstraction. What are the right Lego blocks? A prompt is not tinkerable enough. It doesn’t allow for enough expressivity. It needs to be composable and allow the user to be in control.
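
    One way to approximate the Socratic copilot behavior described above is to push the instruction into the system prompt, so the model asks guiding questions and offers affirmations instead of writing the code. This is a hypothetical sketch against an OpenAI-compatible chat API; the prompt wording is invented and it is not the actual Cognimates implementation.

    ```python
    # Sketch of a "Socratic" coding copilot: the system prompt steers the model
    # toward questions, hints, and affirmations rather than finished code.
    # Prompt text and client usage are assumptions, not the Cognimates implementation.
    SOCRATIC_SYSTEM_PROMPT = """You are a coding companion for beginners.
    Never write the solution code yourself. Instead:
    1. Ask one short question that helps the learner break the problem into steps.
    2. When the learner makes progress, acknowledge it briefly.
    3. If they are stuck, point to the block or concept to look at, not the answer.
    """

    def socratic_reply(client, history, user_message, model="gpt-4o-mini"):
        """Send the conversation with the Socratic system prompt prepended.
        `client` is any OpenAI-compatible chat client (an assumption for this sketch)."""
        messages = [
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_message},
        ]
        response = client.chat.completions.create(model=model, messages=messages)
        return response.choices[0].message.content

    # Example (assumes `client = OpenAI()` from the openai package):
    # print(socratic_reply(client, [], "How do I make the dude move?"))
    ```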

    33 minutes
  5. September 15

    Douwe Kiela on Why RAG Isn’t Dead

    Join our host Ben Lorica and Douwe Kiela, cofounder of Contextual AI and author of the first paper on RAG, to find out why RAG remains as relevant as ever. Regardless of what you call it, retrieval is at the heart of generative AI. Find out why—and how to build effective RAG-based systems.

    Points of Interest

    0:25: Today’s topic is RAG. With frontier models advertising massive context windows, many developers wonder if RAG is becoming obsolete. What’s your take?
    1:03: We now have a blog post: isragdeadyet.com. If something keeps getting pronounced dead, it will never die. These long-context models solve a similar problem to RAG: how to get the relevant information into the language model. But it’s wasteful to use the full context all the time. If you want to know who the headmaster is in Harry Potter, do you have to read all the books?
    2:04: What will probably work best is RAG plus long-context models. The real solution is to use RAG, find as much relevant information as you can, and put it into the language model. The dichotomy between RAG and long context isn’t a real thing.
    2:48: One of the main issues may be that RAG systems are annoying to build, and long-context systems are easy. But if you can make RAG easy too, it’s much more efficient.
    3:07: The reasoning models make it even worse in terms of cost and latency. And if you’re talking about something with a lot of usage, high repetition, it doesn’t make sense.
    3:39: You’ve been talking about RAG 2.0, which seems natural: emphasize systems over models. I’ve long warned people that RAG is a complicated system to build because there are so many knobs to turn. Few developers have the skills to systematically turn those knobs. Can you unpack what RAG 2.0 means for teams building AI applications?
    4:22: The language model is only a small part of a much bigger system. If the system doesn’t work, you can have an amazing language model and it’s not going to get the right answer. If you start from that observation, you can think of RAG as a system where all the model components can be optimized together.
    5:40: What you’re describing is similar to what other parts of AI are trying to do: an end-to-end system. How early in the pipeline does your vision start?
    6:07: We have two core concepts. One is a data store—that’s really extraction, where we do layout segmentation. We collate all of that information and chunk it, store it in the data store, and then the agents sit on top of the data store. The agents use a mixture of retrievers, followed by a reranker and a grounded language model.
    7:02: What about embeddings? Are they automatically chosen? If you go to Hugging Face, there are, like, 10,000 embedding models.
    7:15: We save you a lot of that effort. Opinionated orchestration is a way to think about it.
    7:31: Two years ago, when RAG started becoming mainstream, a lot of developers focused on chunking. We had rules of thumb and shared stories. This eliminates a lot of that trial and error.
    8:06: We basically have two APIs: one for ingestion and one for querying. Querying is contextualized on your data, which we’ve ingested.
    8:25: One thing that’s underestimated is document parsing. A lot of people overfocus on embedding and chunking. Try to find a PDF extraction library for Python. There are so many of them, and you can’t tell which ones are good. They’re all terrible.
    8:54: We have our stand-alone component APIs. Our document parser is available separately. Some areas, like finance, have extremely complex layouts. Nothing off the shelf works, so we had to roll our own solution. Since we know this will be used for RAG, we process the document to make it maximally useful. We don’t just extract raw information. We also extract the document hierarchy. That is extremely relevant as metadata when you’re doing retrieval.
    10:11: There are open source libraries—what drove you to build your own, which I assume also encompasses OCR?
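
    The pipeline described above (parse documents while keeping their hierarchy, chunk them into a data store, then retrieve, rerank, and generate a grounded answer) can be sketched roughly as follows. All names are illustrative stand-ins, not Contextual AI’s APIs; the keyword retriever and pass-through reranker mark where embeddings and a cross-encoder would normally go.

    ```python
    # Skeleton of an ingestion + querying pipeline: chunks keep document-hierarchy
    # metadata from parsing; querying runs retrieve -> rerank -> grounded prompt.
    # Everything here is an illustrative sketch, not any vendor's actual API.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Chunk:
        text: str
        hierarchy: List[str] = field(default_factory=list)  # e.g. ["Annual Report", "Item 7", "Liquidity"]

    def ingest(parsed_sections: Dict[str, str]) -> List[Chunk]:
        """Ingestion path: turn parsed sections (hierarchy path -> text) into chunks
        that keep their position in the document hierarchy as metadata."""
        return [Chunk(text=text, hierarchy=path.split(" > ")) for path, text in parsed_sections.items()]

    def retrieve(query: str, store: List[Chunk], k: int = 10) -> List[Chunk]:
        """First-stage retrieval; crude keyword matching stands in for embeddings."""
        terms = query.lower().split()
        return sorted(store, key=lambda c: sum(t in c.text.lower() for t in terms), reverse=True)[:k]

    def rerank(query: str, candidates: List[Chunk], k: int = 3) -> List[Chunk]:
        """Second-stage reranking; a real system would score with a cross-encoder."""
        return candidates[:k]

    def grounded_prompt(query: str, store: List[Chunk]) -> str:
        """Querying path: retrieve, rerank, then build a prompt grounded in the chunks,
        surfacing the hierarchy metadata alongside each passage."""
        top = rerank(query, retrieve(query, store))
        context = "\n".join(f"[{' > '.join(c.hierarchy)}] {c.text}" for c in top)
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    # Toy usage with a single parsed section (illustrative content only).
    store = ingest({"Annual Report > Item 7 > Liquidity": "Cash and equivalents were $1.2B at year end."})
    print(grounded_prompt("How much cash did the company have at year end?", store))
    ```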

    35 minutes
