Prompt engineering involves a lot more than simply getting smarter about how you structure the prompts you type into an LLM browser interface.
A growing body of peer-reviewed research now offers best practices for improving the accuracy and reliability of LLM outputs for the specific tasks we build systems around.
In this episode, Jake and David review evidence-based best practices for prompt engineering and, importantly, highlight what proper prompt engineering actually requires, and why most of us likely cannot call ourselves prompt engineers.
Information
- Frequency: Updated Semiweekly
- Published: July 20, 2025 at 7:01 PM UTC
- Length: 1h 8m
- Season: 2
- Episode: 20
- Rating: Clean