Grounded Truth

Watchful

As more organizations adopt AI, we emerge from the wild west of training black-box machine learning models and placing AI into production to "see what happens"! With the emergence of purpose-built AI solutions, such as Watchful (insert shameless plug), we begin to demystify the deployment of AI. The Grounded Truth Podcast, hosted by John Singleton (co-founder of Watchful), gathers some of the world's most influential data scientists, machine learning practitioners, and innovation leaders for discussions on how we can accelerate our understanding, development, and implementation of cutting-edge machine learning applications in both academia and enterprise.

  1. Challenges and Shifts Required for Placing Generative AI into Production

    14/12/2023

    Challenges and Shifts Required for Placing Generative AI into Production

    In this episode of "Grounded Truth," we dive into the world of generative AI and the complexities of placing it into production. Our special guests for this episode are Manasi Vartak, Founder and CEO of Verta, and Shayan Mohanty, Co-founder and CEO of Watchful.

    🌐 Verta: Empowering Gen AI Application Development - Learn about Verta's end-to-end automation platform for Gen AI application development. Explore how Verta's expertise in model management and serving has evolved to address the challenges of scaling and managing Gen AI models. Visit www.verta.ai for more insights.

    🚀 Evolution in the AI Landscape - Discover the tectonic shift in the AI landscape over the past year, marked by the release of ChatGPT and the rise of Gen AI. Manasi shares insights into how Gen AI has democratized AI, making it a focal point in boardrooms and team discussions.

    🤔 Challenges in Gen AI Application Production - Uncover the challenges and changes in workflow when transitioning from classical ML model development to Gen AI application production. Manasi provides valuable insights into the business hunger for AI and the increasing demand for data science resources.

    🌟 What's Changed Since ChatGPT's Release? - Reflect on the transformative impact of ChatGPT and how it has influenced the priorities of data science leaders and organizations.

    🔮 Predictions for the AI Industry in 2024 - Listen as Manasi and Shayan share their predictions for the AI industry in 2024. Gain valuable insights into the trends and advancements that will shape the landscape.

    38 min
  2. Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows

    11/12/2023

    Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows

    🎙️ RAG vs. Fine Tuning - Dive into the latest episode of "Grounded Truth," hosted by John Singleton, as he discusses "Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows" with Emmanuel Turlay, Founder & CEO of Sematic and Airtrain.ai, and Shayan Mohanty, Co-founder & CEO of Watchful.

    🤖 RAG: Retrieval Augmented Generation - RAG involves putting content inside the prompt/context window to make models aware of recent events, private information, or company documents. The process includes retrieving the most relevant information from sources like Bing, Google, or internal databases, feeding it into the model's context window, and generating user-specific responses. Ideal for ensuring factual answers by extracting data from a specified context.

    ⚙️ Fine Tuning - Fine tuning entails training models for additional epochs on more data, allowing customization of the model's behavior, tone, or output format. It is used to make models act in specific ways, such as speaking like a lawyer or adopting the tone of Harry Potter. Unlike RAG, it focuses more on the form and tone of the output than on knowledge augmentation.

    🤔 Decision Dilemma: RAG or Fine Tuning? - Emmanuel highlights the misconception that fine tuning injects new knowledge, emphasizing its role in shaping the output according to user needs. RAG is preferred for factual answers, as it extracts information directly from a specified context, ensuring higher accuracy. Fine tuning, on the other hand, is more about customizing the form and tone of the output.

    🔄 The Verdict: A Balanced Approach? - It's not a one-size-fits-all decision. The choice between RAG and fine tuning depends on the specific use case. Evaluating the decision involves understanding the goal: knowledge augmentation (RAG) or customization of form and tone (fine tuning). Perhaps a balanced approach, leveraging both techniques based on the desired outcomes.

    AirTrain YouTube Channel: https://www.youtube.com/@AirtrainAI
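    The retrieve-then-stuff-the-context-window loop described in the episode can be sketched end to end. This is a toy illustration, not the guests' implementation: the bag-of-words `embed` function stands in for a real embedding model or search backend, and `build_prompt` is a hypothetical helper showing how retrieved context lands in the prompt.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words vector; a real system would use a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Feed the retrieved context into the model's context window, then ask."""
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Stand-ins for "company documents" the base model has never seen.
docs = [
    "The Q3 revenue report shows a 12% increase over Q2.",
    "Our vacation policy grants 20 days of paid leave per year.",
]
prompt = build_prompt("How many vacation days do employees get?", docs)
```

    The resulting `prompt` would then go to any LLM; the model answers from the supplied context rather than from whatever its training data happened to contain, which is exactly the factual-grounding property the episode attributes to RAG.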

    35 min
  3. Decoding LLM Uncertainties for Better Predictability

    09/11/2023

    Decoding LLM Uncertainties for Better Predictability

    Welcome to another riveting episode of "Grounded Truth"! In this episode, your host John Singleton, co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs). In Watchful's journey through language model exploration, we've uncovered fascinating insights into putting the "engineering" back into prompt engineering. Our latest research focuses on introducing meaningful observability metrics to enhance our understanding of language models.

    If you'd like to explore on your own, feel free to play with a demo here: https://uncertainty.demos.watchful.io/
    The repo can be found here: https://github.com/Watchfulio/uncertainty-demo

    💡 What to expect in this episode:
    - A recap of our last exploration, where we unveiled the role of perceived ambiguity in LLM prompts and its alignment with the "ground truth."
    - Introduction of two critical measures: Structural Uncertainty (using normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances).
    - Why these measures matter: assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.

    🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.

    35 min
  4. Is Data Labeling Dead?

    11/08/2023

    Is Data Labeling Dead?

    Dive into the thought-provoking world of data labeling in this episode of the Grounded Truth podcast - "Is Data Labeling Dead?". Hosted by John Singleton and featuring Shayan Mohanty, co-founder and CEO of Watchful, this episode offers a captivating discussion on the changing landscape of data labeling and its intricate relationship with the rise of large language models (LLMs).

    Uncover the historical journey of data labeling, from its early manual stages to the advent of in-house solutions and automation. Delve into the pivotal question: is traditional data labeling becoming obsolete due to the capabilities of LLMs like GPT-3? While the title suggests a binary perspective, the podcast presents a nuanced exploration, showcasing the evolving nature of data labeling. Discover how LLMs have revolutionized the handling of low-context tasks like sentiment analysis and categorization, reshaping the demand for conventional data labeling services.

    However, the conversation goes beyond absolutes, shedding light on the transformation of data labeling rather than its demise. This episode of the Grounded Truth podcast underscores that data labeling is far from dead; it is evolving to accommodate the dynamic interplay between LLMs and labeling practices. While LLMs handle routine tasks efficiently, data labeling is pivoting towards high-context labeling, specialized needs, and optimizing workflows for sophisticated model development. Explore the captivating journey of data labeling in this episode, where tradition meets innovation and adaptation guides the way forward.

    41 min
  5. Leveraging Machine Teaching to Build Autonomous Agents

    17/07/2023

    Leveraging Machine Teaching to Build Autonomous Agents

    Welcome to "Grounded Truth," the podcast where we bring together influential data scientists, machine learning practitioners, and innovation leaders to discuss the most relevant topics in AI. I'm John Singleton, your host, and I'm thrilled to be joined by Kence Anderson, CEO and co-founder of Composabl (https://composabl.ai/), a platform for building autonomous intelligent agents. Kence is a seasoned entrepreneur with a wealth of experience in the AI field, including his work at Microsoft as a principal program manager for AI and research machine teaching innovation.

    In this episode, we delve into the fascinating world of autonomous agents and their impact on various industries. From autonomous driving to industrial automation, we explore the challenges and advancements in human-like decision-making. Composabl's mission is to empower individuals without deep AI expertise to create intelligent agents using modular building blocks and their own subject matter expertise. Through the concept of machine teaching, users can train agents to make real-time decisions, whether it's controlling a drone or a bulldozer, or even optimizing virtual processes in factories or logistics.

    Joining us as well is Shayan Mohanty, co-founder and CEO of Watchful, the machine teaching platform for data-centric AI. Together, we discuss the distinct roles of perception and action in AI and how they differ. While perception involves perceiving and understanding the environment, action focuses on making decisions and taking appropriate steps. We explore the significance of supervised learning in perception tasks like computer vision and prediction, as well as its limitations in driving actionable outcomes.

    To learn more from Kence Anderson, be sure to check out his latest publication, "Designing Autonomous AI: A Guide for Machine Teaching," available on O'Reilly (https://www.oreilly.com/library/view/designing-autonomous-ai/9781098110741/). Don't forget to like, subscribe, and follow us on Apple Podcasts, Spotify, YouTube, and other podcast platforms.

    Reference: "Dreyfus model of skill acquisition" - https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition

    55 min
