17 episodes

As more organizations adopt AI, we emerge from the wild west of training black-box machine learning models and placing AI into production to "see what happens"! With the emergence of purpose-built AI solutions such as Watchful (insert shameless plug), we begin to demystify the deployment of AI. The Grounded Truth Podcast, hosted by John Singleton (co-founder of Watchful), gathers some of the world's most influential data scientists, machine learning practitioners, and innovation leaders for discussions on how we can accelerate our understanding, development, and implementation of cutting-edge machine learning applications in both academia and enterprise.

Grounded Truth Watchful

    • Technology

    The Future of AI: Doom or Boom? Featuring the Host of The AI FYI Podcast

    Welcome to the "Grounded Truth Podcast," where we bring together some of the brightest minds in AI to explore the most pressing topics shaping our future. Our latest episode, "The Future of AI: Doom or Boom?"—promises to be a riveting discussion.

    Joining host John Singleton, Co-founder and Head of Success at Watchful, are Shayan Mohanty, CEO and Co-founder of Watchful, and the hosts of the "AI FYI" podcast: Andy Butkovic, Joe Cloughley, and Kiran Vajapey. Together, we'll delve into the fascinating world of AI, covering a wide range of topics:

    • LLM adoption

    • AI ethics and cultural impact

    • AI's transformative effect on traditional industries

    • The rapid pace of AI's technological advancement

    Whether you're a seasoned AI expert or simply curious about its impact, this episode promises something for everyone.

    Learn more about The AI FYI Podcast by visiting: http://www.aifyipod.com.

    • 42 min
    Challenges and Shifts Required for Placing Generative AI into Production

    In this episode of "Grounded Truth," we dive into the world of Generative AI and the complexities of placing it into production. Our special guests for this episode are Manasi Vartak, Founder and CEO of Verta, and Shayan Mohanty, Co-founder and CEO of Watchful.

    🌐 Verta: Empowering Gen AI Application Development

    * Learn about Verta's end-to-end automation platform for Gen AI application development.
    * Explore how Verta's expertise in model management and serving has evolved to address the challenges of scaling and managing Gen AI models.
    * Visit http://www.verta.ai for more insights.

    🚀 Evolution in the AI Landscape

    * Discover the tectonic shift in the AI landscape over the past year, marked by the release of ChatGPT and the rise of Gen AI.
    * Manasi shares insights into how Gen AI has democratized AI, making it a focal point in boardrooms and team discussions.

    🤔 Challenges in Gen AI Application Production

    * Uncover the challenges and changes in workflow when transitioning from classical ML model development to Gen AI application production.
    * Manasi provides valuable insights into the business hunger for AI and the increasing demand for data science resources.

    🌟 What's Changed Since ChatGPT's Release?

    * Reflect on the transformative impact of ChatGPT and how it has influenced the priorities of data science leaders and organizations.

    🔮 Predictions for the AI Industry in 2024

    * Listen as Manasi and Shayan share their predictions on the future of the AI industry in 2024. Gain valuable insights into the trends and advancements that will shape the landscape.

    • 37 min
    Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows

    🎙️ RAG vs. Fine Tuning - Dive into the latest episode of "Grounded Truth" hosted by John Singleton as he discusses "Retrieval Augmented Generation (RAG) versus Fine Tuning in LLM Workflows" with Emmanuel Turlay, Founder & CEO of Sematic and Airtrain.ai, and Shayan Mohanty, Co-founder & CEO of Watchful.

    🤖 RAG: Retrieval Augmented Generation - RAG involves putting content inside the prompt/context window to make models aware of recent events, private information, or company documents.

    The process includes retrieving the most relevant information from sources like Bing, Google, or internal databases, feeding it into the model's context window, and generating user-specific responses.

    Ideal for ensuring factual answers by extracting data from a specified context.
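
    To make that flow concrete, here is a minimal, self-contained sketch of the retrieve-then-generate loop described above: score documents against the query, place the top hit in the context window, and build the prompt. The toy documents, the keyword-overlap scorer, and the `call_llm` placeholder are illustrative stand-ins, not anything taken from the episode.

```python
# A minimal, self-contained sketch of the RAG flow described above.
# The documents, the keyword-overlap scorer, and `call_llm` are
# hypothetical stand-ins for a real corpus, vector search, and LLM API.
from typing import List

documents = [
    "Watchful published research on prompt observability metrics.",
    "The office cafeteria is closed on Fridays.",
    "Verta provides model management and serving tooling.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: number of shared lowercase tokens.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    # Pick the k most relevant documents for this query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: List[str]) -> str:
    # The retrieved text goes straight into the prompt/context window.
    context_block = "\n".join(context)
    return (
        f"Answer using only this context:\n{context_block}\n\n"
        f"Question: {query}"
    )

query = "What tooling does Verta provide?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)
# answer = call_llm(prompt)  # hypothetical call to your LLM of choice
```

    In a real pipeline the scorer would be a vector search over embeddings and `call_llm` would be whatever model API you use; the shape of the flow stays the same.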

    ⚙️ Fine Tuning - Fine tuning entails training models for additional epochs on more data, allowing customization of the model's behavior, tone, or output format.

    Used to make models act in specific ways, such as speaking like a lawyer or adopting the tone of Harry Potter.

    Unlike RAG, it focuses more on the form and tone of the output rather than knowledge augmentation.
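
    As a rough illustration of "training for additional epochs on more data," here is a minimal fine-tuning sketch using Hugging Face transformers and PyTorch. The `gpt2` checkpoint and the two lawyer-toned examples are placeholders chosen only for illustration; they are not from the episode.

```python
# A minimal fine-tuning sketch: a few extra epochs over examples that
# shape the model's tone and format, not its knowledge. The checkpoint
# and the tiny dataset are illustrative placeholders.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tone/format examples, e.g. "speak like a lawyer".
examples = [
    "Q: Summarize the contract.\nA: Pursuant to the foregoing, the parties agree...",
    "Q: Explain the delay.\nA: Notwithstanding prior notice, the obligation remains...",
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # "additional epochs on more data"
    for text in examples:
        batch = tokenizer(text, return_tensors="pt", padding=True)
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

    Note that the training examples here influence how the model answers (tone, format), which is exactly the form-over-knowledge distinction drawn above.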

    🤔 Decision Dilemma: RAG or Fine Tuning?

    Emmanuel highlights the misconception that fine tuning injects new knowledge, emphasizing its role in shaping the output according to user needs.

    RAG is preferred for factual answers, as it extracts information directly from a specified context, ensuring higher accuracy.

    Fine tuning, on the other hand, is more about customizing the form and tone of the output.

    🔄 The Verdict: A Balanced Approach?

    It's not a one-size-fits-all decision. The choice between RAG and fine tuning depends on specific use cases.

    Evaluating the decision involves understanding the goals: knowledge augmentation (RAG) or customization of form and tone (Fine Tuning).

    Perhaps a balanced approach, leveraging both techniques based on the desired outcomes.

    AirTrain YouTube Channel: https://www.youtube.com/@AirtrainAI

    • 34 min
    Decoding LLM Uncertainties for Better Predictability

    Welcome to another riveting episode of "Grounded Truth"!  In this episode, your host John Singleton, co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs).

    In Watchful's journey through language model exploration, we've uncovered fascinating insights into putting the "engineering" back into prompt engineering. Our latest research focuses on introducing meaningful observability metrics to enhance our understanding of language models. If you'd like to explore on your own, feel free to play with the demo here: https://uncertainty.demos.watchful.io/ and find the repo here: https://github.com/Watchfulio/uncertainty-demo

    💡 What to expect in this episode:  

    -  Recap of our last exploration, where we unveiled the role of perceived ambiguity in LLM prompts and its alignment with the "ground truth."  

    - Introduction of two critical measures: Structural Uncertainty (using normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances); a rough sketch of both measures follows this list.

    - Why these measures matter: Assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.  
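
    For a concrete handle on the two measures above, here is a rough sketch of one way to compute them: normalized entropy over a token distribution for Structural Uncertainty, and mean pairwise cosine distance between embeddings of repeated completions for Conceptual Uncertainty. This is an illustrative approximation with toy data, not Watchful's implementation (see the demo and repo linked above for that).

```python
# Rough sketch of the two uncertainty measures named above, on toy data.
import numpy as np

def normalized_entropy(probs: np.ndarray) -> float:
    # Entropy of a token distribution, scaled to [0, 1] by log(vocab size).
    probs = probs / probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return float(entropy / np.log(len(probs)))

def mean_cosine_distance(embeddings: np.ndarray) -> float:
    # Average pairwise cosine distance between completion embeddings:
    # low values suggest the model keeps saying the same thing.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    upper = sims[np.triu_indices(len(embeddings), k=1)]
    return float(np.mean(1.0 - upper))

token_probs = np.array([0.7, 0.1, 0.1, 0.05, 0.05])  # one decoding step
completions = np.random.rand(5, 384)                  # 5 sampled responses
print("structural uncertainty:", normalized_entropy(token_probs))
print("conceptual uncertainty:", mean_cosine_distance(completions))
```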

    🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.  

    • 34 min
    A Surprisingly Effective Way to Estimate Token Importance in LLM Prompts

    Welcome to another captivating episode of "Grounded Truth." Today, our host, John Singleton, engages in a deep dive into the world of prompt engineering, interpretability in closed-source LLMs, and innovative techniques to enhance transparency in AI models.

    Joining us as a special guest is Shayan Mohanty, the visionary CEO and co-founder of Watchful. Shayan brings to the table his latest groundbreaking research, which centers around a remarkable free tool designed to elevate the transparency of prompts used with large language models.

    In this episode, we'll explore Shayan's research, including:

    🔍 Estimating token importances in prompts for powerhouse language models like ChatGPT.

    🧠 Transitioning from the art to the science of prompt crafting.

    📊 Uncovering the crucial link between model embeddings and interpretations.

    💡 Discovering intriguing insights through comparisons of various model embeddings.

    🚀 Harnessing the potential of embedding quality to influence model output.

    🌟 Taking the initial strides towards the automation of prompt engineering.

    To witness the real impact of Shayan's research, don't miss the opportunity to experience a live demo at https://heatmap.demos.watchful.io/.
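
    To give a flavor of what estimating token importance can look like, here is a hedged, leave-one-token-out sketch: drop each token, re-embed the prompt, and treat the cosine shift from the full prompt's embedding as that token's importance score. This illustrates the general ablation idea only; it is not Shayan's exact method, and the hashed bag-of-words `embed` function is just a stand-in for a real embedding model.

```python
# Leave-one-token-out importance sketch. `embed` is a toy placeholder;
# swap in a real embedding model for meaningful scores.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hashed bag-of-words.
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok) % 256] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def token_importance(prompt: str) -> dict:
    # Importance of each token = embedding shift when that token is removed.
    tokens = prompt.split()
    base = embed(prompt)
    scores = {}
    for i, tok in enumerate(tokens):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        scores[tok] = 1.0 - cosine(base, embed(ablated))
    return scores

print(token_importance("Classify the sentiment of this customer review"))
```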

    • 27 min
    Is Data Labeling Dead?

    Dive into the thought-provoking world of data labeling in this episode of the Grounded Truth podcast - "Is Data Labeling Dead?". Hosted by John Singleton and featuring Shayan Mohanty, co-founder and CEO of Watchful, this episode offers a captivating discussion on the changing landscape of data labeling and its intricate relationship with the rise of large language models (LLMs).

    Uncover the historical journey of data labeling, from its early manual stages to the advent of in-house solutions and automation. Delve into the pivotal question: is traditional data labeling becoming obsolete due to the capabilities of LLMs like GPT-3? While the title suggests a binary perspective, the podcast presents a nuanced exploration, showcasing the evolving nature of data labeling.

    Discover how LLMs have revolutionized the handling of low-context tasks like sentiment analysis and categorization, reshaping the demand for conventional data labeling services. However, the conversation goes beyond absolutes, shedding light on the transformation of data labeling rather than its demise.

    This episode of the Grounded Truth podcast underscores that data labeling is far from dead; it is evolving to accommodate the dynamic interplay between LLMs and labeling practices. While LLMs handle routine tasks efficiently, data labeling is pivoting towards high-context labeling, specialized needs, and optimizing workflows for sophisticated model development. Explore the captivating journey of data labeling in this episode, where tradition meets innovation and adaptation guides the way forward.

    • 41 min
