One person, one interview, one story. Join us as we explore the impact of AI on our world, one amazing person at a time -- from the wildlife biologist tracking endangered rhinos across the savannah here on Earth to astrophysicists analyzing 10 billion-year-old starlight in distant galaxies to the Walmart data scientist grappling with the hundreds of millions of parameters lurking in the retailer’s supply chain. Every two weeks, we’ll bring you another tale, another 25-minute interview, as we build a real-time oral history of AI that’s already garnered nearly 3.4 million listens and been acclaimed as one of the best AI and machine learning podcasts. Listen in and get inspired.
How the Ohio Supercomputer Center Drives the Future of Computing - Ep. 213
NASCAR races are all about speed, but even the fastest cars need to factor in safety, especially as rules and tracks change. The Ohio Supercomputer Center is ready to help. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Alan Chalker, the director of strategic programs at the OSC, about all things supercomputing. The center’s Open OnDemand program, a web-based interface, gives Ohio higher education institutions and industries accessible, reliable and secure computational services, along with training and educational programs. Chalker dives into the history and evolution of the OSC and explains how it’s working with client companies like NASCAR, which is simulating race car designs virtually. Tune in to learn more about Chalker’s outlook on the future of supercomputing and the OSC’s role in realizing it.
Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI - Ep. 212
Here’s some news for still-beating hearts: AI is helping bring some clarity to cardiology. Caristo Diagnostics has developed an AI-powered solution for detecting coronary inflammation in cardiac CT scans. In this episode of NVIDIA’s AI Podcast, Dr. Keith Channon, cofounder and chief medical officer at the startup, speaks with host Noah Kravitz about the technology. Called Caristo, it analyzes radiometric features in CT scan data to identify inflammation in the fat tissue surrounding coronary arteries, a key indicator of heart disease. Tune in to learn more about how Caristo uses AI to improve treatment plans and risk predictions by providing physicians with a patient-specific readout of inflammation levels.
DigitalPath's Ethan Higgins On Using AI to Fight Wildfires - Ep. 211
DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time.
In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.
DigitalPath built computer vision models to process images collected from network cameras — anywhere from 8 million to 16 million a day — intelligently identifying signs of fire like smoke.
“One of the things we realized early on, though, is that it’s not necessarily a problem about just detecting a fire in a picture,” Higgins said. “It’s a process of making a manageable amount of data to handle.”
That’s because, he explained, it’s unlikely that humans will be entirely out of the loop in the detection process for the foreseeable future.
The company uses various AI algorithms to classify images based on whether they should be reviewed or acted upon — when action is needed, an alert is sent to a CAL FIRE command center.
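The triage idea described above — winnowing millions of frames down to a manageable set of review items and alerts — can be sketched as a simple thresholding loop. Everything here is illustrative: the stub classifier, the score field and both thresholds are assumptions, not DigitalPath's actual models or cutoffs.

```python
# Illustrative triage loop over camera frames. The classifier is a
# stand-in stub; DigitalPath's real models and thresholds are not
# described in the episode notes.

REVIEW_THRESHOLD = 0.5   # hypothetical: queue frame for human review
ALERT_THRESHOLD = 0.9    # hypothetical: notify a command center

def classify_smoke(frame):
    """Stand-in for a computer-vision model returning P(smoke)."""
    return frame.get("smoke_score", 0.0)

def triage(frames):
    """Reduce a flood of frames to a manageable amount of data."""
    alerts, review_queue = [], []
    for frame in frames:
        score = classify_smoke(frame)
        if score >= ALERT_THRESHOLD:
            alerts.append(frame["camera_id"])
        elif score >= REVIEW_THRESHOLD:
            review_queue.append(frame["camera_id"])
    return alerts, review_queue

frames = [
    {"camera_id": "cam-001", "smoke_score": 0.95},
    {"camera_id": "cam-002", "smoke_score": 0.60},
    {"camera_id": "cam-003", "smoke_score": 0.10},
]
alerts, review_queue = triage(frames)
```

The two-tier split mirrors Higgins' point that humans stay in the loop: only high-confidence detections trigger alerts, while borderline frames are routed to reviewers.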
There are some downsides to using computer vision to detect wildfires — namely, that extinguishing more fires means a greater buildup of natural fuel and the potential for larger wildfires in the long term. DigitalPath, along with UCSD, is exploring the use of high-resolution lidar data to identify where that fuel can be reduced through prescribed burns.
Looking ahead, Higgins foresees the field tapping generative AI to accelerate new simulation tools — as well as using AI models to analyze the output of other models to doubly improve wildfire prediction and detection.
“AI is not perfect, but when you couple multiple models together, it can get really close,” he said.
The Case for Generative AI in the Legal Field - Ep. 210
Thomson Reuters, the global content and technology company, is transforming the legal industry with generative AI.
In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Thomson Reuters’ Chief Product Officer David Wong about its potential — and implications.
Many of Thomson Reuters’ offerings for the legal industry either address an information retrieval problem or help generate written content.
It has an AI-driven digital solution that enables law practitioners to intelligently search laws and cases across different jurisdictions. It also provides AI-powered tools, set to be integrated with commonly used products like Microsoft 365, that automate the time-consuming processes of drafting and analyzing legal documents.
These technologies increase the productivity of legal professionals, enabling them to focus their time on higher value work. According to Wong, ultimately these tools also have the potential to help deliver better access to justice.
To address ethical concerns, the company has created publicly available AI development guidelines, as well as privacy and data protection policies. And it’s participating in the drafting of ethical guidelines for the industries it serves.
There’s still a wide range of reactions surrounding AI use in the legal field, from optimism about its potential to fears of job replacement. But Wong underscored that no matter what the outlook, “it is very likely that professionals that use AI are going to replace professionals that don’t use AI.”
Looking ahead, Thomson Reuters aims to further integrate generative AI, as well as retrieval-augmented generation techniques, into its flagship research products to help lawyers synthesize, read and respond to complicated technical and legal questions. Recently, Thomson Reuters acquired Casetext, which developed the first AI legal assistant, CoCounsel. In 2024, Thomson Reuters is building on this with the launch of an AI assistant that will serve as the interface across its generative AI-capable products, including those in other fields such as tax and accounting.
Wayve CEO Alex Kendall on Making a Splash in Autonomous Vehicles - Ep. 209
A new era of autonomous vehicle technology, known as AV 2.0, has emerged, marked by large, unified AI models that can control multiple parts of the vehicle stack, from perception and planning to control.
Wayve, a London-based autonomous driving technology company and a member of NVIDIA's startup accelerator program, is riding the crest of this wave.
In the latest episode of NVIDIA’s AI Podcast, host Katie Burke Washabaugh spoke with the company’s cofounder and CEO, Alex Kendall, about what AV 2.0 means for the future of self-driving cars.
Unlike AV 1.0’s focus on perfecting a vehicle’s perception capabilities using multiple deep neural networks, AV 2.0 calls for comprehensive in-vehicle intelligence to drive decision-making in real-world, dynamic environments.
Embodied AI — the concept of giving AI a physical interface to interact with the world — is the basis of this new AV wave.
Kendall pointed out that it’s a “hardware/software problem — you need to consider these things separately,” even as they work together. For example, a vehicle can have the highest-quality sensors, but without the right software, the system can’t use them to execute the right decisions.
Generative AI plays a key role, enabling synthetic data generation so AV makers can use a model’s previous experiences to create and simulate novel driving scenarios.
It can “take crowds of pedestrians and snow and bring them together” to “create a snowy, crowded pedestrian scene” that the vehicle has never experienced before.
According to Kendall, that will “play a huge role in both learning and validating the level of performance that we need to deploy these vehicles safely” — all while saving time and costs.
In June, Wayve unveiled GAIA-1, a generative world model for developing autonomous vehicles.
The company also recently announced LINGO-1, an AI model that allows passengers to use natural language to enhance the learning and explainability of AI driving models.
Looking ahead, the company hopes to scale and further develop its solutions, improving the safety of AVs to deliver value, build public trust and meet customer expectations.
Kendall views embodied AI as playing a definitive role in the future of the AI landscape, pushing pioneers to “build better” and “build further” to achieve the “next big breakthroughs.”
For more on NVIDIA's Inception startup accelerator program, visit https://www.nvidia.com/en-us/startups/
NVIDIA’s Annamalai Chockalingam on the Rise of LLMs - Ep. 206
Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.”
In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential.
LLMs are a “subset of the larger generative AI movement” that deals with language. They’re deep learning algorithms that can recognize, summarize, translate, predict and generate language.
AI has been around for a while, but according to Chockalingam, three key factors enabled LLMs.
One is the availability of large-scale data sets to train models with. As more people used the internet, more data became available for use. The second is the development of computer infrastructure, which has become advanced enough to handle “mountains of data” in a “reasonable timeframe.” And the third is advancements in AI algorithms, allowing for non-sequential or parallel processing of large data pools.
LLMs can do five things with language: generate, summarize, translate, instruct or chat. With a combination of “these modalities and actions, you can build applications” to solve any problem, Chockalingam said.
Enterprises are tapping LLMs to “drive innovation,” “develop new customer experiences,” and gain a “competitive advantage.” They’re also exploring what safe deployment of those models looks like, aiming to achieve responsible development, trustworthiness and repeatability.
New techniques like retrieval-augmented generation (RAG) could boost LLM development. RAG involves feeding models up-to-date “data sources or third-party APIs” to achieve “more appropriate responses” — granting them current context so that they can “generate better” answers.
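The RAG pattern described above can be illustrated in a few lines: retrieve the most relevant documents for a query, then prepend them as context before handing the prompt to an LLM. This is a minimal sketch under stated assumptions — the keyword-overlap retriever, the sample documents and the prompt template are all hypothetical stand-ins, not any specific vendor's API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names and the toy retriever below are illustrative.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model can ground its answer."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The store opens at 9 a.m. on weekdays.",
    "Returns are accepted within 30 days with a receipt.",
    "Gift cards never expire.",
]
prompt = build_prompt("When does the store open?", docs)
```

In a production system the keyword retriever would typically be replaced by vector search over embeddings, but the shape is the same: current context goes into the prompt so the model can "generate better" answers.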
Chockalingam encourages those interested in LLMs to “get your hands dirty and get started” — whether that means using popular applications like ChatGPT or playing with pretrained models in the NVIDIA NGC catalog.
NVIDIA offers a full-stack computing platform for developers and enterprises experimenting with LLMs, with an ecosystem of over 4 million developers and 1,600 generative AI organizations. To learn more, register for LLM Developer Day on Nov. 17 to hear from NVIDIA experts about how best to develop applications.
One of the best podcasts around to learn about AI and deep learning.
Great Podcasts - highly Informative
These podcasts are a wealth of information, and they're really well crafted to make it easy for the average listener to understand.
The interview technique is very cool: the host asks the inventors and researchers, in layman's terms, about various techniques, how they're being applied to our daily lives and what they see coming in the near future.
It's not NVIDIA-specific, so if you're interested in what's going on, how things are progressing and at what speed, it's well worth a listen, because this is affecting our future now.
Educational meets interesting