85 episodes

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?

Learning Machines 101 Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.

    • Technology
    • 3.8 • 4 Ratings

    LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes

This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in the space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. A machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to study statistical machine learning, then we must be able to discuss how to represent and compute the probability of an environmental event. It is essential that we have methods for communicating probability concepts to other researchers, methods for calculating probabilities, and methods for calculating the expectation of specific environmental events. This episode discusses the challenges of assigning probabilities to events when we allow for events comprised of an infinite number of outcomes. Along the way we introduce essential concepts for representing and computing probabilities using measure-theoretic tools such as sigma-fields and the Radon-Nikodym probability density function. Near the end we also briefly discuss the intriguing Banach-Tarski paradox and how it motivates the development of some of these special mathematical tools. Check out: www.learningmachines101.com and www.statisticalmachinelearning.com for more information!!!
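As a concrete illustration of the density idea mentioned above: when an event contains infinitely many outcomes, its probability comes from integrating a Radon-Nikodym density over the event rather than summing outcome probabilities. The sketch below is my own minimal example (not from the episode), integrating a Gaussian density over an interval numerically:

```python
import math

def gaussian_density(x, mu=0.0, sigma=1.0):
    # Radon-Nikodym density of a Gaussian probability measure
    # with respect to the Lebesgue measure on the real line.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def event_probability(a, b, density, n=20_000):
    # P(a <= X <= b): integrate the density over the event [a, b]
    # using the midpoint rule with n subintervals.
    width = (b - a) / n
    return sum(density(a + (i + 0.5) * width) for i in range(n)) * width

# Probability that a standard Gaussian outcome lands within one
# standard deviation of the mean (roughly 0.6827).
p = event_probability(-1.0, 1.0, gaussian_density)
```

Note that any single outcome (a single point) is an event of probability zero here; only sets of outcomes with nonzero measure carry probability mass.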

    • 35 min
LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges

This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, it covers a broad class of unsupervised, supervised, and reinforcement machine learning algorithms that iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process is repeated until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called the “stepsize” or “learning rate” and the identity of the perturbation vector is called the “search direction”. Simple mathematical formulas are presented, based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk, that ensure convergence of the generated sequence of parameter vectors. These formulas may be used as the basis for the design of smart automatic learning-rate selection algorithms. The material in this podcast is designed to provide an overview of Chapter 7 of my new book “Statistical Machine Learning” and is based upon material originally presented in Episode 68 of Learning Machines 101! Check out: www.learningmachines101.com for the show notes!!!
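The stepsize conditions discussed here can be illustrated with a backtracking line search that enforces the sufficient-decrease (Armijo) part of the Wolfe conditions at each batch learning iteration. This is a minimal sketch of the idea, not code from the book:

```python
import numpy as np

def armijo_stepsize(f, grad, x, d, alpha0=1.0, c=1e-4, rho=0.5):
    # Backtracking line search: shrink the stepsize until the
    # sufficient-decrease (Armijo) condition holds:
    #   f(x + a*d) <= f(x) + c * a * grad(x).d
    alpha, fx, slope = alpha0, f(x), c * (grad(x) @ d)
    while f(x + alpha * d) > fx + alpha * slope:
        alpha *= rho
    return alpha

def batch_gradient_descent(f, grad, x, tol=1e-6, max_iters=1000):
    # Each iteration perturbs the parameter vector along the negative
    # gradient (the "search direction") using an automatically chosen
    # stepsize, until the gradient norm is small.
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        x = x + armijo_stepsize(f, grad, x, d) * d
    return x

# Smooth non-convex objective with minima at (+1, 0) and (-1, 0),
# each attaining the value -1.
f = lambda x: x[0]**4 - 2*x[0]**2 + x[1]**2
grad = lambda x: np.array([4*x[0]**3 - 4*x[0], 2*x[1]])
x_star = batch_gradient_descent(f, grad, np.array([1.5, 1.0]))
```

Backtracking guarantees descent at every iteration, so the generated sequence of objective values converges even though the objective is non-convex; which of the two minimizers is reached depends on the trajectory.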
     

    • 30 min
    LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems

In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning”, which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, their behavior can be analyzed even when they are highly nonlinear and high-dimensional. Learn more by visiting: www.learningmachines101.com and www.statisticalmachinelearning.com .
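One standard way to view a learning algorithm as a dynamical system, sketched below under the simplifying assumption of a gradient system: the objective function acts as a Lyapunov function whose value never increases along trajectories when the stepsize is small enough, which is what licenses conclusions about the system's asymptotic behavior.

```python
import numpy as np

def simulate(V, gradV, x0, gamma=0.05, steps=200):
    # Discrete-time dynamical system: x[t+1] = x[t] - gamma * gradV(x[t]).
    # For small enough gamma, V serves as a Lyapunov function:
    # its value is non-increasing along every trajectory.
    xs, vs = [np.asarray(x0, float)], [V(x0)]
    for _ in range(steps):
        xs.append(xs[-1] - gamma * gradV(xs[-1]))
        vs.append(V(xs[-1]))
    return xs, vs

V = lambda x: x[0]**2 + 10 * x[1]**2          # candidate Lyapunov function
gradV = lambda x: np.array([2 * x[0], 20 * x[1]])
xs, vs = simulate(V, gradV, [2.0, 1.0])

# Verify the Lyapunov (descent) property along the whole trajectory.
monotone = all(v1 <= v0 + 1e-12 for v0, v1 in zip(vs, vs[1:]))
```

If `gamma` were too large relative to the curvature of `V`, the descent property would fail, which is exactly the kind of condition the dynamical-systems analysis makes precise.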

    • 33 min
    How to Use Calculus to Design Learning Machines

    This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework” which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines with lots of examples. We discuss how to use the matrix chain rule for deriving deep learning descent algorithms and how it is relevant to software implementations of deep learning algorithms.  We also discuss how matrix Taylor series expansions are relevant to machine learning algorithm design and the analysis of generalization performance!!
    For additional details check out: www.learningmachines101.com and www.statisticalmachinelearning.com
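As a rough illustration of the matrix chain rule in action (a hypothetical two-layer network of my own, not an example taken from the chapter), each parameter matrix's gradient is obtained by propagating the error signal backwards through the layers:

```python
import numpy as np

# Two-layer network: yhat = W2 @ tanh(W1 @ x), loss = 0.5 * ||yhat - y||^2.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x, y = rng.normal(size=(2, 1)), np.array([[1.0]])

def loss_and_grads(W1, W2, x, y):
    h = np.tanh(W1 @ x)              # forward pass through the hidden layer
    yhat = W2 @ h
    e = yhat - y
    loss = 0.5 * float(e.T @ e)
    dW2 = e @ h.T                    # matrix chain rule: dL/dW2
    dh = W2.T @ e                    # error signal pushed back through W2
    dW1 = (dh * (1 - h**2)) @ x.T    # ...and through the tanh Jacobian
    return loss, dW1, dW2

# A few gradient-descent steps drive the loss toward zero.
for _ in range(100):
    loss, dW1, dW2 = loss_and_grads(W1, W2, x, y)
    W1, W2 = W1 - 0.1 * dW1, W2 - 0.1 * dW2
```

The same backward-propagation pattern is what deep learning software implements mechanically for networks with many layers.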
     

    • 34 min
    How to Analyze and Design Linear Machines

The main focus of this particular episode is the material in Chapter 4 of my new forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.”
Many important and widely used machine learning algorithms may be interpreted as linear machines, and this chapter shows how to use linear algebra to analyze and design such machines. In addition, these same techniques are fundamentally important for the development of techniques for the analysis and design of nonlinear machines.
This podcast provides a brief overview of linear algebra for machine learning for the general public, as well as information for students and instructors regarding the contents of Chapter 4 of Statistical Machine Learning. For more details, check out: www.statisticalmachinelearning.com
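For a flavor of what a linear machine looks like in practice (my own minimal sketch, not taken from the chapter), the least-squares parameters of a linear predictor follow directly from linear algebra via the Moore-Penrose pseudoinverse:

```python
import numpy as np

# A linear machine predicts yhat = W @ x. Stack the n training inputs as
# rows of X (n x d) and the targets as rows of Y (n x m); the parameter
# matrix minimizing ||X @ W.T - Y||_F^2 is W = (pinv(X) @ Y).T.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
W_true = np.array([[1.0, -2.0, 0.5]])
Y = X @ W_true.T                      # noiseless targets for illustration

W_hat = (np.linalg.pinv(X) @ Y).T     # least-squares linear machine
```

With noiseless, full-rank data the pseudoinverse recovers the generating parameters exactly; with noisy data it returns the least-squares estimate, and the same singular-value machinery underpins the analysis of nonlinear machines.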

    • 29 min
How to Define Machine Learning (or at Least Try)

This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, with an expected publication date of May 2020, which discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that minimizes an objective function. In addition, the knowledge structure of the learning machine is interpreted as a preference relation graph which is implicitly specified by the objective function. Also this week, our book review section covers a new book titled “The Practitioner’s Guide to Graph Data” by Denise Gosnell and Matthias Broecheler. To find out more information visit the website: www.learningmachines101.com .

    • 37 min

Customer Reviews

3.8 out of 5
4 Ratings


Listner99999 ,

Architect

Very informative podcast

Ben_1_ ,

Highly recommend, clear view on detailed concepts

I highly recommend this to anyone who wants to learn more about AI.
The podcast outlines the concepts and mechanisms used in learning machines. And also details the history behind the ideas.
Cheers
