85 episodes

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?

Learning Machines 101
Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.

    • Technology
    • 4.0 • 5 Ratings

    LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes

    This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of observed outcomes in a space-time continuum corresponding to our physical world. The machine learning algorithm uses information about the frequency of environmental events to support learning. Along the way we discuss measure-theoretic tools such as sigma-fields and the Radon-Nikodym probability density function, as well as the intriguing Banach-Tarski paradox.
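    (A rough gloss, ours rather than the episode's wording: a Radon-Nikodym density p of a probability measure P with respect to a reference measure \mu is a function satisfying P(A) = \int_A p \, d\mu for every event A in the sigma-field; this is what lets a learning machine assign probabilities consistently over an uncountably infinite set of possible outcomes.)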

    • 35 min
    LM101-085: Ch7: How to Guarantee Your Batch Learning Algorithm Converges

    This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. Simple mathematical formulas are presented based upon research from the late 1960s by Philip Wolfe and G. Zoutendijk that ensure convergence of the generated sequence of parameter vectors.
    Check out: www.learningmachines101.com for more details!!! #machinelearning
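    For orientation, the step-size conditions credited to Wolfe are usually stated as follows (this is our sketch of the standard textbook formulation, not a quote from the book). With parameter update \theta_{k+1} = \theta_k + \alpha_k d_k along a search direction d_k:

        f(\theta_{k+1}) \le f(\theta_k) + c_1 \alpha_k \nabla f(\theta_k)^\top d_k   (sufficient decrease)
        \nabla f(\theta_{k+1})^\top d_k \ge c_2 \nabla f(\theta_k)^\top d_k,  with 0 < c_1 < c_2 < 1   (curvature)

    For a smooth objective that is bounded below and has a Lipschitz-continuous gradient, Zoutendijk's result then gives \sum_k \cos^2\phi_k \, \|\nabla f(\theta_k)\|^2 < \infty, where \phi_k is the angle between d_k and -\nabla f(\theta_k), so gradients along non-degenerate descent directions are driven toward zero.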

    • 30 min
    LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems

    In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning”, which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, their behavior can be analyzed even when they are highly nonlinear and high-dimensional.

    • 33 min
    How to Use Calculus to Design Learning Machines

    This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework” which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines with lots of examples. We discuss the relevance of the matrix chain rule and matrix Taylor series for machine learning algorithm design and the analysis of generalization performance! Check out: www.learningmachines101.com
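    A tiny illustration of why the Taylor expansion matters for algorithm design (our sketch, not the chapter's derivation): the first-order expansion f(\theta + \delta) \approx f(\theta) + \nabla f(\theta)^\top \delta shows that the choice \delta = -\gamma \nabla f(\theta), for a sufficiently small stepsize \gamma > 0, decreases a smooth objective whenever \nabla f(\theta) \ne 0, which is the basic justification for gradient-descent learning rules.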

    • 34 min
    How to Analyze and Design Linear Machines

    The main focus of this particular episode is the material in Chapter 4 of my forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.”

    Many important and widely used machine learning algorithms may be interpreted as linear machines and this chapter shows how to use linear algebra to analyze and design such machines. Check out: www.statisticalmachinelearning.com
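    As a minimal, hypothetical illustration of a linear machine designed with nothing but linear algebra (this sketch is ours, not code from the book), here is an ordinary least-squares fit in Python:

        # Hypothetical sketch: a linear machine y_hat = X @ w fit by ordinary least squares.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))                  # 100 training examples, 3 features
        true_w = np.array([1.5, -2.0, 0.5])            # "ground truth" weights for the demo
        y = X @ true_w + 0.1 * rng.normal(size=100)    # noisy targets

        # Closed-form linear-algebra solution of min_w ||X w - y||^2
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(w_hat)                                   # recovers something close to true_w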

    • 29 min
    How to Define Machine Learning (or at Least Try)

    This podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, which discusses how to formally define machine learning algorithms. A learning machine is viewed as a dynamical system that is minimizing an objective function. In addition, the knowledge structure of the learning machine is interpreted as a preference relation graph implicitly specified by the objective function. Also, the new book “The Practitioner’s Guide to Graph Data” is reviewed.
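    To make the "dynamical system minimizing an objective function" viewpoint concrete (our toy illustration, not an example from the book), gradient descent can be read as a discrete-time dynamical system whose state is the parameter vector:

        # Hypothetical sketch: gradient descent viewed as a discrete-time dynamical system.
        import numpy as np

        target = np.array([1.0, -2.0])                 # toy objective has its minimum here

        def objective(theta):
            return float(np.sum((theta - target) ** 2))

        def gradient(theta):
            return 2.0 * (theta - target)

        theta = np.zeros(2)                            # initial state of the system
        for _ in range(100):
            theta = theta - 0.1 * gradient(theta)      # state update: theta(t+1) = theta(t) - gamma * grad f(theta(t))

        print(theta, objective(theta))                 # state converges toward the minimizer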

    • 37 min

Customer Reviews

4.0 out of 5
5 Ratings

seagullmouse,

Interesting but ditch the music

There aren't many podcasts about AI or machine learning, so this is a good place to start. It moves at a pretty good, slow pace and covers the concepts well.

But please get rid of that annoying music at start and end of podcast. I am only 5 podcasts in so maybe the music will be gone by the time I catch up!
