Skill Piper

Learning Machines 101

Brought to you by Richard M. Golden, Ph.D., M.S.E.E., B.S.E.E.

https://skillpiper.com/share/892779679


Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like?


    Mathematics

Recent Episodes of Learning Machines 101


LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes


This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in a space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to study statistical machine learning, then we must be able to discuss how to represent and compute the probability of an environmental event. It is essential that we have methods for communicating probability concepts to other researchers, methods for calculating probabilities, and methods for calculating the...
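The key point — that in a continuum individual outcomes have probability zero, so probability is assigned to events such as intervals — can be sketched in a few lines of Python. The Gaussian density and the specific interval below are illustrative assumptions, not examples from the episode:

```python
import math

def interval_probability(a, b, mu=0.0, sigma=1.0):
    """P(a <= X <= b) for X ~ Normal(mu, sigma), via the Gaussian CDF.

    A single point in a space-time continuum has probability zero, so
    probabilities are assigned to environmental events (here, intervals).
    """
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(b) - cdf(a)

# Probability mass within one standard deviation of the mean.
p = interval_probability(-1.0, 1.0)
```

Any event built from intervals gets its probability the same way, which is the kind of representational machinery the episode motivates.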

Episode 86 20 July 2021 35m and 29s


LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges


This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, we consider a broad class of unsupervised, supervised, and reinforcement machine learning algorithms that iteratively update their parameter vector by adding a perturbation computed from all of the training data. This process repeats until a parameter vector is generated which exhibits improved predictive performance. The magnitude of the perturbation at each learning iteration is called...
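A minimal sketch of the batch-learning pattern described above, in Python. The objective, stepsize schedule, and constants are illustrative assumptions, not the book's specific examples:

```python
# Batch gradient descent with a decaying stepsize. Each iteration adds a
# perturbation (-eta * gradient) computed from ALL of the training data,
# and the perturbation magnitude shrinks as learning proceeds.
def batch_gradient_descent(grad, theta, eta0=0.2, decay=0.01, iters=200):
    for t in range(iters):
        eta = eta0 / (1.0 + decay * t)    # annealed stepsize
        theta = theta - eta * grad(theta)  # full-batch perturbation
    return theta

# Toy objective f(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3).
theta_star = batch_gradient_descent(lambda th: 2.0 * (th - 3.0), theta=0.0)
```

The convergence guarantees the episode discusses concern exactly this kind of iteration: conditions on the objective and the stepsize sequence under which `theta` settles at a critical point.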

Episode 85 21 May 2021 30m and 51s


LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems


In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning”, which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, their behavior can be analyzed even when they are highly nonlinear and high-dimensional. Learn more by visiting: www.learningmachines101.com and www.statisticalmachinelearning.com.

Episode 84 5 January 2021 33m and 13s


How to Use Calculus to Design Learning Machines


This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework”, which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines, with many examples. We discuss how to use the matrix chain rule to derive deep learning gradient descent algorithms and how it is relevant to software implementations of deep learning algorithms. We also discuss how matrix Taylor series expansions are relevant to machine learning algorithm design and the analysis of generalization performance!

For addit...
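As a scalar stand-in for the matrix chain rule discussed above, here is a tiny two-layer "network" whose parameter gradients are derived by composing derivatives layer by layer. The architecture and numbers are hypothetical illustrations, not examples from the book:

```python
import math

# Chain-rule gradients for y = w2 * tanh(w1 * x) under squared error.
def loss_and_grads(w1, w2, x, t):
    a = w1 * x                            # pre-activation
    h = math.tanh(a)                      # hidden unit
    y = w2 * h                            # network output
    L = 0.5 * (y - t) ** 2                # squared-error loss
    dL_dy = y - t                         # outermost chain-rule factor
    dL_dw2 = dL_dy * h                    # output-layer gradient
    dL_dh = dL_dy * w2                    # backpropagated signal
    dL_dw1 = dL_dh * (1.0 - h * h) * x    # tanh'(a) = 1 - tanh(a)^2
    return L, dL_dw1, dL_dw2
```

In the matrix case each factor becomes a Jacobian, but the structure is identical — which is why the matrix chain rule maps so directly onto backpropagation code in deep learning software.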

Episode 83 29 August 2020 34m and 22s


How to Analyze and Design Linear Machines


The main focus of this particular episode covers the material in Chapter 4 of my forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.”

Many important and widely used machine learning algorithms may be interpreted as linear machines, and this chapter shows how to use linear algebra to analyze and design such machines. The same techniques are also fundamental to the analysis and design of nonlinear machines.

This podcast provides a brief overview of Linear Algebra for Machine Learning for the gene...
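A concrete instance of a linear machine designed with linear algebra: least-squares fitting of y = w*x + b via the normal equations, here solved in closed form for the 2x2 case. The example data is hypothetical:

```python
# Fit a linear machine y = w*x + b by least squares: solve the 2x2
# normal equations (X^T X) [w, b]^T = X^T y in closed form.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx              # determinant of the normal matrix
    w = (n * sxy - sx * sy) / det
    b = (sy * sxx - sx * sxy) / det
    return w, b

# Data generated exactly by y = 2x + 1, so the fit should recover w=2, b=1.
w, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

With more features the same derivation goes through with matrix inverses (or, in practice, factorizations), which is the kind of analysis Chapter 4 develops.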

Episode 82 23 July 2020 29m and 5s


How to Define Machine Learning (or at Least Try)


This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework” with expected publication date May 2020. In this episode we discuss Chapter 3 of my new book, which discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that is minimizing an objective function. The knowledge structure of the learning machine is interpreted as a preference relation graph which is implicitly specified by the objective function. In addition, this week we include in our book review section a new book titled “The Practitioner’s Guide...

Episode 81 9 April 2020 37m and 20s


How to Represent Knowledge using Set Theory


This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework” with expected publication date May 2020. In this episode we discuss how to represent knowledge using set theory notation. Chapter 2 is titled “Set Theory for Concept Modeling”.

Episode 80 29 February 2020 31m and 43s


How to View Learning as Risk Minimization


This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. In this episode we discuss Chapter 1 of my new book, which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful because it provides a framework for not only understanding existing algorithms but also for suggesting new algorithms for specific applications.
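The empirical risk minimization framework mentioned above can be sketched in a few lines: the learner picks the parameter value minimizing the average loss over the training data, and different loss functions recover different learning paradigms. The toy data, loss, and grid search below are illustrative assumptions:

```python
# Empirical risk = average loss over the training data; the learner
# chooses parameters to minimize it.
def empirical_risk(theta, data, loss):
    return sum(loss(theta, x) for x in data) / len(data)

data = [1.0, 2.0, 3.0, 6.0]
sq_loss = lambda theta, x: (theta - x) ** 2

# Crude grid search over candidate parameter values; with squared-error
# loss and a constant predictor, the ERM solution is the sample mean.
candidates = [i / 100.0 for i in range(0, 801)]
theta_hat = min(candidates, key=lambda th: empirical_risk(th, data, sq_loss))
```

Swapping in a classification loss, a reconstruction loss, or an expected-reward penalty changes the paradigm (supervised, unsupervised, reinforcement) while the minimization framework stays the same — which is the unification the chapter develops.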

Episode 79 24 December 2019 26m and 7s


How to Become a Machine Learning Expert


This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachines101.com

Episode 78 24 October 2019 39m and 18s


How to Choose the Best Model using BIC


In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the “marginal likelihood”. Using the marginal likelihood, one can calculate the probability of a model given the training data and then use this analysis to support selec...
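The standard score formulas make the contrast concrete: BIC penalizes parameters by ln(n), AIC by a constant 2, so BIC tends to favor simpler models as the sample grows. The log-likelihood values below are made-up illustrations:

```python
import math

# BIC = -2*logL + k*ln(n); AIC = -2*logL + 2*k.
# Lower scores are better; k = number of free parameters, n = sample size.
def bic(log_likelihood, k, n):
    return -2.0 * log_likelihood + k * math.log(n)

def aic(log_likelihood, k):
    return -2.0 * log_likelihood + 2.0 * k

# Two hypothetical candidate models fit to n = 100 observations.
scores = {"simple":  bic(-120.0, k=2, n=100),
          "complex": bic(-118.5, k=5, n=100)}
best = min(scores, key=scores.get)   # BIC's ln(n) penalty outweighs the small likelihood gain
```

Note that the complex model has the higher log-likelihood, yet BIC prefers the simple one here — the kind of trade-off the episode's semantic interpretation is meant to explain.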

Episode 77 2 May 2019 24m and 15s
