Machine Learning Guide

OCDevel

Machine learning audio course, teaching the fundamentals of machine learning and artificial intelligence. It covers intuition, models (shallow and deep), math, languages, frameworks, etc. Where your other ML resources provide the trees, I provide the forest. Consider MLG your syllabus, with highly-curated resources for each episode's details at ocdevel.com. Audio is a great supplement during exercise, commute, chores, etc.

  1. EPISODE 1

    MLG 001 Introduction

    Try a walking desk to stay healthy while you study or work! Show notes: ocdevel.com/mlg/1. MLG teaches the fundamentals of machine learning and artificial intelligence. It covers intuition, models, math, languages, frameworks, etc. Where your other ML resources provide the trees, I provide the forest. Consider MLG your syllabus, with highly-curated resources for each episode's details at ocdevel.com. Audio is a great supplement during exercise, commute, chores, etc.

    Links: MLG, Resources Guide, Gnothi (podcast project): website, Github

    What is this podcast?
    - A "middle" level overview: deeper than a bird's-eye view of machine learning, higher than math equations
    - No math/programming experience required

    Who is it for?
    - Anyone curious about machine learning fundamentals
    - Aspiring machine learning developers

    Why audio? Supplementary content for commute/exercise/chores will help solidify your book/course-work.

    What it's not:
    - News and interviews: TWiML and AI, O'Reilly Data Show, Talking Machines
    - Misc topics: Linear Digressions, Data Skeptic, Learning Machines 101
    - iTunesU issues

    Planned episodes:
    - What is AI/ML: definition, comparison, history
    - Inspiration: automation, singularity, consciousness
    - ML intuition: learning basics (infer/error/train); supervised/unsupervised/reinforcement; applications
    - Math overview: linear algebra, statistics, calculus
    - Linear models: supervised (regression, classification); unsupervised
    - Parts: regularization, performance evaluation, dimensionality reduction, etc.
    - Deep models: neural networks, recurrent neural networks (RNNs), convolutional neural networks (convnets/CNNs)
    - Languages and frameworks: Python vs. R vs. Java vs. C/C++ vs. MATLAB, etc.; TensorFlow vs. Torch vs. Theano vs. Spark, etc.

    9 min
  2. EPISODE 2

    MLG 002 What is AI, ML, DS

    Try a walking desk to stay healthy while you study or work! Show notes at ocdevel.com/mlg/2. Updated! Skip to [00:29:36] for Data Science (new content) if you've already heard this episode.

    What are artificial intelligence, machine learning, and data science, and what are their differences? Plus a history of AI. Hierarchical breakdown: DS(AI(ML)). Data science: any profession dealing with data (including AI & ML). Artificial intelligence: simulated intellectual tasks. Machine learning: algorithms trained on data to learn patterns and make predictions.

    Artificial Intelligence (AI) - Wikipedia
    - Oxford Languages: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
    - AlphaGo Movie, very good!
    - Sub-disciplines: reasoning and problem solving, knowledge representation, planning, learning, natural language processing, perception, motion and manipulation, social intelligence, general intelligence.
    - Applications: autonomous vehicles (drones, self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines, online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions, targeting online advertisements.

    Machine Learning (ML) - Wikipedia
    - Oxford Languages: the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

    Data Science (DS) - Wikipedia
    - Wikipedia: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data.

    History
    - Greek mythology, golems
    - First attempt: Ramon Llull, 13th century
    - Da Vinci's walking animals
    - Descartes, Leibniz
    - 1700s-1800s: statistics & mathematical decision making. Thomas Bayes: reasoning about the probability of events. George Boole: logical reasoning / binary algebra. Gottlob Frege: propositional logic.
    - 1832: Charles Babbage & Ada Byron/Lovelace design the Analytical Engine, a programmable mechanical calculating machine.
    - 1936: Universal Turing Machine; "Computing Machinery and Intelligence" explored AI.
    - 1946: John von Neumann's universal computing machine.
    - 1943: Warren McCulloch & Walter Pitts: cognitive-science representation of the neuron, which Frank Rosenblatt later used to create the Perceptron (leading to neural networks by way of the MLP).
    - 50s-70s: "AI" coined at the 1956 Dartmouth workshop, with the goal of simulating all aspects of intelligence. John McCarthy, Marvin Minsky, Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, Herbert Simon. Newell & Simon: heuristics -> Logic Theorist, General Problem Solver. Selfridge: computer vision. NLP. Stanford Research Institute: Shakey. Feigenbaum: expert systems. GOFAI / symbolism: operations research / management science; logic-based; knowledge-based / expert systems.
    - 70s: Lighthill report (James Lighthill); big promises -> AI winter.
    - 90s: data, computation, and practical applications bring AI back.
    - Connectionism optimizations: Geoffrey Hinton, 2006, optimized backpropagation.
    - Bloomberg: 2015 was a whopper year for AI in industry.
    - AlphaGo & DeepMind

    1 hr 4 min
  3. EPISODE 3

    MLG 003 Inspiration

    Try a walking desk to stay healthy while you study or work! Show notes at ocdevel.com/mlg/3. This episode covers four major philosophical topics related to artificial intelligence. The purpose is to give broader context to why AI matters, before moving into technical details in later episodes.

    1. Economic Automation
    AI is automating not just simple tasks like data entry or tax prep, but also high-skill jobs such as medical diagnostics, surgery, and creative work like design, music, and art. There are two common reactions:
    - Fear: concern over job displacement, similar to past economic shifts like the agricultural and industrial revolutions. Is your job safe?
    - Optimism: automation may lead to more comfortable living conditions and economic structures like Universal Basic Income. New job types could emerge, as they have in past transitions.

    2. The Singularity
    The singularity refers to a point of runaway technological growth, where AI becomes capable of improving itself recursively. This concept is tied to "artificial general intelligence" and "seed AI": systems that not only perform tasks but create better versions of themselves. The idea is that this could trigger extremely rapid change, possibly representing a new phase of evolution beyond humanity.

    3. Consciousness
    I explore whether consciousness can emerge from machines. Since the brain is a physical machine and consciousness arises from it, it's possible that artificial systems could develop similar properties. Related ideas:
    - Qualia: subjective experiences.
    - Functionalism: if something behaves like it's conscious, it may be conscious.
    - Turing Test: if a machine is indistinguishable from a human in conversation, it passes the test.

    4. Misaligned Goals and Risk
    I discuss scenarios where AI causes harm not through malevolence but through poorly defined objectives. One example is the "paperclip maximizer" thought experiment, where an AI tasked with maximizing paperclip production might consume all resources to do so. This has led some public figures to raise concerns about AI safety. I don't share the same level of concern, but the topic is worth being aware of.

    References
    - Ray Kurzweil, The Singularity Is Near
    - Ray Kurzweil, How to Create a Mind
    - Daniel Dennett, Consciousness Explained
    - Nick Bostrom, Superintelligence
    - The Great Courses, Philosophy of Mind: Brains, Consciousness, and Thinking Machines

    In the next episode, I begin covering the technical foundations of machine learning, starting with supervised, unsupervised, and reinforcement learning.

    19 min
  4. EPISODE 4

    MLG 004 Algorithms - Intuition

    Try a walking desk to stay healthy while you study or work! Show notes at ocdevel.com/mlg/4

    The AI Hierarchy
    - Artificial intelligence is divided into subfields such as reasoning, planning, and learning.
    - Machine learning is the learning subfield of AI.
    - Machine learning consists of three phases: Predict (Infer), Error (Loss), Train (Learn).

    Core Intuition (a minimal code sketch of this loop follows these notes)
    - An algorithm makes a prediction.
    - An error function evaluates how wrong the prediction was.
    - The model adjusts its internal weights (training) to improve.

    Example: House Price Prediction
    - Input: spreadsheet with features like bedrooms, bathrooms, square footage, distance to downtown.
    - Output: predicted price.
    - The algorithm iterates over data, learns patterns, and creates a model.
    - A model = algorithm + learned weights.
    - Features = individual columns used for prediction.
    - Weights = coefficients applied to each feature.
    - The process mimics algebra: rows = equations, the entire spreadsheet = a matrix. Training adjusts weights to minimize error.

    Feature Types
    - Numerical: e.g., number of bedrooms.
    - Nominal (categorical): e.g., yes/no for downtown location.
    - Feature engineering can involve transforming raw inputs into more usable formats.

    Linear Algebra Connection
    - Machine learning uses linear algebra to process data matrices.
    - Each row is an equation; training solves for best-fit weights across the matrix.

    Categories of Machine Learning
    1. Supervised Learning
       - Algorithm is explicitly trained with labeled data (e.g., price of a house).
       - Examples: regression (predicting a number), such as linear regression; classification (predicting a label), such as logistic regression.
    2. Unsupervised Learning
       - No labels are given; the algorithm finds structure in the data.
       - Common task: clustering (e.g., user segmentation for ads). Learns patterns without predefined classes.
    3. Reinforcement Learning
       - Agent takes actions in an environment to maximize cumulative reward.
       - Example: mouse in a maze trying to find cheese. Includes rewards (+points for cheese) and penalties (-points for failure or time).
       - Learns policies for optimal behavior.
       - Algorithms: Deep Q-Networks, policy optimization. Used in games, robotics, and real-time decision systems.

    Terminology Recap
    - Algorithm: code that defines a learning strategy (e.g., linear regression).
    - Model: algorithm + learned weights (trained state).
    - Features: input variables (columns).
    - Weights: coefficients learned for each feature.
    - Matrix: tabular representation of input data.

    Learning Path and Structure
    - Machine learning is a subfield of AI.
    - Machine learning itself splits into supervised, unsupervised, and reinforcement learning.
    - Each category includes multiple algorithms.

    Resources
    - MachineLearningMastery.com: accessible articles on ML basics.
    - The Master Algorithm by Pedro Domingos: introductory audio-accessible book on ML.
    - Podcast's own curated learning paths: ocdevel.com/mlg/resources
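    The sketch below shows the predict/error/train loop applied to the house-price example in Python with NumPy. It is my own illustration, not code from the episode; the feature values, prices, learning rate, and iteration count are made-up assumptions.

    ```python
    # Illustrative sketch (assumptions: made-up house data, a plain linear model, mean squared error).
    import numpy as np

    # Rows = houses; feature columns = [bedrooms, bathrooms, sqft/1000, miles to downtown]
    X = np.array([
        [3, 2, 1.6, 5.0],
        [4, 3, 2.4, 2.0],
        [2, 1, 0.9, 8.0],
        [5, 4, 3.1, 1.0],
    ])
    y = np.array([250.0, 420.0, 180.0, 560.0])  # labels: sale prices in $1000s

    weights = np.zeros(X.shape[1])  # one coefficient per feature
    bias = 0.0
    learning_rate = 0.01

    for step in range(10_000):
        predictions = X @ weights + bias        # 1) Predict (infer)
        errors = predictions - y                # 2) Error (loss): via mean squared error
        grad_w = 2 * X.T @ errors / len(y)      # 3) Train (learn): gradient descent step
        grad_b = 2 * errors.mean()
        weights -= learning_rate * grad_w
        bias -= learning_rate * grad_b

    print("learned weights:", weights, "bias:", bias)
    print("final MSE:", np.mean((X @ weights + bias - y) ** 2))
    ```

    The three numbered comments map directly onto the Predict, Error, and Train phases described above; the "model" is the algorithm plus the learned weights and bias.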

    23 min
  5. EPISODE 5

    MLG 005 Linear Regression

    Try a walking desk to stay healthy while you study or work! Show notes at ocdevel.com/mlg/5. See Andrew Ng Week 2 Lecture Notes.

    Key Concepts
    - Machine learning hierarchy: the breakdown into supervised, unsupervised, and reinforcement learning, with an emphasis on supervised learning, which includes classification and regression.
    - Supervised learning: divided into classifiers and regressors, with this episode focusing on linear regression as an introduction to regressor algorithms.
    - Linear regression: a basic supervised algorithm used for estimating continuous numeric outputs, such as predicting housing prices.

    Process of Linear Regression (the standard formulas follow these notes)
    - Prediction: using a hypothesis function, predictions are made based on input features.
    - Evaluation: implements a cost function, mean squared error, to measure prediction accuracy.
    - Learning: employs gradient descent, which uses calculus to adjust weights and biases so as to minimize the error.

    Concepts Explored
    - Univariate vs. multivariate linear regression: a single predictive feature versus multiple features, respectively.
    - Gradient descent: an optimization technique that iteratively updates parameters to minimize the cost function.
    - Bias parameter: represents the average outcome in the absence of specific feature information.
    - Mean squared error: common cost function used to quantify the error in predictions.

    Resources
    - Andrew Ng's Coursera course: a highly recommended resource for comprehensive and practical learning in machine learning. It covers numerous foundational topics, including linear regression and more advanced techniques, and is encouraged for gaining in-depth understanding and application skills.
    - Coursera: Machine Learning by Andrew Ng
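    For reference, the univariate case can be written as below in the theta-notation used by Andrew Ng's course. This is the standard textbook formulation, not a transcript of the episode.

    ```latex
    % Hypothesis: a line parameterized by bias \theta_0 and weight \theta_1.
    \[ h_\theta(x) = \theta_0 + \theta_1 x \]

    % Cost: mean squared error over m training examples (the 1/2 simplifies the derivative).
    \[ J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 \]

    % Gradient descent: repeat until convergence, with learning rate \alpha.
    \[ \theta_0 := \theta_0 - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right), \qquad
       \theta_1 := \theta_1 - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)} \]
    ```

    Multivariate linear regression extends the same update with one theta per feature, which is the form used in the house-price example of the previous episode.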

    34 min
  6. EPISODE 6

    MLG 006 Certificates & Degrees

    Try a walking desk to stay healthy while you study or work! Full notes at ocdevel.com/mlg/6

    Pursuing Machine Learning
    - Individuals may engage with machine learning for self-education, as a hobby, or to enter the industry professionally.
    - Use a combination of resources, including podcasts, online courses, and textbooks, for a comprehensive self-learning plan.

    Online Courses (MOOCs)
    - MOOCs, or Massive Open Online Courses, offer accessible education. Key platforms: Coursera and Udacity.
    - Coursera is noted for standalone courses; Udacity offers structured nanodegrees.
    - Udacity nanodegrees include video content, mentoring, projects, and peer interaction, priced at $200/month.

    Industry Recognition
    - Udacity nanodegrees are currently not widely recognized or respected by employers.
    - Emphasize building a robust portfolio of independent projects to augment your qualifications in the field.

    Advanced Degrees
    - Master's degrees: valued by employers and provide an edge in job applications. Example: Georgia Tech's OMSCS (Online Master of Science in Computer Science) is a cost-effective (about $7,000) online master's program.
    - PhD programs: embark on a PhD for in-depth research in AI rather than industry entry. Programs usually pay around $30,000/year.
    - Compare industry roles (higher pay, practical applications) vs. academic research (lower pay, exploration of fundamental questions).

    Career Path Decisions
    - Prioritize building a substantial portfolio of projects to bypass formal degree requirements and break into industry positions.
    - Consider enriching your qualifications with a master's degree, or eventually pursue a PhD if deeply interested in pioneering AI research.

    Discussion and Further Reading
    - See online discussions about degrees/certifications: 1 2 3 4

    16 min
  7. EPISODE 8

    MLG 008 Math

    Try a walking desk to stay healthy while you study or work! Full notes at ocdevel.com/mlg/8

    Mathematics in Machine Learning
    - Linear algebra: essential for matrix operations; analogous to chopping vegetables in cooking. Every step of an ML process uses linear algebra.
    - Statistics: the hardest part, akin to the cookbook; supplies the algorithms for prediction and the error functions.
    - Calculus: used in the learning phase (gradient descent), similar to baking; it determines the necessary adjustments via optimization.

    Learning Approach
    - Recommendation: learn the basics of machine learning first, then dive into the necessary mathematical concepts, to prevent burnout and improve appreciation.

    Mathematical Resources
    - MOOCs: Khan Academy offers calculus, statistics, and linear algebra courses.
    - Textbooks: commonly recommended books for learning calculus, statistics, and linear algebra.
    - Primers: short PDFs covering essential concepts.

    Additional Resource
    - The Great Courses: comprehensive video series on calculus and statistics. Best used as audio to supplement primary learning. Look out for "Mathematical Decision Making."

    Python and Linear Algebra (see the short sketch after these notes)
    - Tensor: general term for a list of any dimension; TensorFlow from Google uses tensors for its operations.
    - Efficient computation relies on SIMD (Single Instruction, Multiple Data) for vectorized operations.

    Optimization in Machine Learning
    - Gradient descent is used to minimize the loss function, a case of convex optimization.
    - Recognize keywords like "optimization" in a calculus context.
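    To illustrate the tensor and SIMD/vectorization points, here is a small NumPy sketch. The arrays and weights are made up for demonstration and this is not code from the episode; the takeaway is that the vectorized call replaces a Python-level loop with compiled, SIMD-backed linear algebra.

    ```python
    # Illustrative sketch: tensors of increasing rank, and a loop vs. a vectorized computation.
    import numpy as np

    scalar = np.float64(3.0)               # rank-0 tensor
    vector = np.array([1.0, 2.0, 3.0])     # rank-1 tensor
    matrix = np.arange(6.0).reshape(2, 3)  # rank-2 tensor
    print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2

    # Dot product the slow way: an explicit Python loop.
    weights = np.array([0.2, 0.5, 0.3])
    loop_result = 0.0
    for x, w in zip(vector, weights):
        loop_result += x * w

    # The same computation vectorized: one call, no Python-level loop.
    vectorized_result = vector @ weights
    assert np.isclose(loop_result, vectorized_result)

    # Matrix-vector product: every row of `matrix` is combined with `weights` at once.
    print(matrix @ weights)  # shape (2,)
    ```

    The same pattern scales up: frameworks like TensorFlow express models as operations on tensors so the heavy lifting happens in optimized linear-algebra kernels rather than interpreted loops.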

    28 min
5 out of 5 · 67 ratings
