61 episodes

This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk

Thanks for checking us out!

We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate and MBA speak is banned on Street Talk; "data product" and "digital transformation" are banned, we promise :)

Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.

Machine Learning Street Talk

    • Technology
    • 5.0 • 1 rating


    #60 Geometric Deep Learning Blueprint (Special Edition)

    The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning and second, learning by local gradient-descent type methods, typically implemented as backpropagation.
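    These two principles are easy to see in miniature. Below is a minimal sketch (our own toy illustration, not an excerpt from the show or the proto-book): a tiny network whose hidden layer is the learned representation, trained by plain gradient descent with the gradients computed by hand via backpropagation. The task (fitting sin(x)), layer sizes, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

# A minimal sketch of the two principles: a hidden layer learns a
# representation, and all weights are trained by local gradient descent
# with gradients computed by hand via backpropagation.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)   # representation layer
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)    # linear readout
lr = 0.05

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # learned features
    pred = h @ W2 + b2
    err = pred - y                           # gradient of 0.5*MSE w.r.t. pred
    # Backpropagation: apply the chain rule layer by layer.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # back through the tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    # Local gradient-descent update.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```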

    While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not uniform and have strong repeating patterns as a result of the low-dimensionality and structure of the physical world.
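    To put a number on "cursed": a back-of-envelope calculation (ours, purely illustrative) shows why estimating a generic function with no structural assumptions is hopeless - covering the unit cube [0,1]^d at resolution eps takes on the order of (1/eps)^d samples, which explodes exponentially in the dimension d.

```python
# Samples needed to cover [0,1]^d at resolution eps grow exponentially in d.
eps = 0.1
for d in (1, 2, 3, 10, 100):
    print(f"d = {d:>3}: need ~(1/eps)^d = 10^{d} sample points")
```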

    Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks but also provide a principled way to construct new types of problem-specific inductive biases.
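    As a concrete illustration of the symmetry idea (our own sketch, not from the book): a one-layer message-passing network followed by sum pooling produces the same output no matter how the graph's nodes are numbered, i.e. it respects the permutation symmetry of graphs, just as a CNN's convolutions respect translations. The random graph, features, and single tanh layer below are arbitrary choices.

```python
import numpy as np

# A graph readout built from permutation-equivariant message passing
# followed by permutation-invariant sum pooling is unchanged when the
# node ordering is shuffled.
rng = np.random.default_rng(0)
n, d = 5, 4
X = rng.normal(size=(n, d))                   # node features
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)                        # undirected adjacency
W = rng.normal(size=(d, d))

def gnn_readout(A, X, W):
    H = np.tanh((A + np.eye(len(A))) @ X @ W)  # one message-passing layer
    return H.sum(axis=0)                       # invariant pooling

P = np.eye(n)[rng.permutation(n)]             # random permutation matrix
out1 = gnn_readout(A, X, W)
out2 = gnn_readout(P @ A @ P.T, P @ X, W)     # same graph, nodes relabelled
assert np.allclose(out1, out2)
print("readout is invariant under node permutation")
```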

    This week we spoke with Professor Michael Bronstein (Head of Graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen, and Prof. Joan Bruna about their new proto-book, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges.

    See the table of contents for this (long) show at https://youtu.be/bIZB1hIJ4u8 

    • 3 hr 33 min
    #59 - Jeff Hawkins (Thousand Brains Theory)

    The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges. 

    Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn't think there is just one model, but rather thousands.
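    A toy way to see why voting across many models helps (this is purely our illustration, not Numenta's actual algorithm): give each "column" a noisy belief over which object is being sensed; any single column is unreliable, but combining thousands of beliefs yields a confident consensus.

```python
import numpy as np

# Illustrative sketch of consensus voting across many noisy "column"
# models. NOT Numenta's algorithm - just the statistical intuition.
rng = np.random.default_rng(0)
objects = ["cup", "pen", "phone"]
true_obj = 0                                   # the object actually sensed
n_columns = 1000

# Each column holds a noisy belief over objects, tilted slightly to truth.
beliefs = rng.dirichlet(np.ones(len(objects)), size=n_columns)
beliefs[:, true_obj] += 0.05
beliefs /= beliefs.sum(axis=1, keepdims=True)

one_column = objects[int(beliefs[0].argmax())]             # often wrong
vote = objects[int(np.log(beliefs).sum(axis=0).argmax())]  # product of beliefs
print(f"single column guesses: {one_column}; consensus of 1000: {vote}")
```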

    Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It's an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too.

    https://numenta.com/a-thousand-brains-by-jeff-hawkins/

    https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/

    Panel:

    Dr. Keith Duggar https://twitter.com/DoctorDuggar

    Connor Leahy https://twitter.com/npcollapse

    • 2 hr 34 min
    #58 Dr. Ben Goertzel - Artificial General Intelligence

    The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing "thinking machines" - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious, old-fashioned roots.

    Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. His approach to AGI over many decades has been inspired by many disciplines, in particular by human cognitive psychology and computer science. To date, Ben's work has been mostly theoretically driven. Ben thinks that most of the deep learning approaches to AGI today try to model the brain. They may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented.

    Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. In his view, biological systems tend to be messy, complex and integrative; searching for a single "algorithm of general intelligence" is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.

    TOC is on the YT show description https://www.youtube.com/watch?v=sw8IE3MX1SY

    Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar

    Artificial General Intelligence: Concept, State of the Art, and Future Prospects

    https://sciendo.com/abstract/journals...

    The General Theory of General Intelligence: A Pragmatic Patternist Perspective

    https://arxiv.org/abs/2103.15100

    • 2 hr 28 min
    #57 - Prof. Melanie Mitchell - Why AI is harder than we think

    Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.  Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

    YouTube video: https://www.youtube.com/watch?v=A8m1Oqz2HKc

    Main show kick-off [00:26:51]



    Panel: Dr. Tim Scarfe, Dr. Keith Duggar, Letitia Parcalabescu (https://www.youtube.com/c/AICoffeeBreak/)

    • 2 hr 31 min
    #56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

    It has been over three decades since the statistical revolution took AI by storm, and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, the disappointing progress in conversational agents, NLU, and self-driving cars has made it clear that progress has not lived up to the promise of these empirical, data-driven methods. DARPA has suggested that it is time for a third wave in AI, one characterized by hybrid models - models that combine knowledge-based approaches with data-driven machine learning techniques.

    Joining us on this panel discussion are polymath and linguist Dr. Walid Saba (Co-founder of ONTOLOGIK.AI), Gadi Singer (VP & Director, Cognitive Computing Research, Intel Labs), and J. Mark Bishop (Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London, and Scientific Adviser to FACT360).

    Moderated by Dr. Keith Duggar and Dr. Tim Scarfe

    https://www.linkedin.com/in/gadi-singer/

    https://www.linkedin.com/in/walidsaba/

    https://www.linkedin.com/in/profjmarkbishop/

    #machinelearning #artificialintelligence

    • 1 hr 11 min
    #55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR)

    Dr. Ishan Misra is a Research Scientist at Facebook AI Research (FAIR), where he works on computer vision and machine learning. His main research interest is reducing the need for human supervision, and indeed human knowledge, in visual learning systems. He finished his PhD at the Robotics Institute at Carnegie Mellon University and has done stints at Microsoft Research, INRIA, and Yale. His bachelor's degree is in computer science, where he achieved the highest GPA in his cohort.



    Ishan is fast becoming a prolific scientist, already with more than 3,000 citations under his belt and co-authoring with Yann LeCun, the godfather of deep learning. Today, though, we will be focusing on an exciting cluster of recent papers on unsupervised representation learning for computer vision released from FAIR: DINO (Emerging Properties in Self-Supervised Vision Transformers), Barlow Twins (Self-Supervised Learning via Redundancy Reduction), and PAWS (Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples). All of these papers are hot off the press, officially released only in the last month or so. Many of you will remember PIRL (Self-Supervised Learning of Pretext-Invariant Representations), of which Ishan was the primary author in 2019.
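    Since Barlow Twins comes up repeatedly, here is a compact sketch of its redundancy-reduction objective as described in the paper linked below; the numpy framing, the omitted encoder, and the lambda constant are our own simplifications. Embeddings of two augmented views are standardised per feature and their cross-correlation matrix is driven toward the identity: the diagonal terms enforce invariance to the augmentation, the off-diagonal terms decorrelate the features.

```python
import numpy as np

# Sketch of the Barlow Twins objective (https://arxiv.org/abs/2103.03230):
# push the cross-correlation matrix of two views' embeddings to identity.
def barlow_twins_loss(z1, z2, lam=5e-3):
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)   # standardise per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = z1.T @ z2 / n                              # d x d cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()      # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 64))                     # stand-in embeddings
noise = rng.normal(scale=0.1, size=z.shape)        # stand-in augmentation
print("loss for near-identical views:", barlow_twins_loss(z, z + noise))
```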



    References:



    Shuffle and Learn - https://arxiv.org/abs/1603.08561

    DepthContrast - https://arxiv.org/abs/2101.02691

    DINO - https://arxiv.org/abs/2104.14294

    Barlow Twins - https://arxiv.org/abs/2103.03230

    SwAV - https://arxiv.org/abs/2006.09882

    PIRL - https://arxiv.org/abs/1912.01991

    AVID - https://arxiv.org/abs/2004.12943 (best paper candidate at CVPR'21, just announced over the weekend: http://cvpr2021.thecvf.com/node/290)

     

    Alexei (Alyosha) Efros

    http://people.eecs.berkeley.edu/~efros/

    http://www.cs.cmu.edu/~tmalisie/projects/nips09/

     

    Exemplar networks

    https://arxiv.org/abs/1406.6909

     

    The bitter lesson - Rich Sutton

    http://www.incompleteideas.net/IncIdeas/BitterLesson.html

     

    Machine Teaching: A New Paradigm for Building Machine Learning Systems

    https://arxiv.org/abs/1707.06742

     

    POET

    https://arxiv.org/pdf/1901.01753.pdf

    • 1 hr 36 min
