100 Episodes

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, and discusses the technological and military implications. Join Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors.


    The Unreasonable Perplexity of Fat Tails

    Andy and Dave discuss an announcement from Exscientia and Sumitomo that they have the first entirely AI-developed drug, which is now entering clinical trials. The Director of the Joint Artificial Intelligence Center, Lt. Gen. Jack Shanahan, has announced his retirement. Senator Michael Bennet sends a scathing letter to the U.S. Chief Technology Officer regarding the office’s recent AI Principles for regulation. DARPA’s Habitus program seeks to automate the process of revealing and using local information, to enhance stability operations in under-governed regions. And the Washington state legislature has at least one facial recognition bill under consideration. Google Research announces its Meena chatbot, which it claims is more Sensible and Specific (a new metric it developed) than the award-winning Mitsuku, though it required 30 days of training on 2,048 tensor processing units. Researchers at the Max Planck Institute for Intelligent Systems and the University of Florence announce a method for fusing deep learning with combinatorial solvers to create a neural network for combinatorial problems. RAND releases a report on Deterrence in the Age of Thinking Machines. Sejnowski pens thoughts on “the unreasonable effectiveness of deep learning in AI.” Taleb takes a detailed look at the statistical consequences of fat-tailed distributions. Maj. Gen. Mick Ryan pens the final (?) part in his trilogy, AugoStrat Awakenings. Fortune publishes a special magazine on the topic of AI. And Andrew Ng and Geoffrey Hinton sit down for a 40-minute chat on deep learning.

    • 34 min.
    Gremlin Pie!

    Happy Pi-cast! Andy and Dave discuss some of the stories that have followed the New York Times articles on Clearview AI, to include Twitter telling the company to stop using its photos, and a consortium of 40 organizations calling on the U.S. government to ban facial recognition systems until more is known about the technology. Meanwhile, London’s Metropolitan Police is rolling out live facial recognition technology. BlueDot says that it used AI and its epidemiologists to send a warning about the Wuhan virus on 31 December 2019, a full week before the US CDC announcement on 6 January 2020. Google releases the largest high-resolution map of the fruit fly’s brain, with 25,000 neurons. DARPA’s Gremlin (X-61A) drone system makes its first test flight. And the Guinness Book of World Records recognizes Stephen Worswick as the most frequent winner (5 times) of the Loebner Prize, for his Mitsuku chatbot. In research, Facebook AI achieves near-perfect (99.9%) navigation without needing a map, testing its algorithm in its AI Habitat. Robert J. Marks makes The Case *for* Killer Robots. The Brookings Institution’s Indermit Gill predicts that the AI leader in 2030 will “rule the planet” until at least 2100. The ACT-IAC releases an AI Playbook, with step-by-step guidance for assessment, readiness, selection, implementation, and integration. Jessica Flack examines the Collective Computation of Reality in Nature and Society. Google’s Dataset Search is out of beta. And DoD will be holding its East Coast AI Symposium and Exposition on 29 and 30 April in Crystal City.
    Click here to visit our website and explore the links mentioned in the episode. 



     

    • 34 min.
    Private AIs, They’re Watching You

    In a string of related news items on facial recognition, Andy and Dave discuss San Diego’s reported experiences with facial recognition over the last 7 years (coming to an end on 1 January 2020 with the enactment of California’s ban on facial recognition for law enforcement). Across the Atlantic, the European Union is considering a ban on facial recognition in public spaces for 5 years while it determines the broader implications. And the New York Times puts the spotlight on Clearview AI, a company that claims to have billions of photos of people scraped from the web, and that can identify people (and the sources of the photos, to include profiles and other information about the individuals) within seconds. In other news, the JAIC is looking for public input on an upcoming AI study, and it is also looking for help in applying machine learning to humanitarian assistance and disaster relief efforts. In research, Google announces that it has developed a “physics-free” model for short-term local precipitation forecasting. And researchers at DeepMind and Harvard find experimental evidence that dopamine neurons in the brain may predict rewards in a distributional way (with insight gained from efforts in optimizing reinforcement-learning algorithms). Nature Communications examines the role of AI, whether positive or negative, in achieving the United Nations’ Sustainable Development Goals. The U.S. National Science Board releases its biennial report on Science and Engineering Indicators. The MIT Deep Learning Series has Lex Fridman speaking on Deep Learning State of the Art (and as a bonus, Andy recommends a video of Fridman interviewing Daniel Kahneman, author of “Thinking, Fast and Slow”). GPT-2 wields its sword and dashes bravely into the realm of Dungeons and Dragons. And GPT-2 tries its hand at chess, knowing nothing about the rules, with surprising results.
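    For listeners curious what “predicting rewards in a distributional way” looks like concretely, here is a minimal, hypothetical Python sketch (our illustration, not code from the DeepMind/Harvard paper): a small population of predictors, each updated with a different asymmetric learning rate, spreads out across the reward distribution instead of all converging to the mean.

```python
import numpy as np

# Toy population of reward predictors, each with a different "optimism"
# (asymmetric learning rate). With asymmetric updates they settle at
# different points of the reward distribution rather than a single mean --
# the core distributional idea described in the episode.
rng = np.random.default_rng(0)

n_cells = 7                                  # hypothetical "dopamine neurons"
optimism = np.linspace(0.1, 0.9, n_cells)    # weight given to positive errors
values = np.zeros(n_cells)                   # each cell's reward prediction
lr = 0.02

for _ in range(20_000):
    reward = rng.choice([0.0, 1.0, 4.0])     # toy multi-modal reward signal
    errors = reward - values                 # per-cell prediction errors
    # Scale positive errors by optimism, negative errors by (1 - optimism).
    values += lr * np.where(errors > 0, optimism, 1.0 - optimism) * errors

print(np.round(values, 2))                   # predictions fan out across 0..4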
    Click here to visit our website and explore the links mentioned in the episode. 

    • 37 min.
    Xenophobe

    The U.S. government announces restrictions on the sale of AI software for satellite image analysis outside of the U.S. Baidu beats out Google and Microsoft for language “understanding” with its model ERNIE, which uses a technique that it developed specifically for the Chinese language. Samsung unveils NEON, its humanoid AI avatars. The U.S. Department of Defense stands up a counter-unmanned aerial system office. And Google AI publishes an AI system for breast cancer screening, but meets with some Twitter (and Wired) backlash on solving the “wrong problem.” Researchers at the University of Vermont, the Allen Discovery Center at Tufts, and the Wyss Institute at Harvard introduce the world’s “first living robots,” xenobots, constructed from skin and muscle cells of frogs (from designs made with evolutionary algorithms). RAND releases a report with an assessment of, and recommendations for, the DoD’s posture for AI. AI for social good (AI4SG) releases its survey of research and publications on beneficial applications of AI. Daniel Dennett explores the question of whether HAL committed murder, in a classic 1996 essay. From the Bengio and Marcus debate, both reference Daniel Kahneman’s “Thinking, Fast and Slow.” And Robert Downey Jr. hosts a YouTube series on The Age of AI.
    Click here to visit our website and explore the links mentioned in the episode. 


     

    • 41 min.
    Fakers of the Lost Architecture

    Andy and Dave discuss a new White House proposal on Principles for AI Regulation. A NIST study examines the effects of race, age, and sex on facial recognition software and identifies a variety of troubling issues. Facebook removes hundreds of accounts with AI-generated fake profile photos, and Facebook also bans the posting of deepfake videos (with some caveats). And Finland is making its online AI course available to the rest of the world. In research, Uber AI Labs offers a novel approach to accelerating neural architecture search by learning to generate synthetic training data, though the scientific community doesn’t think the findings are quite ready for publication yet. Researchers at Korea University create an Evolvable Neural Unit (ENU) as a way to approximate the function of an individual neuron and synapse. And researchers at Charité in Berlin show that a single human biological neuron can compute XOR, previously thought not possible. The Institute for Human-Centered AI at Stanford University releases the 2019 Annual Report of its AI Index, examining various trends and research in AI in 2019. The Center for a New American Security releases its full report on A Blueprint for Action in AI. Rafael Irizarry provides an Introduction to Data Science. And the video of the week is the debate between Yoshua Bengio and Gary Marcus on the current and future state of research in AI.
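    As a quick illustration of why the XOR result is notable, here is a short, hypothetical Python sketch (ours, not the researchers’): a brute-force search confirms that no single artificial linear-threshold neuron reproduces XOR, the classic limitation that the biological finding appears to sidestep.

```python
import itertools
import numpy as np

# Brute-force check (illustrative only): no single linear-threshold unit
#   w1*x1 + w2*x2 >= b
# reproduces XOR, which is why a single biological neuron computing XOR
# is a surprising result.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_truth = [0, 1, 1, 0]

grid = np.linspace(-2.0, 2.0, 41)            # candidate weights and thresholds
solved = False
for w1, w2, b in itertools.product(grid, grid, grid):
    outputs = [int(w1 * x1 + w2 * x2 >= b) for x1, x2 in inputs]
    if outputs == xor_truth:
        solved = True
        break

print("single linear unit computes XOR:", solved)   # False: XOR is not linearly separable
```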
     

    • 38 min.
    Hit the Wall: Do Not Play GO (Part II)

    In research, Andy and Dave discuss a new idea from Schmidhuber, which introduces Upside-Down reinforcement learning, where no value functions or policy search are necessary, essentially transforming reinforcement learning into a form of supervised learning. Research from OpenAI demonstrates a “double descent” phenomenon inherent in deep learning tasks, where performance initially gets worse and then gets better as the model increases in size. Tortoise Media provides yet-another-AI-index, but with a nifty GUI for exploration. August Cole explores a future conflict with Arctic Night. And Richard Feynman provides thoughts (from 1985) on whether machines will be able to think.
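    To make the “reinforcement learning as supervised learning” idea concrete, here is a minimal, hypothetical Python sketch (ours, not Schmidhuber’s code) of the relabeling step at the heart of Upside-Down RL: logged episodes become ordinary training pairs, with the desired return and horizon folded into the input.

```python
import numpy as np

# Minimal sketch (toy data of our own invention) of the Upside-Down RL trick:
# relabel logged episodes into a supervised dataset mapping
# (state, desired return, desired horizon) -> action, so that any ordinary
# classifier can stand in for the policy.
episodes = [
    # each step is (state, action, reward)
    [(np.array([0.0, 1.0]), 0, 1.0), (np.array([0.5, 1.0]), 1, 0.0)],
    [(np.array([1.0, 0.0]), 1, 2.0), (np.array([1.0, 0.5]), 0, 1.0)],
]

inputs, targets = [], []
for episode in episodes:
    rewards = [r for _, _, r in episode]
    for t, (state, action, _) in enumerate(episode):
        desired_return = sum(rewards[t:])     # return actually achieved from step t
        desired_horizon = len(episode) - t    # steps actually remaining
        command = np.array([desired_return, desired_horizon])
        inputs.append(np.concatenate([state, command]))
        targets.append(action)                # the action taken becomes the label

X, y = np.stack(inputs), np.array(targets)
print(X.shape, y.shape)  # fit any classifier on (X, y); at test time, feed the
                         # return and horizon you *want*, instead of estimating values
```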
    Twitter Throwdown: On 23 December, Yoshua Bengio and Gary Marcus will have a debate on the Best Way Forward for AI.

    • 33 min.
