AI with AI: Artificial Intelligence with Andy Ilachinski (CNA)
100 episodes

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, and discusses the technological and military implications. Join Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors.


    No Time to AI

    Andy and Dave discuss the latest in AI news, starting with the US Consumer Product Safety Commission's report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets and directs the JAIC to begin work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce's Center for Global Regulatory Cooperation outlines concerns with the European Commission's newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, producing a map of more than 15,000 camera locations. The Royal Navy uses AI at sea for the first time against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave to discuss their latest report, a comprehensive look at the AI ecosystem in Russia, including its policies, resourcing, infrastructure, and activities.

    Click here to visit our website and explore the links mentioned in the episode.

    • 36 min
    Someday My ‘Nets Will Code

    Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20
    To RSVP contact Larry Lewis at LewisL@cna.org.
    Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council's Panel of Experts, which notes the March 2020 use of the "fully autonomous" Kargu-2 to engage retreating forces; it is unclear whether anyone died in the incident, and many other important details are missing. The Biden Administration releases its FY22 DoD budget, which increases the RDT&E request and includes $874M for AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI provides an open-source alternative to GPT-3, called GPT-Neo, which trains on an 825GB dataset (the "Pile") and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, publishing its findings in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as "code generators," creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo can pass approximately 15% of introductory problems, with GPT-3's 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off its biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which identifies the known issues in autonomous systems that cause problems. The short story of the week comes from Asimov in 1956, with "Someday." And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, AI, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown's Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, "Requirements for Leveraging AI."
    Interview with Diana Gehlhaus: 33:32
    Click here to visit our website and explore the links mentioned in the episode.

    • 45 min
    Just the Tip of the Skyborg

    Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20
    To RSVP contact Larry Lewis at LewisL@cna.org.
    Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force's Skyborg autonomy core system. The UK Office for AI publishes new guidance on automated decision-making in government, the Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Committee of the Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two bills to improve US AI readiness, the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research and is on track to lead in the top 1% of most-cited papers by 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that automatically discovers conserved quantities from the trajectory data of unknown dynamical systems. Researchers enable a paralyzed man to "text with his thoughts," reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is now available for open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI.
    Click here to visit our website and explore the links mentioned in the episode.

    • 34 min
    Rebroadcast: A.I. in the Sky

    Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and of crafting definitions that enable meaningful global discussion on AI.

    • 36 min
    Doggone

    Andy and Dave discuss the latest in AI news, including a new AI website from the White House, AI.gov, which provides a variety of resources on recent reports, news, key US agencies, and other information. The U.S. Navy destroys a surface vessel using a swarm of drones (in combination with other weapons) for the first time. The NYPD announces the retirement of its Boston Dynamics robot dog (Digidog) due to negative public reaction to its use. The French Defence Ministry releases a report on the integration of autonomy into lethal weapon systems. A paper in npj Digital Medicine examines the use of decision aids in clinical settings. Matt Ginsberg (along with the Berkeley NLP Group) develops Dr. Fill, an algorithm that won this year's American Crossword Puzzle Tournament with three total errors. And the University of Glasgow publishes research on using return echoes over time to render a 3D image of an environment. Researchers use MRI and machine learning to identify brain-activation configurations for 12 different cognitive tasks. Facebook AI Research, Inria, and Sorbonne University publish research on emerging properties of self-supervised vision transformers, including the ability to segment objects with no supervision or segmentation-targeted objectives. Florian Jaton publishes The Constitution of Algorithms: Ground-Truthing, Programming, Formulating, which examines how algorithms come to be. Melanie Mitchell publishes the paper Why AI Is Harder Than We Think. And UneeQ creates a Digital Einstein for people to interact with.
    Click here to visit our website and explore the links mentioned in the episode.

    • 39 min
    Superhumans

    Andy's out this week, but Dave recently had a chance to do a series of interviews on a paper he wrote, Superhumans: Implications of Genetic Engineering and Human-Centered Bioengineering. So this week's podcast features a rebroadcast of Dave's interview on Titillating Sports. A big thanks to Rick Tittle and Darren Peck of the Sports Byline USA Network for conducting the interview and allowing us to share it. Rick and Dave discuss the latest and greatest in genetic engineering and human-centered technology and talk about some of the near-term and far-term implications.

    Report: https://www.cna.org/CNA_files/PDF/Superhumans-Implications-of-Genetic-Engineering-and-Human-Centered-Bioengineering.pdf

    Titillating Sports Podcast: https://podcasts.apple.com/us/podcast/titillating-sports-with-rick-tittle/id1451555608

    • 15 min
