AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, and discusses the technological and military implications. Join Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.
The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors.
All Good Things
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February, but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle, we began this podcast 6 years ago with the story of AlphaGo beating the world champion. So we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to 1, having discovered a “not super-difficult” method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
Up, Up, and Autonomy!
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School’s Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, to include a discussion about the DoDD 3000.09 update and the high-altitude balloon incident.
Andy and Dave discuss the latest in AI news and research, starting with an education program that teaches US Air Force personnel the fundamentals of AI, tailored to three audiences: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department’s approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat posed by ChatGPT and other AI technology. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research-paper abstracts that pass plagiarism checkers, and that human reviewers were able to correctly identify only 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. CheckPoint Research demonstrates how cybercriminals can use ChatGPT for nefarious exploits (including people without any experience in creating malicious tools). Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone’s voice with only three seconds of sample input. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI- and autonomy-related information from Russia as well as Ukraine.
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Stanford’s Institute for Human-Centered AI that assesses progress (or lack thereof) in implementing the three pillars of America’s strategy for AI innovation. The Department of Energy is offering a total of $33M for research in leveraging AI/ML for nuclear fusion. China’s Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation on “deepfakes,” requiring users to give consent and prohibiting the technology for fake news, among many other things. Xiamen University and other researchers publish a “multidisciplinary open peer review dataset” (MOPRD), aiming to provide ways to automate the peer-review process. Google executives issue a “code red” for Google’s search business over the success of OpenAI’s ChatGPT. New York City schools have blocked student and teacher access to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to “polish” writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, showing a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in large language models that have a sufficient number of parameters and a large enough dataset. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of coevolving machines and the origins of the social universe.
The Kwicker Man
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of “AI” and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but to four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate response from a “No Killer Robots” campaign, the board passed a revised version of the policy in early December that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a “drone-based loitering munition” that can carry lethal or non-lethal payloads and appears to have many functions similar to the “slaughterbots,” except for autonomous targeting. Neuralink shows the latest updates on its research on putting a brain-chip interface into humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story that we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily banned users from sharing responses generated by ChatGPT, because the output of the algorithm might look good but has “a high rate of being incorrect.” Researchers at the Weizmann Institute of Science demonstrate that, with a simple neural network, it is possible to reconstruct a “large portion” of the actual training samples. NOMIC provides an interactive map to explore over 6M images from Stable Diffusion. Steve Coulson creates “AI-assisted comics” using Midjourney.
Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week from Ricard Sole at the Santa Fe Institute explores mapping the cognition space of liquid and solid brains.
Andy and Dave discuss the latest in AI news and research, including the introduction of a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of the biometric data of Texans without their consent. DARPA flies the final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael’s DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines large language models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts the structures of some 600 million proteins, “mostly unknown.” And Meta also releases (then takes down due to misuse) Galactica, a 120B-parameter language model for scientific papers. In a similar, but less turbulent, vein, Explainpaper provides the ability to upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes online for free Data Science and Machine Learning: Mathematical and Statistical Methods, a thorough text at the upper-level undergraduate or graduate-school level. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, with a video on the cybernetic brain, and the book of the same name, published in 2011.
Informative, stimulating and well researched
Interesting interviews (Xenobots!) with analysts and AI researchers, and extremely useful resources for the practitioner, enthusiast, or policy-maker (AI links and other resources, book reviews, and gimlet-eyed analysis and clear explanations of the latest AI research and its potential societal and military implications).
Definitely worth “the price of admission”.
One of my favorites
Whether you’re a practitioner or just curious, this is a really fantastic podcast with tons of great resources. Thanks Andy!
As an actual practitioner, I like this because it covers actual papers and stays away from enthusiast hype and crazy speculation. Just regular discussion of developments in the field.