In this latest episode of Recordkeeping Roundcasts, we talk to Ellen Broad, author of Made by Humans: The AI Condition, about the way that rapidly advancing technologies like artificial intelligence and machine learning are being deployed in business, government and society, and the wide-ranging implications of their adoption. Ellen and the Roundtable’s Cassie Findlay discuss real-world results flowing from machine decision making, accountability for the use of these systems, the role of recordkeeping and archives, and changing perceptions of privacy in the data economy.
Made by Humans is available from Melbourne University Press, or from all the big online book retailers. You can read Ellen’s bio after the transcript.
Transcript
CF: I am thrilled to be talking to Ellen Broad today. Ellen is the author of Made By Humans: The AI Condition. There’s a link available from the site too, with information about the book and where to buy it. I heartily recommend it. It’s very timely. It looks at the ethics and the societal implications of what seems like an ever-increasing gallop towards the use of artificial intelligence and machine learning in so many different aspects of our lives. And of course, for recordkeeping people like myself, and hopefully the people who follow this series of recordings, there are lots of things about the adoption of this technology that make us think about what our jobs are and potentially what our jobs might be in the future, from keeping evidence of the way that these systems are set up through to trying to promote accountability around decisions to deploy the technology. So, first of all, hello Ellen, and thank you very much for joining me.
EB: Thank you very much for having me.
CF: So I’ve got three questions. And I think the first one is probably an opportunity for you to help our listeners just understand what it is we’re talking about a little bit because not all of us come at this with lots of background knowledge. And so, the first of my three questions is, in your book, you talk about how we seem to be moving from lower stakes AI systems to higher stakes AI systems. I was just wondering if you could explain a little bit more about what you mean by that and give us some examples of those higher stakes systems.
EB: Sure. So perhaps it’s worth walking it back a little bit to talk about the context in which I refer to lower stakes and higher stakes systems, because the book isn’t about artificial intelligence in general. There are many, many different technologies that sit under that umbrella: virtual reality, robotics, drones. What I’m interested in, in the context of the book, is the increasing use of data to make predictions and decisions about people. And we have started doing this more and more in our day-to-day lives. So the lower stakes systems that I talk about are systems helping us choose what we watch next on Netflix, for example, or the information Google uses to shape what it is that we see in search results.
Even those examples are, in some contexts, not what you would call lower stakes. There’s a lot of discussion, increasing discussion, around how Google shapes the information that we see in the world, for example. But we’re moving from these systems that essentially make individual predictions to us about content (what we might like to watch, what we might like to buy; if we’ve just purchased this book, we might be interested in this one over here) into starting to use the same techniques to make what I would call higher stakes decisions. These are decisions about the kind of person that we might be: whether we’re trustworthy, whether we’re a good fit for a job that we’re applying for, whether we should be accepted to university.
Information

- Published: 6 October 2018 at 00:42 UTC
- Length: 22 min
- Rating: Clean