18 episodes

AI Australia by Eliiza is the definitive AI podcast for Australian businesses and the wider AI community. James Wilson and Nigel Dalton interview industry thought leaders and experts about implementing AI in Australian businesses, the ethical implications of AI, and what’s happening in the Australian AI community.

AI Australia Eliiza

    • Technology
    • 5.0 • 2 Ratings


    Banking In The Age of AI, with Dan Jermyn

    Dan Jermyn joins us today to share his experiences as the head of the biggest AI and Data Science practice in Australia. He explores the criticality of thought diversity in AI development, the need to break the status quo regularly, and the importance of regulations to customer safety. He also shares his insights about building effective data science teams as well as navigating politics to create AI that, first and foremost, serves a purpose with defined ethics and explainability even within complex models. 
    And, of course, Dan will tell us which sci-fi universe he envisions will most closely resemble our own future. Tune in to find out all of this and more about the remarkable projects and progress Dan has contributed to AI and machine learning in Australia. 
    Bio:
    Dan Jermyn is Chief Decision Scientist at the Commonwealth Bank of Australia, which reported 7.5 million digitally active customers as of 10/02/2021.
    Dan is an experienced leader in both technology and data science, with an established record of building award-winning, global teams in digital, big data and customer decisioning. Dan joined the Commonwealth Bank of Australia in 2017, where he has responsibility for delivering great customer experiences and innovative new solutions through data science.
    Dan has also worked as a strategy consultant and then Head of Analytics for an agency in the UK as well as co-founding a successful digital technology startup. Prior to his current role, he held various positions at the Royal Bank of Scotland including Head of Digital Analytics and Head of Big Data & Innovation.
    0:30 Has the foundation of banking changed over the last five or so decades? While banks still look out for the financial wellbeing of the communities they serve, the way we interface with our banks has changed—our bankers rarely know our financial dreams and wellbeing—an issue data scientists seek to resolve. 
    3:00 What will be the value proposition of machine learning to banking in the year 2050? How have futuristic technologies like robotics already changed the banking experience? Dan shares how robotics in banking has allowed frontline staff to derive more meaning from their work as well as how we can expect this trend will grow. 
    5:40 What is the state of the ecosystem for fintechs? Are small disruptors still able to break into the field alongside big players? Do different banking cultures have a competitive advantage? Does diversity of thought play a role?
    7:30 On the topic of banking regulations—is it friend or foe to the Big Four? Dan discusses customer safety and banking’s contributions in supporting the critical infrastructure to make Australia a leading digital economy.
    9:15 Dan describes the organization's philosophy of developing machine learning and AI for banking purposes. Is there a roadmap for the best route for using AI and ML for service and customer solutions? Dan discusses with us a moderated approach that keeps purpose at the forefront of AI development—that end products must have a use case.
    12:15 What are the most valuable lessons Dan has learned about building and scaling a data science team? He shares with us the criticality of diversity that extends beyond culture and gender into diverse experience, including people who were once first-line banking employees. 
    15:00 How do you navigate complex environments with deep issues such as data governance if you’re trying to advance your data science career? Is there an imbalance in the market as it pertains to data science talent? Dan discusses the benefit of hiring people fresh out of college who will break the status quo.
    23:00 Dan discusses explainable AI in the context of innovative banking and the purpose and benefit of having a global ethical AI toolkit. 
    27:00 How does Dan see academia and industry working together going forward? Dan explains the importance of independent verification as well as symbiotic education—how members of industry can

    • 42 min
    CAIDE - Amplifying the Australian AI Conversation, with Jeannie Paterson and Tim Miller

    Jeannie Paterson and Tim Miller join us from CAIDE to discuss AI, ethics, accountability and explainability. What is changing in these spaces, especially in light of Covid-19, and what should researchers and governments turn their attention to in order to ensure helpful, beneficial AI that is used ethically?
     
    More About Our Guests: 
    Jeannie Paterson & Tim Miller are the co-directors of the Centre for AI and Digital Ethics, a new collaborative, interdisciplinary research, teaching and policy centre at the University of Melbourne involving the faculties of Computing and Information Systems, Law, Arts and Science. Jeannie specialises in the areas of contracts, consumer rights and consumer credit law, as well as the role of new technologies in these fields. Jeannie’s research covers three interrelated themes:
    • Support for vulnerable and disadvantaged consumers
    • The ethics and regulation of new technologies in consumer markets
    • Regulatory design for protecting consumer rights and promoting fair, safe and accountable AI
    Jeannie is co-leader of the Digital Ethics research stream at the Melbourne Social Equity Institute, an interdisciplinary research institute focused on social equity and community-led research.
    Tim is an associate professor in the School of Computing and Information Systems at The University of Melbourne. His primary area of expertise is artificial intelligence, with particular emphasis on:
    • Human-AI interaction and collaboration
    • Explainable Artificial Intelligence (XAI)
    • Decision making in complex, multi-agent environments
    • Reasoning about action and knowledge


    In this episode we investigate:
    • Tim discusses why the centre publishes so frequently and the importance of public outreach, especially in times of crisis such as Covid-19.
    • How did the centre come about? How did Jeannie and Tim become involved with it? Is the centre independent? Is it a collaborative effort between the several similar centres across Australia, such as the 3AI centre in Canberra? Tim discusses how the centres work together as well as how they differ in vision and approach.
    • Jeannie expresses the importance of an interdisciplinary approach and their dedication to this. Which subjects will they launch in semester two? Jeannie lets us know about subjects such as AI Ethics & Law, which discusses how law responds to ethical dilemmas.
    • What expertise does Jeannie have regarding law and the impacts of technology on consumers?
    • What are counterfactuals? Jeannie answers, “What would you ask the machine when you receive a particular mortgage recommendation?” How do counterfactuals help scrutinise the basis of the decision to see if it is valid? How does this remove systematic bias and prejudice?
    • What are the new trends in explainable AI? Tim also delves into counterfactuals as well as cognitive psychology and cognitive science. How do you generate counterfactuals that are realistic? What do those look like? Tim expresses that the human factor in explainability is becoming increasingly important.
    • Tim discusses the impacts of Covid-19 on conferences and networking in this space now that everything is virtual. How does it make things less connective and enticing?
    • Jeannie answers, “What advice do you offer your family and friends regarding the Covid Safe app?” She delves into privacy and security as compared to the benefits and effectiveness of the app. Has the Covid Safe app set a precedent for privacy in Australia?
    • Tim discusses how a contact tracing app was used by law enforcement to understand who was at a Black Lives Matter event and why this means there should be legislation surrounding apps such as this and how the data can be used.
    • What scale of discouragement will it require to make a difference to Australian businesses? Is money the answer? Tim discusses why we must educate consumers about data collection and privacy, in part by exemplifying what information about them can be discerned from the data they share. How

    • 47 min
    Do Robots Have Values? With Professor Jon Whittle

    Today on AI Australia, we have the opportunity to talk to Professor Jon Whittle of Monash University about the impacts, both good and bad, that data science is having on the world around us. As co-author of a series of published and soon-to-be-published papers in the fields of software development, ethics, and values, Jon is well placed to talk to us today about the heightened risks and opportunities that the development of data-science-based systems brings to our world.
     
    About Professor Jon Whittle:
    Professor Jon Whittle is the Dean of the Faculty of Information Technology at Monash University. 
    Jon is a world-renowned expert in software engineering and human-computer interaction (HCI), with a particular interest in IT for social good. In software engineering, his work has focused around model-driven development (MDD) and, in particular, studies on industrial adoption of MDD. 
    In HCI, he is interested in ways to develop software systems that embed social values. Jon's work is highly interdisciplinary. As an example, he previously led a large multidisciplinary project with ten academic disciplines looking at how innovative digital technologies can support social change. In 2019 Monash launched the Data Futures Institute, which we will find out more about today. 
    Before joining Monash, Jon was a Technical Area Lead at NASA Ames Research Center, where he led research on software for space applications. He was also Head of the School of Computing and Communications at Lancaster University, where he led eight multi-institution, multi-disciplinary research projects. These projects focused on the role of IT in society and included digital health projects, sustainability projects and projects in digital civics.
    2:30 How is Monash busting the old model of disciplinary siloed schools and turning toward a passion for multidisciplinary studies? How does the intersection of fields lead to progress, especially in software engineering and AI?
    5:00 How do you go about raising awareness of these types of social, ethical, and psychological aspects of interdisciplinary work and studies with a deeply technical audience who are focused on their tools and being the best they can in their particular arena? What role do universities play in creating engineers who will consider ethics and values in their software products?
    8:00 What are values? Jon gives us a long and short answer that can include everything from social responsibility to hedonism to inclusion. What role do social scientists play in how we understand values? Jon discusses Schwartz’s Ten Universal Values, corporate values, and the difference between ethics and values—as well as the ability for them to contradict one another. How do values differ by culture, age group, and other demographic factors?
    14:00 Who gets dominance in a software application—who chooses the values that underpin the software? How do we take the implicit aspect of values in software and turn it into an explicit process? Jon discusses the maturity scale of companies and their corporate values and whether or not this impacts design decisions.
    19:00 Jon discusses the impact of corporate values on software development and real-world cases of ethical issues that have arisen due to software. This includes everything from parole re-offender predictions and priority one-day shipping to self-harm on Instagram.
    23:00 Are values at the root of algorithmic bias? Different groups experience products and services differently, especially if the data and ideas going into them are heavily biased. 80% of AI professors are male—how does this influence the way systems are designed? Are our programs today working to increase diversity in AI and software development and, even when those fields are diverse, what difference will it make? Jon proposes having someone on the team specifically ask questions about values during the design process.
    29:00 Will we ever get to an empirical state where

    • 43 min
    Digital Rights in the Age of AI

    In today’s episode, Lizzie O’Shea discusses the great power of data and AI — and how we can use them to empower people rather than oppress them. She discusses which technologies should be off-limits, compares data policies around the world, and proposes a code of ethics for engineers building these influential technologies. Lizzie probes who holds the power of AI and data, who should be responsible for ethics in this realm — corporations or the government — and which is better equipped to do so. Lizzie raises important questions about privacy concerns in our digital lives and even poses the question — do machines already rule the world?

    About Lizzie O’Shea: 
    Lizzie is a lawyer, writer, and broadcaster. Her commentary is featured regularly on national television programs and radio, where she talks about law, digital technology, corporate responsibility, and human rights. In print, her writing has appeared in the New York Times, Guardian, and Sydney Morning Herald, among others. 
    Lizzie is a founder and board member of Digital Rights Watch, which advocates for human rights online. She also sits on the board of the National Justice Project, Blueprint for Free Speech and the Alliance for Gambling Reform. At the National Justice Project, Lizzie worked with lawyers, journalists and activists to establish a Copwatch program, for which she was a recipient of the Davis Projects for Peace Prize. In June 2019, she was named a Human Rights Hero by Access Now. 
    As a lawyer, Lizzie has spent many years working in public interest litigation, on cases brought on behalf of refugees and activists, among others. She was proud to represent the Fertility Control Clinic in their battle to stop harassment of their staff and patients, as well as the Traditional Owners of Muckaty Station in their successful attempt to stop a nuclear waste dump being built on their land.
    Lizzie’s book, Future Histories, looks at radical social movements and theories from history and applies them to debates we have about digital technology today. It has been shortlisted for the Premier’s Literary Award.
    In this episode we cover the following topics:
    4:00 How does the modern day compare to decades past as it pertains to rights—is technology a force for good? How can we take back the power of technology to benefit humanity?
    8:00 How can we manage AI and digital technology in a more intentional way? How are automated processes already determining the course of many people’s lives? Lizzie explains how the future when machines take over is, in many ways, already here. Should technology be regulated in order to help solve problems, and what problems have already occurred?
    16:00 Lizzie discusses the state of regulation across the world, including GDPR and New York’s data fiduciary law. Should we move beyond contractual ideas of privacy?
    18:00 Lizzie explains her stance on facial recognition. Should facial recognition be limited in the same way as chemical warfare—a line that is not to be crossed? How can facial recognition technology be oppressive, and what can you do to protect yourself?
    22:00 Is the social credit system in China far-fetched in the West? Lizzie discusses the modern surveillance state.
    26:00 How does technology mirror power structures in the analog world? Lizzie discusses predictive policing technology and the biases that exist within it.
    31:00 Should we create a code of ethics for engineers developing these technologies? What practical things could an engineer do if a project’s implications make them uncomfortable?
    38:00 Lizzie discusses the influence of large companies, social media, and why some issues they face are better suited to politics than corporations.
    46:00 We converge to talk about the politics behind data and AI, the need to educate our regulators, and speaking with our younger generations who will one day create the rules surrounding the tech that rules the world.

    • 50 min
    Finding AI’s True Potential for Humans Around the World With Kriti Sharma

    Kriti Sharma, featured in the Forbes 30 Under 30 in Technology list, is an artificial intelligence technologist, business executive, and humanitarian. She has been a part of Google India’s Women in Engineering, a United Nations Young Leader, and a Fellow of the Royal Society of Arts. In 2018, Kriti launched rAInbow for domestic abuse victims in South Africa. She is the founder of AI For Good UK as well as an advisor to the United Nations Technology Innovation Lab. Previously, Kriti was VP of Artificial Intelligence at Sage.
    Today, we are lucky to have her with us to discuss the future of AI and to answer an important question—what if disadvantaged groups don’t have a say in the AI tech we create? How can the world’s governments join forces from a regulatory perspective in order to create AI that benefits humanity as a whole? What types of social change and humanitarian impacts intersect with AI—how do we do bigger, better things that incorporate the human element? And how does GDPR play a role?
    On this episode we discuss:
    2:00 Kriti discusses her background, including her introduction to machine learning and robotics with a robot she created when she was 15. She also discusses the purpose of the Centre for Data Ethics and Innovation and the types of problems they focus on within the group. What do these problems mean from a regulatory point of view and in everyday life?
    6:00 How do we get governments around the world to engage in more collaboration? Rather than become a “leader” in AI, why don’t we use the combined power of joining forces? How does being in London alter her view of AI around the world and why did she choose to work there?
    9:00 Is employment a driver for concern in the markets that Kriti engages with? Because AI is at the height of its hype cycle, how does that impact AI in business moving forward? Is the alarmist narrative surrounding job automation valid? Kriti discusses how women are expected to lose twice as many jobs to automation as men, as well as the dark side of new job creation.
    14:00 Kriti discusses the importance of diversity in AI and how it can bring focus to actual usefulness as well as potential misuse cases that can arise. How has Kriti encountered challenges and mistakes on this front?
    17:00 Where is the intersection between AI and climate change and how does this relate to the youth classes Kriti has taught? What social issues do the young people Kriti teaches care about?
    21:00 Kriti discusses rAInbow for women in South Africa, where 1 in 3 women face domestic abuse—a figure that is the same for Australia. Why are women not reporting their abuse? What other women’s issues and diversity issues intersect with AI?
    28:00 What are Kriti’s thoughts on GDPR? Is it working?

    • 32 min
    OVIC: Closer to the Machine

    On this episode of AI Australia, we’re changing things up a little. With James overseas, Nigel and his friend Sarah Turner from REA Group hosted multiple panel conversations with some of Australia’s best and brightest in the field of AI, computer science, mathematics, and regulation to discuss the launch of OVIC’s new book: Closer to the Machine.
    OVIC is the Office of the Victorian Information Commissioner and is the primary regulator and source of independent advice to the community and Victorian government about how the public sector collects, uses and shares information.
     
    In this episode, we’re lucky to be joined by:
    • Sarah Turner (co-host) - General Counsel, REA Group
    • Adam Spencer - comedian, media personality and former radio presenter
    • Rachel Dixon - Privacy and Data Protection Deputy Commissioner, OVIC
    • Professor Toby Walsh (University of New South Wales and CSIRO’s Data61)
    • Professor Richard Nock (Australian National University and CSIRO’s Data61)
    • Associate Professor Ben Rubinstein (University of Melbourne)
    • Katie Miller (Independent Broad-based Anti-corruption Commission)
    • Professor Margaret Jackson (RMIT)
    In this episode we discuss:
    • The degree to which we all take for granted how big a part AI plays in our lives
    • The rate of improvement in algorithms in their narrow fields of “expertise”. We discuss how quickly a chess-playing AI went from basic to beating the chess grandmaster Garry Kasparov
    • OVIC’s motivation for publishing the book on data privacy and protection
    • Grappling with the implications of how AI systems can be misused, or easily breached from a security standpoint. Do we continue to push the boundaries in the face of privacy risks and concerns? Do we pull back?
    • The challenge of discrimination. Eventually, machines will need to make decisions that “discriminate” against people in one way or another - but there is such a thing as good discrimination and bad discrimination. Who gets to make those definitions?
    • What role should government play in the regulation (or non-regulation) of AI?
    • Accountability. What happens when AI “goes rogue”? Where does the buck stop?
    • How the conversation has evolved over the years, and become more “realistic” in a sense.
    Links mentioned:
    OVIC: Closer to the Machine book

    • 34 min
