Sasha Luccioni: Connecting the Dots Between AI's Environmental and Social Impacts
The Gradient: Perspectives on AI

In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.
Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and mentoring under-represented minorities within the AI community.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:43) Sasha’s background
* (01:52) How Sasha became interested in sociotechnical work
* (03:08) Larger models and theory of change for AI/climate work
* (07:18) Quantifying emissions for ML systems
* (09:40) Aggregate inference vs. training costs
* (10:22) Hardware and data center locations
* (15:10) More efficient hardware vs. bigger models — Jevons paradox
* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports
* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs
* (28:22) General vs. task-specific models
* (31:20) Architectures and efficiency
* (33:45) Sequence-to-sequence architectures vs. decoder-only
* (36:35) Hardware efficiency/utilization
* (37:52) Estimating the carbon footprint of BLOOM and lifecycle assessment
* (40:50) Stable Bias
* (46:45) Understanding model biases and representations
* (52:07) Future work
* (53:45) Metaethical perspectives on benchmarking for AI ethics
* (54:30) “Moral benchmarks”
* (56:50) Reflecting on “ethicality” of systems
* (59:00) Transparency and ethics
* (1:00:05) Advice for picking research directions
* (1:02:58) Outro
Links:
* Sasha’s homepage and Twitter
* Papers read/discussed
* Climate Change / Carbon Emissions of AI Models
* Quantifying the Carbon Emissions of Machine Learning
* Power Hungry Processing: Watts Driving the Cost of AI Deployment?
* Tackling Climate Change with Machine Learning
* CodeCarbon
* Responsible AI
* Stable Bias: Analyzing Societal Representations in Diffusion Models
* Metaethical Perspectives on ‘Benchmarking’ AI Ethics
* Measuring Data
* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice


Get full access to The Gradient at thegradientpub.substack.com/subscribe

