Pioneer Park Podcast

Alignment, risks, and ethics in AI communities with Sonia Joseph

Check out our interview with Sonia Joseph, a member of South Park Commons and researcher at Mila, Quebec's preeminent AI research community.

Topics:

- India's Joan of Arc, Rani of Jhansi [[wiki](https://en.wikipedia.org/wiki/Rani_of...)]
- Toxic culture in AI
- The Bay Area cultural bubble
- Why Montreal is a great place for AI research
- Why we need more AI research institutes
- How doomerism and ethics come into conflict
- The use and abuse of rationality
- Neural foundations of ML

Links:

Mila: https://mila.quebec/en/
Follow Sonia on Twitter here: https://twitter.com/soniajoseph_
Follow your hosts:
John: https://twitter.com/johnvmcdonnell
Bryan: https://twitter.com/GilbertGravis
And read their work:

Interview Transcript

Hi, I'm Bryan and I'm John. And we are hosting the Pioneer Park Podcast where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology. And we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.

John: Okay, so today I'm super excited to invite Sonia onto the podcast. Sonia is an AI researcher at Mila, Quebec's AI institute, and co-founder of Alexandria, a frontier tech publishing house. She's also a member of South Park Commons, where she co-chaired a forum on AGI, which just wrapped up in December.

We're looking forward to the public release of the curriculum later this year. So keep an eye out for that. Sonia, welcome to the podcast.

Sonia: Hi John. Thanks so much for having me. [00:01:00] It's a pleasure to be here.

Bryan: Yeah, welcome.

Sonia: Hi, Bryan.

Bryan: Yeah, so I guess for full transparency, John and I were both attendees of this AGI forum.

And I was awaiting each week's session with bated breath. I thought the discussions in the forum were super interesting. There were a bunch of really prominent, interesting guests that came through. And yeah, it was a really interesting intersection of practical questions with sci-fi.

And a lot of things that used to be sci-fi are getting far more practical than perhaps we ever anticipated.

John: All right. So Sonia, I feel like the question that's on everyone's mind is: who is the Rani of Jhansi?

Sonia: Oh my gosh. Yeah. Yeah. So basically, I grew up on a lot of Indian literature and Indian myth.

And she's considered to be India's Joan of Arc. As a female leader, she has a place in feminist scholarship if you look at the literature. And I [00:02:00] believe she led a rebellion of India against the British. I actually want to fact-check that.

John: Yeah, no, that's really cool. We loved the recent blog post that you worked on with S, and you pointed out how these kinds of influences really enabled you to succeed at your current endeavors.

So we're just curious about how your background made you who you are.

Sonia: Yeah. Yeah. No, I appreciate that question a lot. I would say I had a kind of culturally schizophrenic background in some ways, where I spent a lot of time in India when I was a child, but the other half of my life was in Massachusetts.

That was a lot of Protestantism and growing up on a lot of American history. So I saw things through a collision of various cultures and religions, and that has very much impacted my entry into AI and how I'm conceiving of AI.

John: Yeah. Something that we loved about the AGI forum is that you have this [00:03:00] really critical eye towards the culture of the way that AI is practiced and the way that research is going forward.

I think you really brought this kind of unique perspective that was super valuable.

Bryan: Yeah, I'm curious: are there any points at which you think there are current problems, either in the way that research is being done or in the moral framework in which that research is being done?

Sonia: It's a really interesting question. I would say the AI world is very big, first of all, so it's hard to critique the entire thing. But parts of it have some of the problems that physics had in the 1990s, or still has, in being male-dominated or focused on certain cultures.

And the culture will generate a certain type of research. So your scientific conclusions and the community or culture you're in have this reciprocal relationship. For example, there's this amazing book called The Trouble with Physics, by Lee [00:04:00] Smolin, that goes into the anthropology of the physics community.

In the 1990s, the physics community was deeply obsessed with string theory. If you weren't working on string theory, you just weren't cool at all, and you probably weren't going to get tenure track. The book goes into how string theory wasn't empirically proven. It was mathematically, internally consistent, but it was by no means a theory of everything.

And the monoculture of physics and the intellectual conclusion of string theory would feed off each other in this cycle. Lee Smolin basically created his own institute to deal with this problem, because he got very frustrated.

I don't think AI is quite so bad. But there are pockets of AI where I do notice similar dynamics. In particular, the parts of AI that were previously more influenced by effective altruism and LessWrong, in the AI safety and alignment camp. I don't think these fields have as bad a problem anymore.

There have been recent attempts, call them the reform attempts, that Scott Aaronson had a very great blog post on, about how AI safety is being reformed. There's an attempt to make AI safety a legitimate science that's empirically grounded and has mathematical theory. But I did notice that more classical AI safety definitely had these 1990s-style string theory problems, both in the science being not empirically verified but dogmatic, and in the community that was generating it not being fairly healthy. And I guess with the caveat, I'll say I have been either adjacent to or in these communities since I was basically twelve.

So I have seen a very long history. And I also don't mean to unilaterally critique these communities. I think they have done a lot of good work and given a lot of contributions to the field, in terms of frameworks, talent, and funding. But I am looking at these communities with a critical eye as we move forward.

Because the question is what is coming next, both as a scientific paradigm and as the research community that generates that paradigm.

Bryan: I'm curious. To me there seem to be two issues; I don't know if they're orthogonal. One is the scientific integrity of a community and the ability of that community to [00:07:00] generate and falsify hypotheses. The other is the culture of that community, and whether or not that culture is a healthy culture to be in, whether it's a nice place to work, and all that sort of stuff. And I guess my hypothesis is that none of us want to work in a shitty culture, and none of us want to be part of communities where insults or abusive behavior is tolerated at all.

But I think that a lot of scientific communities can be interpreted as quite dogmatic, because there's an insistence on a specific intellectual lens that you need to adopt to participate in the discussion. And for me, it always seems like there's a balance there.

Because, for instance, if you want to be a biologist, you'd better accept evolution. You have to meet that criterion. And I'm curious, do you think that, for instance, is there some sort of almost intellectual kowtowing, or basically a tip of the hat, that one needs