Pioneer Park Podcast

Bryan

Interviews, conversations, and commentary on the frontier of science and technology gilbertgravis.substack.com

Episodes

  1. The limits of human-derived mathematics with Jesse Michael Han

    08/05/2023

    The limits of human-derived mathematics with Jesse Michael Han

    Pioneer Park interviews Jesse Han, co-founder of Multi AI. Jesse discusses his background, his experience at OpenAI, and his philosophy towards research. He draws inspiration from Alexander Grothendieck's philosophy of listening to the universe and arranging theories accordingly. He also talks about the differences between research and startup thinking, the potential for machines to inspire new algorithms, theories, and results in mathematics, and the use of language models and compute to reduce the risk of misaligned outputs. He believes that language models will become as cheap and accessible as microprocessors, and that the value will go to those who build the software and infrastructure to make them accessible to end users. He recommends that those looking to shift their career in the direction of deep learning and generative AI should work hard, find good mentors, and aim for something that will endure.

    Transcript

    Jesse Han
    ===

    [00:00:00] Bryan Davis: Welcome to Pioneer Park. My name is Bryan Davis, and this is John. Today we're interviewing Jesse Han. Jesse Han is the co-founder of Multi AI, an AI startup based in San Francisco. He holds a PhD in mathematics from the University of Pittsburgh and previously worked as a research scientist at OpenAI. I know Jesse through working for Multi for a few weeks last year, when I was helping out with some of their product launches, and I was thrilled to invite him to talk in a little more depth about his background, his experience at OpenAI, and Multi as it goes forward. Welcome, Jesse.

    Jesse Han: Thanks for having me on the podcast, guys. Thrilled to be here.

    Alexander Grothendieck
    ---

    John McDonnell: Yeah. So Jesse, I wanted to start this off by asking about something on your personal webpage: you have a picture of yourself looking up very thoughtfully at this picture of Alexander Grothendieck. So I was curious what that picture means to you.

    Jesse Han: What that picture means to me. So I just thought that picture of him was really funny, [00:01:00] because it's like this shrine. For context, it hangs in Thang Long University in Vietnam, which was founded by one of the students he mentored when he visited Vietnam during his career. And this was during a time when Vietnam was being bombed, and he wrote some very moving recollections about how he would teach at the university, and then they would all have to dive into an air raid shelter, and they would come out and one of the mathematicians had been hit by the bombs. And that student became a very prominent mathematician in Vietnam, and he had such a lasting influence that they made this shrine in honor of him. There's this nine-foot-tall portrait of him there. And I just thought it would be funny to pose kind of like Adam and God.

    But the other thing is that I take a lot of inspiration from his philosophy towards research. There's a saying he was famous for, which I think is still relevant for people working in startups today or trying to run a company. I'm paraphrasing, but he says that the mark of a good researcher is someone who listens very carefully [00:02:00] to the voices of things. They try to listen to what the universe is trying to tell them about its structure, and they arrange their understanding and their theories and what they're doing accordingly.

    And I think similarly, when you're trying to build something, when you're trying to do something new, you have to listen to what the world is telling you. You have to listen to what the market is telling you and build accordingly. I hope that was philosophical enough for you.

    John McDonnell: Yeah, I love it. There's this guy David Whyte, who's a poet, and he has this concept that he likes to incorporate into poetry: that life should be a conversation between you and the world, and a really meaningful life or a great life is one where that conversation is really effective and goes both ways. And so that really reminds me of that.

    Jesse Han: Yeah, totally. It's a reminder to be open to what the world is telling [00:03:00] you. And I think that's really important to remember as you go heads-down and try to make something happen in a startup. You have to be on the lookout for signals that maybe you should be doing something different, or maybe you should be pressing something harder. It's a careful balance that you have to strike.

    Research vs startup thinking
    ---

    Bryan Davis: Do you find that the signals you're listening to, or the incentives present, are different in a research context versus startups? And if so, how so?

    Jesse Han: To be honest, I don't really think they're that different. In research, especially if you're in a high-pressure environment or working in a field that's moving really quickly, like AI, what research looks like is taking a bunch of bets, choosing how to allocate your resources, and figuring out what kinds of unfair advantages you have that might make you unusually capable of capitalizing on the outcomes of some of those bets. And so a lot of the thinking [00:04:00] around what kinds of bets one should take in their career applies equally well to startups, and similarly, thinking around what kinds of activities are useful for startups applies equally well to research. An example is pursuing very high-impact research bets. You could spend a large majority of your career just pursuing incremental advances, which carry less risk and are more likely to be published, but which don't have an enduring legacy in terms of the research activity of others in the field. Or, on the other hand, you can work on something that fundamentally changes the way that people think about some problem inside the field. And that has a far more scalable... so I think a lot of the same thinking applies.

    Jesse's path from research to startups
    ---

    Bryan Davis: And how did you navigate with that perspective? You were previously a researcher, you were a PhD student, you recently finished your PhD, and you've obviously worked in technology prior to launching a startup. [00:05:00] What was your own journey from the research context into deciding to work in industry? Did you ever aspire to be a professor?

    Jesse Han: I did at some point. At some point I was very deeply enmeshed in the pure mathematics world. I was trained as a logician for most of my undergraduate years, and then I spent my master's just studying mathematical logic and model theory. But I think that gradually shifted towards a more ambitious vision, which formed the basis for the research program I pursued in my PhD, partially due to my realization that I probably didn't have what it was gonna take to become a top mathematics researcher.

    I simply didn't have, let's say, the intellectual horsepower. Because there are a lot of very talented people working in math, and it's a super small field. So to really get up there, it's like being a star athlete. You have to train every day, you have to study the work of the masters. [00:06:00] You have to be in the right place at the right time, with the right advisor, working on the exact right field, to make that kind of impact. And towards the beginning of my PhD I came to the realization that the more impactful thing for me to do would be to try to just automate all of mathematics instead. And so I had this grand vision of eventually building some kind of planetary-scale system for automatically searching for mathematical theorem proofs, so that one day human mathematicians would just be the operators of such a machine, whose details and intricacies would be hidden from them, the way an operating system hides most of its complexity from the end user. And that was what drew me towards AI and got me into more industry-adjacent things, because building a system like that requires a lot of engineering skill and some pretty compute-heavy resources. And that kind of brought me into the orbit of people trying to apply the [00:07:00] latest techniques in deep learning to automated theorem proving.

    Automating mathematics
    ---

    Bryan Davis: To anchor a little bit more in the math world: do you ever think we'll reach a point in mathematics, or perhaps are we already there, where we're at the limits of the capacity of human brains to comprehend? And do you think there's a zone in mathematics, in pure math, where machines will begin to inspire, to be the chief creators of new algorithms, new theories, new results?

    Jesse Han: Yeah, I think that's a really interesting question. I think the fields where computers have a large advantage are the really concrete kinds of combinatorics. That's one thing that stands out: subfields of discrete mathematics, places where computation is really the main way to see how phenomena occur. For example, if you're studying the dynamics of Conway's Game of Life, [00:08:00] or say you're studying cellular automata generally, then running computer simulations is probably the best way to gain a good understanding of what's going on with any of the phenomena happening there. But on the other hand, if you're working in more abstract fields that require a large tower of definitions, say algebraic geometry, then the computer-based foundations get a bit more shaky, because there are many ways that you can represent various things, and there hasn't been a lot of work on shoring up commonly accepted foundations. Does that answer the question?

    Bryan Davis: Yeah, I think it does. I remember reading, or listening to, an interview with Richard Feynman several years ago where he was talking about understanding the universe as peeling layers off an onion…

    45 min
  2. AI avatars and creator alignment, with Avi Fein

    01/05/2023

    AI avatars and creator alignment, with Avi Fein

    Avi Fein, founder of Meebo, discusses how AI can be used to extend people's capabilities rather than replace them. He explains the differences between Meebo and ChatGPT, and how YouTube's success is due to its product definition and monetization engine. He also talks about the importance of trusting individuals rather than brands when it comes to moderating the internet, and the road to monetization. A great and wide-reaching conversation.

    Transcript

    John McDonnell: Okay, so we have with us today Avi Fein. Avi is the founder of Meebo, which is a platform for building personalized chatbots. Prior to that, he was a member at South Park Commons, and previously worked at Neeva and at YouTube and Google. Avi, welcome to Pioneer Park.

    Avi Fein: Thank you. Great to be here.

    Bryan Davis: Yeah. Welcome. Good to see you. So we've been having some conversations prior to this, and I think at some point we all realized, oh, we should probably turn on the microphones just so we can begin to capture some of this. And I think we were just on the topic of how to master chat, and really [00:01:00] some of the challenges of chat. So first, can you just tell us a little bit about Meebo?

    Avi Fein: Sure. So Meebo is a platform where we build chatbots out of creators on various topics. We look for people who are usually experts in a certain thing and have really proactively shared their knowledge. And then on the other side, there are people who trust them and want to connect with them to get almost one-on-one advice, recommendations, answers to questions that they may have. So much of where we're going now is to Instagram and TikTok and YouTube as the places we want to get knowledge and information from. But those are still static and distant in many ways: they're not relatable to you, they can't really connect with the things that you're interested in. And we want to break down those barriers and really use chat as an interface to make it interactable, such that you can have a conversation and go into the depths of both you and how it connects to that person and their knowledge and their content as well.

    Bryan Davis: Cool. So I guess something that's really top of mind for a lot of people right now is ChatGPT. Differentiate for us: tell us [00:02:00] how Meebo is different from just run-of-the-mill vanilla ChatGPT.

    Avi Fein: Yeah, it's interesting, cuz we started working on this before ChatGPT even came out.

    [Crosstalk] Very hipster of you.

    Avi Fein: Yeah. But I would say the foundational ideas and principles actually cut across even the post-ChatGPT world. One: what we wanted to do was break apart knowledge, to not have it be a monolith anymore. If you look at what a lot of people experience with the web and the internet today through products like Google and now ChatGPT, it's relatively generic. You get the same answer independent of who you are. If you do a Google search, if you do a ChatGPT-style Q&A, we're all gonna get the same thing back. And our belief is that it's a much more delightful, and not only that, but trustful experience when you can blow that up and go into the distribution of different perspectives and different niches of knowledge, where person A is gonna have a slightly different take than person B on a whole slew of things.

    And so for us it's: how do you take [00:03:00] some of the technology that ChatGPT is good at, but apply it to the diversity of human perspectives and knowledge? I think the second part that we build on, that's beyond ChatGPT, is playing with the idea of how you use the technology to extend people versus replace them. A lot of what people talk about in AI now is these virtual assistants, which are just composites of humans, where it's, oh yeah, we've trained on a million of you and now this can do what all million of you can do, so you should just use this one AI bot. And that's true for art; it's true now for ChatGPT and knowledge: why would you talk to anyone else when ChatGPT knows the entire internet? And I think what goes unsaid in those things is that when you do that, you lose the integrity and the nuance of all those individual people, and all the individual relationships, and the trust even that you may have in that. And it becomes, not to go back to the same [00:04:00] idea, but this monolith of just the average across everything. And what we wanna lean into is the individual, the personal, the idea that we are all unique in our own way. How can AI extend us to give us superpowers, versus just act as a replacement of us all?

    John McDonnell: So when you talk about that uniqueness: in your first comment you were saying, oh, unlike ChatGPT, we wanna be really personalized. How are you able to achieve that?

    Avi Fein: I think it starts with people. Like we said before about what Meebo is: our building block, our atomic unit, was an individual creator, someone on YouTube. And really it actually cuts across platforms. The person is represented on YouTube, TikTok, Instagram, and even their website; that is their identity. And so we started with the identity as the atomic unit and then built up from that. And the philosophy behind that was that you can capture their unique [00:05:00] perspective, their unique point of view, and then make that accessible and shareable with the world. And by doing that, you can maintain this boundary so that it's no longer the aggregation of them plus 10 others who are like them, where you actually lose texture and you lose the nuances of their experience of the world. And you also, from the other side as a user, know who you're talking with, and you can have a trusted relationship, versus having to take this leap of faith with ChatGPT that what it's saying to you is the authoritative truth of the internet. And you're like, we're in a post-truth world; what is the truth of the internet?

    Bryan Davis: Yeah. It brings to attention some of the interesting issues. A lot of the complaints about ChatGPT and related products have been that they hallucinate, that the things they spout so confidently aren't facts, which I think has been a warning sign for a lot of people. But it is also true that the perspective of an individual creator is also not necessarily fact. So I'm curious to hear your perspective on two angles. One is the ability to take a creator's perspective and actually [00:06:00] represent it faithfully: how do you ground your technology in the actual perspective of creators? And how do you feel about creators being obligated to be truthful? For instance, fake news.

    What are the risks, maybe down the line, of Meebo being a voice for people whose voices you don't necessarily want to expand?

    Avi Fein: Yeah. I'll go in reverse order, because I think the first question is almost the harder question of the two, at least for us. On the second one, and this connects with the idea of how you don't think of the world as a generic monolith of information that we all trust: we're not trying to give you an opinion about what is fact and what is not fact in the world. By virtue of talking with an individual, you are establishing that you trust them, or at least that they're your source of knowledge and information, not us. And having worked in this space before, at least at Neeva, and seeing some of these [00:07:00] dynamics: one of the drawbacks of those types of products is that trust gets transferred into the brand, such that I trust the first result on Google because Google said it's the first result. And the actual sourcing of the individual things that go into it starts to fade away and not matter anymore. And then Google becomes responsible for moderating the internet, and Twitter becomes responsible for moderating Twitter, and Instagram becomes responsible for moderating those things, because trust flows up into the brand versus staying down with the individual. And they all say, oh, we don't want to have this responsibility, but they design the products and they build the products that way, because they become the aggregation point, they become the center point. And for us, I think it is about not meddling too much in those worlds and letting the individual points of view, the individual facts, still sit where they lie. We're not gonna strip something out of someone's chatbot just because we may disagree with it, because you on the other side are an [00:08:00] adult, and we trust that you will be able to form your own point of view on whether you can have that trust with that person or not. And that's the complexity of life, and I think the reality of it. On the first part, how do you actually do a good job of this? That's the long arc of technology, and I don't want to claim that after three or four months we've solved some massive problem and be like, ah, guys, this is done, we've done it here. What I can say is that there are things we lean into that we think give us tailwinds to tackle this. Number one is that we come from a background in search. And what that means is that we spend a lot more time and energy and effort on retrieval as an important problem: understanding what are the facts, or what are the opinions, or what are the things that this person has said, and how do we make sure we're relying on that. And what that does is give you a boundary, in terms of the AI, when you are generating a response or when you are trying to [00:09:00] leverage that…

    48 min
  3. Getting kicked out of the SJSU Food Court with Peter and Chris

    20/03/2023

    Getting kicked out of the SJSU Food Court with Peter and Chris

    Peter Lowe and Chris Hockenbrocht discuss their startup Fresh Bot, a food automation platform that uses robotics and machine learning to reduce labor costs and make food more affordable. They discuss the importance of "jedi mind tricks" when launching a business, the trend of unhealthy food in America, the potential of automation in the food service industry, the challenges of automation, the difficulty of hardware startups in Silicon Valley, the potential of automated delivery, the idea of a burrito cannon, the technical risks of building a restaurant automation platform, the importance of owning the experience, their own diets, the idea of eating what our ancestors ate, the Amish and their cautious approach to new technology, the limitations of reductionism when it comes to food and nutrition, and their shared values and goals.

    Chris and Peter
    ===

    [00:00:00] Hi, I'm Bryan, and I'm John. And we are hosting the Pioneer Park Podcast, where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology, and we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.

    John: Welcome to Pioneer Park. Today we're shooting live from South Park Commons. Our guests today are Peter Lowe and Chris Hockenbrocht. Peter is an expert in hardware and product; Chris is an expert in machine learning and cryptography. They're both members here at South Park Commons, and they're building a new startup called Fresh Bot. Peter and Chris, welcome to the show.

    Chris: Hey, thanks, John.

    Peter: Welcome. Thank you. Glad to have you. How are y'all doing today?

    Bryan: Good, with a little bit of setup for our first live feed. You know, both of you were here for some of that, so we're working out the kinks of getting [00:01:00] on microphones and getting videos set up. So, uh, you know, first time's a charm, or maybe the third time's a charm. We'll find out.

    Chris: Yeah.

    John: Yep. All right. So we're super excited about the work you guys are doing, and it entails both robotics and food. So, do you wanna tell us a little bit about what you're working on?

    Chris: Yeah. One of the things that we really see as a trend is food costs rising. And so one of the questions is, how can you even reduce that? And the way we see tackling that is through an automated front end in food service. So I wouldn't call it robotics, but a lot of different automation techniques that can be applied to different sorts of food preparation that hopefully can reduce the cost of the labor going into the food. And hey, if we can solve that, then we can start to think about bringing down food prices.

    Bryan: Interesting. Yeah, I just read recently that there's a suspicion that there's some collusion in the egg industry that is causing the massive rise of egg prices that we've experienced the past couple years, [00:02:00] but obviously that's further up the production pipeline than what y'all are doing. So concretely, what is Fresh Bot?

    Chris: Right now we're looking into a variety of different products for food automation. We have an MVP on smoothie automation and other drinks. There's a lot of different components that we put into a machine, and it allows us to dispense liquids and solids and do blending. And so we could conceivably put in a lot of different things.

    One of the things I really like about this is that it's customizable. So you take an individual machine, we can stock it with different things, and we can actually tailor it to the particular market. But we can do liquid, solid, and powder dispensing, and we can recombine these into any sort of drink that you might want.

    Peter: One reason, yeah, starting with this kind of drinks platform, and starting with smoothies, which are one of the hardest drinks to make, is interesting. For reference, at Starbucks and Dutch Bros, about 75% of their drink sales are cold beverages at this point: cold brew coffee, [00:03:00] frappuccinos, you know, juice drinks and stuff. So all of that is gonna be very easy to automate with the platform that we're making. Just for a little bit of market orientation reference there.

    Bryan: Gotcha. And I recall, I think several of us have had the pleasure of being part of some test exercises with Fresh Bot. And it wasn't exactly Fresh Bot; it was Peter testing your smoothie recipes here at South Park Commons. And at the time, I believe you just brought in raw ingredients, and you were mixing on the spot and having a few different offerings. And I guess that was just sort of menu testing. Is that right?

    Peter: Yeah, yeah. I mean, I think this kind of comes from having gone to the Stanford d.school and taking on this product mindset, which has been a useful mindset and toolset. Hardware is so complicated and so difficult to make that your engineering instinct is that you want to start building something immediately. But that's not necessarily the fastest way to get the answer to the questions that you have about a startup, right, addressing whatever your key risks are. And a lot of the prototyping that we've done has actually not necessarily involved a soldering [00:04:00] iron at first blush. Right. You know, one of the key questions was: are people interested in food in the venues that we're interested in? Do they want food? Or maybe, which items resonate more with people? You know, did they want the sugary thing or the healthier thing? Getting some of this broad, thick data from users about how they think about food, what they like.

    Bryan: I love what you've shared with me over the past month or so, some of the stories from the front lines of your testing. I think some of them are really fascinating. How many places have y'all been kicked out of so far?

    Chris: Well, I mean, as far as I recall, there's been two. We went to a mall; it was a security guard. He came up and said, you just can't be doing this here. Right.

    Bryan: I guess we should, uh, we should give people the setup. Mm-hmm. So what are you doing when you go to test these on site?

    Chris: So yeah, the machine was taken to a mall. It wasn't actually a fully functioning prototype. What we were trying to do is gauge interaction: would people simply walk up to the machine, interact, attempt an order?

    Peter: Mm-hmm. And this was sort of not a machine; it was really sort of a fridge with a sticker on it. Yeah. It looked like pre-engineering. Yes. [00:05:00]

    Chris: Yeah. And security wasn't very happy about that. But you know, the only regret I think we have is not walking out in handcuffs; hey, it would've made for a great PR stunt there. The other time, we were more recently at San Jose State University.

    We went right into their food court, and we successfully got about two hours of sales done. Students were coming up, people were enjoying it, and then over time, one person would come up, they would go talk to their manager, go talk to this person. And eventually the building manager came, who was in charge of the whole food court. And he said, you just can't be here doing this. Like, you know, essentially people pay to come in; the restaurants that are there are paying; you can't just come in. And Peter here was doing a really great job of deflecting them. You know, it's really great: if you change somebody's focus, they start thinking about things in a whole different light. Like, if they're like, what are you doing here? Well, we're making healthy smoothies for people, and, you know, we really [00:06:00] care about people's health. You end up in this place where now they're pitting two goods against each other: either I'm doing my job, or I'm supporting healthy smoothies. It's this cognitive dissonance that they have to resolve. And so it wasn't until we got to a really serious manager, who just came and told us that we had to leave. Mm-hmm.

    Peter: Yeah. Just to be clear, too, I have the food safety handler ServSafe certification. We're not breaking any food safety rules with any of this stuff. We do take health and, you know, proper process seriously. Yeah. So yeah, we just can't pay the rent, right. Early testing phase.

    John: Jedi mind tricks are crucial to launching this kind of business.

    Peter: Yeah. I mean, I suppose really any startup. I guess there's a good reason why, you know, YC asks, essentially, what's the biggest sort of non-code hack you've ever pulled off, right? So there's a lot of hacks necessary sometimes. Yeah. Yeah. Absolutely.

    Bryan: Yeah. So I'm curious to connect this back to the larger theme of health and access to [00:07:00] healthy food in America, and whether or not your efforts in this area are based in some sort of critique or analysis of what's happening in that space.

    Chris: Well, there's certainly a long-running trend of food towards less healthy things, and there's probably a few different components playing into this. One is just taste preference, right? Less healthy food tastes better. People like sugar. Sugar, when it sits on the tongue, is just, hmm, that's good. And it's hard to avoid. And so the products that you end up seeing at the supermarket (CPG, that is), or the products that you're getting from any sort of restaurant, might be laced with additional sugars or additional fats, things that just really make it taste good. And so it's hard to satisfy the desire for healthy and balance that with taste. Another factor is this industrial farming situation, where we have a bunch of subsidies that go towards different sorts of crops, now being subsidized and produced en masse. [00:08:00] Well, why don't we just shove…

    48 min
  4. Alignment, risks, and ethics in AI communities with Sonia Joseph

    04/02/2023

    Alignment, risks, and ethics in AI communities with Sonia Joseph

    Check out our interview with Sonia Joseph, a member of South Park Commons and researcher at Mila, Quebec's preeminent AI research community.

    Topics:
    - India's Joan of Arc, Rani of Jhansi [[wiki](https://en.wikipedia.org/wiki/Rani_of...)]
    - Toxic Culture in AI
    - The Bay Area cultural bubble
    - Why Montreal is a great place for AI research
    - Why we need more AI research institutes
    - How doomerism and ethics come into conflict
    - The use and abuse of rationality
    - Neural foundations of ML

    Links:
    Mila: https://mila.quebec/en/
    Follow Sonia on Twitter here: https://twitter.com/soniajoseph_

    Follow your hosts:
    John: https://twitter.com/johnvmcdonnell
    Bryan: https://twitter.com/GilbertGravis

    And read their work:

    Interview Transcript

    Hi, I'm Bryan, and I'm John. And we are hosting the Pioneer Park Podcast, where we bring you in-depth conversations with some of the most innovative and forward-thinking creators, technologists, and intellectuals. We're here to share our passion for exploring the cutting edge of creativity and technology, and we're excited to bring you along on the journey. Tune in for thought-provoking conversations with some of the brightest minds of Silicon Valley and beyond.

    John: Okay, so today I'm super excited to invite Sonia onto the podcast. Sonia is an AI researcher at the Mila Quebec AI Institute and co-founder of Alexandria, a frontier tech publishing house. She's also a member of South Park Commons, where she co-chaired a forum on AGI, which just wrapped up in December. We're looking forward to the public release of the curriculum later this year, so keep an eye out for that. Sonia, welcome to the podcast.

    Sonia: Hi, John. Thanks so much for having me. [00:01:00] It's a pleasure to be here.

    Bryan: Yeah, welcome.

    Sonia: Hi, Bryan.

    Bryan: Yeah, so I guess for full transparency, John and I were both attendees of this AGI forum, and I was awaiting every week's session with bated breath. I thought that the discussions in the forum were super interesting. There were a bunch of really prominent, interesting guests that we had come through. And yeah, it was a really interesting intersection of practical questions with sci-fi, and a lot of things that used to be sci-fi are getting far more practical than perhaps we ever anticipated.

    John: All right. So Sonia, I feel like the question that's on everyone's mind is: who is Rani of Jhansi?

    Sonia: Oh my gosh. Yeah. So basically, I grew up on a lot of Indian literature and Indian myth, and she's considered to be India's Joan of Arc. A female leader who has a place in feminist scholarship, if you look at the literature. And I [00:02:00] believe she led a rebellion in India against the British. I actually wanna fact-check that.

    John: Yeah, no, that's really cool. We love the recent blog post that you worked on with S, and you pointed out how these kinds of influences really enabled you to succeed at your current endeavors. So we're just curious about how your background made you who you are.

    Sonia: Yeah. No, I appreciate that question a lot. I would say I had a kinda culturally schizophrenic background in some ways, where I spent a lot of time as a child in India, but then the other half of my life was in Massachusetts, which was a lot of Protestantism and growing up on a lot of American history.

    I saw things at the collision of various cultures and religions, and that has very much impacted my entry into AI and how I'm conceiving of AI.

    John: Yeah. Something that we loved about the AGI forum is that you have this [00:03:00] really critical eye towards the culture in which AI is practiced and the way that research is going forward. I think you really brought this kind of unique perspective that was super valuable.

    Bryan: Yeah, I'm curious: are there any points at which you think there are current problems, either in the way that research is being done or in the moral framework in which that research is being done?

    Sonia: It's a really interesting question. I would say the AI world is very big, first of all, so it's hard to critique the entire thing. But parts of it have some of the problems that physics had in the 1990s, or still has, in being male-dominated or focused on certain cultures. And the culture will generate a certain type of research, so your scientific conclusions and the community or culture you're in have this reciprocal relationship. For example, for the 1990s, there's this amazing book called The Trouble with Physics by Lee [00:04:00] Smolin that goes into sort of the anthropology of the physics community. In the 1990s, the physics community was deeply obsessed with string theory. If you weren't working on string theory, you just weren't cool at all, and you probably weren't gonna get tenure track. The book goes into how string theory wasn't empirically proven; it was mathematically, internally consistent, but it was by no means a theory of everything. And how the monoculture of physics and the intellectual conclusion of string theory would feed off each other in this cycle. Lee Smolin basically created his own institute to deal with this problem, cuz he got just very frustrated. I don't think AI is quite so bad. But there are pockets of AI where I do notice similar dynamics, in particular the parts of AI that were previously more influenced by effective altruism and LessWrong: the AI safety and alignment camp. I don't think these fields have as bad a problem anymore. There have been recent [00:05:00] reform attempts; Scott Aaronson had a very great blog post on how AI safety is being reformed. There's an attempt to make AI safety a legitimate science that's empirically grounded and has mathematical theory. But I did notice that more classical AI safety definitely had these 1990s-style string theory problems, both in the science being not empirically verified but dogmatic, and in the community that was generating it not being fairly healthy. And I guess, with the caveat, I'll say I have been either adjacent to or in these communities since I was basically 12, so I have seen a very long history. And I also don't mean to unilaterally critique these communities. I think they have done a lot of good work and given a lot of contributions to the field, both in terms of frameworks, talent, and funding. But I am looking at these communities with a critical eye as we move forward, cause it's like: what is coming, both as a scientific paradigm and as the research community that generates that paradigm?

    Bryan: I'm curious. To me there seem to be two issues.

    I don't know if they're orthogonal, but there's the scientific integrity of a community, the ability of that community to [00:07:00] generate and falsify hypotheses, and then there's the culture of that community, and whether or not that culture is a healthy culture to be in, whether it's a nice place to work and all that sort of stuff. And I guess my hypothesis is that none of us wanna work in a s****y culture, and none of us wanna be part of communities where insults or abusive behavior are tolerated at all. But I think that a lot of scientific communities can be interpreted as quite dogmatic, because there's an insistence on a specific sort of intellectual lens that you need to adopt to participate in the discussion. And for me, it always seems like there's a balance there. Because, for instance, if you wanna be a biologist, you better accept evolution; you have to meet that criterion. And I'm curious: do you think there is some sort of almost intellectual kowtowing, basically a tip of the hat, that one needs to do when you're studying artificial intelligence to make it into the room, to be taken seriously?

    Sonia: That's a great question. Yeah, and evolution is an interesting example, cause that's one that has been empirically [00:08:00] verified in various places, and maybe the exact structure of evolution is open to debate; like, we dunno if it's more gradual or happens in leaps and bursts. But the example in some AI communities is accepting that oncoming AI is gonna be bad, or a doomer culture, a more apocalyptic culture. And this is prevalent in a lot of AI safety communities, where in order to get your research taken seriously, or to even be viewed as an ethical person (it becomes about character), you have to view AI as inevitable: it's coming fast, and it's more likely than not to be incredibly disastrous. And to be clear, I think we should be thinking about the safety behind incoming technologies. That's obvious and good. If AI ends the world, that would be terrible, and even if there's a very small chance that could happen, we should make sure it doesn't happen. But I do think that some of these communities overweight that and make it almost part of the dogma, when it's not empirically proven that this is gonna happen. We have no evidence this is going to happen. It's an a priori argument [00:09:00] that's actually mimicking a lot of doomsday cults and also death cults that have been seen throughout history. And it's absolutely fascinating, though much less so now than it was before. A lot of AI safety has become modern alignment, practiced in more professional spheres, where I think views are a lot more nuanced and balanced. But there is still a shadow of Bostrom and Yudkowsky and these original thinkers, who were influential, even more influential 10 to 15 years ago.

    John: Sonia, sometimes when I talk to people who are really into the alignment problem, there's a kind of view that the philosophical arguments…

    50 min

About

Interviews, conversations, and commentary on the frontier of science and technology gilbertgravis.substack.com