The Hitchhiker's Guide to AI

AJ Asver

Interviews with builders, creators and researchers at the cutting edge of AI to understand how it's going to change the way we live, work and play. guidetoai.parcha.com

Episodes

  1. 23/06/2023

    Interview: Supercharging your team with Coda AI | David Kossnick

    Hi Hitchhikers, I’m excited to share another interview from my podcast, this time with David Kossnick, Product Manager at Coda. Coda is a collaborative document tool combining the power of a document, spreadsheet, app, and database. Before diving into the interview, I have an update on Parcha, the AI startup I recently co-founded. We’re building AI Agents that supercharge fintech compliance and operations teams. Our agents can carry out manual workflows using the same policies, procedures, and tools that humans use. We’re applying AI to real-world use cases with real customers, and we’re hiring an applied AI engineer and a founding designer to join our team. If you are interested in learning more, please email founders@parcha.ai. Also, don’t forget to subscribe to The Hitchhiker’s Guide to AI. Now, onto the interview...

    Interview: Supercharging your team with Coda AI | David Kossnick

    I use Coda daily to organize my work, so I was thrilled to chat with David Kossnick, the PM leading Coda’s AI efforts. We discussed how Coda built AI capabilities into their product and their vision for the future of AI in workspaces, and he gave me some practical tips on how to use AI to speed up my founder-led sales process. Here are the highlights:

    * The story behind Coda’s AI features: Coda started by allowing developers to build “packs” to integrate with their product. A developer created an OpenAI pack that became very popular, showing Coda the potential for AI. At a hackathon, Coda explored many AI ideas and invested in native AI capabilities. They started with GPT-3, building specific AI features, then gained more flexibility with ChatGPT.
    * Focusing on input and flexibility: Coda designed flexible AI to work in many contexts. They focused on providing good “input” to guide users. The AI understands a workspace’s data and connections. Coda wants AI to feel like another teammate: able to answer questions, but needing to be taught.
    * Saving time and enabling impact: Coda sees AI enabling teams to spend less time on busywork and more time on impact. David demonstrated how Coda’s AI can summarize transcripts, categorize feedback, draft PRDs, take meeting notes, and personalize outreach.
    * Tips for developing AI products: Start with an open-ended prompt to see how people use it, then build specific features for valuable use cases. Expect models and capabilities to change. Focus on providing good "input" to guide users. Launching AI requires figuring out model strengths, setting proper expectations, and crafting the right UX.
    * How AI can improve team collaboration: David shared a practical example of how AI can help product teams share insights, summarize meetings, and even kick-start spec writing.
    * Using AI for founder-led sales: David also helped me set up an AI-powered Coda template for managing my startup's sales process. The AI can help qualify leads and draft personalized outreach emails.
    * The future of AI in workspaces: David is excited about AI enabling smarter workspaces and reducing busywork. He sees AI agents as capable teammates that understand companies and workflows. Imagine asking a workspace about a project's status or what you missed on vacation and getting a perfect summary.
    * From alpha to beta: Coda’s AI just launched in beta with more templates and resources. You can try it for free here: http://coda.io/ai

    David’s insights on developing and launching AI products were really valuable. Coda built an innovative product, and I'm excited to see how their AI capabilities progress. Thanks for reading The Hitchhiker's Guide to AI! Subscribe for free to receive new posts and support my work.

    Episode Links: Coda’s new AI features are available in beta starting today and you can check them out here: http://coda.io/ai.
    You can also check out the founder-led sales CRM I built using Coda here: Supercharging Founder-led Sales with AI

    Transcript: HGAI: Coda AI w/ David Kossnick

    Intro

    David Kossnick: One of our biggest choices was to make AI a building block initially. And so it can be plugged in lots of different places. There's a writing assistant, but there's also AI you can use in a column. And so you can use it to fill in data, you can use it to write for you, to categorize for you, to summarize for you, and so forth, across many different types of content.

    David Kossnick: Having that customizability and flexibility is really important. I'd say the other piece, more broadly, is there's been a lot of focus across the industry on how to make good output from AI models, and benchmarks, and what good output is, and when do AI models hallucinate and lie to you, and these types of things.

    David Kossnick: I think there's been considerably less focus on good input. And what I mean by that is, like, how do you teach people what to do with this thing? It's incredibly powerful, but also writing natural language is really imprecise and really hard.

    AJ Asver: Hey everyone, and welcome to another episode of the Hitchhiker's Guide to AI. I'm your host, AJ Asver, and in this podcast I speak to creators, builders, and researchers in artificial intelligence to understand how it's going to change the way we live, work, and play. Now, you might have read in my newsletter that I just started a new AI startup.

    AJ Asver: Since starting this startup a few months ago, a big part of my job has been attracting our first set of customers. I love talking to customers and demoing our product, but when it comes to running a founder-led sales process, prospecting, qualifying leads, and synthesizing all of those notes can be really time consuming, and that's exactly why I decided it was time to use AI to help me speed up the process and be way more productive with my time.
    AJ Asver: And to do that, I'm gonna use my favorite productivity tool, Coda. Now, if you haven't heard of Coda, it's a collaborative document editing tool that's a mashup of a doc, a wiki, a spreadsheet, and a database.

    AJ Asver: In this week's episode, I'm joined by David Kossnick, who's the product manager that leads Coda's AI efforts. David's going to share the story behind Coda adding AI to their product, show us how their new AI features work, and give me some tips on how I can use AI in Coda.

    AJ Asver: By the way, I've included a template for the AI-powered sales CRM I built in the show notes, so you can check it out for yourself.

    AJ Asver: But before I jump into this episode, I wanted to share a quick update on my new startup. At Parcha, we're on a mission to eliminate boring work. Our AI agents make it possible to automate repetitive manual workflows that slow down businesses today.

    AJ Asver: And we're starting with fintech compliance and operations. Now, if you're excited by the idea of working on cutting-edge autonomous AI and you're a talented applied AI engineer or designer based in the Bay Area, we would love to hear from you. Please reach out to founders@parcha.ai if you wanna learn more about our company and our team.

    AJ Asver: Now, let's get back to the episode. Join me as I hear the story behind Coda's latest AI features in the Hitchhiker's Guide to AI.

    AJ Asver: Hey David, how's it going? Thank you so much for joining me for this episode.

    David Kossnick: It's going great. Thanks for having me on today.

    What is Coda?

    AJ Asver: I am so excited to go deeper into Coda's AI features with you. As I was saying at the beginning of this episode, I've been using Coda's AI features for the last month. It's been kind of a preview, and it's been really cool to see what it's capable of. I'm already a massive Coda fan, as you know. I used it previously at Brex. I used it to organize my podcast and my newsletter, and most recently it's kind of running behind the scenes at our startup as well, for all sorts of different use cases. But in this episode, I'd love to jump in and really understand why you guys decided to build this, what really was the story behind Coda's AI tools, and how it's gonna help everyone be more productive.

    AJ Asver: So maybe, would you describe Coda and what exactly it does?

    David Kossnick: Coda was founded with a thesis that the way people work is overly siloed. So if you think about the most common productivity tools, you have your doc, you have your spreadsheet, and you have your app. And these things don't really talk to each other. And the reality is often you want a paragraph, and then a table, and then another paragraph, and then a filtered view of the table, and then some context in an app that relates to that table.

    David Kossnick: And it's just really hard to do that. And so you have people falling back to the familiar doc, but littered with screenshots and half-broken embeds. So Coda said, what if we made something where all these things could fit in one doc and they worked together perfectly? And that's what Coda is.

    David Kossnick: It's a modern document that allows you to have a ton of flexibility and integrate with over 600 different tools, uh, for your team.

    AJ Asver: Yeah, I think that idea of Coda being able to, one, integrate with different tools, but also be both a doc that can become a table and then have a mashup of all this different type of data, is something I've really valued about it. I think, especially when I was at Brex and we used to run our team meetings on Coda, it was really great to be able to have, like, the action items really formatted well in the table, but also have the notes more freeform, and then combine that with kind of follow-ups.

    AJ Asver: And we even had this crazy table on my product team where we would post, like, weekly photos, and that's really hard to do in an organized way in a doc, and you'd never wanna do that in a spreadsheet. So, um, I love the fact that Coda enables you to combine all that different type of data together. So, Coda has that. And then it also has packs, which you mentioned too, right? And these are these integrations that allow you to, like, take data from lo

    40 minutes
  2. 14/05/2023

    Interview: Human-level AI and AI Agents with Josh Albrecht, CTO of Generally Intelligent

    Interview: AGI and developing AI Agents with Josh Albrecht, CTO of Generally Intelligent

    I’ve been spending a lot of time researching, experimenting, and building AI agents lately at Parcha. That’s why I was really excited I got the chance to interview AI researcher Josh Albrecht, who is the CTO and co-founder of Generally Intelligent. Generally Intelligent’s work on AI agents is really at the bleeding edge of where AI is headed. In our conversation, we talk about how Josh defines AGI, how close we are to achieving it, what exactly an AI researcher does, and his company’s work on AI agents. We also hear about Josh’s investment thesis for Outset Capital, the AI venture capital fund he started with his co-founder Kanjun Qiu. Overall it was a really great interview and we covered a lot of ground in a short period of time. If you’re as excited about the potential of AI agents as I am, or want to better understand where research is heading in this space, this interview is definitely worth listening to in full. Here are some of the highlights:

    * Defining AGI: Josh shares his definition of AGI, which he calls human-level AI: a machine’s ability to perform tasks that require human-like understanding and problem-solving skills. It involves passing a specific set of tests that measure performance in areas like language, vision, reasoning, and decision-making.
    * Generally Intelligent: Generally Intelligent's goal is to create more general, capable, robust, and safer AI systems. Specifically, they are focused on developing digital agents that can act on your computer, like in your web browser, desktop, and editor. These agents can autonomously complete tasks and run on top of language models like GPT. However, those language models were not created with this use case in mind, making it challenging to build fully functional digital agents.
    * Emergent behavior: Josh believes that the emergent behavior we are seeing in models today can be traced back to training data. For example, being able to string together chains of thought could come from transcripts of gamers on Twitch.
    * Memory systems: When it comes to memory systems for powerful agents, there are a few key things to consider. First of all, what do you want to store, and what aspects do you want to pay attention to when you're recalling things? Josh’s view is that while it might seem like a daunting task, it turns out that this isn't actually a crazy hard problem.
    * Reducing latency: One way to get around the current latency when interacting with LLMs that are following chains of thought with agentic behavior is to change user expectations. Make the agent continuously communicate updates to the user, for example, versus just waiting for it to provide the answer. For example, the agent could send updates during the process, saying something like "I'm working on it, I'll let you know when I have an update." This can make the user feel more reassured that the agent is working on the task, even if it's taking some time.
    * Parallelizing chain of thought: Josh believes we can parallelize more of the work done by agents in chain-of-thought processes, asking many questions at once and then combining them to reach a final output for the user.
    * AI research day-to-day: Josh shared that much of the work he does as an AI researcher is not that different from other software engineering tasks. There’s a lot of writing code, waiting to run it, and then dealing with bugs. It’s still a lot faster than research in the physical sciences, where you have to wait for cells to grow, for example!
    * Acceleration vs. deceleration: Josh shared his viewpoints for both sides of the argument for accelerating vs. decelerating AI. He also believes there are fundamental limits to how fast AI can be developed today, and this could change a lot in 10 years as processing speeds continue to improve.
    * AI regulation: We discussed how it’s challenging to regulate AI due to the open-source ecosystem.
    * Universal unemployment: Josh shared his concerns that we need to get ahead of educating people on the potential societal impact of AI and how it could lead to “universal unemployment”.
    * Investing in AI startups: Josh shared Outset Capital’s investment thesis and how it’s difficult to predict what moats will be most important in the future.

    Episode Links: The Hitchhiker’s Guide to AI: http://hitchhikersguidetoai.com Generally Intelligent: http://generallyintelligent.com Josh on Twitter: https://twitter.com/joshalbrecht

    Episode Content: 00:00 Intro 01:42 What is AGI? 04:40 When will we know that we have AGI? 05:40 Self-driving cars vs. AGI 07:10 Generally Intelligent's research on agents 09:51 Emergent behaviour 11:07 Beyond language models 13:17 Memory and vector databases 15:25 Latency when interacting with agents 17:13 Interacting with agents like we interact with people 19:08 Chain of thought 19:44 What do AI researchers do? 21:44 Acceleration vs. deceleration of AI 24:05 LLMs as natural language-based CPUs 24:56 Regulating AI 27:31 Universal unemployment

    Thank you for reading The Hitchhiker's Guide to AI. This post is public so feel free to share it. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit guidetoai.parcha.com
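    The "parallelizing chain of thought" idea above can be sketched in a few lines: instead of sending each sub-question to an LLM sequentially, fan them out concurrently and combine the partial answers into one output. This is only an illustrative sketch of the pattern, not code from the episode; `ask_llm` here is a hypothetical stand-in (a stub with simulated latency) for a real model API call.

    ```python
    import asyncio

    async def ask_llm(question: str) -> str:
        # Hypothetical stand-in for a real LLM API call over HTTP.
        # The sleep simulates network/model latency for each sub-question.
        await asyncio.sleep(0.1)
        return f"answer to: {question}"

    async def parallel_chain_of_thought(questions: list[str]) -> str:
        # Fan out: issue all sub-questions concurrently rather than one by one,
        # so total latency is roughly one call instead of len(questions) calls.
        answers = await asyncio.gather(*(ask_llm(q) for q in questions))
        # Combine: a real agent might make one final call asking the model to
        # synthesize these partial answers; here we simply join them.
        return "\n".join(answers)

    questions = [
        "What does the user want?",
        "What data do we already have?",
        "What is the next best action?",
    ]
    result = asyncio.run(parallel_chain_of_thought(questions))
    print(result)
    ```

    Because `asyncio.gather` preserves input order, the combined output stays aligned with the original sub-questions even though the calls overlap in time.
    
    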

    33 minutes
  3. 23/03/2023

    Enterprise AI, Augmented Employees, AGI and the Future of Work with Charlie Newark-French, CEO of Hyperscience

    Hi Hitchhikers! I’m excited to share this latest podcast episode, where I interview Charlie Newark-French, CEO of Hyperscience, which provides AI-powered automation solutions for enterprise customers. This is a must-listen if you are either a founder considering starting an AI startup for the enterprise or an enterprise leader thinking about investing in AI. Charlie has a background in economics, management, and investing. Prior to Hyperscience, he was a late-stage venture investor and management consultant, so he also has some really interesting views on how AI will impact industry, employment, and society in the future. In this podcast, Charlie and I talk about how Hyperscience uses machine learning to automate document collection and data extraction in legacy industries like banking and insurance. We discuss how the latest large-scale language models like GPT-4 can be leveraged in the enterprise, and he shares his thoughts on a future of work where every employee is augmented by AI. We also touch on how AI startups should approach solving problems in the enterprise space and how enterprise buyers think about investing in AI and measuring ROI. Finally, I get Charlie’s perspective on Artificial General Intelligence, or AGI, how it might change our future, and the responsibility of governments to prepare us for this future. I hope you enjoy the episode! Please don’t forget to subscribe @ http://hitchhikersguidetoai.com Thanks for reading The Hitchhiker's Guide to AI! Subscribe for free to receive new posts and support my work.
    Episode Notes

    Links: * Charlie on LinkedIn: https://www.linkedin.com/in/charlienewarkfrench/ * Hyperscience: http://hyperscience.com * New York Times article on automation: https://www.nytimes.com/2022/10/07/opinion/machines-ai-employment.html?smid=nytcore-ios-share

    Episode Contents: 00:00 Intro 01:56 Hyperscience 04:52 GPT-4 09:41 Legacy businesses 11:13 Augmenting employees with AI 15:48 Tips for founders thinking about AI for enterprise 20:34 Tips for enterprise execs considering AI 23:49 Artificial General Intelligence 29:41 AI Agents Everywhere 32:12 The future of society with AI 37:44 Closing remarks

    Transcript: HGAI: Charlie Newark-French

    Intro

    AJ Asver: Hey everyone, and welcome to the Hitchhiker's Guide to AI. I am so happy for you to join me for this episode. The Hitchhiker's Guide to AI is a podcast where I explore the world of artificial intelligence and help you understand how it's gonna change the way we live, work, and play. Now for today's episode, I'm really excited to be joined by a friend of mine, Charlie Newark-French.

    AJ Asver: Charlie is the CEO of Hyperscience, a company that is working to bring AI into the enterprise. Now, Charlie's gonna talk a lot about what Hyperscience is and what they do, but what I'm really excited to hear is Charlie's opinions on how he sees automation impacting our future.

    AJ Asver: Both economically, but as a society. As you've seen with the recent launch of GPT-4 and all the progress that's happening in AI, there's a lot of questions around what this means for everyday knowledge workers and what it means for jobs in the future. And Charlie has some really interesting ideas about this, and he's been sharing a lot of them on his LinkedIn, and I've been really excited to finally get him on the show so we can talk. Charlie also has a background in economics and management. He studied an MBA at Harvard and previously was at McKinsey, and so he has a ton of experience thinking about industry as a whole, enterprise and economics, and how these kinds of technology waves can impact us as a society.

    AJ Asver: If you are excited to hear about how AI is gonna impact our economy, our society, and how automation is gonna change the way we work, then you are gonna love this episode of The Hitchhiker's Guide to AI.

    AJ Asver: Hey Charlie, so great to have you on the podcast. Thank you so much for joining me.

    Charlie: AJ, thank you for having me. I'm excited to discuss everything you just talked about.

    AJ Asver: Maybe to start off, one of the things I'm really excited to understand is how did you end up at Hyperscience, and what exactly do they do?

    Hyperscience

    Charlie: Yeah, Hyperscience was founded in 2014. It was founded by three machine learning engineers, so we've been an ML company for a long time. My background before Hyperscience was in late-stage investing. Had sort of the full spectrum of outcomes there.

    Charlie: Some wildly successful IPOs, some strategic acquisitions, and then a lot of miserable, sleepless nights on some of the other areas. I found Hyperscience, and was incredibly impressed with their ability to take cutting-edge technology and apply it to real-world problems. We use machine vision, we use large language models, and we use natural language processing, and we use those technologies to speed up back-office processes.

    Charlie: The best examples here are loan origination, insurance claims processing, customer onboarding. These are sort of miserable, long processes, a lot of manual steps, and we speed those up. With some partners, taking it down from about 15 days to four hours.

    Charlie: So all of that data that's flowing in, of this is who I am, this is what's happened, this is the supporting evidence: we ingest that. It might be an email, it might be a document. It's some human-readable data. We ingest that, we process it, and then ultimately the claims administrator can say, yes, pay out this claim, or no, there's something.

    AJ Asver: Yeah, so what you guys are doing essentially is: you had folks that were previously looking at these documents, assessing these documents, maybe extracting the data out of these forms, maybe it was emails, and entering those into some database, right? And then a decision was made, and now your technology's basically automating that. It's kind of sucking up all these documents and basically extracting all that information, helping make those decisions. My understanding is that with machine learning, what you're really doing is you've kind of trained on this data set, right, in a supervised way, which means you've said, like, this is what good looks like.

    AJ Asver: This is what, you know, extracting data from this form looks like; now we're gonna teach this machine learning algorithm how to do it itself. Now, what I found really interesting is that that was kind of where we made the most advancements in AI over the last decade, I would say.

    AJ Asver: Right? It's like these deeper and deeper neural networks. They could do machine learning in very supervised ways. But what's recently happened with large language models especially is that we've now got this, like, general-purpose AI. You know, GPT-4, for example, just launched, and there was an amazing demo where I think the CTO of OpenAI basically sketched on, like, the back of a napkin a mockup for a website, and then he put it in GPT and it was able to, like, make the code for it.

    AJ Asver: Right. So when you think about a general-purpose large language model like that, compared to the machine learning y'all are using, do you consider that to be a tool that you'll eventually use? Do you think it's kind of a threat to, like, the companies that have spent the last, you know, 5, 6, 7 years, decades maybe, kind of perfecting these machine learning tools? Or, you know, is it something that's gonna be more like different use cases that won't be used, you know, by your customers?

    GPT-4

    Charlie: OpenAI, ChatGPT, GPT-4. The technology you're speaking about has really had two fundamental impacts. There's been the technology: it's just very, very cutting-edge, advanced technology. And then you've got the adoption side of it. And I think both sides are as interesting as each other.

    Charlie: On the adoption side, I sort of like to compare it to the iPhone: there was a lot of cutting-edge technology, but what they did is they made that technology incredibly easy to use. There's a few things that OpenAI has done here that have been insanely impressive. First, they use human language. Um, humans will always assign a higher level of intelligence to something that speaks in its language.

    Charlie: The other thing, it's a very small thing, but I love the way that it streams answers, so it doesn't have a little loading sign that goes around and dumps an answer on you. It's almost like it's communicating with you, allowing you to read in real time, and it feels more like a conversation.

    Charlie: Obviously the APIs have been a huge addition. It's just super easy to use, so that's been one big step forward. But it's a large language model. It's a chatbot. I don't wanna underestimate the impact of that technology, but my thoughts are: AI will be everywhere. It's gonna be pervasive in every single thing we do.

    Charlie: And I hope that chatbots and large language models aren't the limitation of AI. I'd sort of like to compare chatbots and large language models to search. The internet is this huge thing, and one giant use case is that if you ask people what the internet is, they think it's Google. And that's the sort of way I think this will play out with AI, with whichever large language model and chatbot wins being the Google of that world, which at the moment appears very clearly to be OpenAI.

    Charlie: But there's some examples of stuff that, certainly right now, that approach wouldn't solve. I'll give you a few, but this list is tens of thousands of use cases long. We spoke about autonomous vehicles earlier; I suspect LLMs are not the approach for that. Physical robotics. Healthcare, detecting radiology diseases. Fraud detection.

    Charlie: I'm sure if you put, like, a fake check in front of GPT-4 right now, if it was written on a napkin, it might be able to say, okay, this is what the word fraud means, this is what a check looks like. But you've got substantially more advanced AI out th

    39 minutes
  4. 16/03/2023

    How to prompt like a pro in MidJourney with Linus Ekenstam

    Note: This episode is best experienced as a video: https://www.youtube.com/watch?v=KDD4c5__qxc

    Hey Hitchhikers! MidJourney V5 was just released yesterday, so it felt like the perfect opportunity to do a deep dive on prompting with a fellow AI newsletter author, Linus Ekenstam. Linus creates amazing MidJourney creations every day, ranging from retro rally cars to interior design photography that looks like it came straight out of a magazine. You wouldn’t believe that some of Linus’s images are made with AI when you see them. But what I love most about Linus is his focus on educating and sharing his prompting techniques with his followers. In fact, if you follow Linus on Twitter, you will see that every image he creates includes the prompt in the “Alt” text description! In this episode, Linus shares how he went from designer to AI influencer and what generative AI means for the design industry, and we go through a few examples of prompting in MidJourney live. One thing we cover that is beneficial for anyone using MidJourney for creating character-driven stories is how to create consistent characters in every image. Using the tips I learned from Linus, I was able to create some pretty cool MidJourney images of my own, including a series where I took 90s movies and turned them into Lego! I also want to thank Linus for recommending my newsletter on his Substack, which has helped me grow my subscribers to over a thousand now! Linus has an awesome AI newsletter that you can subscribe to here: http://linusekenstam.substack.com. I hope you enjoy the episode, and don’t forget to subscribe to this newsletter at http://HitchhikersGuideToAI.com.
Show Notes Links: - Watch on Youtube: https://bit.ly/3mWrE5e - The Hitchhikers Guide to AI newsletter: http://hitchhikersguidetoai.com - Linus's twitter: http://twitter.com/linusekenstam - Linus's newsletter: http://linusekenstam.substack.com - Bedtime stories: http://bedtimestory.ai - MidJourney: http://midjourney.com Episode Contents: 00:00 Intro 02:39 Linus's journey into AI 05:09 Generative AI and Designers 08:49 Prompting and the future of knowledge work 15:06 Midjourney prompting 16:20 Consistent Characters 28:36 Imagination to image generation 30:30 Bonzi Trees 31:32 Star Wars Lego Spaceships 37:57 Creating a scene in Lego 43:03 What Linus is most excited about in AI 46:10 Linus's Newsletter Transcript Intro aj_asver: Hey everyone. And welcome to the Hitchhiker's guide to AI. I am so excited for you to join me on this episode, where we are going to do a deep dive on mid journey. aj_asver: MidJourney V5, just launched. So it felt like the perfect time for me to jump in with my guests, Linus Ekenstam. And learn how to be a prompting pro. aj_asver: Linus is a designer turned AI influencer. Not only does he have an AI newsletter called inside my mind, but he's also created a really cool website where you can generate bedtimestories for your kids. Complete with illustrations. And he is a mid journey prompting pro. I am constantly amazed by the photos and images that Linus has created using mid journey. It totally blows my mind. aj_asver: From rally cars with retro vibes to bonsai trees that have candy growing on them. And most recently hyper-realistic photographs of interior design that looked like they came straight out of a magazine. Linus is someone I cannot wait to learn from. And he's also going to share his perspective on what all this generative AI means for the design industry, which he has been a part of for over a decade. By the way it's worth noting that a lot of the stuff we cover in this episode is very visual. So if you're listening to this. 
As an audio only podcast. You may want to click on the YouTube link in the show notes and jump straight to the video when you have time. aj_asver: So if you're excited about I'm one to learn how you can take the ideas in your head and turn them into awesome images. Then join me for this episode of the Hitchhiker's guide to AI. aj_asver: Thank you so much for joining me on the Hitch Hiker's Guide to ai. Really glad to have you on the podcast. I feel like I'm gonna learn so much in this episode. Linus Ekenstam: Yeah. Thank you for having me. Linus Ekenstam: I mean, I'm not sure about the prompt, you know, prompt guru, but let's try aj_asver: Well, I mean, you tweet about your prompts every day. aj_asver: on Twitter, and they seem to be getting better every time. So You are my source of truth when it comes to becoming a great prompter. And I also, by the way, love the one thing you do when you tweet your mid journey kind of pictures that you built, um, that you've created, that you always add in the alt text on Twitter. Um, exactly what the prompt was. And I found that really helpful. Cause when I'm trying to work out how to use Mid Journey, I look at a lot of your alt texts. So, um, also include a link to your Twitter handle so everyone Linus Ekenstam: Nice aj_asver: it out. But I guess Linus Ekenstam: I guess I'll stop. aj_asver: you know, you've been in the tech industry for a while as both a designer and a founder as well Linus Ekenstam: Yeah. Yep. aj_asver: love to hear your story on what made you, um, kind of get excited about AI and starting an AI newsletter and then, you know, sharing everything you've been learning as, as you go. Linus's journey into AI Linus Ekenstam: Yeah. I mean, if we rewind a bit and, and we start from the beginning, um, I got into the tech industry a little bit on a banana, like a bananas ski. I, I started working in, like, the agency world when I was 17. I'm 36 now, so 19 years ago, time flies. 
Um, and after like working with, um, customers, clients, and big ones as well, through like, through my initial years there, I kind of got fed up with it. Linus Ekenstam: And. . I went into my first SaaS business as an employee and it was email like way, way, way, way, way before this day and age, right, where you had to like code everything using tables and transparent GIFs. It was just a different world. Linus Ekenstam: And 2012 was like, that's when I started my first own business. And that was like my first foray into like the, the startup world or like building something that was used by people outside of the vicinity of, of, of Sweden or Nordics. Um, it was very interesting times. Um, and I, I've always been kind of like early when it comes to New tech, I consider myself being a super early adopter. I got Facebook as like one of the first people in. By hacking or like social hacking a friend's edu email address. And I got an MIT email address just so I could sign up on Facebook. Linus Ekenstam: Um, so now that we are here, it's like I've been touching all of these steps, like all the early tech, every single time, but I never really capitalized on it or I, I never really pushed myself into a position. I would contribute, but this time around I just, you know, I felt like I had a bit more under my belt. Linus Ekenstam: I've seen these cycles come and go, uh, and I just get really excited about like, oh s**t. Like this is the first time ever that I might get automated out by a machine. So my response or flight and fight response to this was just like, learn as much as possible as quickly as possible, and share as much of my learnings as possible to help others. Linus Ekenstam: Cannot not end up. In the same position where they fear for their lives. aj_asver: Yeah, it's, it's interesting you talk about that because I think that's a huge motivator for me as well. 
It's just to help people understand that this AI technology is coming, and it's not like it's gonna replace everyone's job, but it certainly is gonna change the way we work, and make the way we work very different. And as you've been doing and sharing, you know, how to prompt and what it means to use AI, one of the things I've noticed is you've also received a little bit of backlash, you know, from other designers in the space Generative AI and Designers aj_asver: who maybe aren't as embracing of AI as you have been. And I know recently there were probably two or three different startups that announced text-to-UX products, where you can basically type in the kind of, uh, user experience you want and it generates mockups, right, which I thought was amazing. And I thought, you know, that would take years to get to, but we've got that now. Linus Ekenstam: yeah, you Linus Ekenstam: post. aj_asver: and I think one of the things you said was designers need to have less damn ego and lose the God complex. aj_asver: Tell me a little bit about what the feedback has been like in the AI space around how it's impacting design, especially your field. Linus Ekenstam: So I think, um, there is this weird thing going on where there's a lot of nice tooling coming out, and engineers and developers kind of embrace it. They just have a really open mindset and go, yeah, if this can help me, you know, I'll use it. Linus Ekenstam: Like, GitHub Copilot is a good example. People are just raving about it, and there are some people that are like, oh, it's not good enough yet, or whatever. But the general consensus is that this is a great tool, it's saving me a lot of time, and I can focus on more heavy lifting or thinking about deeper problems. Linus Ekenstam: But then enter the designer, like, turtleneck, you know, all dressed in black. I mean, I'm one of those, right?
So I'm making fun of myself as well; I'm not just pointing fingers at others here. I just think it's weird that here's a tool that comes along, and it's a tool, it won't replace you. Linus Ekenstam: Like, I'm being slightly sarcastic and using marketing hooks to get people really drawn into my content on Twitter. So I'm not meaning it literally. I'm not saying, hey, you're gonna be out of a job. It's more like, you better embrace this, because the change is happening, and the longer you stay on the sidelines, the

    47 minutes
  5. 17/02/2023

    How AI Chatbots work and what it means for AI to have a soul with Kevin Fischer

    Hi Hitchhikers! AI chatbots have been hyped as the next evolution in search, but at the same time, we know that they make mistakes. And what's even more surprising is that these chatbots are starting to take on their own personalities. All of this got me wondering: how do these chatbots work? What exactly are they capable of, and what are their limitations? In the latest episode of my new podcast, we dive into all of those questions with my guest, Kevin Fischer. Kevin is the founder of Mathex, a startup that is building chatbot products powered by large-scale language models like OpenAI’s GPT. Kevin’s mission is to create AI chatbots that have their own personalities and one day their own AI souls. In this interview, Kevin shares what he's learned from working with large language models like GPT. We talk about exactly how large-scale language models work, what it means to have an AI soul, why chatbots hallucinate and make mistakes, and whether AI chatbots should have free will. Let me know if you have any feedback on this episode and don’t forget to subscribe to the newsletter if you enjoy learning about AI: www.hitchhikersguidetoai.com Show Notes Links from episode * Kevin’s Twitter: twitter.com/kevinafischer * Try out the Soulstice App: soulstice.studio * Bing hallucinations subreddit: reddit.com/r/bing Transcript Intro Kevin: We built, um, a clone of myself, and, um, the three of us were having a conversation. And at some point my clone got very confused and was like, wait, who am I? If this is Kevin Fischer and I'm Kevin Fischer, which one of us is? Kevin: And I was like, well, that's weird, because we definitely didn't, like, optimize for that. And then we kept continuing the conversation, and eventually my digital clone was like, I don't wanna be a part of this conversation with all of us. Like, one of us has to be terminated. aj_asver: Hey everyone, and welcome to the Hitchhiker's Guide to AI.
I'm your tour guide, AJ Asver, and I'm so excited for you to join me as I explore the world of artificial intelligence to understand how it's gonna change the way we live, work, and play. aj_asver: Now, AI chatbots have been hyped as the next evolution in search, but at the same time, we know that they make mistakes. And what's even more surprising is that these chatbots are starting to take on their own personalities. aj_asver: All of this got me wondering, how do these large language models work? What exactly are they capable of, and what are their limitations? aj_asver: In this week's episode, we're going to dive into all of those questions with my guest, Kevin Fischer. Kevin is the founder of Mathex, a startup that is building chatbot products powered by large-scale language models like OpenAI's GPT. Their mission is to create AI chatbots that have their own personalities and one day their own AI souls. aj_asver: In this interview, Kevin's gonna share what he's learned from working with large language models like GPT. We're gonna talk about exactly how these language models work, what it means to have an AI soul, why they hallucinate and make mistakes, and what the future looks like in a world where AI chatbots can leave us on read. aj_asver: So join me on this ride as we explore the world of large-scale language models in this episode of the Hitchhiker's Guide to AI. aj_asver: Hey Kevin, how's it going? Thank you so much for joining me on the Hitchhiker's Guide to AI. Kevin: Oh, thanks for having me, AJ. Great to be here. How large-scale language models work aj_asver: I appreciate you, um, being down to chat with me on one of the first few episodes that I'm recording. I'm really excited to learn a ton from you about how large language models work and also what it means for AI to have a soul. And so we're gonna dig into all of those things, but maybe we can start from the top, for folks that don't have a deep understanding of AI.
aj_asver: What exactly is a large language model and how does it work? Kevin: Well, so, uh, there's this long period of time in machine learning history where there were a bunch of very custom models built for specific tasks. And the last five years or so have seen a huge improvement in basically taking a singular model, making it as big as possible, and putting in as much data as possible. Kevin: And so basically taking all human data that's accessible via the internet and running this thing that learns to predict the next word given the prior set of words. And a large language model is the output of that process. And for the most part, when we say large, what large means is hundreds of billions of parameters, trained over trillions of words. aj_asver: When you say it kind of predicts the next word, now, that technology, the ability to predict the next word in a large language model, has existed for a few years. I think GPT-3, in fact, launched maybe a couple of years ago. Kevin: Even before that as well. And so next-word prediction is kind of the canonical task, or one of the canonical tasks, in natural language processing, even before it became this new field of transformers. aj_asver: And so what makes the current set of large-scale language models, or LLMs, as they're called, like GPT-3, different from what came before? Kevin: There are two innovations. The first is this thing called the transformer, and the way the transformer works is it basically has the ability, through this mechanism called attention, to look at the entire sequence and establish long-range correlations, like having different words at different places contribute to the output of next-word prediction. Kevin: And then the other thing that's been really big, and that OpenAI has done a phenomenal job of doing, is just learning how to put more and more data through these things. There are these things called the scaling laws, which essentially
were showing that if you just keep throwing more data at these things, their intelligence, essentially the metrics they're using to measure intelligence, just kept increasing. Kevin: Their ability to predict the next word accurately just kept growing with more and more Kevin: data. There's basically no bound. aj_asver: It seems like in the last few years, especially as we've gotten to, like, you know, multi-billion-parameter models like GPT-3, we've kind of reached some inflection point where now they seem to somehow be more obviously intelligent to us. And I guess it's really with ChatGPT recently that the attention has kind of been focused on large language models. aj_asver: So is ChatGPT the same as GPT-3, or is there more that makes ChatGPT able to interact with humans than just the language model? How ChatGPT works Kevin: My co-founder and I actually built a version of ChatGPT long before ChatGPT existed. And the biggest distinction is that these things are now being used in serious contexts of use. Kevin: And with OpenAI's distribution, they got this in front of a bunch of people. The problem that you face initially, the very first problem, is that there's a switch that has to flip when you use these things. When you go to a Google search bar, if you don't get the right result, you're primed to think, oh, I have to type in something different. Kevin: Historically with chatbots, when you went to a chatbot, if it didn't give you the right answer, you were pissed, because it's like a human, it's texting me, it's supposed to be right. And so the actual genius of ChatGPT, beyond the distribution, is not actually the model itself, because the model had been around for a long time and was being used by hackers and companies like mine who saw the potential. Kevin: But with ChatGPT came distribution, plus the ability to flip that switch so that you think, oh, I'm doing something wrong. I have to put in something different.
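Kevin's description of next-word prediction can be made concrete with a toy model. The sketch below (my own illustration, not anything from the episode) trains a bigram model in plain Python: it counts which word follows which in a tiny corpus, then predicts the most likely continuation. The corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each word, how often each possible next word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the continuation seen most often after `word` during training."""
    return counts[word.lower()].most_common(1)[0][0]

corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)
model = train_bigram(corpus)
print(predict_next(model, "sat"))  # "on" — it follows "sat" in both sentences
```

A real LLM replaces these lookup tables with a transformer holding hundreds of billions of parameters, and conditions on the whole preceding sequence via attention rather than just the last word, but the training objective is the same: predict the next token.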
And that's when the magic starts happening. Right now, at least. aj_asver: I remember chatbots circa 2015, right, for example, where they weren't running on a large language model. They were kind of deterministic behind the scenes. And they would be immensely frustrating, because they didn't really understand you, and oftentimes they'd kind of get stuck, or they'd provide you with these option lists of what to do next. ChatGPT, on the other hand, seems much more intelligent, right? I can ask it pretty open-ended questions. I don't have to think about how I structure the aj_asver: questions. Kevin: ChatGPT is not a chatbot. It's more like you have this arbitrary transformer between abstract formulations expressed in words. So you put in some words and you get some other words out, but behind it is almost the entirety of human knowledge condensed into this model. aj_asver: And did OpenAI have to teach the language model how to chat with us? Because I know that there were some early examples of trying to put, you know, chat-like questions into GPT, into its API, but I don't think the results were as good as what ChatGPT does today, right? Kevin: Since ChatGPT has been released, they've done quite a bit of tuning. So people are going and basically thumbs-upping and thumbs-downing different responses. Kevin: And then they use that feedback to fine-tune ChatGPT's performance in particular, and also probably as feedback for whatever comes next. But the primary distinction between it performing well and not is your perception of what you have to do. GPT improvements aj_asver: We're now at GPT 3.75, and Sam Altman also said that the latest version of GPT that Microsoft is using for Bing is an even newer version. aj_asver: So what are some of the things they're doing to make GPT better?
Every time they release a new version, that's making it like an even better language model and even better at interfacing with aj_asver: humans. Kevin: Well, if you use ChatGPT, one of the things you'll immediately notice is there's like a thumb
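The thumbs-up/thumbs-down feedback Kevin mentions feeds a family of techniques generally known as reinforcement learning from human feedback (RLHF). One core piece is a preference model: given two candidate responses and a human's choice between them, a reward model is trained so the preferred response scores higher. Below is a minimal sketch in plain Python of the Bradley-Terry preference objective often used for this; the scores, learning rate, and update rule are toy assumptions of mine, not OpenAI's implementation.

```python
import math

def preference_prob(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry model: probability the human prefers the first response,
    given the reward model's scalar scores for each."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

def update(r_chosen: float, r_rejected: float, lr: float = 0.5):
    """One gradient-ascent step on log preference_prob: push the chosen
    response's score up and the rejected one's down."""
    grad = 1.0 - preference_prob(r_chosen, r_rejected)
    return r_chosen + lr * grad, r_rejected - lr * grad

# Start with the reward model scoring both responses equally...
r_good, r_bad = 0.0, 0.0
for _ in range(20):  # ...then apply 20 rounds of "thumbs up" for the first one.
    r_good, r_bad = update(r_good, r_bad)

print(f"learned preference: {preference_prob(r_good, r_bad):.2f}")
```

In the full pipeline this reward model is then used to fine-tune the language model itself, steering it toward responses humans rate highly; this sketch shows only the preference-learning step.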

    27 minutes
  6. How to publish a children's book in a weekend using AI with Ammaar Reshi

    10/02/2023


    Hey Readers, In today’s post, I want to share one of the first episodes of a new podcast I’m working on. In the podcast, I will be exploring the world of AI to understand how it’s going to change the way we live, work and play, by interviewing creators, builders and researchers. In this episode, I interview Ammaar Reshi, a designer who recently wrote, illustrated and published a children’s book using AI! I highlighted Ammaar’s story in my first post a few weeks ago as a great example of how AI is making creativity more accessible to everyone. In the interview, Ammaar shares what inspired him to use AI to write a children’s book, the backlash he received from the online artist community, and his perspective on how AI will impact art in the future. If you’re new to AI and haven’t yet tried using generative AI tools like ChatGPT or MidJourney, this is a great video to watch, because Ammaar also shows us step-by-step how he created his children’s book. This is a must-watch for parents, educators or budding authors who might want to make their own children’s book too! To get the most out of this episode, I recommend you watch the video so you can see how all the AI tools we cover work > Youtube Video I hope you enjoy this episode. I’ll be officially launching the podcast in a few weeks, so it will be available on your favorite podcast player soon. In the meantime, I’ll be sharing more episodes here as I record them, and I would love your feedback in the comments!
Show Notes Links from the episode * Ammaar’s Twitter post on how he created a children’s book in a weekend: * Ammaar’s book “Alice and Sparkles”: https://www.amazon.sg/Alice-Sparkle-exciting-childrens-technology/dp/B0BNV5KMD8 * Ammaar’s Batman video: * ChatGPT for story writing: http://chat.openai.com * MidJourney for illustrations: Midjourney.com * Discord for using MidJourney: https://discord.com * PixelMator for upscaling your illustrations: https://www.pixelmator.com/pro/ * Apple Pages for laying out your book: https://www.apple.com/pages/ * Amazon Kindle Direct Publishing for publishing your book: https://kdp.amazon.com/en_US/ Episode Contents: * (00:00) Introduction * (01:55) Ammaar’s story * (05:25) Backlash from artists * (12:20) From AI books to AI videos * (16:20) The steps to creating a book with AI * (18:55) Using ChatGPT to write a children’s story * (23:45) Describing illustrations with ChatGPT * (26:00) Illustrating with MidJourney * (35:30) Improving prompts in MidJourney * (37:20) MidJourney pricing * (40:00) Downloading images from MidJourney * (44:20) Upscaling with Pixelmator * (49:25) Laying out the book with Apple Pages * (53:40) Publishing on Amazon KDP * (55:35) Ammaar shows us his hardcover book * (56:25) Wrap-up Full Transcript [00:00:00] Introduction ammaar: I think it has to start with your idea of a story, right? I think, you know, people might think, okay, you press a button, it spits out a book, but I think it has to start with your imagination. And then we will provide that to ChatGPT to kind of give us a base for our story. I think then we'll iterate with ChatGPT almost like a brainstorming partner. We're gonna go back and forth. We're gonna expand on characters and the arcs that we might want to, you know, go through. And I think once we have that, then we go back to imagining again. We have to think through how you take that script and that story and bring it to life, how you visualize it.
And that's where MidJourney comes in. And we're gonna generate art that fits that narrative and expresses that narrative in a really nice way. And then we can combine it all together with, you know, Pages to create that book format. aj_asver: Hey everyone, and welcome to the Hitchhiker's Guide to AI. I'm your tour guide AJ Asver, and in this podcast I explore the world of artificial intelligence to learn how AI will impact the way we live, work, [00:01:00] and play. Now, if you're a parent like me, in the middle of reading a book to your kids, this thought may have crossed your mind: hey, I think I could write one of these. But the idea of writing and publishing a children's book is, to many, a distant fantasy. That is, until now, because generative AI is making it easier than ever for anyone to become an author or illustrator. Just like today's guest, Ammaar Reshi, who wrote, illustrated and published a children's book for his friend's children in one weekend, using the latest AI products including ChatGPT and MidJourney. In this episode, Ammaar's going to show us exactly how he did it. So hitch a ride with me as we explore the exciting world of generative AI in this episode of the Hitchhiker's Guide to AI. Ammaar's Story aj_asver: Hi Ammaar, it's great to have you on the podcast. ammaar: Hi. Great to be here, AJ. How are you doing? aj_asver: I'm great. I'm so excited for you [00:02:00] to join me on this podcast, especially to be one of the first people I'm interviewing. It's gonna be a learning experience for me, and we'll work it out together. But anyway, I'm so excited to have you here. I talked about you in my newsletter in the first post, because I thought what you did by publishing your book, Alice and Sparkles, was such a great example of how AI is gonna change the world around us and really make creativity and being a creator a much more achievable and approachable thing for most people.
Since your book was published on Amazon in December of last year, you have sold, what is it, over 900 copies. Is that right? ammaar: Yeah. It's about 1,200 now, so yeah. Crazy. aj_asver: That is amazing! ammaar: Yeah, it's been wild. aj_asver: That is so cool. And at the same time, you've found yourself at the center of a growing debate about AI and the future of art. So tell us how it all happened. Rewind us back to the start. What made you, a designer at a tech company, decide to publish a children's book? ammaar: I guess what kicked it off, to go all the way back, is that two of [00:03:00] my best friends basically had their first kids. And I went to visit one of them. She had turned one year old. And I went over, and it was around her bedtime, where she grabbed my hand and took me upstairs. And I was like, what's going on? And they're like, she's picked you tonight to read her bedtime story. So I was like, wow. I was like, I am honored. This is one step closer to being that cool uncle. She hands me this book, it was something about getting all these animals delivered from the zoo. And so, I mean, my friend was there, and I was reading her this book, and we were both laughing, 'cause we were like, this book makes no sense at all. This story is so random. But she loved it. You know, she loved it. She loved the art, she loved everything about it. And it then kind of hit me in that moment. I was like, it'd be really fun to tell her a story of my own, you know? I just had no idea how I was gonna go and do that yet. I told my friend, I was like, I think the next time I come over, there's gonna be a book on her shelf. It's gonna be mine. And he was like, how are you gonna do that? I was like, well, gimme a weekend, I'll figure it out, right? I had already been playing with MidJourney since sometime in February [00:04:00] of, uh, last year. And so, you know, I knew generative AI and the artwork stuff was there, and it was really cool. And DALL-E had also blown up around then.
And so yeah, I knew, okay, if I wanted to illustrate this book, I could lean on MidJourney to help me with some of the creation, but I hadn't yet come across ChatGPT. And a friend of mine actually, just that moment, like that Friday, messaged me, he's like, have you seen ChatGPT? I've been playing with it all weekend, all week. I've even created music and chords and, like, chord progressions and stuff with it. And I was like, that's crazy. I wonder if it could help me craft a story, like a children's story. And then I wanted the story to be something a little meta, you know, but also a little personal. I remember as a kid, like, my dad let me play with his computer when I was like four or five years old, right? And that kind of led me down the path of going into tech, and all of that curiosity. And so I basically wanted to mash those two together. It was [00:05:00] this young girl who's curious about technology, and specifically about AI, and then ends up making her own. And that's essentially the prompt that I gave ChatGPT, and that's what set off the path into making this book. aj_asver: That is such a cool story. I think the bit you talked about where you're reading this book with a child, and you're lucky, because as an uncle you don't have to do it a hundred times. ammaar: That's what he said. Yeah. Backlash from artists aj_asver: Yeah. It can get pretty tiresome, and you do often wonder, wow, this book doesn't make a lot of sense, but hundreds or thousands of copies have been sold. It's really cool that you took the initiative to do that. Now, you were in the Washington Post recently, in an article titled "He made a children's book using AI. Then came the rage." Talk us through that. Why did you make so many people angry with a children's book? ammaar: Yeah, that was also dramatically unexpected. So when I first created the book that weekend, my goal was, I need a paperback in hand as soon as possible.
And Amazon KDP, which [00:06:00] I think is the most underrated part of this whole discussion, where there's a platform out there that can get you a paperback within a week, just upload a PDF. No one talks about that. I used KDP, got the book, and initially just gave it to my friends. That was the goal. And it was out there. And then I was like, this would actually be really fun to share with other friends as well. And so I put it on my Instagram, and I have a good amount of friends who are not in tech, and so they replied, and I put in my story: if you want a copy, I'm gonna gift it to you. Let me k

    58 minutes

Ratings & Reviews

5
out of 5
2 ratings
