How We Made That App

SingleStore

Welcome to “How We Made That App,” where we explore the crazy, wild, and sometimes downright bizarre stories behind the creation of some of the world’s most popular apps, hosted by the always charming and devastatingly handsome Madhukar Kumar. After starting his career as a developer and then as a product manager, he is now the Chief Marketing Officer at SingleStore. And he’s here to take you on a journey through the data, challenges, and obstacles that app developers face on the road to creating their masterpieces. On each episode, we’ll dive deep into the origins of a different app and find out what went into making it the success it is today. We’ll explore the highs and lows of development, the technical challenges that had to be overcome, and the personalities and egos that clashed along the way. With a signature blend of irreverent humor, snarky commentary, and razor-sharp wit, we’ll keep you entertained and informed as we explore the cutting edge of app development. So grab your favorite coding language, crank up the volume, and join us for “How We Made That App,” brought to you by the top app-building platform wizards at SingleStore.

  1. 04/30/2024

    How Flowise is Changing the GenAI App Revolution

    Join us on this intriguing journey where host Madhukar Kumar uncovers the story of FlowiseAI, an AI-powered chatbot tool that soared to fame in the open-source community. Henry Heng, the Founder of FlowiseAI, explains how the tool grew out of the need to streamline repetitive onboarding queries. Listen in as Henry shares the unexpected explosion of interest following its open-sourcing and how community engagement, spearheaded by creators like Leon, has been pivotal to its growth. The conversation takes a fascinating turn with the discussion of Flowise’s versatility, extending to AWS and SingleStore’s creative uses for product descriptions, painting a vivid picture of the tool’s expansive potential. Madhukar and Henry discuss the dynamic realm of data platforms, touching on the integration of large language models into developer workflows and the inevitable balance between commercial giants and open-source alternatives. Henry brings a personal perspective to the table, detailing his use of Flowise for managing property documentation and crafting an accompanying chatbot. Henry also addresses the critical issue of data privacy in enterprise environments, exploring how Flowise approaches these challenges. The strategy behind monetizing Flowise is also revealed, hinting at an upcoming cloud-hosted iteration and its future under the Y Combinator umbrella. Don't miss out on this insightful conversation on how FlowiseAI is revolutionizing GenAI!

    Key Quotes:

    "What I've experienced is that first you go through the architect. The architects of companies and the senior teams will decide what architecture we want to go with, and usually I was part of the conversation as well. We tend to decide between NoSQL or SQL depending on the use cases. For fast-changing or inconsistent schemas, rather than tabular, structured data, we often use NoSQL or MongoDB. And for structured data, we used MySQL at my previous company. That's how we decide based on the use cases."

    "Judging from the interactions that I have with the community, I would say 80 percent of them are using OpenAI. Open source is definitely catching up but is still lagging behind OpenAI. But I do see the trend starting to pick up, especially with Mixtral and Llama 2. But I think the cost is still the major factor. People tend to go with whichever large language model has the lowest cost, right?"

    Timestamps:
    (00:00) Building FlowiseAI to open source
    (05:07) Innovative use cases of Flowise
    (10:15) Types of users of Flowise
    (19:39) Database architecture and future technology
    (32:30) Quick hits with Henry

    Links:
    Connect with Henry
    Visit FlowiseAI
    Connect with Madhukar
    Visit SingleStore

    37 min
  2. 04/16/2024

    Pioneering AI Teaching Models with Dev Aditya

    On this episode of How We Made That App, join host Madhukar Kumar as he delves into the groundbreaking realm of AI in education with Dev Aditya, CEO and Co-Founder of the Otermans Institute. Discover the evolution from traditional teaching methods to the emergence of AI avatar educators, ushering in a new era of learning. Dev explores how pandemic-induced innovation spurred the development of AI models, revolutionizing the educational landscape. These digital teachers aren't just transforming classrooms and corporate training; they're also reshaping refugee education in collaboration with organizations like UNICEF. Dev takes a deep dive into the creation and refinement of culturally aware and pedagogically effective AI. He shares insights into the meticulous process behind AI model development, from the MVP's inception with 13,000 lines of Q&A to developing a robust seven-billion-parameter model, enriched by proprietary data from thousands of learners. We also discuss the broader implications of AI in data platforms and consumer businesses. Dev shares his journey from law to AI research, highlighting the importance of adaptability and logical thinking in this rapidly evolving field. Join us for an insightful conversation bridging the gap between inspiration and innovation in educational AI!

    Key Quotes:

    "People like web only and app only, right? They like it. But in about July this year, we are launching alpha versions of our products as Edge AI. Now that's going to be a very narrowed-down version of the language models that we are working on right now, taking from these existing stacks. So that's going to be about 99 percent our stuff, and it's going to be running on people's devices. It's going to help with people's privacy. Your data stays on your device. And even as a business, it actually helps a lot, because I am hopefully going to see a positive difference in our costs, because a lot of that cloud cost now rests on your device."

    "My way of dealing with AI is narrow intelligence: break a problem down into as many narrow points as possible, storyboard it, as micro as possible. If you can break that down, you can teach each agent and each model to do each piece phenomenally well. And then it's just an integration game. It will do better than a human being as, you know, a full director of a movie, if you, from the business-logic standpoint, understand what a director does; it is possible, theoretically. I don't think people go deep enough to understand what a teacher does, or what a doctor does, not just a surgeon, right? How they are thinking, what their mechanism is. If you can break that down, you can say there are, probably, 46 things that a doctor does, right? If you have 46 agents working together, each one knowing one of those things, it would be amazing. That's a different game. I think agents are coming."

    Timestamps:
    (00:00) AI avatar teachers in education
    (09:29) AI teaching model development challenges
    (13:27) Model fine-tuning for knowledge augmentation
    (25:22) Evolution of data platforms and AI
    (32:15) Technology trends in consumer business

    Links:
    Connect with Dev
    Visit the Otermans Institute
    Connect with Madhukar
    Visit SingleStore

    46 min
  3. 04/02/2024

    Revolutionizing Analytics Through User Privacy with Jack Ellis

    In this episode of How We Made That App, host Madhukar welcomes Jack Ellis, CTO and co-founder of Fathom Analytics, who shares the inside scoop on how their platform is revolutionizing the world of web analytics by putting user privacy at the forefront. With a privacy-first ethos that discards personal data like IP addresses post-processing, Fathom offers real-time analytics while ensuring user privacy, breaking away from traditional cookie-based tools like Google Analytics. Jack unpacks the technical challenges they faced in building a robust, privacy-centric analytics service, and he explains their commitment to privacy as a fundamental service feature rather than just a marketing strategy. Jack then dives into the fascinating world of web development and software engineering practices. Reflecting on Fathom's journey with MySQL and PHP, he details the trials and tribulations of scaling in high-traffic scenarios. He contrasts the robustness of PHP and the rising popularity of frameworks like Laravel with the allure of Next.js among the younger developer community. Jack also explores the evolution from monolithic applications to serverless architecture and the implications for performance and scaling, particularly when efficiently serving millions of data points. Jack touches on the convergence of AI with database technology and its promising applications in healthcare, such as enhancing user insights and decision-making. He shares intriguing thoughts on how AI can drive societal betterment, drawing examples from SingleStore's work with Thorn. You don't want to miss this revolutionizing episode on how the world of analytics is changing!

    Key Quotes:

    "When we started selling analytics, people were a bit hesitant to pay for it, but over time people have started valuing privacy over everything, and so it's just compounded from there as people have become more aware of the issues. Plenty of people will still only use Google Analytics, but the segment of the market that is moving towards solutions like ours is growing."

    "People became used to Google's opaque ways of processing data. They weren't sure what data was being stored, or how long Google was keeping the IP address for, and all of these other personal things as well. And we came along and we basically said, we're not interested in tracking person A around multiple different websites. We're actually only interested in person A's experience on one website. We do not, under any circumstances, want to have a way to be able to profile an individual IP address across multiple entities. And so we invented this mechanism where the web traffic would come in and we'd process it and we'd work out whether they're unique and whatever else. And then we would discard the personal data."

    "The bottleneck for most applications is not your web framework; it's always your database. I ran through Wikipedia's numbers and Facebook's numbers, and I said it doesn't matter, we can add compute, that's easy peasy. It's always the database, every single time, so stop worrying about what framework you're using and pick the right database that has proven that it can actually scale."

    "If you're using an exclusively OLTP database, you might think you're fine. But when you're trying to make mass modifications, mass deletions, mass moving of data, OLTP databases seem to fall over. I had RDS side by side with SingleStore, the same cost for both of them, and I was showing people how quickly SingleStore can do stuff. That makes a huge difference, and it gives you confidence, and I think that you need a database that's going to be able to do that."

    Timestamps:
    (00:55) Valuing consumers' privacy
    (06:01) Creating Fathom Analytics' architecture
    (20:48) Compounding growth to scale
    (23:08) Structuring team functions
    (25:39) Developing features and product design
    (38:42) Advice for building applications

    Links:
    Connect with Jack
    Visit Fathom Analytics
    Connect with Madhukar
    Visit SingleStore
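The mechanism Jack describes, working out whether a visitor is unique and then discarding the personal data, can be sketched roughly as follows. This is an illustrative assumption, not Fathom's actual code; the function names and the rotating-salt scheme are hypothetical stand-ins for whatever Fathom really does.

```python
import hashlib

def visitor_signature(ip: str, site_id: str, daily_salt: str) -> str:
    """Derive an anonymous signature for uniqueness counting.

    The raw IP is never persisted; only this one-way hash is kept,
    and the salt rotates so signatures cannot be linked across days
    or across sites (hypothetical scheme for illustration).
    """
    return hashlib.sha256(f"{daily_salt}:{site_id}:{ip}".encode()).hexdigest()

def process_hit(ip: str, site_id: str, daily_salt: str, seen: set) -> bool:
    """Record a page view and report whether this visitor is new today."""
    sig = visitor_signature(ip, site_id, daily_salt)
    is_unique = sig not in seen
    seen.add(sig)
    # The raw `ip` goes out of scope here; nothing personal is stored.
    return is_unique
```

Because the salt changes daily and per site, the same address cannot be profiled across entities, which is exactly the property Jack emphasizes.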

    41 min
  4. 03/19/2024

    Revolutionizing Language Models and Data Processing with LlamaIndex

    On this episode of How We Made That App, host Madhukar Kumar welcomes Co-Founder and CEO of LlamaIndex, Jerry Liu! Jerry takes us from the humble beginnings of GPT Index to the impactful rise of LlamaIndex, a game-changer in the data frameworks landscape. Prepare to be enthralled by how LlamaIndex is spearheading retrieval-augmented generation (RAG) technology, setting a new paradigm for developers to harness private data sources in crafting groundbreaking applications. Moreover, the adoption of LlamaIndex by leading companies underscores its pivotal role in reshaping the AI industry. Through the rapidly evolving world of language model providers, discover the agility of model-agnostic platforms that cater to the ever-changing landscape of AI applications. As Jerry illuminates, the shift from GPT-4 to Claude 3 Opus signifies a broader trend towards efficiency and adaptability. Jerry also explores the transformation of data processing, from vector databases to the advent of "live RAG" systems, heralding a new era of real-time, user-facing applications that seamlessly integrate freshly assimilated information. This is a testament to how LlamaIndex is at the forefront of AI's evolution, offering a powerful suite of tools that revolutionize data interaction. Concluding our exploration, we turn to the orchestration of agents within AI frameworks, a domain teeming with complexity yet brimming with potential. Jerry delves into the multifaceted roles of agents, bridging simple LLM reasoning tasks with sophisticated query decomposition and stateful executions. We reflect on the future of software engineering as agent-oriented architectures redefine the sector and invite our community to contribute to the flourishing open-source initiative. Join the ranks of data enthusiasts and PDF parsing experts who are collectively sculpting the next chapter of AI interaction!

    Key Quotes:

    "If you're a fine-tuning API, you either have to cater to the ML researcher or the AI engineer. And to be honest, most AI engineers are not going to care about fine-tuning if they can just hack together some system initially that kind of works. And so I think for more AI engineers to do fine-tuning, it either has to be such a simple UX that it's basically brainless, you might as well just do it, and the cost and latency have to come down. And then also there have to be guaranteed metrics improvements. Right now it's just unclear. You'd have to take your data set, format it, and then actually send it to the LLM and then hope that actually improves the metrics in some way. And I think that whole process could probably use some improvement right now."

    "We realized the open source will always be an unopinionated toolkit that anybody can go and use to build their own applications. But what we really want with the cloud offering is something a bit more managed, where if you're an enterprise developer, we want to help solve that clean-data problem for you so that you're able to easily load in your different data sources and connect them to a vector store of your choice. And then we can help make decisions for you so that you don't have to own and maintain that, and you can continue to write your application logic. So LlamaCloud, as it stands, is basically a managed parsing and ingestion platform that focuses on getting users clean data to build performant RAG and LLM applications."

    "You have LLMs that do decision-making and tool calling, and typically, if you just take a look at a standard agent implementation, it's some sort of query decomposition plus tool use. And then you make it a loop, so you run it multiple times, and by running it multiple times, that also means that you need to make this overall thing stateful, as opposed to stateless, so you have some way of tracking state throughout this whole execution run. And this includes conversation memory, this includes just using a dictionary, but basically some way of tracking…"
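The agent pattern Jerry outlines, an LLM deciding between tool calls inside a loop with shared state, can be sketched minimally as below. This is not LlamaIndex's API; the `decide` callable stands in for the LLM's decision step, and all names here are hypothetical.

```python
def run_agent(query, tools, decide, max_steps=5):
    """Toy agent loop: query decomposition plus tool use, run repeatedly.

    `decide` plays the role of the LLM: given the current state, it either
    picks a tool (with arguments) or returns a final answer. The `state`
    dict is the "some way of tracking state" Jerry mentions: it carries
    the query and a history of tool results across iterations.
    """
    state = {"query": query, "history": []}
    for _ in range(max_steps):
        action = decide(state)
        if action["tool"] == "finish":
            return action["answer"], state
        result = tools[action["tool"]](**action["args"])
        state["history"].append((action["tool"], result))
    return None, state  # step budget exhausted without a final answer
```

A stub `decide` that calls one arithmetic tool and then finishes is enough to exercise the loop; a real implementation would have the LLM produce the action from the state.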

    41 min
  5. 02/20/2024

    Data Dreams and AI Realities with Premal Shah

    In this engaging episode, host Madhukar Kumar dives deep into the world of data architecture, deployment processes, machine learning, and AI with special guest Premal Shah, the Co-Founder and Head of Engineering at 6sense. Join them as Premal traces the technological evolution of 6sense, from the early use of FTP to the current focus on streamlining features like GitHub Copilot and enhancing customer interactions with GenAI. Discover the journey through the adoption of Hive and Spark for big data processing, the implementation of microservice architecture, and massive-scale containerization. Learn about the team's cutting-edge projects and how they prioritize product development based on data value considerations. Premal also shares valuable advice for budding engineers looking to enter the field. Whether you're a tech enthusiast or an aspiring engineer, this episode provides fascinating insights into the ever-evolving landscape of technology!

    Key Quotes:

    "What is important for our customers is that 6sense gives them the right insight and gives them the insight very quickly. So we have a lot of different products where people come in and they infer the data from what we're showing. Now it is our responsibility to help them do that faster. So now we are bringing in GenAI to give them the right summary, to help them ask questions of the data right from within the product without having to think about it more, or open a support ticket, or ask their CSM."

    "We had to basically build a platform that would get all of our customers' data on a daily or hourly basis, process it every day, and give them insights on top of it. We had some experience with Hadoop and Hive at that time, so we used that as our big data platform, and then we used MySQL as our metadata layer to store things like who the customer is, what products there are, who the users are, et cetera. So there was a clear separation of small data and big data."

    "Pretty soon we realized that the world was moving to microservices, and we needed to make it easy for our developers to build and deploy stuff in a microservice environment. So we started investing in containerization and figuring out how we could deploy it, and at that same time Kubernetes was coming in, so using Docker and Kubernetes we were able to break up our monolith into microservices, and a lot of them. Now each team is responsible for their own service: scaling, managing, building, and deploying it. So the confluence of technologies and what you can foresee as being a challenge has really helped in making the transition to microservices."

    "We brought in SingleStore to say, 'let's just move all of our UIs to one data lake and everybody gets a consistent view.' There's only one copy. So we process everything on our Hive and Spark ecosystem, and then we take a subset of the processed data, move it to SingleStore, and that's the customer's access point."

    "We generally coordinate our releases around a particular time of the month. Especially for the big features, things go behind feature flags, so not every customer immediately gets them. Some things go into beta, some go direct to production, so there are different phases for different features. Then we have test environments set up so we can simulate as much as possible for the different integrations. Somebody has Salesforce, somebody has Marketo, Eloqua, HubSpot; all those environments can be tested."

    "A full-stack person is pretty important these days. You should be able to understand the concepts of data and storage, at least the basics: have a backing database, build an application on top of it, be able to write some backend APIs and backend code, and then build a decent-looking UI on top of it. That actually gives you an idea of what is involved end to end in building an application, versus being focused on 'I only do X versus Y.' You need the…"
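The staged rollout Premal describes, big features shipping dark behind flags and then opening up per phase, can be sketched as a tiny flag check. The flag names, stages, and customer cohorts below are invented for illustration; 6sense's actual flagging system is not described in the episode beyond this outline.

```python
# Hypothetical rollout table: each feature has a stage, and beta features
# carry an explicit cohort of opted-in customers.
ROLLOUT = {
    "genai_summaries": {"stage": "beta", "cohort": {"acme", "globex"}},
    "new_dashboard":   {"stage": "ga"},  # "direct to production"
}

def is_enabled(feature: str, customer: str) -> bool:
    """Gate a feature per customer: dark by default, cohort-only in beta."""
    cfg = ROLLOUT.get(feature)
    if cfg is None:            # unknown or retired flag: stay dark
        return False
    if cfg["stage"] == "ga":   # generally available to every customer
        return True
    if cfg["stage"] == "beta": # beta: only the opted-in cohort sees it
        return customer in cfg["cohort"]
    return False
```

The point of the pattern is that deploy and release are decoupled: code for a flagged feature can ship on the monthly train while exposure is widened cohort by cohort.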

    1h 3m

