# Is Generative AI disruptive or sustaining?

A quick recap of Clayton Christensen's conceptual framework of disruptive and sustaining innovations...

## Disruptive Innovation

**Disruptive Innovation:** This refers to a process in which a smaller company with fewer resources successfully challenges established incumbent businesses. Disruptive innovations typically start by capturing the lower end of the market, offering products or services that are more affordable and accessible. Over time, these innovations improve in quality and performance, eventually displacing the established competitors. Disruptive innovations often change the competitive landscape and can lead to the creation of entirely new markets. A classic example is how digital photography disrupted the traditional film photography industry.

### Arguments For GenAI As Disruptive Innovation

- **Low-End Disruption:** Christensen often emphasized how disruptive innovations initially target the lower end of the market. Generative AI could initially appeal to smaller businesses or individuals who previously couldn't afford professional services in fields like design, content creation, or data analysis.
- **Market Transformation:** Generative AI has the potential to create new markets and value networks, especially in fields like art, content creation, and design, where it enables the creation of novel content that was previously impossible or required extensive human effort.
- **Accessibility:** By democratizing skills that were once niche or expert-level (like graphic design, coding, or prose writing), generative AI can disrupt traditional industries by making these skills accessible to a wider audience.
- **Cost Efficiency:** It can significantly reduce the cost of content production, potentially disrupting sectors reliant on human labor for these tasks.
- **Innovative Business Models:** The technology could lead to new business models, particularly in personalized content creation, marketing, and customer interaction, disrupting conventional business strategies.

### Arguments Against GenAI As Disruptive Innovation

- **Dependency on Existing Infrastructure:** Generative AI is highly dependent on existing data and computing infrastructure, suggesting it's more of an evolution than a radical market disruptor.
- **Ethical and Regulatory Constraints:** Potential ethical issues and regulatory hurdles, especially around data privacy and intellectual property, might slow down its disruptive impact.
- **Integration with Current Systems:** Rather than replacing existing systems, generative AI is often used to enhance them, suggesting a more gradual market evolution.

## Sustaining Innovation

**Sustaining Innovation:** Sustaining innovations, on the other hand, do not disrupt existing markets but rather evolve them. These innovations enhance or improve existing products, services, or processes, making them more efficient, effective, or accessible. They tend to support and extend the life of existing companies or industries rather than replacing them. An example of sustaining innovation is the evolution of smartphones: each new model offers improvements and additional features that enhance the user experience but do not disrupt the existing market the way the first smartphones did.

### Arguments For GenAI As Sustaining Innovation

- **Enhancing Current Products:** Generative AI often acts as an enhancement to existing digital products, like improving software with AI capabilities, which aligns with sustaining innovation.
- **Gradual Improvement:** The technology is seeing incremental improvements, aligning with the gradual enhancements characteristic of sustaining innovation.
- **Appealing to Existing Market:** In many cases, it serves the existing market better by offering more efficient, higher-quality outputs (as in graphic design, coding, or data analysis).

### Arguments Against GenAI As Sustaining Innovation

- **Potential for Market Transformation:** In the long term, generative AI could completely transform markets, not just sustain them.
- **Beyond Mere Improvement:** Generative AI introduces capabilities (like creating new forms of art or generating new data) that go beyond simple improvements to existing products.
- **Altering Consumer Behavior:** Its ability to change how consumers interact with technology (for instance, preferring AI-generated content) suggests a shift in market dynamics, not just a sustaining of existing ones.

## Further Reading

- 📺 Clayton Christensen: Disruptive innovation (Clayton Christensen presenting at the Saïd Business School at the University of Oxford, uploaded to YouTube in June 2013)
- 📃 What Is Disruptive Innovation? (Christensen, Raynor, and McDonald for the December 2015 issue of the Harvard Business Review)
- 📃 Sustaining vs. Disruptive Innovation: What's the Difference? (Catherine Cote for Harvard Business School Online, February 2022)
- 📺 Disruptive Technology vs. Sustaining Technology (Ashley Hodgson on YouTube, December 2022)
- 📃 Differences between early adopters of disruptive and sustaining innovations (Reinhardt and Gurtner, 2015)

## Generative AI Economics

### The Briefest of Overviews

The emerging Generative AI sphere breaks the model of software economics, to an extent.

- **Traditional Software Economics:** Building and launching a new SaaS product (for example) is low CapEx, high OpEx, and high margin.
- **Foundation Model Developer Economics:** High CapEx (considering that, as of right now, progress is gated by access to chips) and high OpEx (considering that data acquisition, data engineering, model training, and most importantly model inference are massive cost centers).
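As a back-of-envelope illustration of this contrast, here is a toy margin calculation. All figures are invented purely for illustration; they are not real company data.

```python
# Back-of-envelope sketch of the two cost structures described above.
# All figures are invented for illustration; they are not real company data.

def operating_margin(revenue, capex_amortized, opex):
    """Margin after amortized capital costs and operating costs."""
    return (revenue - capex_amortized - opex) / revenue

# Traditional SaaS: low CapEx, high OpEx, yet still high margin.
saas_margin = operating_margin(revenue=100, capex_amortized=5, opex=45)

# Foundation model developer: high CapEx (chips) and high OpEx
# (data acquisition, training, and above all inference).
fm_margin = operating_margin(revenue=100, capex_amortized=30, opex=60)

print(f"SaaS margin: {saas_margin:.0%}")            # comparatively high
print(f"Foundation model margin: {fm_margin:.0%}")  # comparatively low
```

The structural point, not the specific numbers, is what matters: at the same revenue, the foundation model developer's heavy capital and operating costs compress margins relative to traditional software.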
### Training and Inference Costs

| | Discriminative AI | Generative AI |
| --- | --- | --- |
| Training Costs | High | High |
| Inference Costs | Low | High |

### AI Training: The Learning Phase

Think of AI training like teaching a student. In this phase, you're giving the AI model lots of examples (this is your data) to learn from. These examples could be anything from pictures of cats and dogs to customer reviews.

- **How It Works:** The AI model looks at this data and tries to find patterns. For instance, it might notice that cat pictures often have pointy ears and whiskers. It's like studying for an exam: the model is trying to learn as much as possible from the data it's given.
- **The Goal:** The aim is to make the AI understand these patterns well enough that it can make its own decisions or predictions later. It's all about the model learning the rules of the game from the examples it sees.
- **The Challenge:** This phase can be resource-heavy. It requires a lot of computational power (think high-end GPUs or specialized hardware), and it can take a lot of time, depending on how complex the task is.

### AI Inference: The Application Phase

Now, imagine the student (our AI model) has graduated and is ready to apply what it learned in the real world. This is the inference phase.

**Inference in Discriminative AI:**

- **Overview:** This is like asking the AI model a multiple-choice question. You present it with new data (like an image or a piece of text), and based on its training, it categorizes or identifies this data. Think of it as asking, "Based on what you've learned, what do you think this is?"
- **Application:** It's widely used in tasks like image recognition (identifying objects in pictures), spam detection (categorizing emails), and sentiment analysis (judging whether a review is positive or negative).

**Inference in Generative AI:**

- **Overview:** Here, instead of categorizing, the AI is creating something new. It's like giving the AI a set of ingredients (data and conditions) and asking it to cook up a new dish (output).
This output could be a piece of text, an image, or even music.
- **How It Works:** The model uses the patterns it learned during training to generate new, original content. For instance, if it's been trained on a lot of landscape paintings, it can generate a new painting that doesn't exist yet but looks like it could belong to the same collection.
- **Application:** Generative AI is used in creative fields like art generation (creating new images) and content writing (generating articles or stories), and even in generating synthetic data for further AI training.

### Further Reading

- 📃 Navigating the High Cost of AI Compute (Appenzeller et al. for the Andreessen Horowitz blog, April 2023)
- 📃 Compute and Energy Consumption Trends in Deep Learning Inference (Desislavov et al., March 2023; arXiv: 2109.05472)
- 📃 How Inferencing Differs From Training in Machine Learning Applications (Sam Fuller for Semiconductor Engineering, January 2022)
- 📃 The Inference Cost Of Search Disruption – Large Language Model Cost Analysis (Dylan Patel and Afzal Ahmad in SemiAnalysis, February 2023)

## Where The Value Lies

As with any emerging technology, there are generally three types of players:

1. Those building core technology (doing whatever the equivalent of "bench science" is in their field)
2. Those packaging/implementing core technologies into specialized applications
3. Those providing the infrastructure to builders and packagers

At least at this stage, it's clear that captured value has accrued to these participants, in rank order:

1. **Infrastructure providers:** cloud providers (Microsoft Azure, Google Cloud Platform, Amazon Web Services, etc.) and semiconductor developers (Nvidia, Graphcore, Groq, Cerebras, etc., plus the chip development efforts at big companies like Amazon and Microsoft)
2. **Foundation model builders:** OpenAI, Anthropic, Deepgram, etc.
3. **Implementers:** companies that build "wrappers" around infrastructure and foundation models.
Put differently: companies/projects which integrate with foundation model APIs and package outputs from said APIs.

### Further Reading

- 📃 Exploring opportunities in the gen AI value chain (Härlin et al. for McKinsey Digital, April 2023)
- 📃 The value chain of general-purpose AI (Küspert et al. for the Ada Lovelace Institute, February 2023)
- 📃 "Behind the Hype: A Deep Dive into the AI Value Chain" (Arun Rao on Hash Collision, June 202
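To close, the training/inference distinction described earlier can be made concrete with a deliberately tiny sketch. The data, the word-count "classifier", and the bigram "generator" below are all invented for illustration; real systems use neural networks, but the two phases, learning patterns once and then applying them repeatedly, are the same.

```python
# Toy sketch of the training vs. inference phases described earlier.
# The data, the word-count "classifier", and the bigram "generator" are
# all invented for illustration; real systems use neural networks.
import random
from collections import Counter, defaultdict

# --- Training phase: learn patterns from labeled examples ---
examples = [
    ("the cat purrs on the mat", "cat"),
    ("the dog barks at the cat", "dog"),
    ("my cat sleeps all day", "cat"),
    ("the dog fetches the ball", "dog"),
]
word_counts = {"cat": Counter(), "dog": Counter()}  # per-label word frequencies
bigrams = defaultdict(list)                         # word -> observed next words
for text, label in examples:
    words = text.split()
    word_counts[label].update(words)
    for current, following in zip(words, words[1:]):
        bigrams[current].append(following)

# --- Discriminative inference: categorize new input ---
def classify(text):
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

# --- Generative inference: produce new content from learned patterns ---
def generate(start, length=5, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(classify("a dog barks loudly"))  # categorizes new input, like discriminative inference
print(generate("the"))                 # emits a new word sequence, like generative inference
```

The point is structural: the same training pass produces patterns that can serve either a categorization query (one lookup, cheap) or an open-ended generation request (one lookup per emitted word), which is a small-scale echo of why generative inference is the costlier phase.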