Stewart Squared

Stewart Alsop III reviews a broad range of topics with his father Stewart Alsop II, who started his career in the personal computer industry and is still actively involved in investing in startup technology companies. Stewart Alsop III is fascinated by what his father was doing as he was growing up in the Golden Age of Silicon Valley. Topics include:
- How the personal computing revolution led to the internet, which led to the mobile revolution
- The future of the internet and computing
- How AI ties the personal computer, the smartphone, and the internet together

  1. Episode #76: Dear Hollywood, Give Up: Lessons from Napster, Netflix, and the Inevitable

    3 days ago

    In this episode of the Stewart Squared podcast, host Stewart Alsop III speaks with his father Stewart Alsop II about the ongoing battle between Hollywood and Silicon Valley, focusing on the Warner Brothers Discovery saga involving potential buyers Netflix and Paramount (backed by tech investor David Ellison). Stewart Alsop II argues that Hollywood needs to stop "clutching their pearls" and accept that technology always wins in media, pointing to how this same pattern played out with Napster and the music industry. The conversation explores how the media landscape has shifted from broadcast television to cable to streaming, why Netflix's mastery of user experience gives it an edge over legacy studios, and how new immersive experiences like Meow Wolf represent the future of entertainment. They also discuss how AI coding tools are changing software development, the transition from large language models to world models, and why accepting technological defeat quickly is the only way forward for traditional media companies.

    Timestamps
    00:00 The Dynamic Between Hollywood and Silicon Valley
    09:42 The Evolution of Movie Experiences
    19:39 The Future of Media and Immersive Experiences
    29:33 The Intersection of AI, Video Games, and Coding
    33:54 Understanding World Models and Their Complexity
    40:04 The Shift from Producer to Consumer Control
    47:11 The Fragmentation of Media and Its Consequences
    51:09 Accepting Defeat in the Tech Business
    55:55 The Future of Media in a Streaming World

    Key Insights
    1. Technology Always Wins in Media Transformations: Throughout history, from the music industry's Napster revolution to newspapers and now Hollywood, the pattern is clear: technology fundamentally transforms every media sector it touches. The only viable strategy for legacy media companies is to stop resisting and adapt as quickly as possible. Those who clutch their pearls and defend old business models inevitably lose, while those who embrace technological change survive and sometimes thrive in the new landscape.
    2. The Paramount-Netflix Battle Represents a False Choice: Hollywood's preference for David Ellison's Paramount over Netflix to acquire Warner Brothers Discovery is misguided because both are fundamentally tech-driven companies. David Ellison, raised at the knee of Larry Ellison and Steve Jobs, is as much a "tech bro" as any Netflix executive. The real issue isn't choosing between Hollywood and Silicon Valley; it's that Hollywood has already lost and doesn't realize both options represent technology's dominance over traditional studio culture.
    3. Tech Value in Media Means Treating Users as Individuals, Not Cattle: The fundamental technological advantage Netflix has perfected is creating comprehensive user profiles and tailoring experiences to individual preferences. This manifests in details like the "skip intro" and "skip recap" buttons that minimize friction. Legacy services like Amazon Prime Video often fail at these seemingly small details, revealing they don't understand that technology's value lies in giving consumers control and personalized experiences rather than treating them as a mass audience in a factory farm model.
    4. The Music Industry Provides the Blueprint for Media's Future: When recorded music distribution collapsed with Napster, the industry had to return to music's fundamental economic drivers throughout human history: live performance, touring, and merchandise. Taylor Swift exemplifies this new model, owning her library as an asset while generating primary income through tours and merch. This same pattern will play out in film, where streaming handles distribution while new models emerge for creating value around content rather than distribution itself.
    5. Meow Wolf Represents a New Transcendent Media Form: Unlike traditional media that forces one dominant experience, Meow Wolf creates collaborative, multi-sensory experiences involving filmmakers, painters, welders, and every media type. Their upcoming Los Angeles exhibit in a former movie theater directly challenges Hollywood by offering agency to visitors rather than passive consumption. This represents where media is heading: beyond movies, beyond video games, into something entirely new that cannot be defined by comparing it to existing forms.
    6. Generational Differences in Information Processing Are Technology-Driven: Video games taught younger generations to process massive amounts of information rapidly ("twitchy"), fundamentally changing how people interact with media. Similarly, AI tools like Claude are now teaching a new generation how programming logic works, even without traditional coding skills. Each technological wave creates new cognitive capabilities, with younger generations naturally adapting to handle information flows that overwhelm older generations accustomed to different media paradigms.
    7. The Current AI Revolution Will Fragment Into Specialized Domains: While LLMs have revolutionized text-based tasks like coding, the next frontier is world models that can represent physical reality through pixels, movement, and spatial relationships rather than just language. Leaders like Yann LeCun and Fei-Fei Li recognize that LLMs are already legacy technology, and the competition has moved to who can build comprehensive world models first. Those still investing heavily in LLM infrastructure, like Meta, risk fighting yesterday's battle while the future moves beyond them.

    1 hr 5 min
  2. Episode #75: The Real-Time Problem: Why LLMs Hit a Wall and World Models Won't

    Feb 5

    In this episode of the Stewart Squared podcast, host Stewart Alsop III sits down with his father Stewart Alsop II to explore the emerging field of world models and their potential to eclipse large language models as the future of AI development. Stewart Alsop II shares insights from his newsletter "What Matters? (to me)" available at salsop.substack.com, where he argues that the industry has already maxed out the LLM approach and needs to shift focus toward world models, a position championed by Yann LeCun. The conversation covers everything from the strategic missteps of Meta and the dominance of Google's Gemini to the technical differences between simulation-based world models for movies, robotics applications requiring real-world interaction, and military or infrastructure use cases like air traffic control. They also discuss how world models use fundamentally different data types including pixels, Gaussian splats, and time-based movement data, and question whether the GPU-centric infrastructure that powered the LLM boom will even be necessary for this next phase of AI development. Listeners can find the full article mentioned in this episode, "Dear Hollywood: Resistance is Futile", at https://salsop.substack.com/p/dear-hollywood-resistance-is-futile.

    Timestamps
    00:00 Introduction to World Models
    01:17 The Limitations of LLMs
    07:41 The Future of AI: World Models
    19:04 Real-Time Data and World Models
    25:12 The Competitive Landscape of AI
    26:58 Understanding Processing Units: GPUs, TPUs, and ASICs
    29:17 The Philosophical Implications of Rapid Tech Change
    33:24 Intellectual Property and Patent Strategies in Tech
    44:12 China's Impact on Global Intellectual Property

    Key Insights
    1. The Era of Large Language Models Has Peaked: The fundamental architecture of LLMs, predicting the next token from massive text datasets, has reached its optimization limit. Google's Gemini has essentially won the LLM race by integrating images, text, and coding capabilities, while Anthropic has captured the coding niche with Claude. The industry's continued investment in larger LLMs represents a backward-looking strategy rather than innovation. Meta's decision to pursue another text-based LLM despite having early access to world model research exemplifies poor strategic thinking: solving yesterday's problem instead of anticipating tomorrow's challenges.
    2. World Models Represent the Next Paradigm Shift: World models fundamentally differ from LLMs by incorporating multiple data types beyond text, including pixels, Gaussian splats, time, and movement. Rather than reverting to the mean like LLMs trained on historical data, world models attempt to understand and simulate how the real world actually works. This represents Yann LeCun's vision for moving from generative AI toward artificial general intelligence, requiring an entirely different technological approach than simply building bigger language models.
    3. Three Distinct Categories of World Models Are Emerging: World models are being developed for fundamentally different purposes: creating realistic video content (like OpenAI's Sora), enabling robotics and autonomous vehicles to navigate the physical world, and simulating complex real-world systems like air traffic control or military operations. Each category has unique requirements and challenges. Companies like Niantic Spatial are building geolocation-based world models from massive crowdsourced data, while Maxar is creating visual models of the entire planet for both commercial and military applications.
    4. The Hardware Infrastructure May Completely Change: The GPU-centric data center architecture optimized for LLM training may not be ideal for world models. Unlike LLMs, which require brute-force processing of massive text datasets through tightly coupled GPU clusters, world models might benefit from distributed computing architectures using alternative processors like TPUs (Tensor Processing Units) or even FPGAs. This could represent another paradigm shift similar to when Nvidia pivoted from gaming graphics to AI processing, potentially creating opportunities for new hardware winners.
    5. Intellectual Property Strategy Faces Fundamental Disruption: The traditional patent portfolio approach that has governed technology competition may not apply to AI systems. The rapid development cycle enabled by AI coding tools, combined with the conceptual difficulty of patenting software versus hardware, raises questions about whether patents remain effective protective mechanisms. China's disregard for intellectual property, combined with its manufacturing superiority, further complicates this landscape, particularly as AI accelerates the speed at which novel applications can be developed and deployed.
    6. Real-Time Performance Defines Competitive Advantage: Technologies like Twitch's live streaming demonstrate that execution excellence often matters more than patents. World models require constant real-time updates across multiple data types as everything in the physical world continuously changes. This emphasis on real-time performance and distributed systems represents a core technical challenge that differs fundamentally from the batch processing approach of LLM training. Companies that master real-time world modeling may gain advantages that patents alone cannot protect.
    7. The Technology Is Moving Faster Than Individual Comprehension: Even veteran technology observers with 50 years of experience find the current pace of AI development challenging to track. The emergence of "vibe coding" enables non-programmers to build functional applications through natural language, while specialized knowledge about components like Gaussian splats, ASICs, and distributed architectures becomes increasingly esoteric. This knowledge fragmentation creates a divergence between technologists deeply engaged with these developments and the broader population, potentially representing an early phase of technological singularity.

    55 min
  3. Episode #74: From Cold War to AI War: Navigating Power, Surveillance, and the Future of Democracy

    Jan 29

    In this episode of the Stewart Squared podcast, host Stewart Alsop III sits down with his father Stewart Alsop II for a wide-ranging conversation that starts with insurance concepts but quickly expands into discussions about geopolitical systems, AI development, and patent law. The conversation covers the breakdown of the post-Reagan world order, the rise of surveillance technology through organizations like ICE, citizen intelligence networks like Protect 612 in Minneapolis, and the challenges of intellectual property protection in the age of LLMs. Stewart Alsop II shares insights from his venture capital experience at NEA regarding patent processes and discusses various AI researchers' perspectives, particularly expressing alignment with Yann LeCun's views on the future limitations of current language models. The episode also touches on smart home technology, with Stewart Alsop II describing his Lutron lighting system and discussing how researchers like Andrej Karpathy are applying AI to home automation.

    Timestamps
    00:00 Exploring the Intersection of Insurance and Crypto
    03:58 The Evolution of Global Power Dynamics
    07:59 The Role of Technology in Modern Governance
    11:50 Understanding Bureaucracy and Its Implications
    15:52 The Impact of Social Media on Public Perception
    19:44 The Future of AI and Intellectual Property
    23:48 Navigating the Complexities of Modern Economies

    Key Insights
    1. The Global Power Structure is in Fundamental Transition: The post-WWII and post-Cold War systems have ended, leaving an unstable world with Trump, Putin, and Xi Jinping as "dictatorial type people" creating uncertainty. The US-Soviet balance has been replaced by a US-China rivalry with Russia as a declining but disruptive force, while oil dynamics shift as the US and Venezuela combined now have more reserves than OPEC countries.
    2. Technology is Democratizing Intelligence and Surveillance: Citizens are using technology to monitor government activities, as seen in Minneapolis where groups like Protect 612 use real-time intelligence networks to track ICE operations. This creates a two-way surveillance dynamic where both government and citizens have unprecedented monitoring capabilities, fundamentally changing power dynamics.
    3. Intellectual Property Protection is Breaking Down in the AI Era: The traditional patent system cannot effectively protect AI innovations like LLMs because they're based on data manipulation rather than discrete inventions. This represents a fundamental shift from the venture capital model that relied heavily on IP moats, forcing companies toward "blitzscaling" strategies that depend on speed rather than legal protection.
    4. AI Development Has Reached a Critical Philosophical Divide: Leading AI researchers have fundamentally different views about AI's future impact, from Hinton's pessimism to Ng's optimism. Stewart Alsop II aligns with Yann LeCun's view that current LLMs are "tapped out" and innovation must move beyond current architectures, suggesting we're at an inflection point requiring new algorithmic approaches.
    5. Authoritarian Tendencies are Emerging Across Political Spectrums: Both left and right have abandoned faith in liberal representative government, with COVID policies demonstrating authoritarian impulses on the left while figures like Curtis Yarvin advocate for a return to monarchy-like CEO governance on the right. This represents a crisis of democratic legitimacy requiring technological solutions.
    6. Practical AI Applications are Revolutionizing Daily Life: Tools like Antigravity and Claude are enabling non-programmers to automate complex tasks through natural language commands, from web browsing to smart home management. This democratization of programming capabilities represents a fundamental shift in how humans interact with technology systems.
    7. Venture Capital's Traditional Model is Being Disrupted: The historical VC approach of funding IP-protected innovations for 20+ years is being challenged by AI's inability to be patented and the speed of technological change. Companies like Palantir evolved from service-heavy models to AI-driven platforms, while social media companies succeeded without patent protection through rapid scaling strategies.

    1 hr 9 min
  4. Episode #73: The Network Effect: How We Went from Manual Data Transfer to Global Information Warfare

    Jan 22

    In this wide-ranging episode of Stewart Squared, host Stewart Alsop III sits down with his father Stewart Alsop II to explore everything from the surprisingly complex world of 1980s data transfer, when moving files from a Commodore to a Mac required physical cables and serious technical know-how, to how AI is revolutionizing venture capital deal-making and legal negotiations. The conversation weaves through the evolution of computing from simple calculators to today's network-connected world, examines how AI tools like Claude are transforming enterprise programming, and discusses the changing metrics for startup success in an era where small teams can accomplish what once required large organizations. They also touch on global strategic shifts, the role of social media in modern politics, and the fundamental question of what computation actually gives us as a society, all while considering whether we're witnessing AI "eating the world" or simply the latest chapter in humanity's ongoing relationship with rapidly evolving technology.

    Timestamps
    00:00 Navigating the Landscape of Venture Capital
    02:53 Understanding Investment Structures and Risks
    05:46 The Role of Preferences in Financing
    08:50 The Evolution of Private Equity and Growth Equity
    11:43 The Impact of AI on Venture Capital
    17:41 The Future of Companies in an AI-Driven World
    28:38 The Inefficiencies of Big Tech
    31:58 The Evolution of Social Media Strategies
    32:28 Political Dynamics in Venezuela
    35:19 Global Power Shifts and Their Implications
    39:16 The Role of Technology in Modern Politics
    42:49 Generational Changes in Technology
    51:19 The Historical Context of Computing

    Key Insights
    1. Angel vs. VC Investment Philosophy: Stewart Alsop II distinguishes between angel investing (betting on founders with smaller checks of $25K-$100K based on personal conviction) and venture capital investing (requiring board seats and downside protection). Angels write off failures completely, while VCs structure deals to protect against various scenarios through term sheets and preferences.
    2. The Preference Stack Reality: Venture financing creates a "pancake stack" of preferences where later investors get paid first in liquidation events. This system protects professional investors but can disadvantage founders and earlier investors, especially in down rounds. The complexity increases with each financing round as new investors often punish prior rounds that didn't achieve expected returns.
    3. AI's Strategic Differentiation: Rather than "AI eating everything," success comes from strategic focus. Anthropic's Claude excels at enterprise programming tasks, while Google caught up to OpenAI through patient, targeted development. The winners are companies that make smart strategic decisions about where to apply AI, not just those with the most advanced technology.
    4. Technology Shifts Change Success Metrics: Each technological shift invalidates previous success metrics. The "mythical man-month" concept showed that adding more programmers doesn't linearly increase productivity. Now AI is similarly transforming how we measure programming effectiveness, potentially making smaller teams even more advantageous as AI handles routine coding tasks.
    5. The Network Revolution's Historical Context: The episode contrasts today's seamless data transfer with 1980s reality, when moving data between different computers (like Commodore to Mac) required physical connections and complex technical knowledge. This highlights how networking fundamentally transformed computing from isolated calculation machines to interconnected systems.
    6. Generational Acceleration: Technology change is accelerating across generations. Stewart Alsop II lived through the analog-to-digital transformation, while younger generations experience continuous technological shifts. This creates both opportunities and anxiety as people struggle to find stable ground in constantly evolving technological landscapes.
    7. Geopolitical Strategy and Technology: Current global events, from Venezuela to AI development, reflect how technology and traditional power structures intersect. Success requires understanding both technological capabilities and human strategic decision-making, as pure technological superiority doesn't guarantee geopolitical or business success.
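The "pancake stack" of liquidation preferences described in the key insights above can be sketched in a few lines of Python. This is a hypothetical illustration with invented round names and amounts, assuming simple 1x non-participating preferences paid latest round first; real term sheets add participation rights, caps, and conversion options.

```python
# Hypothetical liquidation-preference waterfall: later investors are paid
# back first, and common shareholders (founders, employees) receive only
# what is left. All numbers are invented for the example.

def liquidation_waterfall(exit_value, preferences):
    """Pay 1x non-participating preferences in reverse order of investment
    (latest round first); whatever remains goes to common shareholders.

    preferences: list of (round_name, amount_invested) in chronological order.
    """
    payouts = {}
    remaining = exit_value
    for round_name, invested in reversed(preferences):
        paid = min(invested, remaining)  # each round recovers at most its 1x
        payouts[round_name] = paid
        remaining -= paid
    payouts["common"] = remaining  # founders and employees take the residue
    return payouts

# A company raises $2M (Series A) then $8M (Series B) and exits for $9M:
stack = [("Series A", 2_000_000), ("Series B", 8_000_000)]
print(liquidation_waterfall(9_000_000, stack))
# Series B recovers its full $8M, Series A only $1M, and common gets nothing.
```

This makes the down-round dynamic concrete: an exit below the total raised can return money to the latest round while wiping out earlier rounds and common entirely.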

    59 min
  5. Episode #72: From Yahoo's Directory to Apple's Neural Chips: The Evolution of Structured Knowledge

    Jan 15

    In this episode of Stewart Squared, host Stewart Alsop III explores the critical role of ontologies in computing with his father Stewart Alsop II. The conversation covers how early internet pioneers like Yahoo and Amazon used ontologies to organize information, making it machine-readable, and examines whether companies like Apple might be leveraging ontological approaches for knowledge management. The discussion ranges from the historical Dewey Decimal System to modern applications in AI, the evolution of hardware-software integration, Apple's strategic positioning in the AI landscape, and the development of cloud computing infrastructure. Stewart Alsop II provides insights on technology readiness levels, the nature of LLMs as databases rather than active systems, and Apple's trust-focused strategy under Tim Cook's leadership. They also touch on the geopolitical implications of cloud infrastructure, including China's data center investments in Brazil, and debate the future of personal computing devices in an AI-driven world.

    Timestamps
    00:00 Welcome and ontology introduction, discussing how Yahoo and Amazon created ontologies for search and product catalogs to make data machine-readable.
    05:00 Dewey Decimal System analogy for ontologies, explaining how Yahoo used subject matter organization before LLMs eliminated directory needs.
    10:00 AI limitations in structured domains like coding, law, and music versus inability to create genuinely new solutions independently.
    15:00 Regulated industries using ontologies for documentation, challenges of AI handling unpredictable regulatory changes like RFK Jr.'s vaccine positions.
    20:00 Hardware-software boundaries discussion, Apple's virtualization success across different processor architectures with minimal cathedral-like teams.
    25:00 Apple's neural accelerators in M5 chips for local AI workloads, Apple Intelligence missteps and team restructuring away from Google-thinking.
    30:00 LLMs as inert databases requiring tools for activation, distinguishing between large and small language models on devices.
    35:00 Apple's personal computing vision with local LLMs, real-time data challenges versus static training model limitations.
    40:00 Cloud computing evolution from company data centers to modern real-time databases, searching for original cloud terminology origins.
    45:00 Technology readiness levels for hardware versus software's artistic squishiness, hardware fails hard while software fails soft principle.

    Key Insights
    1. Ontologies as Machine Reading Systems: Ontologies serve as structured frameworks that enable machines to read and understand data, similar to how the Dewey Decimal System organized libraries. Early internet companies like Yahoo and Amazon built ontologies for search and product catalogs, making information machine-readable. While LLMs have reduced reliance on traditional directories, ontologies remain crucial for regulated industries requiring extensive documentation.
    2. AI Excels in Structured Domains: Large language models perform exceptionally well in highly structured environments like coding, law, and music because these domains follow predictable patterns. AI can convert legacy code across programming languages and help with legal document creation precisely because these fields have inherent logical structures that neural networks can learn and replicate effectively.
    3. AI Cannot Innovate Beyond Structure: A fundamental limitation is that AI cannot create truly novel solutions outside existing structures. It excels at solving specific, well-defined problems within known frameworks but struggles with unstructured challenges requiring genuine innovation. This suggests AI will augment human capabilities rather than replace creative problem-solving entirely.
    4. Apple's Device-Centric AI Strategy: Apple is uniquely positioned to fulfill the original personal computing vision by building AI directly into devices rather than relying on cloud-based solutions. Their integration of neural accelerators into M-series chips enables local LLM processing, potentially creating truly personal AI assistants that understand individual users while maintaining privacy.
    5. The Trust Advantage in Personal AI: Trust becomes a critical differentiator as AI becomes more personal. Apple's long-term focus on privacy and user trust, formalized under Tim Cook's leadership, positions them favorably for personal AI applications. Unlike competitors focused on cloud-based solutions, Apple's device-centric approach aligns with growing privacy concerns about personal data.
    6. LLMs as Intelligent Databases, Not Operating Systems: Rather than viewing LLMs as active agents, they're better understood as sophisticated databases where intelligence emerges from relationships between data points. LLMs are essentially inert until activated by tools or applications, similar to how a brain requires connection to a nervous system to function effectively.
    7. Hardware-Software Integration Drives AI Performance: The boundary between hardware and software increasingly blurs as AI capabilities are built directly into silicon. Apple's ability to design custom chips with integrated neural processing units, communications chips, and optimized software creates performance advantages that pure software solutions cannot match, representing a return to tightly integrated system design.
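The idea of an ontology as a machine-readable subject tree, the Dewey Decimal and Yahoo-directory analogy discussed above, can be made concrete with a toy sketch. The categories and the `path_to` helper here are invented for illustration and bear no relation to any actual Yahoo or Apple schema.

```python
# A toy illustration of why an ontology makes data machine-readable:
# once items are tagged against a shared hierarchy, a program can answer
# structural questions (where does this topic live?) that no flat keyword
# search could. Categories are invented for the example.

ontology = {  # Yahoo-directory-style subject tree, heavily simplified
    "Computing": {
        "Hardware": ["chips", "routers"],
        "Software": ["operating systems", "applications"],
    },
    "Arts": {
        "Music": ["jazz", "rock"],
    },
}

def path_to(topic, tree, trail=()):
    """Return the category path leading to a topic, or None if absent."""
    for category, child in tree.items():
        if isinstance(child, list):       # leaf level: a list of topics
            if topic in child:
                return trail + (category, topic)
        else:                             # interior node: recurse deeper
            found = path_to(topic, child, trail + (category,))
            if found:
                return found
    return None

print(path_to("jazz", ontology))   # ('Arts', 'Music', 'jazz')
```

The point of the sketch is the contrast drawn in the episode: the hierarchy itself carries meaning a machine can traverse, which is exactly what a pile of untagged documents lacks.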

    47 min
  6. Episode #71: The AI Momentum Trap: When Venture Models Replace Business Models

    Jan 8

    In this episode of the Stewart Squared Podcast, host Stewart Alsop III sits down with his father Stewart Alsop II for another fascinating father-son discussion about the tech industry. They dive into the Osborne effect, a business phenomenon from the early computer days where premature product announcements can destroy current sales, and explore how this dynamic is playing out in today's AI landscape. Their conversation covers OpenAI's recent strategic missteps, Google's competitive response with Gemini and TPUs, the circular revenue patterns between major tech companies, and why we might be witnessing fundamental shifts in the AI chip market. They also examine the current state of coding AI tools, the difference between LLMs and true AGI, and whether the tech industry's sophistication can prevent historical bubble patterns from repeating.

    Timestamps
    00:00 The Osborne Effect: A Historical Perspective
    05:53 The Competitive Landscape of AI
    12:03 Understanding the AI Bubble
    21:00 The Value of AI in Coding and Everyday Tasks
    28:47 The Limitations of AI: Creativity and Human Intuition
    33:42 The Osborne Effect in AI Development
    41:14 US vs China: The Global AI Landscape

    Key Insights
    1. The Osborne Effect remains highly relevant in today's AI landscape. Adam Osborne's company collapsed in the 1980s after announcing its next computer too early, killing current sales. This same strategic mistake is being repeated by AI companies like OpenAI, which announced multiple products prematurely and had to issue a "code red" to refocus on ChatGPT after Google's unified Gemini offering outcompeted their fragmented approach.
    2. Google has executed a masterful strategic repositioning in AI. While companies like OpenAI scattered their efforts across multiple applications, Google unified everything into Gemini and developed TPUs (Tensor Processing Units) for inference and reasoning tasks, positioning themselves beyond just large language models toward true AI capabilities and forcing major companies like Anthropic, Meta, and even OpenAI to sign billion-dollar TPU deals.
    3. The AI industry exhibits dangerous circular revenue patterns reminiscent of the dot-com bubble. Companies are signing binding multi-billion dollar contracts with each other - OpenAI contracts with Oracle for data centers, Oracle buys NVIDIA chips, NVIDIA does deals with OpenAI - creating an interconnected web where everyone knows it's a bubble, but the financial commitments are far more binding than simple stock investments.
    4. Current AI capabilities represent powerful tools rather than AGI, despite the hype. As Yann LeCun correctly argues, Large Language Models that predict the next token based on existing data cannot achieve true artificial general intelligence. However, AI has become genuinely transformative for specific tasks like coding (where Claude dominates) and language translation, making certain professionals incredibly productive while eliminating barriers to prototyping.
    5. Anthropic has captured the most valuable market segment by focusing on enterprise programmers. While Microsoft's Copilot failed to gain traction by being bolted onto Office, Anthropic strategically targeted IT departments and developers who have budget authority and real technical needs. This focus on coding and enterprise programming has made them a serious competitive threat to Microsoft's traditional enterprise dominance.
    6. NVIDIA's massive valuation faces existential risk from the shift beyond LLMs. Trading at approximately 25x revenue compared to Google's 10x, NVIDIA's $4.6 trillion valuation depends entirely on GPU demand for training language models. Google's TPU strategy for inference and reasoning represents a fundamental architectural shift that could undermine NVIDIA's dominance, explaining recent stock volatility when major TPU deals were announced.
    7. AI will excel at tasks humans don't want to do, while uniquely human capabilities remain irreplaceable. The future likely involves AI handling linguistic processing and routine tasks, physical AI managing robotic applications, and ontologies codifying business logic, but creativity, intuition, and imagination represent fundamentally human capacities that cannot be modeled or replicated through data processing, regardless of scale or sophistication.

    46 min
  7. Episode #70: From Twitter to Threads: Escaping the Training Data Mines of Late Capitalism

    Jan 1

    In this episode of the podcast, host Stewart Alsop III engages in a wide-ranging conversation with Stewart Alsop II about data training, social media competition between X and Threads, and the broader technological landscape from semiconductors to AI. The discussion covers everything from Taiwan's dominance in chip manufacturing through TSMC, the evolution of supercomputers from Seymour Cray's innovations to modern GPU clusters, and the challenges facing early-stage companies trying to scale specialized technologies like advanced materials for semiconductor manufacturing. The conversation also touches on the complexities of cryptocurrency adoption, the changing nature of work in an increasingly specialized economy, and the implications of AI data centers for power consumption and infrastructure.

    Timestamps
    00:00 The Rise of Threads and Competition with X
    03:01 The Semiconductor Landscape: TSMC vs. Intel
    06:03 The Role of Supercomputers in Modern Science
    09:00 AI and the Future of Data Centers
    11:46 The Evolution of Computing: From Mainframes to Clusters
    14:54 The Impact of Moore's Law on Semiconductor Technology
    17:52 Heat Management in High-Performance Computing
    31:01 Power and Cooling Challenges in AI Data Centers
    33:42 Battery Technology and Mass Production Issues
    35:33 The Importance of Specialized Jobs in the Economy
    38:54 The Evolution of ARM and Its Impact on Microprocessors
    42:49 The Shift in Software Development with AI
    46:50 Trust and Data Privacy in the Cloud
    49:45 The Democratization of Investing and Its Challenges
    53:52 The Regulatory Landscape of Cryptocurrency

    Key Insights
    1. TSMC's foundry dominance stems from strategic focus, not outsourcing. Taiwan Semiconductor Manufacturing Company became the global chip leader by specializing purely in manufacturing chips for other companies, while Intel failed because it couldn't effectively balance making its own chips with serving as a foundry for competitors. This wasn't about unions or cheap labor - it was about TSMC doing foundry work better than anyone else.
    2. Scale economics have fundamentally transformed computing infrastructure. The shift from custom supercomputers like Seymour Cray's machines to clusters of networked mass-produced computers represents a broader principle: you can't compete against scale with handcrafted solutions. Today's "supercomputers" are essentially networks of standardized components communicating at extraordinary speeds through fiber optics.
    3. AI infrastructure is creating massive resource bottlenecks. Sam Altman has cornered the market on DRAM essential for AI data centers, while power consumption and heat dissipation have become national security issues. The networking speed between processors, not the processors themselves, often becomes the limiting factor in these massive AI installations.
    4. Trust is breaking down across institutions and platforms. From government competence to platform reliability, trust failures are driving major shifts. Companies like Carta are changing terms of service to use customer data for AI training, while social media platforms like Twitter/X are being used as training data farms, prompting migrations to alternatives like Threads.
    5. Personal software development is becoming democratized while enterprise remains complex. Individuals can now build functional software for personal use through AI coding assistance, but scaling to commercial applications still requires traditional expertise in manufacturing, integration, and enterprise sales processes.
    6. Cryptocurrency regulation is paradoxically centralizing a decentralized system. Trump's GENIUS Act forces stablecoin issuers to become banks subject to transaction censorship, while major Bitcoin holders like Michael Saylor introduce leverage risks that could trigger broader market instability.
    7. User experience remains the critical barrier to technology adoption. Despite decades of development, cryptocurrency interfaces are still incomprehensible to normal users, requiring complex wallet addresses and multi-step processes that prevent mainstream adoption - highlighting how technical sophistication doesn't guarantee usability.

    1 hr 2 min
  8. Episode #69: From Floppy Disks to Claude Code: Riding the AI Dragon

    December 25, 2025


    In this episode of Stewart Squared, host Stewart Alsop III talks with his father, Stewart Alsop II, covering a wide range of technology topics from their unique generational perspective, in which the father often introduces cutting-edge tech to his millennial son rather than the reverse. The conversation spans their experiences with Meta's Threads platform and its competition with X (formerly Twitter), the evolution of AI from 1980s symbolic AI through today's large language models, and Microsoft's strategic shift from serving programmers to becoming an enterprise-focused company. They also explore the historical development of search technologies and ontologies, and how competing technologies can blind us to emerging possibilities, drawing connections between past computing paradigms and today's AI revolution. To learn about Stewart Alsop II's firsthand experience with Threads, check out his Substack at salsop.substack.com.

    Timestamps
    00:00 Stewart III shares how his dad unusually introduces him to new tech like Threads, reversing typical millennial-parent dynamics
    05:00 Discussion of Stewart's Chinese hardware purchase and Argentina's economic challenges with expensive imports and subsidies
    10:00 Analyzing Twitter's transformation under Musk into a digital warlord platform versus Threads serving normal users
    15:00 Threads algorithm differences from Facebook and Instagram, photographer adoption, surpassing Twitter's daily active users
    20:00 Threads provides the original Facebook experience without ads while competing directly with Twitter for users
    25:00 Exploring how both Musk and Zuckerberg collect training data for AI through social platforms
    30:00 Meta's neural tracking wristband and Ray-Ban glasses creating invisible user interfaces for future interaction
    35:00 Reflecting on living in the technological future compared to 1980s symbolic AI research limitations
    40:00 Discussing symbolic AI, ontologies, and how Yahoo and Amazon used tree-branch organization systems
    45:00 Examining how Palantir uses ontologies and relational databases for labeling people, places, and things
    50:00 Neuro-symbolic integration as a solution to AI hallucination problems using knowledge graphs and validation layers
    55:00 Google's strategic integration approach versus OpenAI's chatbot focus creating a competitive pincer movement

    Key Insights
    1. Social Media Platform Evolution Through AI Strategy - The discussion reveals how Threads succeeded against Twitter/X by offering genuine engagement for ordinary users versus Twitter's "digital warlord" model that only amplifies large followings. Zuckerberg strategically created Threads as a clean alternative while abandoning Facebook to older users stuck in AI-generated loops, demonstrating how AI considerations now drive social platform design.
    2. Historical AI Development Follows Absorption Patterns - The conversation traces symbolic AI from 1980s ontology-based systems through Yahoo's tree-branch search structure to modern neuro-symbolic integration. Nothing invented in computing disappears; instead, older technologies get absorbed into new systems. This pattern explains why current AI challenges like hallucinations might be solved by reviving symbolic AI approaches for provenance tracking.
    3. Enterprise vs. Consumer AI Strategies Create Competitive Advantages - Microsoft's transformation from a programmer-focused company under Gates to an enterprise company under Satya Nadella exemplifies strategic positioning. While OpenAI focuses on consumer subscriptions and faces declining signups, Anthropic's enterprise focus provides more stable revenue. The enterprise environment makes AI agents more viable because business requirements are more predictable than diverse consumer needs.
    4. Integration Beats Best-of-Breed in Technology Competition - Google's recent AI comeback demonstrates the Microsoft Office strategy: integrating all AI capabilities into one platform rather than forcing users to choose between separate tools. This integration approach historically defeats specialized competitors, as seen when Microsoft Office eliminated WordPerfect and Lotus by bundling everything together rather than competing on individual features.
    5. Technology Prediction Limitations and Pattern Recognition - The discussion highlights how humans consistently fail to predict technology developments beyond 2-3 years, while developments within 12 months are predictable. This creates blind spots where dominant technologies (like transformers) capture all attention while other developments (like the metaverse) continue evolving unnoticed, requiring pattern recognition skills that current AI lacks due to its reliance on historical data.
    6. Network Effects Transformed Computing Fundamentally - The shift from isolated computers with small datasets in the 1980s to today's high-speed global networks created possibilities unimaginable to early AI researchers. This network transformation explains why symbolic AI failed initially but might succeed now, and why companies like Palantir can use ontologies effectively with massive connected datasets that weren't available during the 1980s AI bubble.
    7. Professional Identity Boundaries Shape Technology Adoption - The distinction between hobbyist programmers seeking creative expression and IT professionals whose job is to "say no" and maintain standards reveals how professional roles influence technology adoption. This dynamic explains both historical patterns (like the Apple vs. enterprise IT conflicts) and current challenges (like Microsoft Copilot adoption issues), showing how organizational structures affect technological progress beyond pure technical capabilities.

    59 min

About This Podcast

Stewart Alsop III reviews a broad range of topics with his father Stewart Alsop II, who started his career in the personal computer industry and is still actively involved in investing in startup technology companies. Stewart Alsop III is fascinated by what his father was doing as SAIII was growing up in the Golden Age of Silicon Valley. Topics include:
- How the personal computing revolution led to the internet, which led to the mobile revolution
- Now we are covering the future of the internet and computing
- How AI ties the personal computer, the smartphone and the internet together

More from Crazy Wisdom