Semi Doped

Vikram Sekar and Austin Lyons

The business and technology of semiconductors. Alpha for engineers and investors alike.

  1. Power as the Next Physics Wall for AI

    2 HOURS AGO

    What's common to optics and power that ruins everything in the era of AI? Resistance. The same physics that drove interconnects to optics is now driving low-voltage power delivery up to 800V. Austin Lyons (Chipstrat) and Vik Sekar (Vik's Newsletter) unpack it using the Kyber rack as an example. At 600kW and 48V, you're pushing 12,500 amps through a single rack. Power loss scales with I². The math doesn't work. The fix is 800V, and the parts come straight from the EV traction inverter ecosystem (SiC, GaN, IGBTs).

    We cover the full grid-to-GPU power conversion chain (substation, utility room, PSU, intermediate bus converter, VRM), why vertical power delivery is the CPO equivalent for power, and why power delivery is a far more wide-open problem than optics or HBM. Plus the new topology fight: 800V → 48V (reuse the existing 48V infrastructure) vs. 800V → 6V (skip 48V entirely, as TI and Navitas are pushing). We also touch on Coherent's six-inch indium phosphide ramp at Järfälla, Sweden, and why margins are the real read-through next quarter.

    Relevant reading:
    Vik's Substack post on power: https://www.viksnewsletter.com/p/power-delivery-as-the-next-physics-wall
    Google TPU 8i / 8t blog (Boardfly deep dive): https://cloud.google.com/blog/products/compute/tpu-8t-and-tpu-8i-technical-deep-dive

    Get more of Austin and Vik daily, free! Sign up here: https://www.semidoped.com/

    Follow Chipstrat:
    Newsletter: https://www.chipstrat.com
    X: https://x.com/austinsemis

    Follow Vik:
    Newsletter: https://www.viksnewsletter.com/
    X: https://x.com/vikramskr

    Chapters:
    (00:00) Intro
    (01:41) Memory tax: inflation, not innovation
    (03:46) Boardfly: 16 hops to 7
    (05:12) Coherent's six-inch indium phosphide ramp
    (12:15) Power is the next physics wall
    (15:08) Why 48V breaks at 600kW: 12,500 amps
    (23:05) 800V and vertical power delivery: CPO for power
    (30:34) Grid to GPU: every stage is a different supply chain
    (39:20) 800V → 48V or skip straight to 6V?
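
    A quick back-of-the-envelope in Python of the rack math above. The 600kW load and the 48V/800V bus voltages are from the episode; the 0.1 mΩ distribution resistance is a made-up illustrative value, since actual loss depends on the real busbar path.

        # Current scales as P/V, conduction loss as I²R, so moving the bus
        # from 48V to 800V cuts resistive loss by (800/48)² ≈ 278x.
        RACK_POWER_W = 600_000   # one Kyber-class rack, per the episode
        R_BUS_OHM = 0.0001       # hypothetical 0.1 mΩ distribution resistance

        for bus_v in (48, 800):
            current_a = RACK_POWER_W / bus_v       # I = P / V
            loss_w = current_a**2 * R_BUS_OHM      # P_loss = I² · R
            print(f"{bus_v:>4} V bus: {current_a:>8,.0f} A, {loss_w:>9,.1f} W lost")

        # 48 V: 12,500 A, ~15.6 kW lost; 800 V: 750 A, ~56 W lost.

    Same resistance, roughly 278x less conduction loss, which is the whole argument for 800V.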

    42 min.
  2. Masterclass on Google's TPU v8 Networking

    APR 24

    Google's Cloud Next 2026 keynote? Fire. 🔥 The TPU is now two chips instead of one (8t for training, 8i for inference), but more interestingly, it's two scale-up networking topologies too. Austin Lyons (Chipstrat) and Vik Sekar (Vik's Newsletter) walk through what actually changed, one day after the announcement. OCS? Yes. AECs? Yep. Copper? Yep. Optics? Yep.

    We cover Virgo (Google's 47 petabit/second scale-out fabric, built entirely on OCS), Boardfly (the new scale-up topology for MoE inference that cuts hop count from 16 to 7), and the 3D torus Google still uses for training. Why is optical circuit switching the substrate of Google's data center? Why do active electrical cables still carry scale-up traffic inside racks? Why did Google split the CPU layer too, with custom ARM Axion head nodes to keep the TPUs fed? Along the way we trace the Dragonfly topology lineage to a 2008 paper by John Kim, Bill Dally, Steve Scott, and Dennis Abts. Abts went on to build Groq's rack-scale interconnect before landing at Nvidia.

    Chapters:
    0:00 Intro
    0:21 Two TPUs for two workloads
    2:31 HBM, SRAM, and Axion CPUs
    7:22 Why networking is the new bottleneck
    17:14 Virgo: rebuilding scale-out on optics
    25:24 3D torus Rubik's Cube scale-up for training
    34:50 Boardfly: scale-up for MoE inference
    42:07 Workload-specific everything

    Follow Chipstrat:
    Newsletter: https://www.chipstrat.com
    X: https://x.com/austinsemis

    Follow Vik:
    Newsletter: https://www.viksnewsletter.com/
    X: https://x.com/vikramskr
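
    Since scale-up hop counts drive the Boardfly-versus-torus discussion, here is a minimal sketch of shortest-path hops on a wraparound 3D torus. The 4x4x4 shape is a hypothetical example for illustration, not the actual TPU pod geometry, and Boardfly's topology isn't public in detail.

        from itertools import product

        def torus_hops(a, b, shape):
            # Per axis, the shortest route can wrap around: min(forward, backward).
            return sum(min((ai - bi) % k, (bi - ai) % k)
                       for ai, bi, k in zip(a, b, shape))

        shape = (4, 4, 4)  # hypothetical torus, not the real pod shape
        nodes = list(product(*(range(k) for k in shape)))
        dists = [torus_hops(nodes[0], n, shape) for n in nodes]
        print(f"{len(nodes)} nodes: worst case {max(dists)} hops, "
              f"average {sum(dists) / len(dists):.2f}")
        # 64 nodes: worst case 6 hops, average 3.00

    Worst-case distance grows with the torus dimensions, which is why a topology that caps hop count matters for latency-sensitive MoE inference.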

    47 min.
  3. Meta VP Matt Steiner on Ads Infra, GPUs, MTIA, and LLM-Written Kernels

    APR 20

    Matt Steiner, VP of Monetization Infrastructure, Ranking & AI Foundations at Meta, walks through how Meta's ad system actually works, and why the infrastructure behind it differs from what you'd build for LLMs. We cover Andromeda (retrieval on a custom NVIDIA Grace Hopper SKU Meta co-designed), Lattice (consolidating N ranking models into one), GEM (Meta's Generative Ads Recommendation foundation model), and the adaptive ranking model, a roughly one-trillion-parameter recommender served at sub-second latency.

    We get into why recommender workloads aren't embarrassingly parallel like LLMs (the "personalization blob"), what that means for Meta's MTIA custom silicon roadmap, and how LLM-written kernels (KernelEvolve) flipped the economics of running a heterogeneous hardware fleet. Demand for software engineering has actually gone up as the price has come down. Meta now wants ~100x more optimized kernels per chip.

    Read the full transcript at https://www.chipstrat.com/p/an-interview-with-meta-vp-matt-steiner

    Chapters:
    0:00 Intro and scale
    0:39 How Meta's ad system works
    2:00 Meta Andromeda and the custom NVIDIA SKU
    3:30 Lattice: consolidating ranking models
    5:00 GEM, Meta's ads foundation model
    6:30 Adaptive ranking for power users
    8:17 The scale: 3B DAUs at sub-second latency
    9:40 Why longer interaction histories matter
    10:45 The anniversary gift analogy
    12:57 A decade of compute evolution
    15:21 Meta's infra as a CP-SAT problem
    16:07 Co-designing Grace Hopper with NVIDIA
    17:47 Matching compute shape to workload
    18:26 Influencing hardware and software roadmaps
    20:23 MTIA: why ads aren't LLMs
    22:07 The personalization blob and I/O ratios
    26:38 One trillion parameters at sub-second latency
    28:26 Heterogeneous hardware trade-offs
    29:30 KernelEvolve: LLMs writing custom kernels
    33:30 GenAI and recommender systems cross-pollination
    35:21 The 2-year infrastructure outlook
    37:00 Why demand for software engineering is rising
    38:53 How Matt stays on top of it all

    Relevant reading:
    KernelEvolve (Meta Engineering): https://engineering.fb.com/2026/04/02/developer-tools/kernelevolve-how-metas-ranking-engineer-agent-optimizes-ai-infrastructure/

    Follow Chipstrat:
    Newsletter: https://www.chipstrat.com
    X: https://x.com/chipstrat
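
    To put "one trillion parameters at sub-second latency" in perspective, a tiny sketch of the raw weight memory at common precisions. The bytes-per-parameter widths are standard; how Meta actually quantizes or shards the model isn't covered in the episode.

        PARAMS = 1_000_000_000_000  # ~1T-parameter adaptive ranking model

        # Weight footprint alone, before KV caches, embeddings, or activations.
        for name, nbytes in (("fp32", 4), ("fp16/bf16", 2), ("int8", 1)):
            print(f"{name:>9}: {PARAMS * nbytes / 1e12:5.1f} TB of weights")
        # fp32: 4.0 TB, fp16/bf16: 2.0 TB, int8: 1.0 TB

    Even at int8, the weights dwarf any single accelerator's memory, which is one reason serving this is an I/O and sharding problem rather than a pure compute one.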

    40 min.
  4. Reiner Pope (MatX): Designing AI Chips From First Principles for LLMs

    APR 9

    Reiner Pope is the co-founder and CEO of MatX, the startup building chips designed from first principles for LLMs. Before MatX, Reiner was on the Google Brain team training LLMs, and his co-founder Mike Gunter was on the TPU team. They left Google one week before ChatGPT was released.

    A counterintuitive throughput insight from the conversation: “Low latency means small batch sizes. That is just Little’s law. Memory occupancy in HBM is proportional to batch size. So you can actually fit longer contexts than you could if the latency were larger. Low latency is not just a usability win, it improves throughput.” (Made concrete in the sketch below.)

    We get into:
    • The hybrid SRAM + HBM bet, and why pipeline parallelism finally works
    • Overcoming the CUDA moat
    • Why frontier labs are willing to bet on an AI ASIC startup
    • Memory-bandwidth-efficient attention, numerics, and what MatX publishes (and what it does not)
    • Why 95% of model-side news is noise for chip design
    • Why sparse MoE drives MatX to “the most interconnect of any announced product”
    • How MatX uses AI for its own chip design
    • The biggest challenges ahead

    Chapters:
    00:00 “We left Google one week before ChatGPT”
    00:24 Intro: who is MatX
    01:17 Origin story: leaving Google for LLM chips
    02:21 GPT-3 and the “too expensive” problem
    04:25 Why buy hardware that is not a GPU
    05:52 Overcoming the CUDA moat
    08:46 Early investors
    09:35 The name MatX
    09:59 The chip: matrix multiply + hybrid SRAM/HBM
    12:11 Why pipeline parallelism finally works
    14:22 Reading papers and Google going dark
    15:20 Research agenda: attention and numerics
    17:06 Five specs and meeting customers where they are
    19:24 Why frontier labs are the natural first customer
    20:32 Workloads: training, prefill, decode
    22:18 Little’s law and the throughput case for low latency
    24:29 Interconnect and MoE topology
    26:35 Inside the team: 100 people, full stack
    28:32 Agentic AI: 95% noise for hardware
    30:35 KV cache sizing in an agentic world
    32:11 How MatX uses AI for chip design (Verilog + BlueSpec)
    34:23 Go to market: proving credibility under NDA
    35:12 Porting effort for frontier labs
    36:34 Biggest skepticism: manufacturing at gigawatt scale
    37:32 Hiring plug

    Austin Lyons @ Chipstrat: https://www.chipstrat.com
    Vik Sekar @ Vik's Newsletter: https://www.viksnewsletter.com/
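
    Making the Little's law point concrete: at fixed throughput, the in-flight batch is N = λ·W, and if KV-cache bytes scale with batch size × context length, cutting latency frees HBM for longer contexts. All numbers in this sketch are hypothetical, chosen only to show the shape of the trade-off.

        # Little's law: N = throughput * latency (requests in flight).
        THROUGHPUT_RPS = 100          # hypothetical requests/second, held fixed
        KV_BYTES_PER_TOKEN = 400_000  # hypothetical KV-cache bytes per token
        HBM_BUDGET = 96e9             # hypothetical HBM set aside for KV cache

        for latency_s in (2.0, 0.5):
            batch = THROUGHPUT_RPS * latency_s                   # N = λ·W
            max_ctx = HBM_BUDGET / (batch * KV_BYTES_PER_TOKEN)  # occupancy ∝ batch
            print(f"latency {latency_s}s -> batch {batch:.0f}, "
                  f"~{max_ctx:,.0f} tokens of context per request")
        # 2.0s -> batch 200, ~1,200 tokens; 0.5s -> batch 50, ~4,800 tokens.

    Same throughput, a quarter of the latency, four times the context headroom: that is the sense in which low latency is a throughput win for long-context serving.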

    39 min.
