The Data Center Frontier Show

Endeavor Business Media

Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guides to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.

  1. 2 DAYS AGO

    7x24 Exchange's Dennis Cronin on the Data Center Workforce Crisis

    The data center industry is racing into the AI era with bigger campuses, tighter timelines, and unprecedented infrastructure complexity. But in this episode of The Data Center Frontier Show Podcast, 7x24 Exchange International founding member and Mission Critical Global Alliance (MCGA) board member Dennis Cronin argues the industry’s biggest constraint may be the one it talks about least: people. Cronin’s message is direct: the “talent cliff” isn’t coming; it’s already here. Based on recent research into open roles, he estimates 467,000 to 498,000 openings in core data center positions (facilities and ops leadership, electrical, generator/UPS, HVAC, controls), plus another ~514,000 emerging roles tied to AI infrastructure, sustainability, and cyber-physical security, bringing the total to roughly one million jobs the industry needs to fill.

    A major driver is what Cronin calls the “five-year experience trap”: employers require five years of experience even for entry-level roles, but newcomers can’t get experience without being hired. The result is widespread talent poaching, with workers jumping from site to site for 10–20% raises without expanding the overall labor pool. Cronin also highlights a frequently missed reality in public policy debates: the job multiplier effect. While data centers may have lean direct staffing, they support a much larger ecosystem of contractors, service providers, and manufacturers, from generator and UPS technicians to security integrators and the electrical/mechanical supply chain, many of whom are already scrambling to hire.

    On training, Cronin explains why company-run programs and commercial training aren’t enough on their own. Internal academies often produce siloed specialists trained for a single operator’s environment, while commercial courses, often ~$1,000 per day per person, are typically designed to upskill people already in the industry, not to onboard new entrants. MCGA’s strategy focuses on community colleges as the most scalable on-ramp: affordable programs, scholarships, and hands-on labs that can produce strong technicians through two-year degree programs. Cronin cites programs at Cleveland Community College (NC), Northern Virginia Community College, and Southside Community College (VA), noting that dozens of schools are exploring data center curricula but funding remains a barrier.

    Cronin’s proposed solution is a true workforce ecosystem: outreach, standardized curriculum, certification labs, structured apprenticeships, and employer commitments. He also advocates replacing the “five years” requirement with an entry-level certification that proves foundational knowledge: industry acronyms and language, reading one-line diagrams, SOPs/MOPs, and, crucially, safety and situational awareness in electrical and mechanical environments. Finally, Cronin tackles the money question. With $60B in data centers announced this year, he says the industry needs a major, shared investment across operators, vendors, contractors, and manufacturers to fund training and scholarships at scale. The stakes are operational: in an era of gigawatt AI facilities and shrinking margins for error, workforce readiness is now a mission-critical issue.

    35 min
  2. 17 FEB

    Execution, Power, and Public Trust: Rich Miller on 2026’s Data Center Reality

    In the latest episode of The DCF Show Podcast, Data Center Frontier founder Rich Miller joins current DCF Editor in Chief Matt Vincent and Senior Editor David Chernicoff to examine where the data center industry stands as AI infrastructure moves from announcement to execution. Miller also discusses his new Data Center Richness podcast and Substack project, which explores how data center professionals consume content and learn about the rapidly evolving industry. With information overload now a reality, Miller’s goal is to distill the most important signals shaping infrastructure decisions.

    The conversation then turns to what defines 2026 for data centers: execution. After a year filled with megaproject announcements, the industry now faces the harder task of actually delivering campuses at AI scale, often under severe power constraints. With utilities struggling to keep pace, on-site generation is shifting from temporary solution to long-term strategy, as developers seek reliable ways to power projects while easing community concerns about grid impacts. Public resistance has also become a major factor. Miller notes that community opposition is now delaying or halting billions of dollars in projects, forcing operators to rethink how they engage with local stakeholders. Issues like power pricing and water usage are increasingly central to project approval.

    On the technology front, Nvidia’s roadmap continues to reshape infrastructure planning, with rack densities rising sharply, liquid cooling becoming standard, and new power distribution models emerging to support AI factories. At the same time, Miller expects the market to stratify, with some operators specializing in AI factories while others serve cloud and enterprise demand. The discussion also touches on nuclear power’s future role, with data centers positioning themselves as anchor customers, though meaningful SMR deployment remains years away.

    Ultimately, Miller argues that the industry is moving faster than ever, and 2026 will reveal how well today’s massive investments translate into real deployments. As he concludes: the next phase belongs to those who can deliver.

    39 min
  3. 10 FEB

    Nomads at the Frontier: PTC 2026 Signals an Execution Phase for Digital Infrastructure

    In this installment of Nomads at the Frontier, Data Center Frontier Editor-in-Chief Matt Vincent checks in with Nomad Futurist founders Nabeel Mahmood and Phillip Koblence for on-the-ground reflections from PTC 2026 in Hawaii, and a clear signal that the digital infrastructure market is shifting from hype to delivery. Mahmood says PTC 2026 reaffirmed the move toward integrated digital infrastructure, with attendance continuing to grow and conversations increasingly translating into real progress. But the defining theme across AI, investment, and deployments was power. As Koblence puts it, “all of those questions are power”; unlike prior years, the tone has moved from speculative site talk to “show me the money, show me the power,” with real timelines and secured capacity.

    The episode digs into the industry’s evolving stance on behind-the-meter generation, which is increasingly treated as the most viable medium-term path to getting online as grid bureaucracy and interconnection delays become the “long pole in the tent.” The discussion also tackles the sustainability tension in that shift: why the industry often kicks the can down the road, what alternative options (fuel cells, hydrogen) may offer, and why nuclear timelines don’t solve the near-term gap. Mahmood and Koblence also emphasize that the buildout isn’t just a power story; it’s a people and community story. Workforce shortages remain structural and long-lived, and community acceptance is now central to the industry’s “license to build.” Nomad Futurist’s mission, they argue, is becoming a bridge between digital infrastructure and the public, demystifying what the industry is, why it matters, and how the next generation can enter it.

    Finally, the conversation pressure-tests the AI boom: Mahmood predicts the “mega-scale AI factory” bubble will burst within three to five years, with growth shifting toward inferencing closer to users, but he still expects the sector to normalize into sustained double-digit expansion. And on Nvidia’s roadmap, both founders call for realism: megawatt racks may be coming, but as Koblence notes, “there are zero facilities” today that can support a 1–1.5 MW rack at scale.

    33 min
  4. 3 FEB

    Google Cloud on Operationalizing AI: Why Data Infrastructure Matters More Than Models

    In the latest episode of the Data Center Frontier Show Podcast, Editor in Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data. While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos.

    The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply returning exact database matches. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls. Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay.

    Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them. The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure. 🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.

    32 min
  5. 27 JAN

    Applied Digital CEO Wes Cummins

    Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule. Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and additional floor space to future-proof facilities as power and cooling requirements evolve.

    The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027. Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly (“LEGO brick” skids) to boost speed and reduce on-site labor risk.

    On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers (pumps, chillers, dual loops, and thermal storage) required to protect GPUs and hit five-nines reliability. He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.

    29 min
  6. 20 JAN

    Cadence’s Sherman Ikemoto on Digital Twins, Power Reality and Designing the AI Factory

    AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast. On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works.

    Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world. Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built.

    A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, critical as rack densities move from 10–20 kW to 50–100 kW and beyond. Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA.

    Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design. Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early.

    Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center. Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck.

    The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real-world accuracy, feeding better system- and facility-level models. Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system, where everything finally comes together, and it now needs to be designed with the same rigor as the silicon inside it.

    35 min
