The Data Center Frontier Show

Endeavor Business Media

Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.

  1. 4D AGO

    Google Cloud on Operationalizing AI: Why Data Infrastructure Matters More Than Models

    In the latest episode of The Data Center Frontier Show podcast, Editor in Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data. While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos. The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply returning exact database matches. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls. Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay. Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them. The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure. 🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.
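The hybrid pattern Krishnamurthy describes, filtering on structured fields while ranking by vector similarity, can be sketched in plain Python. This is an illustrative toy, not Google Cloud's API: the rows, field names, and `hybrid_search` helper are all hypothetical, and the embeddings are made-up numbers.

```python
# Hypothetical sketch of a "hybrid" query: apply a structured filter
# first, then rank the survivors by vector similarity to a query
# embedding. Not a real database API.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "customer" rows: structured fields plus an embedding of, say,
# the customer's support-ticket text (vectors invented for illustration).
rows = [
    {"id": 1, "region": "US", "embedding": [0.9, 0.1, 0.0]},
    {"id": 2, "region": "EU", "embedding": [0.8, 0.2, 0.1]},
    {"id": 3, "region": "US", "embedding": [0.1, 0.9, 0.3]},
]

def hybrid_search(rows, region, query_vec, k=2):
    # Cheap structured predicate first, then relevance ranking.
    candidates = [r for r in rows if r["region"] == region]
    return sorted(candidates,
                  key=lambda r: cosine(r["embedding"], query_vec),
                  reverse=True)[:k]

top = hybrid_search(rows, "US", [1.0, 0.0, 0.0])
```

In a production system the filter and the vector ranking would run inside one engine (avoiding the "new silo" problem the episode warns about), but the shape of the query is the same.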

    32 min
  2. JAN 27

    Applied Digital CEO Wes Cummins

    Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule. Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and even more floor space to future-proof facilities as power and cooling requirements evolve. The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027. Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly—“LEGO brick” skids—to boost speed and reduce on-site labor risk. On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers—pumps, chillers, dual loops, and thermal storage—required to protect GPUs and hit five-nines reliability. He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.
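For context on the five-nines target Cummins mentions, the downtime budget an availability figure implies is simple arithmetic; this back-of-envelope sketch is illustrative only and not from the episode.

```python
# Illustrative arithmetic: annual downtime allowed at a given
# availability target. "Five nines" (99.999%) leaves only about
# five minutes of downtime per year.

def downtime_minutes_per_year(availability):
    minutes_per_year = 365.25 * 24 * 60  # average year, incl. leap days
    return (1 - availability) * minutes_per_year

five_nines = downtime_minutes_per_year(0.99999)  # ~5.26 minutes/year
```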

    29 min
  3. JAN 20

    Cadence’s Sherman Ikemoto on Digital Twins, Power Reality and Designing the AI Factory

    AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast. On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works. Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world. Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built. A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, which is critical as rack densities move from 10–20 kW to 50–100 kW and beyond. Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA. Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits.
Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design. Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early. Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center. Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck. The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models. Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.
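The over-provisioning waste described above is easy to quantify. A rough sketch of the arithmetic follows; the 100 MW load and 25% margin are hypothetical numbers chosen within the 20–30% range the episode cites, not figures from Cadence.

```python
# Illustrative arithmetic only: capacity stranded when a facility
# over-provisions power for worst-case AI workload spikes.
# The inputs are hypothetical, not from the episode.

def stranded_capacity_mw(it_load_mw, margin):
    """Provisioned headroom (MW) beyond the expected IT load."""
    provisioned = it_load_mw * (1 + margin)
    return provisioned - it_load_mw

# A 100 MW IT load with a 25% safety margin strands 25 MW of grid
# capacity that sits idle most of the time.
stranded = stranded_capacity_mw(100, 0.25)
```

Simulation that ties application power behavior to the facility model lets designers justify a smaller margin, which is the recovery the episode points to.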

    35 min
  4. JAN 15

    Sustainable Data Centers in the Age of AI: Page Haun, Chief Marketing and ESG Strategy Officer, Cologix

    AI is reshaping the data center industry faster than any prior wave of demand. Power needs are rising, communities are paying closer attention, and grid timelines are stretching. On the latest episode of The Data Center Frontier Show, Page Haun of Cologix explains what sustainability really looks like in the AI era, and why it has become a core design requirement, not a side initiative. Haun describes today’s moment as a “perfect storm,” where AI-driven growth meets grid constraints, community scrutiny, and regulatory pressure. The industry is responding through closer collaboration among operators, utilities, and governments, sharing long-term load forecasts and infrastructure plans. But one challenge remains: communication. Data centers still struggle to explain their essential role in the digital economy, from healthcare and education to entertainment and AI services. Cologix’s Montreal 8 facility, which recently achieved LEED Gold certification, shows how sustainable design is becoming standard practice. The project focused on energy efficiency, water conservation, responsible materials, and reduced waste, lowering both environmental impact and operating costs. Those lessons now shape how Cologix approaches future builds. High-density AI changes everything inside the building. Liquid cooling is becoming central because it delivers tighter thermal control with better efficiency, but flexibility is the real priority. Facilities must support multiple cooling approaches so they don’t become obsolete as hardware evolves. Water stewardship is just as critical. Cologix uses closed-loop systems that dramatically reduce consumption, achieving an average WUE of 0.203, far below the industry norm. Sustainability also starts with where you build. In Canada, Cologix leverages hydropower in Montreal and deep lake water cooling in Toronto. In California, natural air cooling cuts energy use. Where geography doesn’t help, partnerships do. 
In Ohio, Cologix is deploying onsite fuel cells to operate while new transmission lines are built, covering the full cost so other utility customers aren’t burdened. Community relationships now shape whether projects move forward. Cologix treats communities as long-term partners, not transactions, by holding town meetings, working with local leaders, and supporting programs like STEM education, food drives, and disaster relief. Transparency ties it all together. In its 2024 ESG report, Cologix reported 65% carbon-free energy use, strong PUE and WUE performance, and expanded environmental certifications. As AI scales, openness about impact is becoming a competitive advantage. Haun closed with three non-negotiables for AI-era data centers: flexible power and cooling design, holistic resource management, and a real plan for renewable energy, backed by strong community engagement. In the age of AI, sustainability isn’t a differentiator anymore. It’s the baseline.
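The WUE metric Haun cites is simply liters of water consumed per kWh of IT energy. A minimal sketch of the calculation; the water and energy inputs below are made-up values chosen only to reproduce the reported 0.203, not Cologix data.

```python
# Water Usage Effectiveness (WUE) = site water used (liters) per kWh
# of IT equipment energy. The 0.203 result matches the figure cited
# in the episode; the inputs are hypothetical.

def wue(water_liters, it_energy_kwh):
    return water_liters / it_energy_kwh

# e.g., 2.03 million liters/year against 10 GWh of annual IT load:
value = wue(2_030_000, 10_000_000)
```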

    23 min
  5. JAN 6

    DataBank CFO Kevin Ooley on Financing for Scale in the AI Era

    In this episode of The Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Kevin Ooley, CFO of DataBank, about how the operator is structuring capital to support disciplined growth amid accelerating AI and enterprise demand. Ooley explains the rationale behind DataBank’s expansion of its development credit facility from $725 million to $1.6 billion, describing it as a strong signal of lender confidence in data centers as long-duration, mission-critical real estate assets. Central to that strategy is DataBank’s “Devco facility,” a pooled, revolving financing vehicle designed to support multiple projects at different stages of development, from land and site work through construction, leasing, and commissioning. The conversation explores how DataBank translates capital into concrete expansion across priority U.S. markets, including Northern Virginia, Dallas, and Atlanta, with nearly 20 projects underway through 2025 and 2026. Ooley details how recent deployments, including fully pre-leased capacity, feed a development pipeline supported by both debt and roughly $2 billion in equity raised in late 2024. Vincent and Ooley also dig into how DataBank balances rapid growth with prudent leverage, managing interest-rate volatility through hedging and refinancing stabilized assets into fixed-rate securitizations. In the AI era, Ooley emphasizes DataBank’s focus on “NFL cities,” serving enterprise and hyperscale customers that need proximity, reliability, and scale, while DataBank delivers power, buildings, and uptime and customers source their own GPUs. The episode closes with a look at DataBank’s long-term sponsorship by DigitalBridge, its deep banking relationships, and the market signals—pricing, absorption, and customer demand—that will ultimately dictate the pace of growth.

    21 min
