When we say 'the cloud' what we mean is 'the data centre'. Globally, data centres are projected to consume over 1,000 terawatt hours in 2026. What does that mean for energy production, distribution, and consumption? Guest Phil Harris, Cerio President and CEO, joins thinkenergy to shed light on something we all rely on but may not fully understand: from efficiency to sustainability, environmental concerns to Cerio's role in improving how data centres manage energy. Listen in for the future of cloud computing.

Related links
● Cerio: https://www.cerio.ai/
● Phil Harris on LinkedIn: https://www.linkedin.com/in/paharris/
● Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-8b612114
● Hydro Ottawa: https://hydroottawa.com/en

To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405
To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl
To subscribe on Libsyn: http://thinkenergy.libsyn.com/

---

Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited
Follow along on Instagram: https://www.instagram.com/hydroottawa
Stay in the know on Facebook: https://www.facebook.com/HydroOttawa
Keep up with the posts on X: https://twitter.com/thinkenergypod

---

Transcript:

Trevor Freeman 00:07
Welcome to thinkenergy, a podcast that dives into the fast-changing world of energy through conversations with industry leaders, innovators and people on the front lines of the energy transition. Join me, Trevor Freeman, as I explore the traditional, unconventional and up-and-coming facets of the energy industry. If you have any thoughts, feedback or ideas for topics we should cover, please reach out to us at thinkenergy@hydroottawa.com. Hi everyone, and welcome back.
Data centres have come up a number of times on this show, and for very good reason: they have become a key underpinning technology for so much of our lives. Every time we pull out that phone from our pockets to pull up directions, or buy something online, or doomscroll on your social media or news site of choice, every time you use your phone to stream a movie, leverage an AI model, whatever you end up using it for. It's funny, as I read this list, I'm sure there's some university student out there who's thinking, man, what is this old man talking about? We don't use our phones for that. Well, whatever the kids are doing these days, whatever we're doing these days with our phones, our computers, our tablets, et cetera, all of that leverages infrastructure that most of us have never seen and, quite frankly, probably don't really understand. We talk about the cloud like it's this amorphous, nebulous thing, but in reality, we're talking about real hardware in a real building that uses real energy, mainly electricity, and a lot of water. And this isn't really new; we've been leveraging centralized data centres for many years now. But what is changing is the scale of the data centres we're seeing now, and the pace of growth in the computing power that we need to do the things that we want to do, and that our data centres are able to deliver. So just to throw a few numbers at it: the traditional data centre servers that powered, say, the early days of on-demand online streaming services used anywhere from five to 15 kilowatts per rack. But modern server racks, the kind used to power AI searches, for example, can hit anywhere from 60 to 100 kilowatts per rack. This is great from a computing power per rack perspective, but it means massive energy needs, and that is showing up in the size of the load requests that we're seeing from new data centres.
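To put those per-rack figures in perspective, here is a back-of-the-envelope sketch in Python. The 10 kW and 80 kW per-rack loads fall inside the ranges quoted above; the rack count of 1,000 and the assumption of constant full load are hypothetical, purely for illustration.

```python
# Rough annual-energy arithmetic for the per-rack figures quoted above.
# Assumptions (not from the episode): 1,000 racks, running at constant load.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_twh(num_racks: int, kw_per_rack: float) -> float:
    """Annual energy at constant load, in terawatt hours (1 TWh = 1e9 kWh)."""
    kwh_per_year = num_racks * kw_per_rack * HOURS_PER_YEAR
    return kwh_per_year / 1e9

legacy = annual_twh(1_000, 10)   # traditional racks, ~5-15 kW each
modern = annual_twh(1_000, 80)   # AI-era racks, ~60-100 kW each

print(f"legacy hall: {legacy:.4f} TWh/yr")   # 0.0876 TWh/yr
print(f"AI-era hall: {modern:.4f} TWh/yr")   # 0.7008 TWh/yr
print(f"ratio: {modern / legacy:.0f}x the energy for the same rack count")
```

At these assumed sizes, the same rack count draws eight times the energy, and roughly 1,400 such AI-era halls would together approach the 1,000 TWh global projection mentioned in the episode.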
New data centres today are asking for service connections that are orders of magnitude higher than those built even just five years ago. Globally, data centres are projected to consume over 1,000 terawatt hours in 2026. And just a quick refresher from high school, or wherever you would have learned this: a terawatt is 1,000 gigawatts, and a gigawatt is 1,000 megawatts. So 1,000 terawatt hours is roughly equivalent to the annual electricity demand of Japan, an entire country. So given all of this, there are a lot of incentives to find ways to maximize efficiency and reduce some of that energy demand, and that's where my next guest, Phil Harris, and his company Cerio come into play. I'll let Phil get into the details of exactly what Cerio does, but essentially, their goal is to reimagine the data centre to maximize sustainability and reduce energy needs. Phil is Cerio's President and CEO, and has been in the networking and data centre industry for over 35 years, including at well-known companies like Intel and Cisco. And I'm really excited about this conversation. One, to understand how we make data centres a little bit more efficient, or maybe a lot more efficient, but also just to really understand: what are we talking about when we talk about a data centre? What is actually happening? What is physically inside these buildings? We'll get into a little bit of that in our conversation. So Phil, welcome to the show.

Phil Harris 04:13
Well, thanks, Trevor. I appreciate it.

Trevor Freeman 04:13
So Phil, obviously we're here today to talk about your work building sustainable data centres, or trying to make data centres a little bit more sustainable. But before we get into that: you've spent decades of your career at different tech giants, Intel and Cisco to name two, and you've seen quite a bit of change.
No doubt, over your time. Has that change been linear? Does the industry grow fairly steadily, or is it in big jumps? And are we on the cusp of any major shifts? What can you tell us about the future of this sector, data, tech, etc.?

Phil Harris 04:48
It's interesting. I was at companies like Cisco, for example, from when it was a very small company to when it was a very large company. And this should be no surprise for anybody: the bigger the company gets, the harder it is to change, and they really find that the only way they change is when they absolutely have to, not because they want to. That's a combination of just inertia and shareholder expectations and a whole bunch of things. So I would say that the bigger the company is, the harder it is for them to react. And so I think small, nimble companies tend to do much better when there's a lot of transformational technology and development and change in the overall ecosystem we live in. On the second part of your question, I look at the current situation as a point in time where a lot of companies will have to make some significant changes, simply because we're hitting too many walls, technological walls, commercial walls, geopolitical walls, that are really confining what people can do. So I think we're about to see a significant change, and this is not atypical in the industry. If we think back to the start of what we would think of today as computer science, around the mainframes of the 60s, for about a decade and a half, two decades, there was a lot of dominance around a particular way of doing things.
And then some new, innovative technology came along that rapidly changed that, scaled out, and it went from a very dominant set of players to a much larger number of smaller players who could then provide more innovation and more scale and more choice. And I think we're about to see that transition occurring as well.

Trevor Freeman 06:25
So is there sort of an analogous time, 10 years ago, 20 years ago? Are we on the cusp of the kind of big change that we've seen before? What would you compare this to, you know, in the last 20, 30 years?

Phil Harris 06:40
Yeah. I mean, I think there have been eras of compute. We can find analogies outside of the computing world, but let's just stay in computer science. I gave the mainframe example as one, and then we went to what we call client-server, which scaled out rapidly. Telephony: we went from big telephone exchanges that started in the government space, then went to very large organizations, and now we've completely scaled out how we make phone calls, to use that now 20th-century terminology. Nobody really makes telephone calls anymore. And we went through this with cloud computing and the Internet, where there was a change in the approach to the way we did things that suddenly gave us a scale-out mentality rather than a scale-up mentality. And I think that's what we have to key in on here. I was on a panel yesterday where we were talking about scale, and I said, well, to scale or not to scale? That is not the question. It's how do we scale. Do we continue to scale up, which is the current model, or do we start to think about scaling out, which is a more distributed model? So we go from a small number of big things to a large number of smaller things. And typically in computer science, whatever you want to pick, storage, compute, memory, telephony, everything we've ever done goes through this arc.
Trevor Freeman 07:59
Yeah, it's interesting, and my brain is obviously going to immediately try to find those similarities with the world that I live in, on the energy side of things. And it's the same question. There is no path where we're not expanding the amount of energy we need; we are going to be using more energy. But there are different ways to do that, and there are different paths we can take the business as