In the last couple of posts, I’ve looked at the design of the power supply for the IT side of things and have argued that we should supply the computers directly with DC. In an ideal world, I could stop there: the power in a data centre would be almost exclusively for the use of IT, and everything else would be structure, security, and bits and pieces. But we don’t live in an ideal world, and one way in which it fails to measure up to that standard is heat.
From an engineering point of view, computers are equivalent to electric toasters: they turn electricity into heat. Consequently, a large part of any data centre’s job is expelling that heat. Worse yet, if the air coming into the computers is not cool enough, the computers will be damaged. Most computers are designed to the ASHRAE A3 standard:
- Temperature: 18–27°C
- Temperature stability: 5°C/hr
- Upper moisture limit: 60% RH and 15°C dew point
- Lower moisture limit: 5.5°C dew point
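As a rough illustration, the envelope above can be encoded as a simple check. The thresholds are taken straight from the list; the dew point is estimated with the Magnus approximation, so treat this as a sketch rather than a substitute for the ASHRAE tables:

```python
import math

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Approximate dew point using the Magnus formula."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def within_envelope(temp_c: float, rh_pct: float) -> bool:
    """Check supply air against the limits listed above:
    18-27 C, RH <= 60%, dew point between 5.5 C and 15 C."""
    dp = dew_point_c(temp_c, rh_pct)
    return 18.0 <= temp_c <= 27.0 and rh_pct <= 60.0 and 5.5 <= dp <= 15.0

# Example: 24 C at 50% RH gives a dew point of roughly 12.9 C,
# which sits comfortably inside the envelope.
print(within_envelope(24.0, 50.0))  # -> True
```

A real control system would of course work from the full psychrometric data, but even this crude check is enough to see how narrow the window is.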
There are many parts of the world where this describes the climate, so there is something to be said for Switch’s approach to keeping within these limits. Switch put their data centre campuses in Reno and Las Vegas, Nevada, where the ambient conditions fall within these limits for nearly all of the year. Open the doors, blow the cool outside air in, have a few big fans to extract the heat, and your computers are operating within limits.
Unfortunately, at least 50% of the world’s population lives in South, South East and East Asia, and although it would be possible to put all the world’s data centres in places with cool climates and connect over fibre, the delays introduced by network latency would be problematic for real-time computing, and the costs of building those networks are high. In addition, many countries have a legal requirement that data be held within the country. So the reality is that most data is going to be in the same country as the consumer.
In nearly all of South and South-East Asia, the ambient conditions are way outside the ASHRAE limits. We not only have to move large quantities of hot air generated by the computers out of the data centre, we also have to cool and dehumidify the air coming in. That’s going to take a lot of energy.
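To get a feel for the scale, the sensible part of that load follows from Q = ṁ·cp·ΔT. The numbers below are illustrative assumptions, not measurements from any particular site, and they deliberately ignore the latent (dehumidification) load, which in tropical climates is often larger still:

```python
AIR_DENSITY = 1.2   # kg/m^3, approximate at sea level and warm temperatures
CP_AIR = 1005.0     # J/(kg*K), specific heat of dry air

def sensible_cooling_kw(flow_m3_s: float, t_in_c: float, t_supply_c: float) -> float:
    """Sensible power needed to cool an airflow from t_in to t_supply.
    Ignores the (often larger) latent load of dehumidification."""
    mass_flow = flow_m3_s * AIR_DENSITY                          # kg/s
    return mass_flow * CP_AIR * (t_in_c - t_supply_c) / 1000.0   # kW

# Illustrative: 10 m^3/s of 35 C outside air cooled to a 24 C supply.
print(round(sensible_cooling_kw(10.0, 35.0, 24.0), 1))  # -> 132.7 (kW)
```

Over 130 kW of cooling just for the sensible load on a modest airflow, before a single watt of IT heat has been rejected: that is why the cooling plant dominates everything outside the IT load.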
Irrespective of the technology (chilled water for big data centres and direct expansion (DX) for small ones are the most common), cooling requires some combination of compressors, pumps and fans. At the heart of all of these is the electric motor. As the physics of the matter are such that DC electric motors are much less efficient than AC electric motors, we need an AC supply. As gen sets and grid power are AC, this leads to a very simple arrangement:
There seems little scope for simplification. But here’s one radical thought.
Generator sets consist of an engine, and power stations of a turbine, connected to an alternator. The alternator converts rotary kinetic energy to electrical energy. The motors at the heart of each compressor, pump and fan convert electricity back into rotary kinetic motion. So one approach is to cut out the middleman and install a system of drive shafts and gear boxes that drives those compressors, pumps and fans directly.
I don’t know how to do the numbers for this, but I strongly suspect that whatever we gain by eliminating a self-cancelling two-fold conversion of energy, we lose in the inefficiencies of drive shafts and gear boxes. We also add multiple points of failure – and mechanical things fail much more often than solid-state devices – and a maintenance nightmare.
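I don’t know the real figures, but the shape of the comparison is just a product of stage efficiencies, and a few assumed numbers (all of them hypothetical, purely for illustration) show how sensitive the answer is to those assumptions:

```python
def chain_efficiency(*stages: float) -> float:
    """Overall efficiency of energy conversions applied in series."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Electrical path: alternator, then a motor at each compressor/pump/fan.
# Both figures are assumptions, not measured values.
electrical = chain_efficiency(0.95, 0.92)

# Mechanical path: a few shaft couplings plus a gearbox per driven load.
# Again, assumed figures for illustration only.
mechanical = chain_efficiency(0.98, 0.98, 0.98, 0.95)

print(round(electrical, 3), round(mechanical, 3))  # -> 0.874 0.894
```

With these made-up numbers the two paths come out within a couple of percent of each other, which is exactly why the question can’t be settled without real loss figures for each stage.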
But here’s another possibility. Large car factories have a huge compressed air plant, and that air is piped to the individual robots. The robots themselves are operated by switching the supply of air at the joints on and off. Similarly, a data centre could have a single source of compressed air that is piped to the compressors, fans and what-not.
I don’t have the knowledge to develop that thought, so I’ll stick with conventional cooling technologies (and would welcome thoughts from people who do have that knowledge).