
My Perfect Green Data Centre (5) – Cooling Common Sense

There are a few simple things we can do to reduce the huge amounts of energy that data centres consume in keeping cool.

Insulation: I do not understand why the construction industry in Asia is so completely clueless about cavity walls and other ways of keeping cool air in and hot air out, but it is. So, for a start, let's be clever in our use of construction materials. Structurally, a single-floor data centre is a shed. At the very least, make it a shed with an insulated ceiling and walls.

Geothermal Piling: This is in widespread residential and industrial use in northern Europe. The ground itself is a great heatsink, and using geothermal piles to dump excess heat into it removes a lot of that heat for free.

Air Containment: Computers suck cold air in through the front and blow hot air out of the back. Hardly a single data centre in Thailand, for example, makes any attempt to keep the cold and hot air separate. The result is that the cooling system works overtime, which leads to huge inefficiencies and much higher energy consumption than would otherwise be required.

Temperature: The specification (ASHRAE's recommended range) says 18-27C. That means it's safe to run the intake air at 27C. There is no need whatsoever to turn the air-con down to 21C, as happens in so many data centres. Some people defend this by saying that, should the cooling go off due to power or other failure, the equipment can run for longer before the room gets so hot that the IT has to be depowered. True, but the time gained is measured in seconds. In a recent experiment, I killed the air-con: the room heated by 5C in one minute (yes, sixty seconds). So you buy about 72 seconds of extra run time for that huge extra cooling cost.
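To put numbers on that trade-off, here is a minimal sketch using the figures above (a 27C intake limit, a 21C setpoint, and 5C per minute of heating with the cooling off):

```python
# Extra run time bought by over-cooling, using the figures quoted above.

def extra_runtime_seconds(setpoint_c, limit_c=27.0, heating_rate_c_per_min=5.0):
    """Seconds of extra run time gained by cooling below the intake limit."""
    headroom_c = limit_c - setpoint_c
    return headroom_c / heating_rate_c_per_min * 60.0

print(extra_runtime_seconds(21.0))  # 72.0 seconds for all that extra chilling
```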

Layout: Work with your tenants / clients to avoid hotspots. A single cold- or hot-air containment unit has to be cooled for its hottest area. If one rack's humming away at 10kW and the rest are ticking over at 2kW, that one hot rack will drive the air-flow for the whole unit. Fix it.

So much for colo. In any case, I suspect I'm preaching to the converted, as it's the clients who think they know everything who are the biggest problem. Here's one for cloud, in section…

Pod 1

and in plan.

Pod 2

So, what’s going on here?

The idea is a kind of super-containment vessel which uses natural convection to assist the cold- and hot-air separation.

The first thing to note is height. The only reason our racks are 42U is that most humans can't reach any higher. However, in a cloud environment, where the estate is almost completely homogeneous, and where computers that break don't need to be fixed – ever – there is no reason for humans to come in (at least, not to fix the computers, although specially trained technicians may need to service other stuff).

Without humans, we can stack computers much higher, and stacking higher allows us to take advantage of natural convection.

Cold air is injected down the middle of the tower. It will still need to be blown, but cold air falls all by itself, so it needs to be blown less hard than in conventional systems, which push cold air up from the floor void.

The computers are arranged in a rotunda, the fronts facing the central core and the backs facing the outer side. They take cold air in through the front and blow hot air out through the back.

The hot air rises up the outside of the rotunda. This is an enclosed space, so the chimney effect will accelerate the hot air upwards, sucking it out of the backs of the computers on the higher levels. As with the cold-air injection, the hot-air extraction will still require some fans, but those fans will work a lot less hard than in conventional systems.
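For a feel for how much free draft the chimney effect provides, here's a rough sketch using the standard stack-effect relation; the tower height and air temperatures are illustrative assumptions on my part, not figures from this post:

```python
# Rough stack-effect (chimney draft) estimate for the hot-air annulus.
# Height and temperatures below are illustrative assumptions.

G = 9.81        # gravity, m/s^2
RHO_T = 353.0   # P*M/R for air at ~1 atm, kg*K/m^3 (air density = RHO_T / T)

def stack_draft_pa(height_m, t_cold_c, t_hot_c):
    """Pressure difference (Pa) driving hot air up an enclosed vertical space."""
    t_cold_k, t_hot_k = t_cold_c + 273.15, t_hot_c + 273.15
    return G * height_m * RHO_T * (1.0 / t_cold_k - 1.0 / t_hot_k)

# e.g. a 10 m tall rotunda, 27C ambient air, 40C exhaust air
print(round(stack_draft_pa(10.0, 27.0, 40.0), 2), "Pa of free draft")  # ~4.79 Pa
```

A few pascals is modest next to what the fans produce, but it is pressure the fans no longer have to supply, which is where the saving comes from.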

In addition, with the hot air on the outside of the rotunda, it may be possible to dissipate some heat using heat fins or – next post – evaporative cooling. In practice, these would be on the side of the vessel facing away from the sun, and the side facing the sun would carry solar panels.

How much will this save? I don't know; I don't have the technical knowledge to run this through a Computational Fluid Dynamics (CFD) model. But the HVAC people I've shown this to concur that it's likely to yield at least some saving, probably a few percent. And even a 5% saving on the cooling overhead for 500 racks * 10 servers per rack at 500W each, at a PUE of 2, is 125,000W.
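A quick sanity check on that last figure (the 500W-per-server number is the same one used in the DC post further down this page):

```python
# Sanity check on the 125,000 W figure above.

racks, servers_per_rack, watts_per_server = 500, 10, 500
pue = 2.0
saving_fraction = 0.05

it_load_w = racks * servers_per_rack * watts_per_server   # 2,500,000 W of IT load
overhead_w = it_load_w * (pue - 1.0)                       # cooling & friends: another 2,500,000 W
print(overhead_w * saving_fraction)                        # 125000.0 W saved at 5%
```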

And, best of all, it will look much more interesting than the average data centre, which is to all external appearances a shed.


My Perfect Green Data Centre (4) – DC, Postscript

In the last post, I looked at the DC side of things and concluded that there's a lot to be said for eliminating all those PSUs and inverters (and wire), and regarding the battery pack as the immediate upstream supply of the IT load. The problem I was left with was that if we regard the batteries as the primary power source, we'd need a truly gargantuan battery pack.

The simple answer is that the battery pack is still a secondary power source, to provide power in the small windows when the primary power source is being switched over – i.e., while the generator sets start up.

A more interesting question is whether we can eliminate the batteries. After all, most data centres rely on lead-acid batteries and, although lead and sulphuric acid are cheap and plentiful, even a few hundred racks require several tons of the stuff – and the batteries need to be replaced every few years. Disposing of used lead and acid is a nasty business. So batteries have a big carbon footprint of their own.
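To get a feel for the tonnage, here's a rough sketch. The rack count, per-server power, 15-minute autonomy and ~35 Wh/kg lead-acid figure are all illustrative assumptions on my part, not numbers from this post:

```python
# Rough sense of scale for the battery mass. All figures are illustrative assumptions.

racks, servers_per_rack, watts_per_server = 300, 10, 500
autonomy_h = 0.25               # the usual ~15 minutes of battery autonomy
lead_acid_wh_per_kg = 35.0      # typical specific energy for VRLA lead-acid cells

energy_wh = racks * servers_per_rack * watts_per_server * autonomy_h
mass_tonnes = energy_wh / lead_acid_wh_per_kg / 1000.0
print(round(mass_tonnes, 1), "tonnes of lead-acid, before allowing for depth of discharge")
# ~10.7 tonnes
```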

As I stated in the second post, there are many ways of storing energy. Batteries are one; flywheels are another. Flywheels will last the full life-span of the data centre, but they produce AC (as do gen sets). How could this work?

Here’s the last post’s schematic:

Power chain batteries

If the power source (at the top) is photovoltaic, which produces DC directly, then this works. However, if the power source is AC, this is missing one very important component: a rectifier to turn that AC to DC before it gets to the batteries:

Power chain batteries and rectifier

So, what we’re in effect doing is combining 10,000 PSUs into a single, industrial-scale rectifier. Now let’s add the flywheels:

Power chain flywheel

Which gives us the best of both worlds.

As a footnote, many people disparage flywheels because they only provide power for a few seconds while the generators power up. This is not long enough to perform an orderly shutdown of the computers.

I’ll come back to this when I tackle cooling, but the short answer is that the usual 15 minutes of battery life is 14 minutes longer than operational conditions will be maintained anyway.

In short, we’re looking at two divergent topologies that depend on whether the main power source is AC or DC. Before we decide which main power source is optimal, we’ll have to look at the AC load. Which is for a future post.


My Perfect Green Data Centre (4) – DC

Here’s the back of a typical server (taken from the HP DL380 spec sheet):

HP DL360 Server

Those things with the fans in the bottom right are the Power Supply Units (PSUs) – i.e. where you plug the power in. Servers have two of them and, in a conventional data centre, each PSU is connected to a separate source. The exact configuration depends on what tier the data centre is; here’s a typical arrangement for Tier 3/4:

Power chain

So, if we start at the bottom, each PSU is connected to a local power distribution unit (PDU). Each of these will in turn usually go back to an intermediate distribution board (DB), which in turn goes to the main distribution board (MDB). Each MDB will go to its own power source or sources – usually a combination of gen sets and utility / grid power. The way in which the UPSs are fitted between the DBs and MDBs depends on various factors which needn’t detain us here.
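Just to make the A/B redundancy in that description explicit, here's a minimal sketch of the two paths; the names are illustrative, not a formal Uptime Institute topology:

```python
# A minimal sketch of the dual-path power chain described above.
# Names are illustrative, not a formal Uptime Institute topology.

power_chain = {
    "PSU-A": ["PDU-A", "DB-A", "UPS-A", "MDB-A", "utility feed A / gen set A"],
    "PSU-B": ["PDU-B", "DB-B", "UPS-B", "MDB-B", "utility feed B / gen set B"],
}

# Each PSU has its own complete path back to an independent source, which is
# what lets one path be taken down without dropping the server.
for psu, path in power_chain.items():
    print(psu, "->", " -> ".join(path))
```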

This is daft.

  • The computer itself does not need 230V AC / 110V AC. It needs DC.
  • The battery pack for a typical UPS supplies DC.
  • So, in a conventional design, when the computers run off UPS power, we take DC from the batteries, turn it into AC, send it to the computer, and turn it back to DC.
  • Those two conversions cancel each other out.

Talk to any telecoms person, and they’ll confirm the daftness. Telecoms has used 48V DC pretty much since Alexander Graham Bell invented the modern telephone, and conventional land lines to this day leave the exchange at 48V DC. So why would one use AC to begin with?

One possible answer is power loss. When you transmit electricity down a wire, some of it is lost as heat, and the amount of power lost is proportional to the square of the current. As the total power is the product of voltage and current, high voltage and low current result in less power loss than low voltage and high current. The downside is that higher voltages are more lethal than lower ones, but we can set that to one side for this post.

Is power loss something that should concern us in a data centre? The maths is straightforward, so let’s take some typical numbers.

  1. An average server consumes 500W of power.
  2. There is 100 meters of wire between the server and its power source.
  3. The wire has a resistance of 6E-8 Ohms per meter (I got the numbers from here, and took a mid-point).

So, in a conventional set-up, the power is delivered as 250V AC, the current will be 2 Amps (250V * 2A = 500W), and the power loss will be

(2A)^2 * 100m * 6E-8Ω/m = 0.000024W

If the power is delivered as 5V DC, the current will be 100 Amps, and the power loss will be

(100A)^2 * 100m * 6E-8Ω/m = 0.06W.

That is, proportionally, a huge difference. But the AC calculation does not include the loss in converting from AC to DC and back again. Even if that’s only 0.1% each way, that’s an additional 0.5W to convert DC to AC at the inverters and the same again to convert AC back to DC in the PSUs, for a total of 1.000024W, which is rather a lot more than 0.06W. So DC-DC is the clear winner.
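Here is that per-server arithmetic spelled out, using exactly the numbers above (including the 6E-8 Ohm/m wire figure and the deliberately optimistic 0.1% conversion loss):

```python
# Per-server AC vs DC loss, using the numbers from this post.

power_w = 500.0
length_m = 100.0
r_per_m = 6e-8        # ohms per metre of wire
conv_loss = 0.001     # 0.1% lost per AC<->DC conversion

def wire_loss_w(voltage):
    current = power_w / voltage
    return current ** 2 * length_m * r_per_m

ac_loss = wire_loss_w(250.0) + 2 * conv_loss * power_w   # wire + inverter + PSU
dc_loss = wire_loss_w(5.0)                               # wire only

print(f"AC path: {ac_loss:.6f} W   DC path: {dc_loss:.6f} W")
# AC path: 1.000024 W   DC path: 0.060000 W
```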

This raises two questions. The first is whether it’s worth worrying about; the second is what to do about it if we should worry about it.

If your data centre has 500 racks and 10 servers per rack, that’s 5,000 servers. The total power loss for AC is 5,000 * 1.000024 = 5,000.12W. If we deliver DC, the total power loss is 5,000 * 0.06 = 300W. However, in both cases, the total power required by the IT is 500 * 5,000 = 2,500,000W. Whether the power loss is 5,000.12W or 300W, it’s negligible compared to the total load. So, on a pure numbers basis, it’s immaterial.
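The same sum expressed as a fraction of the total IT load, which is why it's immaterial on a pure numbers basis:

```python
# Scaling the per-server figures up to the whole data centre.

servers = 500 * 10                    # 500 racks x 10 servers
it_load_w = servers * 500             # 2,500,000 W of IT load

ac_total_loss_w = servers * 1.000024  # ~5,000 W
dc_total_loss_w = servers * 0.06      # 300 W

print(ac_total_loss_w / it_load_w)    # ~0.002, i.e. 0.2% of the IT load
print(dc_total_loss_w / it_load_w)    # 0.00012, i.e. 0.012% of the IT load
```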

But, as I said in my first post in this series, I’m taking a holistic view of “green.” I’m not just worried about how economical a data centre can be in its usage of power; I’m also concerned about minimizing the overall carbon footprint. If we’re to do that, we need to look further than calculations such as the above. In terms of power loss, the difference between AC and DC is negligible. But getting rid of 10,000 PSUs and some pretty beefy inverters does make a difference. Manufacturing that stuff – and disposing of it safely when it’s finished with – has an impact. And that impact is entirely avoidable: all those inverters and PSUs cancel each other out; they are functionally equivalent to nothing. So, if we’re to take “green” at all seriously, ditch the lot.

The next objection will be that batteries supply 12V while most computers need 3.3V, 5V or 12V. As it happens, the industry is moving towards a single 12V supply, with conversion to the lower voltages done on the motherboard. But batteries can be designed for pretty much any voltage and, just as data centres today offer the choice between single- and three-phase supplies, there’s no reason they can’t offer a choice of DC voltages.

Another advantage of DC is less wire. Most computer equipment today has three wires: live, neutral and earth. At 110V/230V, the separate protective earth is essential because those voltages are lethal. 12V is not lethal, and in a low-voltage DC circuit one of the two conductors can simply be bonded to earth, so the third wire goes away. The average data centre contains literally tons of copper wiring, and reducing three wires to two is a hell of a lot less copper to quarry, refine, turn into wire and ship, and a hell of a lot less PVC to insulate it with.

And, yes, supplying DC direct to the computers would save a lot of money: no inverters, far fewer PSUs, and a third less wire.

So: what to do about it? The obvious and simplest answer is that we make the primary source of power for the computer equipment the battery pack:

Power chain batteries

Now, this may seem like a simple change, and I have been quite deliberate in making the minimum possible change to the drawings to maintain that illusion. However, putting on my Accredited Tier Designer hat, there remains a serious issue if one is operating within the Uptime Institute framework. According to UI, the primary source of power for a data centre is the on-site power generation. That on-site power must be able to run the data centre indefinitely, and should have a minimum capacity of 96 hours. If the primary power source is the batteries, the battery pack for 96 hours would be gargantuan.
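Just how gargantuan? A rough sketch: the 2.5MW IT load is the figure from earlier in this post, while the ~35 Wh/kg lead-acid specific energy is an assumption of mine:

```python
# Why a 96-hour battery pack is gargantuan.

it_load_w = 2_500_000          # IT load from earlier in this post
hours = 96
lead_acid_wh_per_kg = 35.0     # assumed specific energy of lead-acid cells

energy_mwh = it_load_w * hours / 1e6
mass_tonnes = it_load_w * hours / lead_acid_wh_per_kg / 1000.0

print(energy_mwh, "MWh ->", round(mass_tonnes), "tonnes of lead-acid")
# 240.0 MWh -> 6857 tonnes
```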

I’ll fix that in the next post.
