Posts By : Chris Maden

My Perfect Green Data Centre (7) – Power

Up to this point, I’ve looked at how data centres use power, and made a few suggestions on how reducing that power usage would reduce a data centre’s green footprint. As it happens, nearly all of these changes reduce both capital and operational costs, so there’s a self-interest in being green.

It’s now time to look at the side where compromises are unavoidable: the sources of a data centre’s power. After all, if we go to all this trouble only for some smoke-belching power station to cover the surroundings in soot as it broils our planet, it’s in vain.

There is also the argument that data centres are not in the business of producing power. This is nonsense – your data centre wouldn’t have gen sets and batteries if data centres were not in that business, and the Uptime Institute (UI) standard states that the on-site power is, from a design perspective, the primary power. Utility power is an option to save money; there are two UI-certified Tier-3 data centres that are fully autonomous in power, with no connection to the grid.

And in South East Asia, there are cities of hundreds of thousands of people that consume less power than a single, medium-sized data centre. We bear as much responsibility for the power we use as they do, and we use a lot of it.

So let’s run the rule over the options:

  • Coal: Oh dear.
  • “Clean coal”: This is a myth. Coal consists of carbon, and burning anything amounts to mixing it with oxygen. Carbon + oxygen = carbon dioxide (or, to those who did chemistry at school, C+O2 = CO2). All coal is dirty.
  • Oil: Need I say anything?
  • LPG: Less bad, but less CO2 isn’t the same as no CO2.
  • Hydro: Although hydro doesn’t produce greenhouse gases as such, flooding large valleys full of rain forest has an impact, and those valleys are often cleared of their inhabitants with little regard for their well-being. Plus, the carbon used in constructing these dams is not insignificant. So hydro is probably the least bad conventional generation technology, and the least green of the renewables.
  • Wind: The closer one is to the poles (as in North and South), the better wind works. I’m concerned with data centres in the Tropics – with a capital T to indicate the area of the Earth between the Tropics of Cancer and Capricorn – and, in this part of the world, except when there’s a typhoon or hurricane, winds tend to be light. As a result, wind power isn’t effective.
  • Geo-thermal: This is great where the Earth’s mantle is thin, but building a data centre in such areas has independent disadvantages (such as being submerged in lava).
  • Bio-mass: This amounts to burning stuff, and there are two main variants. The first is the use of crops in general, and corn/maize in particular, which produces lots of CO2 and starves people; the second is to burn rubbish. Overall, I regard this technology as being “green” rather than green, but I’d like to hear if I’m wrong.

That leaves us with solar, and the next two posts will focus on the engineering possibilities with solar. In this one, I’ll say a few words about off-setting.

The idea of off-setting is that data centres tend to be near to urban areas where land is expensive, but solar farms need a lot of land, so are only economic if they’re built far from urban areas where land is cheap. What a green data centre owner does, therefore, is to build a solar farm which generates enough power for his data centre – say 5MW - in some rural area and feed that into the grid. The data centre draws a like amount from the grid. Although the power the data centre consumes is a mixture of dirty and clean, at least the net effect is zero.

This is a neat solution, but it faces a major hurdle in that many grids are unwilling or unable to purchase power on economically viable terms. The key issue is what’s known as the Feed-in Tariff, and the problem is that governments aren’t always very good at pricing the FITs.

A related problem is that most data centre operators want to invest in data centres, not solar farms, so need to find a partner to build the solar farm for them.

My issue with off-setting is that it doesn’t go to the heart of the problem; it’s more of an out-of-sight, out-of-mind approach. The data centre is still consuming dirty power, and building a nice solar farm a few hundred miles away has the feel of an accounting trick. So, in the next couple of posts, I’ll look at two possibilities for on-site generation.


My Perfect Green Data Centre (6) – Pause for Breath

What was supposed to be one or two posts on DC became four, and what was supposed to be a single post on AC became three. It’s time to pause and consider.

The main idea so far is on the DC side of things. Supply computers with the DC they need rather than the AC which is convenient for us to provide, and three big impacts arise:

  1. The need for inverters (DC-AC converters) and PSUs (AC-DC converters) is eliminated. Manufacturing, shipping and destroying these components has a big carbon impact.
  2. A major source of over-capacity is removed, thus allowing for leaner designs. A leaner design will be less wasteful, and less waste is always a good thing and nearly always green.
  3. Those data centres that use batteries need far fewer batteries, again with a big carbon impact.

On the AC side of things, the main load is cooling. All resources expended on cooling basically go up in smoke, so the more efficient cooling is, the better. Cooling is well understood, but there are many little things that data centres could still do. Add all of these little things up and, although it may only be one percent here and a couple there, the cumulative effect would be to consume far less electricity.

The flip side is that, if we are to consolidate huge DC power supplies as the primary power source for IT, but stick with AC supplies for cooling, we may end up with a more complex design.


Power chain summary


(I’ve included A and B cooling systems to show how a Tier-4, or some Tier-3, configurations may, though need not necessarily, look. For Tier-2, remove either cooling system and locate the redundancy within the system.)

Or, if we simply get rid of batteries and use flywheel or DRUPS:

Power chain summary flywheel

This may not look very different from a conventional topology, but with 70% of the power now on the DC side, it is very different. It also brings to mind a separate, though related, point: how clean is the power that comes in at the top? The next few posts will therefore look at clean power in general. After those, I’ll try and put all the pieces together.


My Perfect Green Data Centre (4) – DC, Joining Rails

A further brief thought on the virtues of DC.

Remember this diagram?


Power chain batteries and rectifier

One legitimate question is why there are two PSUs in the typical server. Part of the answer is that PSUs break down: the internal fans break, the electronics get damaged by power surges, power cables break, and the like. Eliminating PSUs also eliminates a source of unreliability. Without the PSUs, the only things going into the back are the positive and return wires of a +12V DC supply. The other part of the answer is that, at least in a Tier-3 or -4 data centre, it is sometimes necessary to shut down either the A or the B power supply, either for maintenance or for re-configuration.

For the latter reason, there is a certain degree of sense in delivering two separate supplies to the server. But with DC, we have a lot more freedom about what happens between the power source and the server. AC delivers power in a sine wave, so if you join two AC supplies, their respective sine waves must be synchronized: if they’re not, you’ll get a James Bond result of sparks and pyrotechnics. But if you join two DC supplies of the same voltage, nothing goes wrong. And this enables a further saving.

If we do this:

Power chain batteries and rectifier AC join

that red line will quite probably be red in reality. But if we do this:

Power chain batteries and rectifier DC join


The green line will be fine.

What does this buy us?

It buys us a much smaller battery pack. Battery capacity is measured in kilowatt-hours: a battery that can supply a kilowatt for one hour has a capacity of 1kWh, but so does a battery that can supply 10 kilowatts for six minutes.

A typical data centre will allow about 10 minutes of battery life. If this is based on a 1MW load, that’s 166kWh which, at a little over 0.2kWh of usable capacity per battery, is 800 batteries. But in the first drawing above, we need two complete battery packs, one on A and one on B, so a total of 1,600 batteries.

Using the topology with the green line, connecting the DC before the batteries (and possibly after them, too), means that we can combine these two battery packs.

Now, it won’t be quite that simple. We still need some redundancy in order that we can take complete sets of batteries off-line to replace or service them (lead-acid batteries are supposed to be topped up with water, and individual batteries should be inspected or monitored). But if we break our 800 battery requirement into 8 packs of 100 batteries each, and add a pack for redundancy, we’ve saved 700 batteries.

Power chain batteries and rectifier DC join multiple battery packs

As batteries last maybe 3 years, and a data centre lasts 20 years, that’s over 4,000 batteries saved over the life of the data centre, along with the space, cabling and wire that go with them. That’s a lot of green. And all for giving the computers the power they need rather than the power we have.
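The battery arithmetic can be checked in a few lines of Python. This is a sketch only: the 1MW load, 10-minute autonomy and 800-battery pack are the illustrative figures from this post, not vendor data.

```python
# Back-of-envelope battery arithmetic, using the post's illustrative figures:
# a 1 MW load, 10 minutes of autonomy, and 800 batteries per full pack.

load_kw = 1000
autonomy_hr = 10 / 60
energy_kwh = load_kw * autonomy_hr       # ~166.7 kWh of stored energy needed

batteries_per_pack = 800                 # one complete pack, as in the post

# Conventional A/B topology: a full pack on each side.
separate = 2 * batteries_per_pack        # 1,600 batteries

# Joined-DC topology: one shared 800-battery requirement, split into
# 8 packs of 100 with one extra pack for redundancy (N+1).
joined = (8 + 1) * 100                   # 900 batteries

saved = separate - joined                # 700 batteries saved up front

# Batteries last ~3 years; a data centre lasts ~20, so the saving recurs
# over roughly six battery generations.
generations = 20 // 3
lifetime_saving = saved * generations    # 4,200: "over 4,000" over the life

print(energy_kwh, separate, joined, saved, lifetime_saving)
```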




My Perfect Green Data Centre (5) – AC

In the last couple of posts, I’ve looked at the design of the power supply for the IT side of things and have argued that we should supply the computers directly with DC. In an ideal world, I could stop there: the power in a data centre would be for the almost exclusive use of IT, and everything else would be structure, security, and bits and pieces. But, we don’t live in an ideal world, and one way in which it fails to measure up to that standard is heat.

From an engineering point of view, computers are equivalent to electric toasters: they turn electricity into heat. Consequently, a large part of any data centre’s job is expelling that heat. Worse yet, if the air coming into the computers is not cool enough, they will be damaged. Most computers are designed to the ASHRAE A3 standard:

  • Temperature: 18-27C
  • Temperature stability: 5C/hr
  • Upper moisture limit: 60% RH and 15C dew point
  • Lower moisture limit: 5.5C dew point
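As a rough illustration, here is a minimal Python sketch that checks intake air against the limits quoted above. The Magnus dew-point approximation and its constants (17.27, 237.7) are standard textbook values, not something taken from the ASHRAE document itself.

```python
import math

def dew_point_c(temp_c, rh_pct):
    """Magnus approximation for dew point; good to ~0.4C in this range."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rh_pct / 100.0)
    return b * gamma / (a - gamma)

def within_limits(temp_c, rh_pct):
    """Check intake air against the limits quoted above:
    18-27C, <=60% RH, dew point between 5.5C and 15C."""
    dp = dew_point_c(temp_c, rh_pct)
    return 18 <= temp_c <= 27 and rh_pct <= 60 and 5.5 <= dp <= 15

print(within_limits(24, 50))   # a mild, dryish day: True
print(within_limits(32, 85))   # a typical tropical afternoon: False
```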

There are many parts of the world where this describes the climate, so there is something to be said for Switch’s approach to keeping within these limits. Switch put their data centre campuses in Reno and Las Vegas, Nevada, where the ambient conditions fall within these limits for nearly all of the year. Open the doors, blow the cool outside air in, have a few big fans to extract the heat, and your computers are operating within limits.

Unfortunately, at least 50% of the world’s population lives in South, South East and East Asia and, although it would be possible to put all the world’s data centres in places with cool climates and connect over fibre, the delays introduced by network latency would be problematic for real-time computing, and the costs of building those networks are high. In addition, many countries have a legal requirement that data be held in-country. So the reality is that most data is going to be in the same country as the consumer.

In nearly all of South and South-East Asia, the ambient conditions are way outside the ASHRAE limits. We not only have to move the large quantities of hot air generated by computers out of the data centre, we also have to cool and de-humidify the air coming in. That’s going to take a lot of energy.

Irrespective of the technology – chilled water for big data centres and DX for small ones are the most common – cooling requires some combination of compressors, pumps and fans. At the heart of all these is the electric motor. As the physics of the matter are such that DC electric motors are much less efficient than AC electric motors, we need an AC supply. As gen sets and grid power are AC, this leads to a very simple arrangement:


There seems little scope for simplification. But here’s one radical thought.

Generator sets consist of an engine, and electrical plants of a turbine, connected to an alternator. The alternator converts rotary kinetic energy to electrical energy. The motors at the heart of each compressor, pump and fan convert electricity to rotary kinetic motion. So one approach is to cut out the middle man and install a system of drive shafts and gear boxes that drives those compressors, pumps and fans directly.

I don’t know how to do the numbers for this, but I strongly suspect that whatever we gain by a self-cancelling two-fold conversion of energy, we lose in the inefficiencies of drive shafts and gear boxes. We also add multiple points of failure – and mechanical things fail much more often than solid-state devices – and a maintenance nightmare.

But here’s another possibility. Large car factories have a huge compressed air plant, and that air is piped to the individual robots. The robots themselves are operated by switching on and off the supply of air at the joints. Similarly, a data center could have a single source of compressed air that is piped to the compressors, fans and what-not.

I don’t have the knowledge to develop that thought, so I’ll stick with conventional cooling technologies (and welcome thoughts from people who do).


My Perfect Green Data Centre (5) – Evaporative Cooling

I started thinking about this subject five years ago and, back then, I wondered if evaporative cooling would be useful in the humid tropics.

The form of evaporative cooling with which we humans are most familiar is sweat. Sweat cools us down not because we are covered in water, but because the water cools us as it evaporates.

In Europe and North America, the air is generally dry and evaporative cooling has never really taken off. As it would seem that most HVAC engineers are educated in this tradition, whenever I mentioned evaporative cooling, I was steered quickly away. Even Dr. Hot, a consultant in thermodynamics, rubbished the idea. As he was the expert, I parked evaporative cooling in my bag of good-ideas-that-turned-out-to-be-rubbish.

Dr. Hot was dead wrong: so much for experts. A month ago I went to a data center fair in Hong Kong and came across Munters, a company that makes industrial-scale evaporative cooling for data centers. While I accept a certain amount of hype, their cooling system is the choice at Supernap’s new Tier-4 data centre in Thailand, and Munters claim that it saves so much power that, even in the tropics, a PUE of 1.2 is achievable – a massive saving on energy.

On the back of an envelope

IT Load (W)    PUE    Cooling Power (W)
2,500,000      2.0    2,500,000
2,500,000      1.7    1,750,000
2,500,000      1.3      750,000

So, the difference between a PUE of 1.7 and 1.3 is a 1MW generator set. A huge saving in capital cost (being green pays!), but also a significant reduction in the pollution that gen sets spew into the atmosphere, and the environmental impact of building, shipping and ultimately destroying the things.
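The table falls straight out of the definition of PUE: total facility power is IT power times PUE, so the non-IT overhead is IT power times (PUE - 1). A minimal sketch, using the 2.5MW IT load from the table:

```python
# Non-IT (mostly cooling) overhead implied by a PUE figure:
# total facility power = IT power * PUE, so overhead = IT power * (PUE - 1).

it_load_w = 2_500_000

def overhead_w(pue, it_w=it_load_w):
    return it_w * (pue - 1)

for pue in (2.0, 1.7, 1.3):
    print(pue, overhead_w(pue))

# The saving quoted above: going from PUE 1.7 to 1.3 sheds ~1 MW of
# overhead, i.e. one whole generator set.
print(overhead_w(1.7) - overhead_w(1.3))
```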

That’s already good in a colo environment. In the self-contained pod that I sketched in my previous post, the idea would be to make the outer casing the evaporative cooler. Munters’ design re-circulates the air inside, so the heat exchanger would go on the wall, and the fans and pumps on the roof. Probably a rather expensive experiment, but I can’t help wondering if that may get the PUE down as far as it can go.


My Perfect Green Data Centre (4) – DC, Postscript 2

Here’s another advantage to delivering DC to the back of the computer: overcapacity.

A couple of years ago, I moved a client’s IT estate from its in-house server rooms to a colo operator. The in-house server rooms were not separately metered, and the client had large call centres, so overall electricity consumption was high. There was no way of establishing the IT power consumption, but the colo operator needed to know how much power to reserve for the base load. In the end, I got hold of some spec sheets for “typical servers” in their estate and made an educated guess of 2kW / rack. I always specify that power consumption be monitored at the rack level, so when we’d moved them in, I added up the numbers. The core switches, which came with 3kW PSUs, were pulling 1.2kW for the entire rack; the single biggest consumer was just over 4kW. The average was 1.3kW – I’d oversized by about a third.

When an engineer specifies a PSU for a computer, he does so based on maxima: the maximum number of processors, disks, RAM, etc. He then allows a safety margin. PSUs are manufactured in standard sizes, so he chooses the next standard size up. (HP’s tool is here.) He also assumes that the computer will run flat-out. Put all this together, and a computer that ticks over on 200W for most of its life will have a 500W PSU. Yet data centers are obliged to design a supply that can deliver this peak theoretical load. What we end up designing for is the sum of the maxima, rounded up.

If we deliver DC direct to the computers, we can eliminate the over-capacity due to (a) the rounding up and (b) using a sum when an average would suffice.

These can add up to big numbers. A server that requires 330W ends up with a 500W supply, and the data center provider ends up sizing for the 500W, not the 330W. Add that up across 5,000 servers, and we’re overdesigning by 850kW just because we’re rounding up.

The difference between the average peak demand and total peak demand is more difficult to put a number to, but the idea is that not every computer is going to run flat-out at the same time. At any given time, some computers will be running flat-out consuming all 330W, many will be rumbling along at, say, 200W, and a few will be fast asleep at basically 0W. If I assume a (slightly skewed) normal distribution, we end up with the difference between 5,000*330W and 5,000*200W = 650kW.

Add these two numbers together and, against the 5,000*500W that we started with, we have the difference between 2.5MW and 1MW. Yes, that’s a 60% reduction in the power we design for. And it’s not only the electricity supply: the cooling, too, ends up over-sized. This all has a carbon footprint: we’re buying batteries, inverters, flywheels, gen sets, cooling, the whole lot, that will never be used.
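The two over-capacity components can be sketched in a few lines of Python. All the figures are this post’s illustrative numbers (5,000 servers, 500W PSUs fitted, 330W true peak, 200W average), not measurements:

```python
# The two over-capacity components described above.

servers = 5000
psu_w, peak_w, avg_w = 500, 330, 200

design_w = servers * psu_w                   # what we size for today: 2.5 MW

rounding_up_w = servers * (psu_w - peak_w)   # (a) PSU rounding-up: 850 kW
sum_vs_avg_w = servers * (peak_w - avg_w)    # (b) sum of maxima vs average: 650 kW

leaner_w = design_w - rounding_up_w - sum_vs_avg_w
reduction_pct = 100 * (1 - leaner_w / design_w)

print(leaner_w, reduction_pct)               # 1 MW instead of 2.5 MW: 60% less
```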

Of course, in the real world we’d have to allow for various other factors, and we’d need actual data. But the point remains that by centralizing our PSUs into a couple of industrial-scale PSUs and distributing DC, we can come up with a much leaner design.


My Perfect Green Data Centre (5) – Cooling Common Sense

There are a few simple things we can do to reduce the huge amounts of energy that data centers consume in keeping cool.

Insulation. I do not understand why the construction industry in Asia is so completely clueless about cavity walls and other ways of keeping cool air in and hot air out, but it is. So, for a start, let’s be clever in our use of construction materials. Structurally, a single-floor data centre is a shed. At least make a shed with insulated ceiling and walls.

Geothermal Piling. This is in widespread residential and industrial use in northern Europe. The ground itself is a great heatsink, and using geothermal piles to draw excess heat into the ground removes a lot of it for free.

Air Containment. Computers suck cold air in through the front and blow hot air out of the back. Barely a single data centre in Thailand, for example, makes any attempt to keep the cold and hot air separate. The result is that the cooling system works overtime. This leads to huge inefficiencies, and much higher energy consumption than would otherwise be required.

Temperature: The specification says 18-27C. That means it’s safe to run the intake air at 27C. There is no need whatsoever to turn the air-con down to 21C, as happens in so many data centres. Some people may defend this by saying that, should the cooling go off due to power or other failure, it’s possible to run the equipment for longer before the room becomes so hot that it’s necessary to depower the IT equipment. True, but the amount of time it gains is seconds. In a recent experiment, UI killed the air-con: the room heated by 5C in one minute (yes, sixty seconds). So you buy about 72 seconds extra run time for that huge extra cooling cost.
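As a sanity check on that arithmetic, here is a small sketch assuming the roughly 5C-per-minute heat-up rate from the UI experiment and the 27C upper intake limit:

```python
# Extra ride-through bought by over-cooling, assuming the ~5C/min heat-up
# rate from the UI experiment and the 27C upper intake limit.

heat_rate_c_per_min = 5
limit_c = 27

def ride_through_s(intake_setpoint_c):
    """Seconds until the room reaches the limit after cooling fails."""
    return (limit_c - intake_setpoint_c) / heat_rate_c_per_min * 60

extra_s = ride_through_s(21) - ride_through_s(27)
print(extra_s)   # 72.0 seconds, for all that extra cooling energy
```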

Layout: Work with your tenants / clients to avoid hotspots. A single cold- or hot-air containment unit needs to be cooled for the hottest area. If one rack’s humming away at 10kW and the rest are ticking over at 2kW, the former will drive the air-flow. Fix it.

So much for colo. In any case, I suspect I’m preaching to the converted as it’s the clients who think they know everything who are the biggest problem. Here’s one for cloud in section…

Pod 1

and in plan.

Pod 2

So, what’s going on here?

The idea is a kind of super-containment vessel which uses natural convection to assist the cold- and hot-air separation.

The first thing to note is height. The only reason our racks are 42U is because most humans can’t reach any higher. However, in a cloud environment, where the estate is almost completely homogeneous, and where computers that break don’t need to be fixed – ever – there is no reason for humans to come in (at least, not to fix the computers, although specially trained technicians may need to service other stuff).

Without humans, we can stack computers much higher, and stacking higher allows us to take advantage of natural convection.

Cold air is injected down the middle of the tower. It will need to be blown, but at least cold air falls all by itself, so it will need to be blown less hard than conventional systems which blow cold air up from the floor void.

The computers are arranged in a rotunda, the fronts facing the central core and the backs facing the outer side. They take cold air in through the front and blow hot air out through the back.

The hot air rises up the outside of the rotunda. This is an enclosed space, so the chimney effect will accelerate the hot air upwards, sucking hot air out of the backs of computers on the higher levels. As with the cold air injection, the hot air extraction will still require some fans, but those fans will work a lot less hard than in conventional systems.

In addition, with the hot air on the outside of the rotunda, it may be possible to dissipate some heat using heat fins or – next post – evaporative cooling. In practice, these would be on the side of the vessel facing away from the sun, and the side facing the sun would carry solar panels.

How much will this save? I don’t know; I don’t have the technical knowledge to run this through a Computational Fluid Dynamics (CFD) model. But the HVAC people I’ve shown this to concur that it’s likely to yield at least some saving, probably a few percent. And even 5% of the cooling load for 500 racks of 10 servers each, at a PUE of 2, is 125,000W.
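For what it’s worth, here is that back-of-envelope sum, assuming 500W per server as elsewhere in this series:

```python
# Even a few percent matters at scale. Assumptions: 500 racks, 10 servers
# per rack, 500 W per server, PUE of 2.

it_w = 500 * 10 * 500          # 2.5 MW of IT load
cooling_w = it_w * (2 - 1)     # at a PUE of 2, the overhead equals the IT load
saving_w = 0.05 * cooling_w    # a 5% saving on that overhead

print(saving_w)                # 125,000 W
```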

And, best of all, it will look much more interesting than the average data centre, which is to all external appearances a shed.


My Perfect Green Data Centre (4) – DC, Postscript

In the last post, I looked at the DC side of things and concluded that there’s a lot to be said for eliminating all those PSUs and inverters (and wire), and regarding the battery pack as the immediate upstream supply of the IT load. The problem I was left with was that if we regard the batteries as the primary power source, we’d need a truly gargantuan battery pack.

The simple answer is that the battery pack is still a secondary power source, to provide power in the small windows when the primary power source is being switched over – i.e., while the generator sets start up.

A more interesting question is whether we can eliminate the batteries. After all, most data centers rely on lead acid batteries and, although lead and sulphuric acid are cheap and plentiful, even a few hundred racks requires several tons of the stuff – and the batteries need to be replaced every few years. Disposing of used lead and acid is a nasty business. So batteries have a big carbon footprint of their own.

As I stated in the second post, there are many ways of storing energy. Batteries are one, but so are flywheels. Flywheels will last the full life-span of the data centre, but produce AC (as do gen sets). How could this work?

Here’s the last post’s schematic:

Power chain batteries

If the power source (at the top) is photovoltaic, which produces DC directly, then this works. However, if the power source is AC, this is missing one very important component: a rectifier to turn that AC to DC before it gets to the batteries:

Power chain batteries and rectifier

So, what we’re in effect doing is combining 10,000 PSUs into a single, industrial scale rectifier. Now let’s add the flywheels:

Power chain flywheel

Which gives us the best of both worlds.

As a footnote, many people disparage flywheels because they only provide power for a few seconds while the generators power up. This is not long enough to perform an orderly shutdown of the computers.

I’ll come back to this when I tackle cooling, but the short answer is that the usual 15 minutes of battery life is 14 minutes longer than operational conditions will be maintained anyway.

In short, we’re looking at two divergent topologies that depend on whether the main power source is AC or DC. Before we decide which main power source is optimal, we’ll have to look at the AC load. Which is for a future post.


My Perfect Green Data Centre (4) – DC

Here’s the back of a typical server (taken from the HP DL360 spec sheet):

HP DL360 Server


Those things with the fans in the bottom right are the Power Supply Units (PSUs) – i.e. where you plug the power in. Servers have two of them and, in a conventional data center, each PSU is connected to a separate source. The exact configuration depends on what tier the data centre is; here’s a typical arrangement for Tier 3/4:

Power chain

So, if we start at the bottom, each PSU is connected to a local distribution panel (PDU). Each of these will in turn usually go back to an intermediate distribution panel (DB) which in turn goes to the main distribution panel (MDB). Each MDB will go to its own power source or sources – usually a combination of gen sets and utility / grid power. The way in which the UPSs are fitted between the DBs and MDBs depends on various factors which needn’t detain us here.

This is daft.

  • The computer itself does not need 230V AC / 110V AC. It needs DC.
  • The battery pack for a typical UPS supplies DC.
  • So, in a conventional design, when the computers run off UPS power, we take DC from the batteries, turn it into AC, send it to the computer, and turn it back to DC.
  • Those two conversions cancel each other out.

Talk to any telecoms person, and they’ll confirm the daftness. Telecoms has used 48V DC pretty much since Alexander Graham Bell invented the modern telephone, and conventional land lines to this day leave the exchange at 48V DC. So why would one use AC to begin with?

One possible answer is power loss. When you transmit electricity down wires, some electricity is lost in the form of heat. The amount of power lost is proportional to the square of the current. As the total power is the product of the voltage and current, high voltage and low current results in less power loss than low voltage and high current. The downside to this is that higher voltages are more lethal than lower voltages, but we can set that to one side for this post.

Is power loss something that should concern us in a data centre? The maths is straightforward so let’s take some typical numbers.

  1. An average server consumes 500W of power.
  2. There is 100 meters of wire between the server and its power source.
  3. The wire has a resistance of 6E-8 Ohms per meter (I got the numbers from here, and took a mid-point).

So, in a conventional set-up, the power is delivered as 250V AC, and the current will be 2 Amps (250V * 2A = 500W), and the power loss will be

(2A)^2 * 100M * 6E-8Ω/M = 0.000024W

If the power is delivered as 5V DC, the current will be 100 Amps, and the power loss will be

(100A)^2 * 100M * 6E-8Ω/M = 0.06W.

That is proportionally a huge difference. But the AC calculation does not include the loss in converting from AC to DC and back again. Even if that’s only 0.1% both ways, that’s an additional 0.5W to convert DC to AC at the inverters and the same again to convert AC back to DC in the PSUs, for a total of 1.000024W, which is rather a lot more than 0.06W. So DC-DC is the clear winner.

This raises two questions. The first is whether it’s worth worrying about; the second is what to do about it if we should worry about it.

If your data centre has 500 racks and 10 servers per rack, that’s 5,000 servers. The total power loss for AC is 5,000*1.000024= 5000.12W. If we deliver DC, the total power loss is 5,000*0.06 = 300W. However, in both cases, the total power required by the IT is 500*5,000 = 2,500,000W. Whether the power loss is 5000.12W or 300W, it’s negligible compared to the total load. So, on a pure numbers basis, it’s immaterial.
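The whole comparison can be reproduced in a few lines of Python, using the figures above (500W servers, 100 metres of wire, 6E-8 Ohms per metre, 0.1% conversion loss each way). These are the post’s illustrative numbers, not measured values:

```python
# I^2 R transmission loss per server, plus AC-DC conversion losses.

servers, power_w = 5000, 500
length_m, r_per_m = 100, 6e-8

def wire_loss_w(volts):
    current = power_w / volts             # I = P / V
    return current**2 * length_m * r_per_m

ac_wire = wire_loss_w(250)                # ~0.000024 W per server at 250V
dc_wire = wire_loss_w(5)                  # ~0.06 W per server at 5V

# Add 0.1% conversion loss each way: DC->AC at the inverter, AC->DC in the PSU.
ac_total = ac_wire + 2 * 0.001 * power_w  # ~1.000024 W per server

print(servers * ac_total)                 # ~5,000 W total for AC
print(servers * dc_wire)                  # ~300 W total for DC
print(servers * power_w)                  # 2,500,000 W of IT load: both are noise
```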

But, and as I said in my first post in this series, I’m taking a holistic view of “green.” I’m not just worried about how economical a data centre can be in its usage of power. I’m also concerned about minimizing the overall carbon footprint. If we’re to do that, we need to look further than calculations such as the above. In terms of the power loss, the difference between AC and DC is negligible. But getting rid of 10,000 PSUs and some pretty beefy inverters does make a difference. Manufacturing that stuff – and destroying it safely when it’s finished – has an impact. That impact is not only avoidable; all those inverters and PSUs cancel each other out: they are functionally equivalent to nothing. So, if we’re to take “green” at all seriously, ditch the lot.

The next objection will be that batteries supply 12V and most computers need 3.3V, 5V or 12V. As it happens, the industry is moving towards a single 12V supply, with conversion to the lower voltages being done on the motherboard. But batteries can be configured for pretty much any voltage and, just as data centres today offer the choice between single- and 3-phase, there’s no reason they can’t offer the choice between different DC voltages.

Another advantage of DC is less wire. Most computer equipment today has three wires: live, neutral and earth. With 110V/230V, the earth is necessary as these voltages are lethal; 12V is not, so the separate earth wire can go. The average data centre contains literally tons of copper wiring, and reducing three wires to two is a hell of a lot less copper to quarry, refine, turn into wire and ship, and a hell of a lot less PVC to insulate it with.

And, yes, supplying DC direct to the computers would save a lot of money: no inverters, far fewer PSUs, and a third less wire.

So: what to do about it? The obvious and simplest answer is that we make the primary source of power for the computer equipment the battery pack:

Power chain batteries

Now, this may seem like a simple change, and I have been quite deliberate in making the minimum possible change to the drawings to maintain this illusion. However, putting on my Accredited Tier Designer hat, there remains a serious issue if one’s operating within the Uptime Institute framework. According to UI, the primary source of power for a data centre is the on-site power generation. That on-site power must be able to run the data centre indefinitely, and should have a minimum capacity of 96 hours. If the primary power source is the batteries, the battery pack for 96 hours would be gargantuan.

I’ll fix that in the next post.


My Perfect Green Data Centre (3) – AC/DC

AC/DC is not the name of the rock band, nor an allusion to anything regarding sexual preferences.

As I said in my last post, electricity consists of electrons in motion. However, there are two basic modes of motion: coming and going, or plain going: Alternating Current (AC) and Direct Current (DC). There’s a pretty comprehensive wiki article on AC here.

Why should this matter? Simple. Data centres put electricity to two fundamentally different types of work. The first type of work is IT. All IT runs on DC. Just because it’s got a 110V/230V AC socket at the back doesn’t mean it consumes AC. The first thing the computer does is convert the AC to DC.

The second work to which electricity is put is cooling. There are quite a few technologies for this, and the optimal technology depends on the size of the data centre and the climate, but they all boil down to the same three types of components: heat exchangers, pumps and fans.

Why does this matter? After all, we’ve been converting AC to DC on an industrial scale for over a century, and, since the advent of the switch mode power supply, the conversion loss from AC to DC is tiny.

Here are a few reasons:

  1. Photovoltaics generate DC, not AC. Converting DC to AC has never been as efficient as the other way around.
  2. Batteries produce DC.
  3. Industrial scale pumps and fans are much more efficient with AC (3-phase) than DC.
  4. In order to get really green, we need to get really efficient. All the easy wins have been won; we need to look at the margins to find those remaining gains.
  5. Engineering excellence.

So, in the next part(s), I’ll look at the DC part. I’ll move on to the AC part when I’ve finished with that.
