
My Perfect Green Data Centre (8) – Solar Meta-Topology

So far I’ve discussed the topology and engineering within a data centre. In this post, I take a big step back to produce a blueprint for any global internet provider that, by combining follow-the-clock computing with a much lower-density arrangement, achieves a much lower carbon footprint.

I will now unpack that in reverse order.

Let’s start at the IT load. In an early post, I stated the recommended thermal guidelines of the ASHRAE A3 standard, which I have since assumed. But that’s not the whole story. Getting hold of the ASHRAE standards has become rather difficult as, like everyone else, they’ve become selfish with the information and want people to pay. However, a link here guided me to the paper Clarification to ASHRAE Thermal Guidelines, which I believe is in the public domain, and which shows the full picture: although 18-27C is the recommended operating temperature range and <60% the recommended humidity, the allowable range is 5-40C / <85% (and, for A4, 5-45C / <90%). Ask my laptop: computers can work in high heat and humidity and, if they’re solid state, can last for many years. Put a motherboard built to the ASHRAE A4 standard in the middle of a room in the tropics, aim a fan at it, and it will run almost indefinitely.
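To make those envelopes concrete, here’s a minimal Python sketch that encodes the ranges quoted above and checks an intake reading against them. The dictionary layout and helper name are mine, not ASHRAE’s.

```python
# A minimal sketch of the envelopes quoted above; the layout and helper name
# are illustrative, not part of any ASHRAE document.

# (temperature range in deg C, maximum relative humidity in %)
ENVELOPES = {
    "recommended":  ((18, 27), 60),
    "A3_allowable": ((5, 40), 85),
    "A4_allowable": ((5, 45), 90),
}

def within_envelope(temp_c, rh_pct, envelope="A4_allowable"):
    """Return True if an intake reading sits inside the given envelope."""
    (t_min, t_max), rh_max = ENVELOPES[envelope]
    return t_min <= temp_c <= t_max and rh_pct < rh_max

# A hot, humid afternoon in the tropics: outside the recommended range,
# but still inside the A4 allowable range.
print(within_envelope(38, 80, "recommended"))   # False
print(within_envelope(38, 80, "A4_allowable"))  # True
```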

However, in a data centre, we don’t have a single motherboard in the middle of a room. We have many motherboards. This leads to lots of heat being generated in a relatively small area, and the problem of cooling in data centres is not caused by the computers per se, but rather by our insistence on packing lots of computers into a small space. The more we spread the computers out, the easier they become to cool. If we spread them out sufficiently, and if they’re built to withstand a high intake temperature, all we need to do is extract the hot output air.

The reason that we pack them in is that, in the past, it was held to be important that the computers were close to the people they served. This meant that data centres were built in or near urban areas where land is expensive, so the extra cost of an expensive cooling system was justified on the basis that the computers could be packed in densely.

But things have changed. From a security standpoint, the farther away the data centre is from urban areas, the better. From the point of view of land-cost, likewise. And it is now far cheaper to run fibre to a rural area than to buy land in an urban area.

Next, let’s take 1,350W/m2 insolation as a physical constraint. I suspect that we’ll never do much better than 50% recovery, so we can generate, say, 650W/m2. One of the problems I’ve been tackling in the last two posts has been that our load has a much higher density: if we’re stacking ten servers to a rack, then for every 0.6m2 rack we need 6m2 of solar panel. Allowing for the fact that the racks themselves are spread out, I concluded that every m2 of rack needs 2.5m2 of solar power (whether PV or CSP).
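Here’s a quick back-of-the-envelope check of that arithmetic. The insolation, recovery fraction and 6m2-per-rack figure come from above; the floor-area-per-rack figure is my assumption, included only to show how a roughly 2.5:1 panel-to-floor ratio can fall out.

```python
# Back-of-the-envelope check of the arithmetic above. Insolation, recovery and
# the 6 m2 panel figure come from the post; floor area per rack is assumed.

insolation = 1350.0                 # W/m2, peak
recovery = 0.5                      # fraction recovered as electricity
generated = insolation * recovery   # 675 W/m2, rounded down to ~650 above

panel_per_rack = 6.0                # m2 of panel per ten-server rack
servers_per_rack = 10
rack_load = panel_per_rack * generated   # ~4 kW per rack
print(f"implied draw per server: {rack_load / servers_per_rack:.0f} W")       # ~405 W

floor_per_rack = 2.4                # m2 of floor per rack once spread out (assumed)
print(f"panel area per m2 of floor: {panel_per_rack / floor_per_rack:.1f}")   # 2.5
```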

Here’s my proposal.

The power from a single panel, 650W, is enough to power a mid-range motherboard with a few disks. So, rather than having a huge array of panels over there powering a whole bunch of low- to medium-density computing over here, put the computing where the sun shines. Assuming the motherboard is built to ASHRAE A4, the problem is not intake temperature, but extracting the heat. To address this, we design the motherboard such that all the hot stuff is at the top, and we put a couple of fans at the bottom to blow the heat out. We protect the motherboard from rain by enclosing it (hopefully in some low-footprint material).
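As a sanity check that 650W is plenty, here’s a rough power budget for one panel+motherboard unit. Every component figure is my guess at a mid-range board, not a measurement; only the 650W panel output comes from the discussion above.

```python
# A rough power budget for one panel+motherboard unit. All component draws
# below are assumptions; only the 650 W panel output comes from the post.

panel_w = 650

budget_w = {
    "motherboard + CPU (under load)": 200,   # assumed
    "RAM": 20,                               # assumed
    "disks (4 x ~8 W)": 32,                  # assumed
    "fans (2 x ~5 W)": 10,                   # assumed
    "power conversion losses": 40,           # assumed ~15% overhead
}

total = sum(budget_w.values())
print(f"estimated draw: {total} W of {panel_w} W available, "
      f"{panel_w - total} W headroom")
```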

[Figure: Panel with Computer]

Okay, no prizes for the artwork. The rhomboid-ey thing is the solar panel, the computer’s strapped to the back, and the arrows indicate cool(ish) air coming in and hot air being blown out.

Or, perhaps, we have compute-panels and disk-panels, arranged in some repeating matrix:

[Figure: Comps and Disks]

Each panel+motherboard is in principle self-contained. However – a problem for a mathematical topologist – panels should be interconnected in such a way that a cloud passing over doesn’t cause a dead zone to zoom across the array. The entire array, and the weather, should be monitored so that, when it gets cloudy, selected computers can be de-powered depending on the extent of the darkness.
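A minimal sketch of that shading controller, assuming each node reports its panel output: nodes that drop below a threshold are de-powered (with some hysteresis) and their work is handed to a still-sunny neighbour. The thresholds, node names and neighbour map are illustrative assumptions, not a real design.

```python
# Shading controller sketch: de-power shaded nodes and hand their work to a
# still-sunny neighbour. All numbers and names are illustrative assumptions.

OFF_THRESHOLD_W = 150   # below this, a panel can no longer carry its board (assumed)
ON_THRESHOLD_W = 250    # hysteresis: only re-power once output recovers (assumed)

def plan_power_states(outputs_w, currently_on):
    """Decide which nodes stay powered, given per-node panel output in watts."""
    next_on = set()
    for node, watts in outputs_w.items():
        if node in currently_on:
            if watts >= OFF_THRESHOLD_W:
                next_on.add(node)        # still bright enough: keep running
        elif watts >= ON_THRESHOLD_W:
            next_on.add(node)            # recovered: re-power
    return next_on

def migration_plan(currently_on, next_on, neighbours):
    """Map each node being de-powered to a still-powered neighbour, if any."""
    plan = {}
    for node in sorted(currently_on - next_on):
        candidates = [n for n in neighbours.get(node, []) if n in next_on]
        plan[node] = candidates[0] if candidates else None   # None: shed the load
    return plan

# Toy example: a cloud shades nodes b and c.
outputs = {"a": 600, "b": 90, "c": 120, "d": 580}
neighbours = {"b": ["a", "c"], "c": ["d", "b"]}
on = {"a", "b", "c", "d"}
new_on = plan_power_states(outputs, on)
print(sorted(new_on))                            # ['a', 'd']
print(migration_plan(on, new_on, neighbours))    # {'b': 'a', 'c': 'd'}
```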

At night, instead of using batteries, de-power the entire solar-compute farm, and off-load the computing to the next solar-compute farm to the west. In the morning, computing will arrive from the sol-comp farm to the east.
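As a toy illustration of that handoff, the sketch below picks, for a farm leaving daylight, the nearest farm to its west that is still in the sun. The farm names, longitudes and fixed daylight window are my assumptions; a real scheduler would use actual sunrise and sunset times and spare capacity.

```python
# Follow-the-clock handoff sketch: when a farm leaves daylight, its work moves
# to the nearest farm to its west still in the sun. Names, longitudes and the
# fixed daylight window are assumptions.

FARMS = {  # name: longitude in degrees (east positive)
    "nairobi": 37, "accra": 0, "fortaleza": -38, "bogota": -74, "fiji": 178,
}

def local_solar_hour(longitude_deg, utc_hour):
    """Approximate local solar time: 15 degrees of longitude per hour."""
    return (utc_hour + longitude_deg / 15.0) % 24

def in_daylight(longitude_deg, utc_hour, start=7, end=17):
    return start <= local_solar_hour(longitude_deg, utc_hour) < end

def handoff_target(farm, utc_hour):
    """Nearest farm to the west (by longitude, wrapping around) still in daylight."""
    lon = FARMS[farm]
    candidates = [(name, l) for name, l in FARMS.items()
                  if name != farm and in_daylight(l, utc_hour)]
    candidates.sort(key=lambda nl: (lon - nl[1]) % 360)   # smallest westward gap first
    return candidates[0][0] if candidates else None

# At 14:00 UTC, Nairobi's local solar time is about 16:30 and dusk approaches.
print(handoff_target("nairobi", utc_hour=14))   # accra
```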

Now let me push it further. In a world with starving people, it makes little sense to replace tracts of agricultural land with solar arrays when there are millions of small villages across the tropics that already have roofs. Put a few solar-compute panels on each roof, and interconnect them by sticking a wifi tower in the village. Pay the village or villagers not with rent, but by providing each house with a battery, some LED lights so that the children can study at night, and an induction cooker so that villagers don’t cut down trees for firewood. Recruit a couple of villagers and put a NOC/SOC on their tablets to give them responsibility for their own village solar-compute array. Make this part of a training and recruitment program so that these recruits have a future that goes beyond installing the solar-compute panels and fixing them when they break.

If computing is going to follow the clock, sooner or later it’s going to fall into an ocean. Most oceans have land to the north or south, but the Pacific Ocean is a particularly large hop. In the north, it’s possible to string data centres along the west coast of North America and the east coast of Eurasia, and in the south, the Polynesian archipelago could keep the bits and bytes being crunched.

That covers the edges. For the ocean itself, with the oil industry in its death throes (what a shame they don’t use all that money and power to switch business models to renewables rather than sticking with the cook-the-planet model), quite a lot of large floating structures – oil rigs, supertankers and the like – will be available on the cheap. They could form a string of permanently moored data centres that run off some combination of wave, wind and solar power.

Does this sound more like science fiction than engineering? Perhaps. But nothing in the above is beyond the reach of today’s technology and engineering: we can design motherboards, chips and disks that run reliably in hot and humid places; PV technology is already almost there; the underlying control systems, network connectivity and ability to shunt data are also already there. What’s lacking is not the engineering or the technology, but the will. Google, Amazon, Facebook, where are you?

Not on this blog. So, in the next post, I’ll take all the bits and put them together in a more conventional, less frightening way.
