
My Perfect Green Data Centre (8) – Colocation: The Best I Can Do

I’ve seen some amazing items of equipment in colocation data centres. The winner has to be a drum printer, but I’ve seen passive 4-way hubs, any number of free-standing modems, more desktops than I care to remember, and that’s before we get to the tape drives, optical jukeboxes and voice recording equipment. And that’s only the digital stuff. The analogue equipment for telephony comes in shapes and sizes of which Heath Robinson would be proud.

So, although pre-mounting stuff in trays and having robots insert those trays into a sealed monocoque of the type I suggested in the last post may work 90% of the time, the other 10% will kill any attempt at robotics. The range of things that goes into a colocation data centre is just too vast for anything but the current row-and-aisle, human-access data centre. Having untrained IT guys clambering around a vertical chamber, laden with heavy and expensive equipment, is a non-starter.

As to robots, forget it. They’re too expensive to be cost-effective. And human incursion is binary: one either designs for it or doesn’t. If there are likely to be even a few incursions a year, one must design for it, and one then bears the costs of both the robots and the infrastructure needed to deal with humans.

So what is there to do? Quite a lot. They’re all little things, and most of them have to do not so much with the engineering as with the dynamics of how colocation providers interact with their clients.

On the engineering, insulate your building, deploy geothermal piling, install hot or cold aisle containment, use evaporative cooling, but also, look at the load. It is much more energy efficient to cool three 4kW racks than to cool a single 10kW and two 1kW racks, yet clients arrange their IT in the latter way all the time. Put intelligent PDUs on each and every rack, monitor the heck out of the whole white space with temperature sensors, and re-arrange your clients’ equipment to balance the temperature. Yes, there are constraints on cable length and demands of proximity, but even within these, the equipment layout can be optimized.
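By way of illustration, here is a minimal Python sketch of the kind of check a DCIM could run against intelligent PDU readings. The rack names and figures are invented, and a real layout would also have to respect the cable-length and proximity constraints just mentioned.

```python
# A minimal sketch (not a production tool): given per-rack power readings from
# intelligent PDUs, flag hot spots and suggest a balanced target per rack.
# Rack names and readings below are invented for illustration.

from statistics import mean

def balance_report(rack_kw: dict[str, float], tolerance_kw: float = 1.0) -> None:
    """Print racks whose load strays from the white-space average."""
    target = mean(rack_kw.values())
    print(f"Target load per rack: {target:.1f} kW")
    for rack, kw in sorted(rack_kw.items(), key=lambda item: -item[1]):
        delta = kw - target
        if abs(delta) > tolerance_kw:
            action = "shed" if delta > 0 else "can absorb"
            print(f"  {rack}: {kw:.1f} kW ({action} {abs(delta):.1f} kW)")

# Example: one 10 kW rack and two 1 kW racks -- the cooling-unfriendly layout
# described above. The balanced target works out at 4 kW per rack.
balance_report({"A01": 10.0, "A02": 1.0, "A03": 1.0})
```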

To do this, the contracts need to be restructured to reward the clients for good behaviour. Most colocation contracts I’ve seen (and I’ve seen many), rather than rewarding the client for good behaviour, reward the data centre operator for the client’s bad behaviour. It does not have to be this way. Colo providers sell space, electricity, network bandwidth and time. Break the electrical component down into base IT load (which is fixed) and cooling (which can almost always be reduced), and you incentivise the client to work with you to reduce his heat load.
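As a rough sketch of how such a bill might be split (the tariff and the cooling-overhead figures here are invented, and the overhead is treated as cooling energy expressed as a fraction of IT energy, roughly PUE minus one):

```python
# A rough illustration of splitting a colocation bill into a fixed IT energy
# charge and a separate cooling charge, so the client shares the benefit when
# the cooling overhead falls. Tariff and overhead figures are invented.

def monthly_bill(it_kw: float, cooling_overhead: float,
                 tariff_per_kwh: float = 0.15, hours: float = 730.0) -> dict:
    """cooling_overhead is cooling energy as a fraction of IT energy."""
    it_kwh = it_kw * hours
    cooling_kwh = it_kwh * cooling_overhead
    return {
        "IT energy charge": round(it_kwh * tariff_per_kwh, 2),
        "Cooling charge": round(cooling_kwh * tariff_per_kwh, 2),
    }

# A client drawing 40 kW: the cooling line drops as the overhead improves,
# which is the incentive to work with the operator on heat load.
print(monthly_bill(40.0, cooling_overhead=0.6))  # poorly arranged racks
print(monthly_bill(40.0, cooling_overhead=0.3))  # contained aisles, balanced load
```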

Don’t allow your clients to use cages. I’ve mapped out the topology of enough networks by looking at the backs of racks to know that it can be done – but I had keys, so didn’t have to peer through the grate. The bigger point is that all IT estates consist of the same stuff – servers, storage and network – and unless you know what the hardware does and what the IP addresses are (yes, they’re supposed to be labelled…), knowing that the estate consists of servers, storage and networks tells you about as much as knowing that a building consists of bricks, wood and metal. Furthermore, with virtualization technologies, the hardware is so abstracted from the network topology as to be almost irrelevant. So cages serve almost no useful purpose in securing the estate. Yet cages are terrible for data centres: they screw up the airflow, stressing the cooling system. If anything, they offer a false sense of security: the real threat’s at the end of the wire, not some guy photographing your racks.

And anyway, cold air containment systems provide most of the perceived security advantages of cages.

And then there are the Network and Security Operations Centres. Here’s the next-generation NOC/SOC:

[Photo of a smartphone]

I’m not advertising or advocating the Huawei Mate 9 – the picture’s just to make the point: the technology is already available to monitor all systems and alert people when unexpected things happen. It can tell if a human is in the white space, check that against whether there should be one, and know which part of the white space that human should be in. SNMP is hardly new, and any half-decent DCIM will send text messages, e-mails and the like if it thinks a component is breaking. All those flashy monitors and screens in the NOC and SOC serve no useful engineering purpose.
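None of this needs exotic software. Here is a minimal sketch of such an alerting loop in Python; the read_temperature() helper is a stand-in for whatever SNMP GET or DCIM API call your kit actually exposes, and the addresses and threshold are illustrative, not recommendations.

```python
# A minimal sketch of a DCIM-style alerting loop: poll a temperature reading
# and send an e-mail when it crosses a threshold. read_temperature() is a
# placeholder for a real SNMP or DCIM call; addresses are hypothetical.

import smtplib
import time
from email.message import EmailMessage

THRESHOLD_C = 27.0  # illustrative upper bound; pick your own

def read_temperature(sensor_id: str) -> float:
    """Placeholder: in practice, an SNMP GET or a DCIM API call."""
    raise NotImplementedError

def send_alert(sensor_id: str, temp_c: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"White space alert: {sensor_id} at {temp_c:.1f} C"
    msg["From"] = "dcim@example.com"      # hypothetical addresses
    msg["To"] = "on-call@example.com"
    msg.set_content("Check the cooling and the recent access logs.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def watch(sensor_ids: list[str], interval_s: int = 60) -> None:
    while True:
        for sensor_id in sensor_ids:
            temp_c = read_temperature(sensor_id)
            if temp_c > THRESHOLD_C:
                send_alert(sensor_id, temp_c)
        time.sleep(interval_s)
```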

And then there are the little things. Use LED lighting, not fluorescent tubes, and switch the lights off if they’re not needed. Stay away from CFCs if you have DX units. If you’re using evaporative cooling, there’s no chilled water and so no need for a raised floor – so don’t install one. I think there’s no need for any permanent staff in a data centre but, if you must have people there hard at it doing nothing most of the time, give them LED lighting and efficient air-con, too.

Plant a few trees.

And finally, source clean power. If you must offset, offset, but at least put a solar panel on the roof. If nothing else, it can keep the staff cool and the lights on.
