As electronic data runs more of our everyday lives, industry and commerce are looking to outsource to web-based services in a bid to curb the growing thirst for power from IT. While this marks a push towards better energy efficiency, data centres still have opportunities to improve performance. By David Appleyard
Industry and commerce are increasingly going digital. Process controllers, drives, sensors, monitoring and data analytics, logistics and manufacturing, customer engagement and sales – the digital ‘envelopment’ of the commercial and industrial sector is extensive and growing.
Burdened with an ever-increasing volume of data and process requirements, commercial and industrial players are looking to third-party web-based services as a way both to avoid increasing investment in IT infrastructure and to improve energy efficiency.
Jack Pouchet, Vice President of Business Development and Energy Initiatives, Hyperscale Solutions at Emerson Network Power believes outsourcing data services marks a key opportunity to improve energy efficiency: “It makes sense for people to take a look at [outsourcing]. Everything is run electronically, all the data gets uploaded immediately, we’re building more and more electronic records, but do we really need to house these 15 or 20 or 30 racks of servers that are networking here? If we eliminate it we can put in revenue-generating space.
“Can we find a co-location facility that meets our privacy and security rules and regulations? It’s going to run much more effectively from an energy efficiency stand point.”
Pouchet explains: “The reason why these [outsourcing] companies are becoming successful is that up to a certain size, and people argue that’s somewhere between say 1 MW of workload to 3 MW of workload but somewhere in that range, it’s hard to beat the value that you can get from a co-location or outsource partner.
“They’ve got incredibly efficient buildings, incredibly efficient IT kit.”
But while some large server farms operated by well-known Internet brands provide shining examples of ultra-efficient data centres, small, medium, and corporate data centres are responsible for the vast majority of energy consumption by the sector. They are generally much less efficient.
And of course the cloud – a distributed network of file servers and processor banks – is now collectively a massive ‘industrial’ user of energy. Essentially densely packed racks of computer processing units festooned with cables, power hungry server farms consume vast quantities of energy in performing calculations. There is also a considerable requirement for cooling as a result.
According to a recent Lux Research report, data centres globally already draw about 50 GW of power to cool their systems – 40% more than New York City consumes on the hottest day of the year.
Indeed, according to a report from analysis firm Digital Power Group, IT currently consumes some 10% of global power production. Google, Amazon, Apple and Microsoft collectively used 10 million MWh of electricity in 2015 across some 50 data centres.
“The mega-sized data centres have remarkable energy and cooling requirements, and technologies are competing to supply the robust cooling infrastructure necessary to support these facilities at the lowest total cost of operation,” explains Alex Herceg, lead author of the report.
Lux concludes that evaporative cooling has emerged as the dominant cooling method for data centres of greater than 1 MW, with a 10-year total cost of ownership (TCO) of $575/kW. Facebook mainly uses this technology to run some of the world’s most efficient data centres, Lux says. Computer room air conditioning (CRAC) is the best option for smaller data centres, with a 10-year TCO of $5,000–$15,000 for a server capacity of 150 kW/m², depending on the cost of electricity, according to Lux.
However, Pouchet warns of potential consequences from switching the dominant cooling technology from an electrical basis to a water basis: “We are going to use less electricity, but we are going to use more water. When a billion people wake up in the morning around the globe without access to water, we as the data industry really need to take a look at how much water we are using.”
Data centres purge carbon
One approach to reducing environmental impact, increasingly adopted by the more prominent names in cloud services, is the use of renewable energy.
For example, in November 2015, Amazon Web Services, Inc. announced that it had contracted with EDP Renewables to construct and operate a 100 MW wind farm in Paulding County, Ohio, USA. Part of the company’s plan to source 100% of its power from renewables, the so-called Amazon Wind Farm US Central will produce some 320 GWh annually once commissioned next year. This is one of a number of substantial wind power projects developed on behalf of AWS. As of April 2015, approximately 25% of the power consumed by the company’s global infrastructure came from renewable sources. By the end of 2016, they intend to reach 40%.
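The quoted figures for the Paulding County project imply a capacity factor of roughly 37% – a useful sanity check. A minimal sketch, using only the standard capacity-factor formula and the numbers quoted above:

```python
# Back-of-envelope check on the Amazon Wind Farm US Central figures:
# a 100 MW nameplate wind farm producing ~320 GWh per year.

NAMEPLATE_MW = 100
ANNUAL_OUTPUT_GWH = 320
HOURS_PER_YEAR = 8760

# Energy if the farm ran flat out all year, in GWh.
max_output_gwh = NAMEPLATE_MW * HOURS_PER_YEAR / 1000  # 876 GWh

# Capacity factor = actual output / theoretical maximum output.
capacity_factor = ANNUAL_OUTPUT_GWH / max_output_gwh
print(f"capacity factor: {capacity_factor:.1%}")  # ~36.5%
```

A figure in the mid-30s is typical for a well-sited onshore wind farm, so the quoted output is plausible.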
Amazon is apparently trailing its rivals. Microsoft says its global operations have been carbon neutral since 2012, and the company recently made its first direct partnership with a utility, in Virginia, for a 20 MW solar project. This is in addition to projects built using power purchase agreements, such as the Pilot Hill Wind Project, a 175 MW wind development in Illinois.
However, more interesting avenues for energy efficiency come from a growing trend to site data centres in cold climates such as Finland or Iceland, where more use can be made of ambient air cooling.
Google, which says it has been carbon neutral since 2007, claims its data centres use 50% less energy than typical facilities. To achieve this, it raises internal temperatures, uses outside air for cooling, and builds custom servers.
For instance, in March 2009, Google purchased the Summa Mill from Finnish paper company Stora Enso and converted it into a data centre, cooled using sea water from the Gulf of Finland.
Microsoft has also recently been researching subsea data centres, with servers physically located beneath the sea surface.
Smarter architecture yields efficiency benefits
Key to improving energy efficiency is a better understanding of the data centre environment, says Doug Alger, Cisco IT Architect: “Data centres are certainly complex environments with a variety of different technologies being used within them. Taking into account the physical infrastructure elements, there is certainly a lot that can be done and to some extent is already being done to make processes more efficient. However, if we then look at what can be done with the computing hardware and what needs to be done to make this more efficient in conjunction with the physical infrastructure, then it is clear that there are several layers that need to be addressed.”
He notes the use of hot and cold aisle designs to improve passive heat dissipation: “We’ve had several data centre builds over the last few years where we’ve taken this process a step further, introducing enclosed cabinets with chimneys on them. The cold air goes into the front of the cabinet while the warm air in this case ends up going up that chimney, efficiently separating those air flows.
“Essentially, we’ll design the data centre in a way that we can take advantage of the cold air falling into these cold aisles, and as a consequence the warm air passively rising up through the chimneys.”
He concludes: “Furthermore, as an industry we are seeing an increasing move to understand what these different elements of data centre cooling are and how to measure them. However, I think there is still work to be done on how to define productive work that is happening as a result of a data centre and as a result, improve efficiency.”
Improving data system architecture
Another significant opportunity to improve efficiency comes from reducing processor idle time and idle energy consumption, as a recent Natural Resources Defense Council (NRDC) report makes clear: “The largest issues and opportunities for energy savings include the under-utilisation of data centre equipment and the misalignment of incentives, including in the fast growing multi-tenant data centre market segment,” the report says.
Pouchet makes this observation by way of illustration: “If you owned a motor vehicle that operated the way servers and storage and networking gear do – meaning they idle consuming as much energy as when operating – we could not pump oil out of the ground fast enough to feed them.”
He explains: “A server sitting idle – and they spend somewhere between 50% and 95% of their operational life in idle mode – consumes 40% to 50% of the energy that they do when they are actually working. This is a staggering amount of energy and the industry doesn’t seem to be concerned about it at this point in time. There’s been discussion, but there seems to be an awful lot of pushback from the server manufacturers, the chip manufacturers, the memory manufacturers.”
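Pouchet’s figures imply that, over a server’s life, the majority of its energy can be consumed while it is doing nothing. A minimal sketch, assuming only the ranges quoted above (50–95% of time idle, idle draw at 40–50% of active draw):

```python
def idle_energy_share(idle_time_frac, idle_power_ratio):
    """Fraction of a server's lifetime energy consumed while idle.

    idle_time_frac  -- share of operational life spent idle (0..1)
    idle_power_ratio -- idle power draw as a fraction of active draw
    Active power is normalised to 1.0.
    """
    idle_energy = idle_time_frac * idle_power_ratio
    active_energy = (1.0 - idle_time_frac) * 1.0
    return idle_energy / (idle_energy + active_energy)

# Sample points from the ranges Pouchet quotes.
for t, r in [(0.5, 0.4), (0.8, 0.45), (0.95, 0.5)]:
    share = idle_energy_share(t, r)
    print(f"idle {t:.0%} of the time at {r:.0%} power -> "
          f"{share:.0%} of lifetime energy consumed while idle")
```

Even at the low end of both ranges, idle periods account for well over a quarter of lifetime energy; at the high end, over 90% – which is why Pouchet urges buyers to ask about idle draw.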
Again, Pouchet emphasises the role of consumers in driving energy efficiency: “We’re telling people that, if you are going to buy gear, ask a simple question: ‘How much energy does it take on idle?’ When you’re evaluating gear, make that one of your criteria – why? Because most of the time it’s idle, that is just a fact of life.”
Alger states this is one area of active engagement by Cisco: “There’s been a great increase in data centre virtualisation. Computing functions that in the past would have been handled by separate systems, you now have all of those competing demands put on a smaller number of devices and the CPUs of those devices end up running more efficiently. Cisco’s unified computing system has a significant amount of cabling reduction compared to equipment in the past. If you can remove a lot of those cables, your air flow is going to be better to that device, therefore you’ve just made your cooling system a little bit more efficient. There are functionality benefits, there are energy benefits, as well as infrastructure streamlining and reduction.”
Alger also points to other passive measures that can be adopted to improve data centre energy efficiency: “Even simple measures such as making the outside colour of the building a light colour, for example we’ll make the roof white to reduce the heat load from the sun.”
Overall, Alger sounds a note of optimism: “I think we are going to continue to see growth in the demand on data centres, in turn I think we will see greater intelligence applied, both for the equipment and what computing systems are capable of enabling. And, for those who are designing and operating data centres, we are purpose-building these facilities to enable our businesses and we’re choosing locations that make good sense to have a data centre in them. We are also looking at the climate conditions so we are able to use outside cooling, looking at where we can utilise less expensive, reliable power and focusing on ways that we can bring technology to bear. We believe that this shift in technology and processes is necessary if we are to meet the demand placed on today’s data centres, and in doing so increase efficiency and productivity.”
“We know everyone out there is doing more and more things on-line. The good news is that there is a lot more attention on energy efficiency today. From a Cisco perspective, we’re incredibly focused on improving efficiency in these environments, reducing the growth in energy demand relative to services demand.”
David Appleyard is a contributing editor to E2.