Research and Development

Green Computing

Direct-Chip Cooling project team consisting of staff from EETD, HPC Services, Asetek, Cisco, and Intel

Today's supercomputers require more power and cooling to operate than ever before. Faced with the prospect of having to limit its high-performance computing capability, the IT Division, in conjunction with the Lab's Environmental Energy Technologies Division and industry partners, embarked on a program to increase the capacity of its 40-year-old, 5,600 sq ft data center while implementing energy-efficiency strategies to make the most of the existing facility.

Wireless Sensor Monitoring
In 2007, the IT Division engaged SynapSense, which at the time was just beginning development of its real-time wireless monitoring application, to deploy a system that would permit detailed analysis of environmental conditions (humidity and temperature), along with air pressure and power, at hundreds of points within the data center.
Once this system was deployed, the team combined the sensor data with CFD modeling to adjust airflow and make other operational changes in the data center. The team undertook a variety of fixes, some small and some large:
  • Floor tile tuning to improve air pressure
  • Hot aisle/cold aisle isolation
  • Conversion of the overhead plenum to hot air return
  • Chimney extension of CRAC returns to connect to the overhead ceiling plenum
  • Installation of plastic curtains to further reduce hot aisle/cold aisle mixing
  • Installation of water-cooled doors based on non-chilled water (collaboration with the vendor to reduce energy use)
  • Piloting of fully enclosed water-cooled racks
  • Use of higher ambient temperature setpoints to improve efficiency

These measures together allowed the IT Division to increase its scientific computing capability by over 50% since 2007, when the room was assumed to be at capacity. The culmination of this initial work came in November 2010, when LBL became one of the first organizations in the federal space, and among a handful of smaller data centers in the world, able to calculate and view the data center's Power Usage Effectiveness (PUE) in real time. This critical metric, the ratio of total data center power to the power used by the computers themselves, helps staff manage the data center dynamically to best achieve environmental goals.
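As a concrete sketch of the metric, PUE is the ratio of total facility power to IT equipment power. The function below illustrates the calculation; the power figures are purely illustrative, not measurements from the LBL data center.

```python
def compute_pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to the computers;
    the excess above 1.0 is cooling, power distribution, and
    other infrastructure overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 650 kW total facility load, 500 kW IT load.
print(round(compute_pue(650.0, 500.0), 2))  # 1.3
```

Fed with live readings from facility and IT power meters, the same ratio can be recomputed continuously, which is what makes the real-time view possible.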

Variable Speed Drive (VSD) Cooling
In 2012, we partnered again with SynapSense to retrofit our data center CRACs (Computer Room Air Conditioners) with variable-speed fan controls, demonstrating that variable-speed operation is feasible and yields significant energy savings along with improved cooling and reliability. The study found an energy use reduction of 24 percent compared to constant-speed fan operation. More importantly, it provided the ability to respond dynamically to changing scientific computation loads, which cause large fluctuations in power and cooling demand in our data center. In May 2013, LBNL and SynapSense were recognized as a finalist for the Green Enterprise IT Award 2013 at the Uptime Institute Symposium for this work.
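One reason modest fan-speed reductions can produce savings of this magnitude is the fan affinity law: fan power scales roughly with the cube of fan speed. The sketch below shows the idealized relationship; the actual 24 percent figure came from the study's measurements, not from this formula.

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed fan power drawn at a given fraction
    of full fan speed, per the idealized cube law for fans."""
    return speed_fraction ** 3

# Slowing fans to 90% of full speed draws only about 73% of
# full power, so small speed cuts yield outsized energy savings.
print(round(fan_power_fraction(0.9), 2))  # 0.73
```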

Direct-to-Chip Liquid Cooling

For 2013, IT and EETD are working with several vendors to evaluate direct-to-chip water cooling for our compute clusters. Used extensively by PC gamers who overclock their home PC processors for maximum performance, this technology is now being adapted for large-scale use in the data center environment. Because our data center uses non-chilled water, this approach should provide free cooling, up to the limits of the building plumbing, for our systems. As part of this project, we will run our regular mix of scientific jobs on the system and measure the heat load rejected into the water system. Researchers interested in participating by running their computations should contact HPC Services manager Gary Jung.
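The heat load rejected into the water loop can be estimated from the standard relation Q = m·c_p·ΔT, given the coolant flow rate and the temperature rise across the racks. The sketch below uses illustrative flow and temperature values, not measurements from this project.

```python
WATER_CP = 4186.0  # specific heat of water, J/(kg*K)

def heat_rejected_kw(flow_kg_per_s: float, delta_t_c: float) -> float:
    """Heat carried away by the cooling loop, in kilowatts:
    Q = flow * specific heat * temperature rise."""
    return flow_kg_per_s * WATER_CP * delta_t_c / 1000.0

# e.g. 0.5 kg/s of water warming by 10 degC removes about 21 kW.
print(round(heat_rejected_kw(0.5, 10.0), 1))  # 20.9
```

Comparing this figure against the electrical power drawn by the cluster indicates what fraction of the heat is actually being captured by the water loop rather than rejected to the room air.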