
GSA Administrator Dan Tangherlini (left), NREL Associate Lab Director Bryan Hannegan, and NREL Director Dan Arvizu discuss the high performance computer Peregrine during a tour of the ESIF. NREL collaborated with HP and Intel to develop the innovative warm-water, liquid-cooled supercomputer, which recently won an R&D 100 award. Peregrine is the first installation of the HP Apollo 8000 platform, which uses more than 31,000 Intel Xeon processors providing a total compute capability of 1.19 petaflops.

Photo by Dennis Schroeder, NREL

Abstract:
A supercomputer created by Hewlett-Packard (HP) and the Energy Department's National Renewable Energy Laboratory (NREL) that uses warm water to cool its servers, and then re-uses that water to heat its building, has been honored as one of the top technological innovations of the year by R&D Magazine.

HP Supercomputer at NREL Garners Top Honor

Golden, CO | Posted on October 19th, 2014

Supercomputers are hot—literally and figuratively.

The behemoths that can crunch a quadrillion calculations each second are needed to simulate and model everything from weather patterns to high finance to the movement of nanoparticles and celestial objects, and to analyze big data almost everywhere. All of those calculations generate heat. A typical supercomputing data center has rack after rack of servers, and each of those servers would run dangerously hot without a cooling mechanism, usually forced air driven by fans, which consumes significant electricity.

When NREL outgrew its old data center and was drawing up plans for its Energy Systems Integration Facility (ESIF), planning for the new data center became an opportunity to live up to NREL's mission of being a living laboratory for energy efficiency and sustainability.

"Computers generate significant quantities of waste heat that is typically just thrown away," said Steve Hammond, director of the Computational Science Center at NREL. "Our vision was to build a showcase facility, to integrate the computer and data center with the building and do it with a holistic view toward energy efficiency.

"We spent a lot of time talking with people in the computer industry, telling them where we were headed," Hammond added. "'If we want to do this, you might want to consider the following…,' that type of thing."

NREL's Desire to Go Green Fit with HP's Plans for Liquid-Cooled Supercomputers

As planners were drafting specifications for the ESIF building, "some people from HP came to us saying they had an idea about how to cool supercomputers efficiently with liquid cooling," Hammond said.

HP Distinguished Technologist Nic Dube picks up the timeline from there.

"At the same time that NREL was ramping up the effort to build a new facility that would be a world leader in energy efficiency, we at HP had been working on a project called Apollo—a liquid-cooled supercomputer platform," Dube said. "Availability was initially targeted for a year later than Steve's timeline, but we decided to accelerate the program to meet NREL's goals."

The NREL data center would be an ideal showcase for the technology HP was proposing. Key to NREL's mission is to be a model for energy efficiency, and HP wanted to demonstrate that there could be a broad market for liquid-cooled high performance computers. "We went very aggressively after the bid," Dube said.

The result is the high performance computer called Peregrine at the ESIF. Peregrine is the first installation of the HP Apollo 8000 platform, which uses more than 31,000 Intel Xeon processors providing a total compute capability of 1.19 petaflops.

Peregrine provides enough heat to meet the needs of the 182,500-square-foot ESIF and, combined with the energy-efficient data center, is saving NREL about $1 million a year in energy costs. In all, the ESIF consumes 74% less energy than the national average for office buildings. It has been designated a LEED Platinum building and was named 2014 Laboratory of the Year by R&D Magazine.

There were plenty of hurdles to clear in designing the first system in the HP Apollo 8000 series, but the thermodynamic fundamentals are quite straightforward and easily replicable, Dube said. "The big picture is simple. You take heat from something that generates heat and send it to something that requires heat."

The challenge was to implement liquid cooling not as an exotic one-off, but in a way that was simple, reliable, and cost effective enough to work for a wide array of large computers: not just those in federal labs, but machines serving a broad range of customers and applications.

The Apollo system, which uses liquid cooling rather than forced air, packs amazing computational capacity into a small space. "For heat exchange [e.g., cooling], liquids are orders of magnitude more effective than air, and the pump energy needed to circulate the liquid cooling is much less than the fan energy to move the equivalent amount of air," Dube and Hammond noted. Using liquid cooling allowed HP to pack the servers more densely and still keep them cool, rather than having to spread the servers out in a data center measured in acres in order to cool them sufficiently with air. Within a standard rack footprint of 2 feet by 4 feet, the HP Apollo 8000 platform can pack as many as 288 processors. That's four times the density of typical racks for high performance computers—and it means a much smaller footprint and lower cost.
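As a back-of-the-envelope check on that "orders of magnitude" claim, the short sketch below compares the volumetric heat capacity of water and air. The constants are standard textbook values at roughly room temperature, not NREL measurements:

```python
# Rough comparison of water vs. air as a heat-transfer medium,
# using standard room-temperature properties.

CP_WATER, RHO_WATER = 4186.0, 997.0   # specific heat J/(kg*K), density kg/m^3
CP_AIR, RHO_AIR = 1005.0, 1.20        # dry air at ~20 C

# Volumetric heat capacity: energy absorbed per cubic meter per kelvin
vol_water = CP_WATER * RHO_WATER      # ~4.2e6 J/(m^3*K)
vol_air = CP_AIR * RHO_AIR            # ~1.2e3 J/(m^3*K)

print(f"water absorbs ~{vol_water / vol_air:,.0f}x more heat per unit volume")
# -> roughly 3,500x, which is why pumping a little water can replace
#    moving enormous volumes of chilled air with fans
```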

Capturing Heat, Using It to Warm the Entire Building

Because the servers are cooled with warm water, rather than cold, the HP Apollo system doesn't need to be in a data center supported by compressor-based chillers, which are both energy hungry and expensive. Pipes carry the water right to the critical components, exploiting the thermal advantage of water over traditional air-cooled systems that force chilled air through heat sinks. If a supercomputer drawing a megawatt of power needs chillers for cooling, there may be an additional 500 kilowatts of energy needed to power the chillers, just to cool the supercomputer. The evaporative cooling used at the ESIF calls for about one-tenth of that cooling cost, because the water supplied for cooling can be 75°F, not 45°F or 50°F.
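The cooling-overhead arithmetic in that paragraph can be restated directly. In this minimal sketch, the 500-kilowatt chiller overhead and the roughly one-tenth evaporative figure are the article's own numbers, not a measured ESIF power budget:

```python
# Restating the article's cooling-overhead figures.
IT_LOAD_KW = 1000.0                    # a supercomputer drawing 1 megawatt

chiller_kw = 0.5 * IT_LOAD_KW          # ~500 kW extra just to run chillers
evaporative_kw = chiller_kw / 10.0     # ESIF's warm-water approach: ~1/10

print(f"chiller-based cooling overhead: {chiller_kw:.0f} kW")
print(f"evaporative cooling overhead:   {evaporative_kw:.0f} kW")
print(f"overhead avoided:               {chiller_kw - evaporative_kw:.0f} kW")
```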

Water flowing to the servers is about 75°F. As it cools the servers, the servers in turn heat the water, so that by the time the liquid finishes a pass through the data center, its temperature has risen to 95°F or warmer. That is warm enough to serve as the primary source of heat for the ESIF's office and lab spaces. After the water gives up its heat to the building, it circulates back, cooled, to the server racks, completing the loop. The HP Apollo system is designed so that maintenance on servers can be performed without opening any liquid connections, an important safety feature that keeps expensive electronics away from water.
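The heat carried by that loop follows from the standard relation Q = ṁ · c_p · ΔT. In the sketch below, the 75°F supply and 95°F return temperatures come from the article; the flow rate is an assumed, illustrative value, since the actual figure isn't given here:

```python
# Heat recovered from the warm-water loop: Q = m_dot * c_p * dT.
CP_WATER = 4186.0                # J/(kg*K), liquid water

def f_delta_to_kelvin(delta_f):
    """Convert a temperature *difference* in Fahrenheit to kelvin."""
    return delta_f * 5.0 / 9.0

flow_kg_per_s = 20.0             # ASSUMED flow rate, for illustration only
dT = f_delta_to_kelvin(95.0 - 75.0)   # ~11.1 K rise across the racks

heat_watts = flow_kg_per_s * CP_WATER * dT
print(f"recovered heat: {heat_watts / 1e3:.0f} kW")  # ~930 kW at this flow
```

At a flow in that range, the loop carries on the order of a megawatt of heat, consistent with the scale of a petaflops-class machine.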

But that's not all. The water heated by the data center is also piped under the front plaza and walkway outside the building to melt snow and ice. And that heat isn't wasted in the summer, either: it helps complete the loop for the cooling system that lowers the building's temperature during the hot days of June, July, and August.

Knowing Specs, Goals Helped Lower Cost

HP's goal was to demonstrate that liquid cooling can be simple; NREL's aim was to build an energy-efficient data center and integrate the supercomputer with the data center—and the potential energy savings—into the ESIF building as a whole. Before the pipes were routed, the team learned everything it needed to know about the dynamics of the building—the height of the ceilings, flow rates, supply and return temperatures, locations of the freight elevators, strength of the floor. Dube said the final product was enhanced because NREL knew exactly what it wanted, and that challenged HP to meet hard goals in a short timeline. "Because NREL was able to give us detailed specs like that, we were able to deliver a product far above our original target. Steve and NREL had really done some good analysis of where the industry needed to get."

One key time saver during installation was modular plumbing: 6-foot pipe lengths with flanges on either end. The pipes were pre-assembled and pre-tested at the factory, and they employ quick-disconnect stainless connectors and flexible hoses. "That allowed us to put in 18 racks in four days, instead of four weeks," Hammond said.

The HP Apollo system also has sophisticated control systems; it's not actually as simple as treating the supercomputer as a furnace. To serve as a heat source, the water leaving the data center must stay above a minimum temperature. The engineering allows a varying flow rate within the servers, which maintains a constant water output temperature whether the computer is running at full load or idle, while also accommodating a range of temperatures at the system's water inlet.
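One way to see why varying the flow rate holds the outlet temperature constant is the steady-state energy balance T_out = T_in + P / (ṁ · c_p); solving for ṁ gives the flow needed at any load and inlet temperature. The sketch below illustrates that relationship only; it is not HP's actual control logic, and the loads, temperatures, and minimum flow are illustrative assumptions:

```python
# Steady-state flow needed to hit a target outlet temperature.
CP_WATER = 4186.0   # J/(kg*K)

def required_flow(load_watts, t_in_c, t_out_target_c, min_flow=0.5):
    """Flow (kg/s) that yields the target outlet temperature."""
    dT = t_out_target_c - t_in_c
    if dT <= 0:
        raise ValueError("inlet must be cooler than the target outlet")
    return max(min_flow, load_watts / (CP_WATER * dT))

# Same ~35 C (95 F) outlet whether the machine is near idle or at full
# load, and across a range of inlet temperatures:
for load_w, t_in in [(100e3, 24.0), (1000e3, 24.0), (1000e3, 28.0)]:
    flow = required_flow(load_w, t_in, t_out_target_c=35.0)
    print(f"load={load_w/1e3:6.0f} kW  inlet={t_in:.0f} C  ->  flow={flow:5.1f} kg/s")
```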

Collaboration Was Key, Say HP and NREL

Dube praised the collaboration. "You always encounter hurdles in a project like this, but we would sit down with the NREL team and work out the challenges—'This is the metric we need to meet; now how do we make that happen?'"

Hammond said NREL is very pleased with the system. "We took delivery of the first racks in August of 2013, had the ribbon-cutting with the Energy Secretary in late September, passed the acceptance test in November, and were in production in January. That's an impressive timeline considering this is a first-of-its-kind system.

"HP got to showcase its state-of-the-art platform, and NREL has an energy-efficient, showcase data center that cost less to build than if we had built something less energy efficient," Hammond said. "We didn't have to look at how many years it would take us to recoup our investment. It cost less to build and less to operate from day one."

— Bill Scanlon

####


Contacts:
Media may contact:

Heather Lammers
303-275-4084

Copyright © National Renewable Energy Laboratory (NREL)
