Lenovo and Intel Advance Forward in High Performance Computing (HPC)

Lenovo and Intel are fueling the first liquid-cooled supercomputer at Harvard University, a broad adoption of high performance computing (HPC) and exascale technology that will enable discoveries in earthquake forecasting, disease-spread prediction, and star formation.

Earlier, Lenovo and Intel shared one of the first fruits of their HPC/AI collaboration by powering the supercomputers at the Simons Foundation’s Flatiron Institute. Today, they announced that Harvard University’s Faculty of Arts and Sciences Research Computing (FASRC) has chosen Lenovo DCG and Intel to power its first liquid-cooled supercomputer, the FASRC Cannon cluster, a large-scale HPC system that supports modeling and simulation in science, engineering, social science, public health, and education for more than 600 lab groups and over 4,500 Harvard researchers.

Harvard’s FASRC Cannon cluster is critical to the thousands of researchers working to improve earthquake aftershock forecasting using machine learning, model black holes using Event Horizon Telescope data, map invisible ocean pollutants, identify new methods for flu tracking and prediction, and develop new statistical analysis techniques to better understand the details of star formation.

FASRC sought to refresh its previous cluster, Odyssey, by increasing the performance of each individual processor, knowing that 25% of all calculations run on a single core. Liquid cooling is paramount to supporting today’s increased performance levels and the extra capacity needed to scale in the future.

The Cannon cluster includes 670 Lenovo ThinkSystem SD650 servers featuring Lenovo Neptune™ direct-to-node water cooling and Intel Xeon Platinum 8268 processors with 24 cores per socket and 48 cores per node. Each Cannon node is several times faster than any node in the previous cluster, with jobs like geophysics models of the Earth running 3-4 times faster than on the previous system. In its first four weeks of production operation, Cannon completed over 4.2 million jobs using more than 21 million CPU hours.
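As a rough back-of-envelope sketch using only the figures quoted above (the two-sockets-per-node count is an inference from 24 cores per socket and 48 cores per node):

```python
# Back-of-envelope arithmetic for the Cannon cluster, from the figures above.
nodes = 670                    # Lenovo ThinkSystem SD650 servers
cores_per_socket = 24          # Intel Xeon Platinum 8268
sockets_per_node = 2           # inferred: 48 cores per node / 24 cores per socket

total_cores = nodes * sockets_per_node * cores_per_socket
print(f"Total CPU cores: {total_cores:,}")  # 32,160

jobs = 4_200_000               # jobs completed in the first four weeks
cpu_hours = 21_000_000         # CPU hours consumed over the same period
print(f"Average CPU hours per job: {cpu_hours / jobs:.1f}")  # 5.0
```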

Scott Yockel, director of research computing at Harvard University’s Faculty of Arts and Sciences, said, “Science is all about iteration and repeatability. But iteration is a luxury that is not always possible in the field of university research because you are often working against the clock to meet a deadline.

“With the increased compute performance and faster processing of the Cannon cluster, our researchers now have the opportunity to try something in their data experiment, fail, and try again. Allowing failure to be an option makes our researchers more competitive.”