High Performance Computing at Berkeley Lab

BERKELEY LABORATORY RESEARCH COMPUTING
Berkeley Lab provides Lawrencium, a 1,148-node (20,436 computational cores) Linux cluster, to researchers who need computational resources to facilitate scientific discovery. The system, which consists of shared nodes and PI-contributed Condo nodes, is equipped with an InfiniBand interconnect and has access to a 1.8 PB parallel filesystem. Large-memory, GPU, and Intel Xeon Phi Knights Landing nodes are also available.

HIGH PERFORMANCE COMPUTING SERVICES

We offer comprehensive Linux cluster support, including pre-purchase consulting, procurement assistance, installation, and ongoing support for PI-owned clusters. Our HPC User Services consultants can also help you get your applications performing well. UC Berkeley PIs can also make use of our services through the very successful Berkeley Research Computing (BRC) program available through UC Berkeley Research IT. Altogether, the group manages over 47,000 compute cores and supports over 2,100 users across 267 research projects for the Lab and UC Berkeley.

NEWS
Oct 11, 2017 - Computing for Free - Announcing the PI Computing Allowance
The PI Computing Allowance (PCA) is a new program that provides up to 300K Service Units (SUs) of free compute time per fiscal year to all qualified Berkeley Lab PIs, where one SU is equivalent to one compute cycle on the latest standard hardware. The purpose of the PCA program is to reach out to areas of science where the use of computing is relatively new. Go here for details on how to apply.
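
As a rough illustration of budgeting against the allowance, here is a minimal Python sketch that estimates SU consumption. It assumes, hypothetically, that SUs accrue in proportion to core-hours used; the actual charge rates and any hardware scaling factors are defined in the PCA documentation.

    # Hypothetical SU budgeting sketch: assumes 1 SU per core-hour on standard
    # hardware, which may differ from the actual PCA charge rates.
    PCA_ALLOWANCE_SUS = 300_000  # annual PI Computing Allowance

    def job_su_cost(nodes, cores_per_node, hours, sus_per_core_hour=1.0):
        """Estimate the SUs consumed by one job under the assumed rate."""
        return nodes * cores_per_node * hours * sus_per_core_hour

    # Example: a 4-node job on 28-core nodes running for 24 hours.
    used = job_su_cost(nodes=4, cores_per_node=28, hours=24)
    print(f"Estimated SUs used: {used:,.0f}")
    print(f"Remaining allowance: {PCA_ALLOWANCE_SUS - used:,.0f}")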

Oct 1, 2017 - Lawrencium LR5 Cluster now available
We recently announced the availability of the new Lawrencium LR5 compute cluster, a 4,032-core Broadwell system consisting of 144 compute nodes, each equipped with dual 14-core Intel Broadwell processors. As before, researchers can purchase compute nodes to add to the Lawrencium Condo and receive free cluster support in exchange for their excess cycles.

Aug 1, 2017 - GPU Condo Pool now open for PIs
We have found that many users can take advantage of the inexpensive single-precision compute power of Nvidia's consumer GPU boards, so we are now taking orders from users who want to buy into our new GPU pool. The configuration is a 1U, dual Haswell-processor machine with four GTX 1080 Ti cards at a cost of $8,300. Interested users can contact Gary Jung <gmjung@lbl.gov> for more details.


Nov 14, 2016 - LBNL Singularity wins HPCWire's Editors' Choice Award
At SC16 in Salt Lake City, Tom Tabor, publisher of HPCWire, presented Greg Kurtzer with HPCWire's 2016 Editors' Choice Award for his work on Singularity, recognized as one of the Top 5 New Technologies to Watch. These annual awards are highly coveted as prestigious recognition of achievement by the HPC industry and community. Staffer Krishna Muriki, along with Kurtzer, also ran two standing-room-only Singularity tutorials at the SC16 Intel Developers Conference.

Nov 11, 2016 - Jupyter Notebook now available on Lawrencium
Jupyter Notebook is a useful web application that allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. We've extended JupyterHub so that it can now leverage the Lawrencium cluster in order to support code needing high-performance computing and to reduce turnaround time. Lawrencium users can go to our online documentation to see how to get started.
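
As a quick sanity check once a notebook is running, a cell like the following (a hypothetical example, which assumes the kernel was launched inside a SLURM job on a compute node) reports where the kernel is running and how many cores it can see:

    # Hypothetical notebook cell: inspect where the Jupyter kernel is running.
    # Assumes the kernel was launched inside a SLURM job on a compute node.
    import os
    import socket
    import multiprocessing

    print("Hostname:       ", socket.gethostname())
    print("CPU cores seen: ", multiprocessing.cpu_count())
    print("SLURM job ID:   ", os.environ.get("SLURM_JOB_ID", "not in a SLURM job"))
    print("SLURM partition:", os.environ.get("SLURM_JOB_PARTITION", "not set"))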


Oct 20, 2016 - LBNL Singularity featured in HPCWire
This week's issue of HPCWire features Singularity, a user-space container solution designed for HPC. Developed by LBNL architect Greg Kurtzer, Singularity is a platform to support users who have system environment needs different from what is typically provided on HPC resources. Users can develop on their Ubuntu laptop and then package up and run their Singularity container on a Linux cluster running a different operating system. What is different about Singularity is that it leverages a workflow and security model that makes it viable on multi-tenant HPC resources without requiring any modifications to the scheduler or system architecture. Read more.
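
To make that workflow concrete, here is a minimal, hypothetical sketch (in Python, wrapping the `singularity exec` command line) of running a command inside a container image on the cluster; the image name is a placeholder for whatever the user built on their own machine.

    # Hypothetical illustration of the workflow described above: an image built
    # on a user's own machine ("mycontainer.img" is a placeholder) is executed
    # on the cluster, so the software inside the container runs regardless of
    # the host operating system.
    import subprocess

    def run_in_container(image, *command):
        """Run a command inside a Singularity container via `singularity exec`."""
        result = subprocess.run(
            ["singularity", "exec", image, *command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # Report the OS the container sees, which can differ from the host's OS.
    print(run_in_container("mycontainer.img", "cat", "/etc/os-release"))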

Apr 3, 2016 - HPCS at Lustre Users Group Meeting 2016
HPCS Storage Lead John White will be giving a talk this week at the annual Lustre Users Group 2016 conference. John's presentation will provide an introduction to the challenges involved in providing parallel storage to a condo-style HPC infrastructure.

Mar 22, 2016 - Run on Lawrencium for Free - New Low Priority QoS
We are pleased to announce the “Low Priority QoS (Quality of Service)” pilot program, which allows all users to run jobs requesting up to 64 nodes and up to 3 days of runtime on Lawrencium cluster resources at no charge when running at a lower priority. Users should check the Lawrencium user page for specific instructions for submitting jobs to the new QoS.
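
For users who script their submissions, a sketch along these lines illustrates passing a QoS to sbatch from Python; the QoS and account names below are placeholders, so take the actual values from the Lawrencium user page.

    # Minimal sketch of submitting a job to a low-priority QoS with sbatch.
    # The QoS name "lr_lowprio" and account "ac_myproject" are placeholders;
    # see the Lawrencium user page for the actual values.
    import subprocess

    def submit_lowprio(script_path, nodes=4, walltime="3-00:00:00"):
        cmd = [
            "sbatch",
            "--qos=lr_lowprio",        # placeholder low-priority QoS name
            "--account=ac_myproject",  # placeholder account
            f"--nodes={nodes}",
            f"--time={walltime}",      # up to 3 days under the pilot program
            script_path,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout.strip()   # e.g. "Submitted batch job <id>"

    print(submit_lowprio("my_job.sh", nodes=8))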

Feb 25, 2016 - Meet Climate Scientist Jennifer Holm
Climate scientist Jennifer Holm of the Climate and Ecosystem Sciences Division uses Lawrencium to run simulations for the DOE-funded NGEE Tropics project, which studies how tropical forests will respond to a changing climate. See the Lab's Facebook post here.

FEATURED PROJECTS

Center for Financial Technology
We have partnered with PI John Wu of the Computational Research Division to build a 128-node, 3,072-core Haswell cluster to support a collaboration between the Lab and Delaware Life. The cluster is being used to investigate modeling of financial markets.

Molecular Foundry
We recently put a new 175-node, 4,224-core Haswell cluster, ETNA, into production for the Foundry Theory Group led by PI David Prendergast.

Green Computing
Under a $1M grant from the California Energy Commission, we are working with the Lab's Energy Technologies Area and Asetek Data Center Cooling on a large-scale demonstration of direct-to-chip liquid cooling for some of our clusters. Installation is slated for late 2017.

The San Diego Supercomputer Center (SDSC) is making major high-performance computing resources available to the UC and Lab community through a new introductory program called HPC@UC. Researchers can apply for awards of up to 1M core-hours on SDSC's new Comet supercomputer.

Big Data at the ALS
We built a data pipeline, using a fast 400 MB/s CCD, a 78,392-core GPU cluster, and a 260 TB data transfer node with Globus Online, for PI David Shapiro to perform 3D X-ray diffraction image reconstruction at the new COSMIC Beamline 7.0.1. Read more here about how the project set a microscopy record by achieving the highest resolution ever.