High Performance Computing at Berkeley Lab

Berkeley Lab provides Lawrencium, a 1,148-node (20,436 computational cores) Linux cluster, to its researchers needing access to computation to facilitate scientific discovery. The system, which consists of shared nodes and PI-contributed Condo nodes, is equipped with an InfiniBand interconnect and has access to a 1.8 PB parallel filesystem storage system. Large-memory, GPU, and Intel Xeon Phi Knights Landing nodes are also available.


We offer comprehensive Linux cluster support, including pre-purchase consulting, procurement assistance, installation, and ongoing support for PI-owned clusters. Our HPC User Services consultants can also help you get your applications performing well. UC Berkeley PIs can also make use of our services through the very successful Berkeley Research Computing (BRC) program available through UC Berkeley Research IT. Altogether, the group manages over 47,000 compute cores and over 2,100 users across 267 research projects for the Lab and UC Berkeley.

Nov 13, 2017 - LBNL Singularity wins HPCWire Editors' and Readers' Choice Awards
LBNL's Singularity container software has won two highly coveted HPCWire awards, announced at Supercomputing 2017 this week. In addition to winning the Editors' Choice award for the second year in a row, Singularity was recognized with the Readers' Choice award for Best HPC Programming Technology.

Oct 11, 2017 - Computing for Free - Announcing the PI Computing Allowance
The PI Computing Allowance (PCA) is a new program that provides up to 300K Service Units (SUs) of free compute time per fiscal year to all qualified Berkeley Lab PIs, where one SU is equivalent to one compute cycle on the latest standard hardware. The purpose of the PCA program is to reach out to areas of science where the use of computing to accomplish science is relatively new. Go here for details on how to apply.

Oct 1, 2017 - Lawrencium LR5 Cluster now available
We recently announced the availability of the new Lawrencium LR5 compute cluster, a 4,032-core Broadwell system consisting of 144 compute nodes equipped with dual Intel 14-core Broadwell processors. As before, researchers can purchase compute nodes to add to the Lawrencium Condo and will receive free cluster support in exchange for their excess cycles.

Aug 1, 2017 - GPU Condo Pool now open for PIs
We have found that many users can take advantage of the inexpensive single-precision compute power of NVIDIA's consumer GPU boards, so we are now taking orders from users who want to buy into our new GPU pool. The configuration is a 1U, dual-Haswell-processor machine with four GTX 1080 Ti cards at a cost of $8,300. Interested users can contact Gary Jung <gmjung@lbl.gov> for more details.

Nov 14, 2016 - LBNL Singularity wins HPCWire's Editors' Choice Award
At SC16 in Salt Lake City, Tom Tabor, publisher of HPCWire, presented Greg Kurtzer with HPCWire's 2016 Editors' Choice Award for one of the Top 5 New Technologies to Watch, recognizing his work on Singularity. These annual awards are highly coveted as prestigious recognition of achievement by the HPC industry and community. Staffer Krishna Muriki, along with Kurtzer, also ran two standing-room-only Singularity tutorials at the SC16 Intel Developers Conference.

Nov 11, 2016 - Jupyter Notebook now available on Lawrencium
Jupyter Notebook is a useful web application that allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. We've extended JupyterHub so that it can now leverage the Lawrencium cluster resource in order to support code needing high-performance computing and to reduce turnaround time. Lawrencium users can go to our online documentation to see how to get started.

Oct 20, 2016 - LBNL Singularity featured in HPCWire
This week's issue of HPCWire features Singularity, a user-space container solution designed for HPC. Developed by LBNL architect Greg Kurtzer, Singularity is a platform to support users who have different system environment needs than what is typically provided on HPC resources. Users can develop on their Ubuntu laptop and then package up and run their Singularity container on a Linux cluster running a different operating system. What is different about Singularity is that it leverages a workflow and security model that makes it viable on multi-tenant HPC resources without requiring any modifications to the scheduler or system architecture. Read more.
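As an illustration of that workflow, a minimal Singularity definition file might look like the sketch below. The base image and installed packages are placeholders, and the exact recipe syntax and build commands vary between Singularity releases, so consult the Singularity documentation for your installed version:

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # commands run once, at build time, inside the container
    apt-get update && apt-get install -y python3

%runscript
    # command executed when the container image is run
    exec python3 "$@"
```

A user would typically build the image with root privileges on their own machine, copy the resulting image file to the cluster, and then run it unprivileged there with `singularity exec` or `singularity run`, with no changes needed on the cluster side.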

Apr 3, 2016 - HPCS at Lustre Users Group Meeting 2016
HPCS Storage Lead John White will be giving a talk this week at the annual Lustre Users Group 2016 conference. John's presentation will provide an introduction to the challenges involved in providing parallel storage to a condo-style HPC infrastructure.

Mar 22, 2016 - Run on Lawrencium for Free - New Low Priority QoS
We are pleased to announce the "Low Priority QoS (Quality of Service)" pilot program, which allows all users to run jobs requesting up to 64 nodes and up to 3 days of runtime on Lawrencium cluster resources at no charge when running at a lower priority. Users should check the Lawrencium user page for specific instructions for submitting jobs to the new QoS.
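For illustration, a Slurm batch script targeting the low-priority QoS might look like the following sketch. The partition, QoS, and account names here are placeholders, not the official values; the Lawrencium user page has the actual names to use:

```shell
#!/bin/bash
#SBATCH --job-name=lowprio-test
#SBATCH --partition=lr5            # placeholder partition name
#SBATCH --qos=lr_lowprio           # placeholder low-priority QoS name
#SBATCH --account=ac_myproject     # placeholder project account
#SBATCH --nodes=4                  # pilot allows requests of up to 64 nodes
#SBATCH --time=3-00:00:00          # pilot allows up to 3 days of runtime

srun ./my_application
```

The only change from a normal charged job is the QoS line; node-count and walltime requests simply must stay within the pilot's 64-node, 3-day limits.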

Feb 25, 2016 - Meet Climate Scientist Jennifer Holm
Climate scientist Jennifer Holm of the Climate and Ecosciences Division uses Lawrencium to run simulations for the DOE-funded NGEE Tropics project, which studies how tropical forests are going to respond to a changing climate. See the Lab's Facebook post here.


ALICE (A Large Ion Collider Experiment): LBNL has recently become one of the Tier 2 computing sites for the Worldwide LHC Computing Grid, providing computing and data storage for the ALICE detector project under project lead Jeff Porter.

Center for Financial Technology: We partnered with PI John Wu of the Computational Research Division to build a 128-node, 3,072-core Haswell cluster to support a collaboration between the Lab and Delaware Life. The cluster is being used to investigate modeling of financial markets.

Globus for Google Drive: Using Google Drive for storage can be an exercise in babysitting data transfers. We partnered with Globus to develop a connector that makes big-data transfers to and from Google Drive simple and painless.

Molecular Foundry: We recently put a new 175-node, 4,224-core Haswell cluster, ETNA, into production for the Foundry Theory Group led by PI David Prendergast.

HPC@UC: The San Diego Supercomputer Center (SDSC) is making major high-performance computing resources available to the UC and Lab community through a new introductory program called HPC@UC. Researchers can apply for awards of up to 1M core-hours on SDSC's new Comet supercomputer.

Big Data at the ALS
We built a data pipeline using a fast 400 MB/s CCD, a 78,392-core GPU cluster, and a 260 TB data transfer node with Globus Online for PI David Shapiro to perform 3D X-ray diffraction image reconstruction at the new COSMIC Beamline 7.0.1. Read more here about how their project set a microscopy record by achieving the highest resolution ever.