Laboratory Research Computing - Systems

Overview

Lawrencium is Berkeley Lab's institutional computational resource for Berkeley Lab PIs and their collaborators. It consists of 4 cluster pools, LR2, LR3, LR4, and MAKO, each equipped with a high-performance, low-latency Infiniband interconnect, providing a stable, high-performing resource for running a wide diversity of scientific applications. Currently, all pools are coupled with a BlueArc high-performance NFS storage server that provides home directory storage and a Data Direct Networks 890TB Lustre parallel filesystem that provides high-performance scratch space to users.

Compute Nodes (LR4)
LR4 is the latest addition to the Lawrencium condo cluster, consisting of 108 24-core Haswell compute nodes connected with a Mellanox FDR Infiniband fabric. Each node is a Dell PowerEdge C6320 server blade equipped with two Intel Xeon 12-core Haswell processors (24 cores in all) on a single board configured as an SMP unit. The core frequency is 2.3GHz and supports 16 floating-point operations per clock period, for a peak performance of 883 GFLOPS/node. Each node contains 64GB of 2133MHz memory.
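
As a quick check on the figures above, per-node peak performance is simply cores x clock rate x floating-point operations per clock. The short Python sketch below is illustrative only (not an LRC-provided tool); the peak_gflops helper is a name introduced for this example.

    def peak_gflops(cores, ghz, flops_per_cycle):
        """Theoretical peak performance of one node, in GFLOPS."""
        return cores * ghz * flops_per_cycle

    # LR4 Haswell node: 24 cores x 2.3 GHz x 16 FLOPs per clock
    print(peak_gflops(24, 2.3, 16))  # 883.2, quoted above as 883 GFLOPS/node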

Compute Nodes (LR3)
LR3 is an addition to the Lawrencium condo cluster consisting of 172 16-core compute nodes and 36 20-core compute nodes connected with a Mellanox FDR Infiniband fabric. Each 16-core node is a Dell PowerEdge C6220 server blade equipped with two Intel Xeon 8-core Sandy Bridge processors (16 cores in all) on a single board configured as an SMP unit. The core frequency is 2.6GHz and supports 8 floating-point operations per clock period, for a peak performance of 20.8 GFLOPS/core or 332.8 GFLOPS/node. Each node contains 64GB of 1600MHz memory. The newer 20-core C6220 nodes are similar, but use two 10-core Ivy Bridge processors running at 2.5GHz and have 64GB of 1866MHz memory.
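
The same arithmetic applies to both LR3 node types. The check below is illustrative only, and it assumes the Ivy Bridge nodes sustain the same 8 floating-point operations per clock as Sandy Bridge (an assumption not stated above).

    # Sandy Bridge node: 16 cores x 2.6 GHz x 8 FLOPs per clock
    print(16 * 2.6 * 8)   # 332.8 GFLOPS/node (20.8 GFLOPS/core)

    # Ivy Bridge node: 20 cores x 2.5 GHz x 8 FLOPs per clock (assumed)
    print(20 * 2.5 * 8)   # 400.0 GFLOPS/node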

Compute Nodes (LR2)
LR2 is an additional pool of 170 compute nodes intended to augment LR1 to meet the increased computational demand from LBNL researchers. Nodes are a combination of IBM iDataPlex dx360 M2, HP SL390, and Dell C6100 servers, each equipped with two Intel Xeon hex-core 64-bit Westmere processors (12 cores per node) on a single board configured as an SMP unit. The core frequency is 2.66GHz and supports 4 floating-point operations per clock period, for a peak performance of 10.64 GFLOPS/core or 128 GFLOPS/node. Each node contains 24GB of NUMA memory connected via triple QuickPath Interconnect (QPI) channels. LR2 has a peak performance rating of 20TF, similar to LR1, but its better computational efficiency makes it much faster in practice.
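
A similar illustrative check for the Westmere nodes and for the aggregate peak across all 170 nodes:

    # Westmere node: 12 cores x 2.66 GHz x 4 FLOPs per clock
    print(12 * 2.66 * 4)              # 127.68, quoted above as 128 GFLOPS/node

    # Aggregate over 170 nodes, in TFLOPS
    print(170 * 12 * 2.66 * 4 / 1000)  # ~21.7 TF, consistent with the ~20TF rating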

Compute Nodes (LR1) - Retired 9/30/2014

LR1 is the first compute pool of the Lawrencium cluster, consisting of 228 compute nodes. Each node is a Dell PowerEdge 1950 equipped with two Intel Xeon quad-core 64-bit Harpertown processors (8 cores in all) on a single board configured as an SMP unit. The core frequency is 2.66GHz and supports 4 floating-point operations per clock period, for a peak performance of 10.64 GFLOPS/core or 85 GFLOPS/node. Each node contains 16GB of memory. The memory subsystem has a dual-channel 1333MHz Front Side Bus connecting to 667MHz Fully Buffered DIMMs. Both processors share access to the memory controllers in the memory controller hub (MCH, or North Bridge).

Data Transfer (data-xfer)
The data transfer node is an LRC server dedicated to performing transfers between LRC data storage resources, such as the LRC home directory and scratch parallel filesystem, and remote storage resources at other sites, including HPSS at NERSC (National Energy Research Scientific Computing Center). This server is managed (and monitored for performance) as part of a collaborative effort between ESnet and LBLNet to enable high-performance data movement over the high-bandwidth 10Gb ESnet wide-area network (WAN).
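
A minimal sketch of staging data through the transfer node from a Python script, using rsync over SSH via subprocess. The hostname, username, and directory paths below are placeholders, not actual LRC endpoints; consult LRC documentation for the real login name and filesystem paths.

    import subprocess

    # Placeholder values -- substitute the actual data transfer hostname,
    # username, and directory paths published by LRC.
    DATA_XFER_HOST = "username@data-xfer.example.gov"
    LOCAL_DIR = "/path/to/local/results/"
    REMOTE_DIR = DATA_XFER_HOST + ":/path/to/scratch/results/"

    # rsync preserves timestamps and permissions (-a), reports progress (-v),
    # and --partial lets an interrupted WAN transfer resume where it left off.
    subprocess.run(["rsync", "-av", "--partial", LOCAL_DIR, REMOTE_DIR], check=True)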