How to get an Account on Lawrencium

The Lawrencium cluster is open to all Berkeley Lab researchers needing access to high performance computing. Research collaborations are also welcome provided that there is an LBNL PI.

LBNL PIs wanting to obtain access to Lawrencium for their research project will need to complete the Requirements Survey and send it, along with a list of anticipated users, to LRC@lbl.gov. A unique group name will be created for the project and its associated users; this group name will be used to set up allocations and report usage. Any LBNL researcher who wants a Lawrencium cluster account associated with an existing project can open a ticket at http://service-now.lbl.gov/. In the ticket, please specify the existing Lawrencium project and the PI's name.

Once HPCS has received the PI/project information, it will be added to the new user form so that new users can associate themselves with an established PI/project when they request an account. Users should go to http://service-now.lbl.gov/ and log in using their LBNL LDAP username and password to make the request. Berkeley Lab PIs can also use the form to submit account requests for non-LBNL users associated with their project.

Accounts are added on a first-come, first-served basis upon approval by the PI for the project. For security reasons, access to Lawrencium is through the use of one-time password tokens. Users will be required to complete our user agreement in order to receive their one-time password token generator and their account.

Questions regarding setting up new projects or requesting new accounts can be directed to LRC@lbl.gov.

How to get an Account on the UX8 and PROJECTS SVN servers

UX8 is a 48-core, 256GB general-purpose server available to users for development of their HPC applications. It features the same programming environment as the Lawrencium cluster and has licenses for Mathematica. UX8 users can also access our PROJECTS Subversion (SVN) server to manage their software development. Users should go to http://service-now.lbl.gov/ and log in using their LBNL LDAP username and password to make the request. Berkeley Lab PIs can also use the form to submit account requests for non-LBNL users associated with their project.


Allocations

Computer Time: We are not currently using an allocation process to assign compute time to individual projects. Instead, usage and priority are regulated by a scheduler policy intended to provide a level of fairness across users. If demand exceeds supply, a committee of scientific division representatives will review the need for allocations.

Cost: At this time, there is a charge of $0.01 per SU (Service Unit) for compute cycles. Newer hardware is charged at the rate of 1 SU per processor core-hour. Compute cycles on older hardware are charged at less than 1 SU per core-hour to account for the differences in compute performance. Currently, the charges are as follows:

 System   Rate                  Compute Node Description
 LR4      1 SU per core-hr      Intel Haswell 24-core nodes
 LR3      0.75 SU per core-hr   Intel SandyBridge 16-core and IvyBridge 20-core nodes
 MAKO     0.50 SU per core-hr   Intel Nehalem 8-core nodes
 LR2      0.50 SU per core-hr   Intel Westmere 12-core nodes

There is a nominal charge of $25/mo/user for the use of Lawrencium and UX8 to cover the costs of home directory storage and backups. Users should note that their jobs are allocated resources by the node; for example, a job running on a 12-core LR2 node is charged for all 12 cores of that node (12 cores x 0.50 SU per core-hr = 6 SUs/hr) even if it uses fewer. Similarly, a job running on a 20-core LR3 node will be charged 20 cores x 0.75 SU per core-hr = 15 SUs/hr, or $0.15/hr, for the use of that LR3 node. Account fees and CPU usage will appear as LRCACT and LRCCPU in the LBL Cost Browser.
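As a quick sanity check on the rates above, the per-node charge works out to cores x rate x hours. The short Python sketch below does that arithmetic using the rates and node sizes listed on this page; the function and variable names are purely illustrative and not part of any HPCS tooling.

    # Sketch: estimating SU and dollar charges for whole-node jobs,
    # using the per-core-hour rates and node core counts from this page.
    RATE_PER_SU_USD = 0.01

    # system -> (cores per node, SUs charged per core-hour)
    SYSTEMS = {
        "LR4":  (24, 1.00),
        "LR3":  (20, 0.75),   # IvyBridge 20-core nodes; SandyBridge nodes have 16 cores
        "MAKO": (8,  0.50),
        "LR2":  (12, 0.50),
    }

    def job_cost(system, nodes, hours):
        """Return (SUs charged, cost in USD) for a whole-node allocation."""
        cores, rate = SYSTEMS[system]
        sus = nodes * cores * rate * hours
        return sus, sus * RATE_PER_SU_USD

    # Example from the text: one 20-core LR3 node for one hour -> 15 SUs, $0.15
    sus, dollars = job_cost("LR3", nodes=1, hours=1.0)
    print("LR3, 1 node, 1 hr: %.0f SUs ($%.2f)" % (sus, dollars))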

Storage: Home directory space will have a quota set at 10GB per user. Users may also use the /clusterfs/lawrencium shared filesystem, which does not have a quota; this file system is intended for short-term use and should be considered volatile. Backups are not performed on this file system. Data is subject to a periodic purge policy wherein any files that have not been accessed within the last 14 days will be deleted. Users should make sure to back up these files to external permanent storage as soon as they are generated on the cluster.
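Because the purge is based on access time, it can help to list files whose last access already exceeds the 14-day window so they can be copied out before the next purge run. The Python sketch below is only illustrative: the 14-day window and the /clusterfs/lawrencium path come from this page, while the assumption of a per-user subdirectory is ours, so adjust the path to wherever your data actually lives.

    # Sketch: flag files not accessed within the purge window.
    import os
    import time

    PURGE_DAYS = 14                                              # purge window described above
    SCRATCH = "/clusterfs/lawrencium"                            # shared scratch filesystem
    USER_DIR = os.path.join(SCRATCH, os.environ.get("USER", ""))  # assumed per-user layout

    cutoff = time.time() - PURGE_DAYS * 24 * 3600

    for root, dirs, files in os.walk(USER_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    print(path)          # candidate for backup before the purge
            except OSError:
                pass                     # file may have been removed mid-walk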

Lustre: A Lustre parallel file system is also available to Lawrencium cluster users. The file system is built with 4 OSS and 15 OST servers and has a capacity of 400 TB. The default striping is set to 4 OSTs with a stripe size of 1 MB. All Lawrencium cluster users will receive a directory created under /clusterfs/lawrencium with the above default stripe values set. This is a scratch file system, so it is mainly intended for storing large input or output files for running jobs and for all parallel I/O needs on the Lawrencium cluster. It is intended for short-term use and should be considered volatile. Backups are not performed on this file system. Data is subject to a periodic purge policy wherein any files that have not been accessed within the last 14 days will be deleted. Users should make sure to back up these files to external permanent storage as soon as they are generated on the cluster.
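To inspect, or change, how a directory is striped across the OSTs, the standard Lustre lfs utility can be used; the Python wrapper below is only a convenience sketch. The 4-OST / 1 MB defaults are taken from this page, while the lfs options shown (-c for stripe count, -S for stripe size) should be confirmed against man lfs on the cluster before relying on them.

    # Sketch: viewing and setting Lustre striping by shelling out to lfs.
    import subprocess

    def show_stripe(path):
        """Print the current Lustre stripe layout of a file or directory."""
        subprocess.run(["lfs", "getstripe", path], check=True)

    def set_stripe(path, count=4, size="1M"):
        """Set stripe count/size on a directory; new files created there inherit it."""
        subprocess.run(["lfs", "setstripe", "-c", str(count), "-S", size, path],
                       check=True)

    if __name__ == "__main__":
        scratch_dir = "/clusterfs/lawrencium/your_username"   # hypothetical path
        show_stripe(scratch_dir)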

Closing User Accounts: The PI or main contact for the project is responsible for notifying HPCS to close user accounts and for the disposition of the user's software, files, and data. In some cases, users share software and data from their home directory and others may depend on them. For this reason, account terminations can only be requested by the PI, the main contact, or the user of the account. User accounts are not automatically deactivated upon termination of an employee, because many people change their employment status but remain engaged with the project.


Acknowledgements

Please acknowledge Lawrencium in your publications. A sample statement is:

This research used the Lawrencium computational cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory (Supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231)