Hemisphere is a 64-node cluster that provided the bulk of the CSC's processing capacity from 2003 to 2007. Each node contains dual 2.4 GHz Intel Xeon processors and 2 GB of memory. Because Hemisphere and Occam are no longer under warranty, we are not adding new accounts at this time.
All CSC accounts have access to Hemisphere. Use an SSH client to connect to hemisphere.cs.colorado.edu. To move files to or from Hemisphere, use an SCP client with the same hostname.
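A typical session looks like the following. The username jdoe and the file names are placeholders for illustration, not real accounts or paths:

```shell
# Log in to Hemisphere over SSH.
ssh jdoe@hemisphere.cs.colorado.edu

# Copy a file from the local machine to your home directory on Hemisphere.
scp input.dat jdoe@hemisphere.cs.colorado.edu:~/

# Copy results back from Hemisphere to the current local directory.
scp jdoe@hemisphere.cs.colorado.edu:~/results.out .
```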
For more information on shared resources, such as where to store your files, see the Getting Started guide.
Online Date: March 2003
Total Compute Nodes: 64
Processors per Node: Dual Intel Xeon 2.4 GHz
RAM per Node: 2.0 GB
Additional reports for Hemisphere can be found on Ganglia.
The following common applications are installed. If you would like another application installed for system-wide use, please contact the administration team.
Debugging and Performance
Other Popular Applications
The following common libraries are installed. If you would like another library installed, please contact the administration team.
NCAR Command Language (NCL): /opt/ncl/lib
The following compilers are available on Hemisphere in the following locations:
Intel Compilers 9.1 (default)
Portland Group Compilers (default)
Hemisphere supports MPI using MPICH over Gigabit Ethernet. (Hemisphere used to have a Dolphin SCI torus with Scali MPI for parallel processing, but this was retired when the machine exceeded its three-year maintenance contract.)
To use MPICH, first select one of the MPICH builds under /opt. The gcc build usually works best, but if you have complex code that links C, C++, F77, and F90, you may need the build created with the same compiler you are using. Then use the following compilation directives:
Linking Libraries: -L/opt/mpich-version/lib -lmpich
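Putting the directives together, a compile might look like the following. The build directory name mpich-gcc and the source file name are illustrative; substitute whatever you actually find under /opt:

```shell
# Compile an MPI program against a specific MPICH build, passing the
# include and library flags explicitly (mpich-gcc is a placeholder).
gcc -o hello hello.c -I/opt/mpich-gcc/include -L/opt/mpich-gcc/lib -lmpich

# Alternatively, use the wrapper compiler shipped with that build,
# which supplies the same flags automatically.
/opt/mpich-gcc/bin/mpicc -o hello hello.c
```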
To run a MPICH program on Hemisphere, use the following line in your PBS batch script file:
/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE <program and arguments>
To avoid difficulties, make sure to specify the full path to mpirun. If you run a program compiled against one MPI library with the mpirun shipped with a different MPI implementation, the program may fail to start, or every process may execute as the sole member of a one-task communicator.
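A minimal batch script tying this together might look like the following. The job name, node count, and walltime are assumptions for illustration; adjust them to your job and to the queues described below:

```shell
#!/bin/sh
# Example PBS batch script (illustrative values throughout).
#PBS -N mpi_hello
#PBS -l nodes=4:ppn=2
#PBS -l walltime=00:10:00

# PBS starts the job in your home directory; change to the
# directory the job was submitted from.
cd $PBS_O_WORKDIR

# Launch with the full path to the mpirun matching the MPICH build
# the program was compiled against (mpich-version is a placeholder).
/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE ./hello
```

Submit the script with qsub, e.g. `qsub job.pbs`.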
The Hemisphere nodes are controlled by the PBS batch scheduling system. The queues are configured to support a large user community with different job types. In particular, we support users debugging code who need short turnaround on small jobs, users running large parallel jobs, and users running large numbers of single-processor jobs for parameter studies.
To meet the demands of this job mix, the following queues are available: