Occam is a 27-node IBM blade cluster. Each node contains dual IBM PowerPC 970 processors and 2.5 GB of memory. Since Hemisphere and Occam are no longer under warranty, we are not adding new accounts at this time.
All accounts at the CSC have access to Occam. Simply use an SSH client to connect to occam.cs.colorado.edu. To move files to or from Occam, use an SCP client to connect to fileserver.cs.colorado.edu.
For more information on shared resources, such as where to store your files, see the Getting Started guide.
Important Note: Occam is an IBM PowerPC 970 system that supports compiling in both 32-bit and 64-bit modes. Make sure to specify the correct compiler options to compile and link in the desired mode. In addition, make sure to link only against libraries built for that bit mode. In most cases, the IBM xlf and xlc compilers produce the highest-performance 64-bit code.
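As a sketch of selecting the bit mode explicitly, the following commands assume a hypothetical source file hello.c; -m32/-m64 are the gcc mode flags, and -q64 is the corresponding IBM XL flag:

```shell
# Sketch: choosing 32-bit vs 64-bit mode (hello.c is a hypothetical source file).

# gcc builds:
gcc -m32 -o hello32 hello.c
gcc -m64 -o hello64 hello.c

# IBM XL builds (-q64 selects 64-bit mode):
xlc -q64 -o hello64 hello.c
```

Mixing modes at link time (for example, a -m64 object against a 32-bit library) will fail, which is why the bit level must match across compiler options and libraries.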
Additional reports for Occam can be found on Ganglia.
Online Date: October 2004
Total Compute Nodes: 27
Processors per Node: Dual IBM PowerPC 970
RAM per Node: 2.5 GB
The following common applications are installed. If you would like another application installed for system-wide use, please contact the administration team.
The following common libraries are installed. If you would like another library installed, please contact the administration team.
The following compilers are available on Occam in the following locations:
The gcc compiler produces 32-bit code by default. You must specify the correct gcc compiler options to use 64-bit mode. Common options include the following:
LD="ld -m elf64ppc -L/usr/lib64 -L/usr/X11R6/lib64/"
AS="gcc -c -m64"
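A minimal Makefile fragment pulling these options together might look like the following; the CC and CFLAGS values are assumptions, while LD and AS are the options listed above:

```make
# Hypothetical Makefile fragment for a 64-bit gcc build on Occam.
# CC and CFLAGS are assumed; LD and AS are the documented 64-bit options.
CC     = gcc
CFLAGS = -m64 -O2
LD     = ld -m elf64ppc -L/usr/lib64 -L/usr/X11R6/lib64/
AS     = gcc -c -m64

prog: prog.o
	$(CC) $(CFLAGS) -o prog prog.o
```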
Occam supports MPI via the MPICH and MPICH2 libraries, with both libraries running over Ethernet.
To use MPICH, use the following compilation directives:
Compilation Headers: -I /opt/mpich-version/include
Linking Libraries: -L /opt/mpich-version/lib -lmpich
Note that you must specify the correct version of MPICH for the compiler and bit level you are using, such as xlf, gcc32, or gcc64.
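As a sketch, compiling and linking a hypothetical MPI program (hello_mpi.c) in 64-bit mode with gcc would combine the directives above; "mpich-version" is kept as a placeholder for the installed build that matches your compiler and bit level:

```shell
# Sketch: building a hypothetical MPI program against MPICH in 64-bit mode.
# Substitute the correct mpich-version directory for your compiler/bit level.
gcc -m64 -I /opt/mpich-version/include -o hello_mpi hello_mpi.c \
    -L /opt/mpich-version/lib -lmpich
```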
To run an MPICH program on Occam, use the following line in your PBS batch script file:
/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE <program and arguments>
To avoid difficulties, make sure to specify the full path to 'mpirun'. If you run a program compiled against one MPI library with the mpirun shipped with another MPI implementation, the program may fail to run, or each process may execute as the only member of a one-task communicator.
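A minimal PBS batch script sketch is shown below; the job name, node and processor counts, and program name (./hello_mpi) are assumptions, and "mpich-version" remains a placeholder for the installed MPICH build:

```shell
#!/bin/sh
# Minimal PBS batch script sketch for an MPICH job on Occam.
# Job name, nodes/ppn, and ./hello_mpi are assumed values.
#PBS -N hello_mpi
#PBS -l nodes=4:ppn=2

cd $PBS_O_WORKDIR
/opt/mpich-version/bin/mpirun -machinefile $PBS_NODEFILE ./hello_mpi
```

Note the full path to mpirun, as recommended above, so the correct MPI implementation is used.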
The Occam nodes are controlled using the PBS batch scheduling system. Because Occam is less utilized than Hemisphere, the queues have no resource or walltime limitations. The following queues are available: