
  • The cluster, known as Hydra, is made of
    1. two login nodes,
    2. one front-end node,
    3. a queue manager/job scheduler (the UNIVA Grid Engine or UGE), and
    4. a slew of compute nodes.


  • From either login node you submit and monitor your job(s) via the queue manager/scheduler.
  • The queue manager/job scheduler is the Grid Engine, known simply as GE or UGE.

  • The Grid Engine runs on the front-end node, hence the front-end node should not be used as a login node.
    • There is no reason for users to ever have to log on to Hydra-5.
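
Job submission through the Grid Engine follows the usual qsub/qstat pattern; the sketch below is illustrative (the job name and options are examples, not site policy — check the cluster documentation for required queue and resource options):

```shell
# Create a minimal UGE job script (options here are illustrative)
cat > hello.job <<'EOF'
#!/bin/sh
#$ -N hello            # job name
#$ -cwd                # run in the current working directory
#$ -j y                # merge stdout and stderr into one file
echo "running on $(hostname)"
EOF
# From a login node, you would then submit and monitor it with:
#   qsub hello.job
#   qstat -u $USER
```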


  • All the nodes (login, front-end and compute nodes) are interconnected
    • via Ethernet (at 10Gbps, aka 10GbE), and
    • via InfiniBand (at 40Gbps or higher, aka IB).

  • The disks are mounted off three types of dedicated devices:
    1. A NetApp filer for /home, /data, and /pool (via 10GbE),
    2. A GPFS for /scratch (via IB), and
    3. A NAS for /store (via 10GbE), near-line storage available only on some nodes.
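
You can check which device each of these filesystems is served from, and how full it is, with df; this loop simply walks the mount points listed above (on nodes where a filesystem such as /store is not mounted, its line is skipped):

```shell
# Show the backing device and usage for each cluster filesystem;
# errors for filesystems not mounted on this node are suppressed.
for fs in /home /data /pool /scratch /store; do
  df -h "$fs" 2>/dev/null | tail -n 1
done
```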


The cluster runs a Linux distribution that is specific to clusters: it is called Rocks, and we run version 6.3, based on CentOS 6.9. We use BrightCluster (8.2) to deploy CentOS 7.6 (Core).

  • As for any Unix system, you must properly configure your account to access the system resources.
    Your ~/.bash_profile, ~/.profile, and/or ~/.cshrc files need to be adjusted accordingly.
  • The configuration on the cluster is different from the one on the CF- or HEA-managed machines (for SAO users).
    We have implemented the module command to simplify the configuration of your Unix environment.
  • You can look in the directory ~hpc/ for examples of configuration files (with ls -la ~hpc).
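
As a sketch of what such an adjustment might look like for bash users, the fragment below could go in ~/.bash_profile; the path to the modules init script and the module name are illustrative assumptions, so compare against the examples under ~hpc/ before adopting anything:

```shell
# Possible ~/.bash_profile additions (paths and module names are
# illustrative -- run `module avail` on the cluster to see real ones):
if [ -r /etc/profile.d/modules.sh ]; then
  . /etc/profile.d/modules.sh     # make the module command available
fi
if command -v module >/dev/null 2>&1; then
  module load intel               # e.g., load a default compiler suite
fi
```

Guarding each step keeps the fragment harmless on machines where the module system is not installed.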

Available Compilers

  1. GNU compilers (gcc, g++, gfortran, g90)
  2. Intel compilers and the Cluster Studio (debugger, profiler, etc: ifort, icc, ...)
  3. Portland Group (PGI) compilers and the Cluster Dev Kit (debugger, profiler, etc: pgf77, pgcc, ...)


  1. We have 128 run-time licenses for IDL; GDL and FL are available too.
  2. Tools like MATLAB, Java, Python, R, Julia, etc. are available; and
  3. the Bioinformatics and Genomics support group has installed a slew of packages.


The cluster is located in Herndon, VA, and is managed by ORCS/OCIO (Office of Research Computing Services/Office of the Chief Information Officer).


  • DJ Ding, the system administrator (OCIO, Herndon, VA).
    • As the sys-admin, he is responsible for keeping the cluster operational and secure.
  • Rebecca Dikow provides Bioinformatics and Genomics support (Data Science Lab/OCIO, Washington, D.C.);
    • she is the primary support person for Bioinformatics and Genomics at SI.
  • Matthew Kweskin, NMNH/L.A.B. IT specialist (Washington, D.C.).


  • Sylvain Korzennik, an astronomer at SAO (Cambridge, MA);
    • he is the primary support person for astronomers at SAO.

Support is also provided by other OCIO staff members (networking, etc...).

Simple Rules

  • For sys-admin issues (forgotten password, something is not working anymore, etc.):
  • For application support (how do I do this?, why did that fail?, etc.):
    • SAO users should contact Sylvain, not the CF or HEA sys-admin support groups;
    • non-SAO users should contact Rebecca, Vanessa & Matt.
  • Password problems: go to the self-serve password page.
  • Please use these email addresses to let the SI/HPC support team address your issues as soon as possible,
    • rather than emailing individuals directly.

Mailing List

A mailing list, called HPPC-L, is available on SI's listserv to contact all the users of the cluster.


  • replies to these messages are by default broadcast to the entire list; and
  • you will need to set up a password on this listserv the first time you use it (look in the upper right, under "Options").


Last updated 05 Sep   SGK.