  1. Preamble
  2. Access to the Cluster
  3. Using the Cluster
  4. Software Overview
  5. Support 

1. Preamble


The following figure is a schematic representation of the cluster; note that it does not represent the actual physical layout.

2. Access to the Cluster

Requesting Accounts

Secure Connection

SSH Clients

  1. Linux users can use ssh [<username>]@<host> from a terminal, where <username> is your cluster username (needed only if it differs from your local one) and <host> is the name of a login node; see the example below.
  2. macOS users can use the same ssh [<username>]@<host> command from a Terminal.
  3. PC/Windows users need to install an SSH client. Public domain options are:

See also the Comparison of SSH clients wikipedia page.
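
For example, connecting from a Linux or macOS terminal looks like this (the username jdoe and the host name hydra-login.example.edu are hypothetical placeholders; use your own account and the actual login node name):

  # connect with an explicit username (needed only if it differs
  # from your local username)
  ssh jdoe@hydra-login.example.edu

  # otherwise the username can be omitted
  ssh hydra-login.example.edu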

3. Using the Cluster

The login nodes are for interactive use only: editing, compiling, testing, checking results, etc., and, of course, submitting jobs.

The login nodes are not compute nodes (neither is the front-end node), and therefore they should not be used for actual computations, except for short interactive debugging sessions or short ancillary computations.

The compute nodes are the machines (a.k.a. hosts, nodes, or servers) on which you run your computations, by submitting a job, or a slew of jobs, to the queue system (the Grid Engine, or GE).

This is accomplished with the qsub command, run from a login node, using a job script, as sketched below.
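
For illustration, here is a minimal sketch of a Grid Engine job script; the file name hello.job and the job name are hypothetical:

  #!/bin/sh
  # hello.job -- a minimal GE job script (hypothetical example)
  #$ -N hello    # job name
  #$ -cwd        # run in the directory the job was submitted from
  #$ -j y        # merge stdout and stderr into a single output file
  echo "running on $(hostname)"

It would be submitted, from a login node, with:

  qsub hello.job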

Do not run computations on the login or front-end nodes, and do not run jobs out-of-band. This means:

Remember that the cluster is a shared resource: when a resource is in use, it is unavailable to others, hence:

4. Software Overview

The cluster runs a Linux distribution that is specific to clusters: we use Bright Cluster Manager (8.2) to deploy CentOS 7.6 (Core).
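
To verify the OS release on a given node, one can run, for instance:

  # should report a CentOS 7.6 (Core) release string
  cat /etc/centos-release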

Available Compilers

  1. GNU compilers (gcc, g++, gfortran)
  2. Intel compilers and the Cluster Studio (debugger, profiler, etc.: ifort, icc, ...)
  3. Portland Group (PGI) compilers and the Cluster Development Kit (debugger, profiler, etc.: pgf77, pgcc, ...); a compile sketch follows this list.
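
As a quick sketch, the same source file can be built with any of the three compiler families (hello.f90 is a hypothetical placeholder):

  gfortran -O2 -o hello hello.f90   # GNU
  ifort    -O2 -o hello hello.f90   # Intel
  pgf90    -O2 -o hello hello.f90   # PGI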

Available Libraries

  1. MPI, for the GNU, PGI, and Intel compilers, with InfiniBand (IB) support (see the sketch after this list);
  2. the libraries that come with the compilers;
  3. GSL, BLAS, LAPACK, ...
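
For example, here is a sketch of building an MPI program that also uses GSL, with the GNU toolchain (the module name gcc/openmpi is an assumption; check what is actually available):

  # load an MPI environment (hypothetical module name)
  module load gcc/openmpi
  # compile with the MPI wrapper and link against GSL
  mpicc -O2 -o mycode mycode.c -lgsl -lgslcblas -lm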

Available Software Packages

  1. We have 128 run-time licenses for IDL; GDL and FL are available too.
  2. Tools like MATLAB, Java, Python, R, Julia, etc. are available (see the sketch after this list); and
  3. the Bioinformatics and Genomics support group has installed a slew of packages.
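
Assuming the cluster uses environment modules to manage these packages (typical of Bright Cluster Manager deployments, though an assumption here), they can be listed and loaded like so:

  module avail          # list all available packages
  module load matlab    # load one of them (module name hypothetical)
  module list           # show the currently loaded modules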

Refer to the Software pages for further details. Other software packages have been installed by users, or can be installed upon request. 

5. Support

The cluster is located in Herndon, VA and is managed by ORCS/OCIO (Office of Research Computing Services/Office of the Chief Information Officer).

The cluster is supported by the following individuals:

Support is also provided by other OCIO staff members (networking, etc.).

Simple Rules

Mailing List

A mailing list, called HPCC-L, is hosted on SI's listserv (i.e., at si-listserv.si.edu, hence HPCC-L@si-listserv.si.edu) and can be used to contact all the users of the cluster.

This mailing list is used by the support people to notify the users of important changes, etc.

To email that list you must log in to the listserv and post your message. Note that:


Last updated   SGK.