1. Introduction
  2. Hardware
  3. Disk Space
  4. InfiniBand Fabric

1. Introduction

As of the August 2019 upgrade, Hydra-5 consists of the following hardware:

2. Hardware

The Head Node: hydra-5.si.edu

It should never be accessed by users, except when directed by support staff for special operations.

The Login Nodes: hydra-login0[12].si.edu

You can use either node; pick whichever is less loaded at the time.
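
If you want to check which login node is less loaded before connecting, a small helper like the sketch below can do it. This is an illustrative example, not an official Hydra tool: it assumes Python 3 and that key-based (passwordless) SSH to hydra-login01/02.si.edu is already set up on the machine where you run it.

#!/usr/bin/env python3
# Hypothetical helper (not an official Hydra tool): compare the 1-minute load
# average reported by `uptime` on both login nodes and print the less loaded
# one.  Assumes key-based SSH access to both nodes is already configured.
import subprocess

LOGIN_NODES = ["hydra-login01.si.edu", "hydra-login02.si.edu"]

def load_of(host):
    """Return the 1-minute load average of host, or +inf if unreachable."""
    try:
        out = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, "uptime"],
            capture_output=True, text=True, check=True, timeout=30,
        ).stdout
        # uptime output ends with e.g. "load average: 0.42, 0.37, 0.30"
        return float(out.rsplit("load average:", 1)[1].split(",")[0])
    except Exception:
        return float("inf")

if __name__ == "__main__":
    best = min(LOGIN_NODES, key=load_of)
    print("Least loaded login node right now:", best)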

The Compute Nodes: compute-NN-MM.local

3. Disk Space

The useful disk space available on the cluster is mounted off two dedicated devices (NetApp and GPFS). A third device (NAS) is a near-line storage system; it is accessible only from the login and I/O nodes, not from the compute nodes.

The available public disk space is divided into several areas (aka partitions), which should be used as follows:

Name                Size        Typical Use

/home               10TB        For your basic configuration files, scripts and job files (NetApp)
                                • low quota limit, but you can recover old files.

/data/sao           40TB        For important but relatively small files like final results, etc. (NetApp)
/data/genomics      30TB        • medium quota limit; you can recover old files, but disk space is not released right away.

/pool/sao           37TB        For the bulk of your storage (NetApp)
/pool/genomics      55TB        • high quota limit, and disk space is released right away.
/pool/biology       200GB

/scratch/genomics   400TB each  For temporary storage (GPFS)
/scratch/sao                    • fast storage, high quota limit, and disk space is released right away.

/store/public       270TB       For near-line storage.

Note: see the complete description on the Disk Space and Disk Usage page.
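
For a quick overview of how full these partitions are, a short standard-library Python script such as the sketch below will do. It is an illustrative example, not an official tool: the partition list is copied from the table above, and any partition not mounted on the node where you run it is simply skipped.

#!/usr/bin/env python3
# Illustrative sketch: print how full each public partition is, using only the
# Python standard library.  Partitions not mounted on the node where the
# script runs (e.g. /store/public on a compute node) are silently skipped.
import shutil

PARTITIONS = [
    "/home",
    "/data/sao", "/data/genomics",
    "/pool/sao", "/pool/genomics", "/pool/biology",
    "/scratch/sao", "/scratch/genomics",
    "/store/public",
]

for path in PARTITIONS:
    try:
        usage = shutil.disk_usage(path)        # (total, used, free) in bytes
    except OSError:
        continue                               # not mounted here
    pct = 100.0 * usage.used / usage.total
    print(f"{path:<18} {usage.total / 1e12:8.1f} TB total  {pct:5.1f}% used")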

4. InfiniBand Fabric

All the nodes (i.e., the compute nodes, the login nodes, and the head node) are interconnected not only by the regular 10GbE (Ethernet) network, but also by a high-speed, low-latency communication fabric known as InfiniBand (IB).
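
To verify that the node you are logged into actually sees an IB adapter, you can inspect the standard Linux sysfs location exposed by the IB drivers. The short Python sketch below is illustrative and assumes the usual /sys/class/infiniband layout.

#!/usr/bin/env python3
# Illustrative check: list the InfiniBand adapters (HCAs) visible on the host
# you are logged into, by reading the standard Linux sysfs location exposed by
# the IB drivers.  On a host without an IB stack the directory does not exist.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.is_dir():
    print("No InfiniBand devices visible on this host.")
else:
    for dev in sorted(ib_root.iterdir()):
        ports = sorted(p.name for p in (dev / "ports").iterdir())
        print(f"{dev.name}: ports {', '.join(ports)}")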



Last updated   SGK