
Welcome to the Smithsonian Institution High Performance Computing Wiki.

This Wiki holds information on the use of HPC resources at the Smithsonian.

Central to the SI HPC resources is the Smithsonian Institution High Performance Cluster (SI/HPC), named Hydra.

  • High performance computing is administered by the Office of Research Computing.
  • The OCIO data center in Herndon, VA houses the high performance computing cluster, Hydra.

What's New

  • November 21, 2019 - Update
    1. The documentation on the Wiki has been updated to reflect the changes resulting from the most recent upgrade,
      • do not hesitate to contact us if you find errors.
    2. All the examples have been validated under Hydra-5,
      • they can be found under /home/hpc/examples. 
    3. Documentation on using the Interactive nodes is up-to-date:
      • qlogin, an alternative to qrsh, allows X-tunneling (plotting using X11 apps).
    4. Tools to check and monitor disk usage have been updated and validated:
      • disk-usage, quota+, parse-disk-quota-reports and check-disks-usage.
    5. A new tool, chage+, is an LDAP-aware query tool similar to chage:
      • chage+ allows you to query LDAP entries, such as password expiration date, email address, full name, affiliation, etc.
    6. All the nodes with local SSDs have /ssd properly configured and are included in the uTSSD.tq:
      • contact us if your application can benefit from using /ssd.
    7. All the nodes with GPUs are properly configured and two GPU queues (batch and interactive) are available:
      • contact us if you want to be granted access to the GPUs.
    8. The run-time license for IDL has been upgraded to the latest version:
      • any version of IDL later than 8.5 can now use one of the 128 run-time licenses,
      • version 6.1 run-time is still available,
      • GDL & FL are also available (IDL-compatible open-source packages).
    9. The run-time environment for the latest Matlab version (R2019b) was installed.
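
For reference, an interactive session with X-tunneling typically looks like the sketch below. The login hostname is a placeholder (use the one you normally connect to); qlogin is the UGE command described above.

```shell
# Illustrative sketch of an interactive session with X-tunneling.
# The hostname below is a placeholder -- use your usual Hydra login node.
ssh -X user@hydra-login.si.edu   # -X enables X11 forwarding from your workstation

# On the login node: request an interactive session. Unlike qrsh,
# qlogin tunnels X11, so plotting with X11 applications works.
qlogin

# On the interactive node: run your X11 application as usual.
```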


  • September 6, 2019 - Hydra has been upgraded as follows:
    1. OS upgraded from CentOS 6.2 to CentOS 7.6 (software),
    2. Cluster management changed from Rocks 6.2 to Bright Cluster Manager 8.2 (software),
    3. Migrated to LDAP for user account management,
    4. GridEngine changed from SGE to UGE 8.6.6 (aka Univa, software),
    5. Migrated and installed 81 nodes (3,960 CPUs) under “Hydra-5” and retired the oldest nodes (hardware),
    6. Installed a new 1.5 PB high-performance parallel file system (GPFS, aka Spectrum Scale, hardware),
    7. Migrated the 500 TB NetApp (/home, /data, /pool) and the 950 TB near-line disk storage (aka NAS, /store),
    8. Moved /scratch from NetApp to GPFS.
  • A lot has changed; we tried to keep the system backward compatible, except for a few important details.

WARNING: Read the Hydra-5 Migration Notes

  • Reminders
    • All public disks (/pool and /scratch only) are scrubbed once a week. 

    • Reasonable requests to restore scrubbed files must be sent no later than the following Friday, by 5pm.

    • If you have any questions or encounter any problems, email:

      SI-HPC-Admin@si.edu for sys-admin related issues,
      SI-HPC@si.edu for Bioinformatics/Genomics questions,
      hpc@cfa.harvard.edu for SAO users who need help.
  • Past news with more details can be found at the What's New page.
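
Since /pool and /scratch are scrubbed weekly, it can help to check which of your files are old enough to be at risk before the scrubber runs. Below is a minimal sketch using standard find; the scrub_check name and the 7-day default are illustrative only (the actual scrub age is site policy, so check the disk-usage documentation).

```shell
# scrub_check DIR [DAYS]: list regular files under DIR whose modification
# time is more than DAYS days old (default 7 -- a placeholder, not the
# actual scrub policy).
scrub_check() {
    dir=$1
    days=${2:-7}
    find "$dir" -type f -mtime +"$days" -print
}

# Example: scrub_check /pool/$USER
```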

FYI

  • References to Hydra (publications, proposals, etc.) should refer to the cluster as the Smithsonian Institution High Performance Cluster (SI/HPC).

Last updated SGK/MPK.
