Welcome to the Smithsonian Institution High Performance Computing Wiki.
This Wiki holds information for the use of HPC resources at the Smithsonian.
Central to the SI HPC resources is the Smithsonian Institution High Performance Cluster (SI/HPC), named Hydra.
- High performance computing is administered by the Office of Research Computing.
- The OCIO data center in Herndon, VA houses the high performance computing cluster, Hydra.
- The documentation is organized as follows:
  - A quick start guide;
  - Reference pages (technical information about the cluster);
  - Cluster upgrade documentation (past upgrades);
  - Cookbook pages (didactic information on how to run specific tools; not yet available);
  - FAQs (frequently asked questions).
What's New
- January 14, 2020 - Increased total slot limit
- The total number of slots (CPUs) a user can grab has been increased from 512 to 640.
- December 20, 2019 - Disk space re-tuning
  - increased /home to 20TB (doubled) and doubled each user quota on /home (100G/2M → 200G/4M);
  - resized (increased) /data/genomics and /data/sao to 45TB each;
  - resized (increased) /pool/genomics and /pool/sao to 80TB each;
  - increased each user quota on /data/genomics to 1TB/2.5M (doubled);
  - decreased /scratch/genomics and /scratch/sao from 400TB to 350TB each.
- November 21, 2019 - Update
  - The documentation on the Wiki has been updated to reflect the changes resulting from the most recent upgrade; do not hesitate to contact us if you find errors.
  - All the examples have been validated under Hydra-5; they can be found under /home/hpc/examples.
  - Documentation on using the interactive nodes is up to date: qlogin, an alternative to qrsh, allows X-tunneling (plotting using X11 apps).
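A typical interactive session with X-tunneling might look like the sketch below (the login hostname is illustrative, not Hydra's actual value; consult the interactive-queue reference pages for the correct options):

```
# Log in with X11 forwarding enabled (hostname illustrative)
ssh -X user@hydra-login.si.edu

# Request an interactive session; with qlogin, X11 applications started
# on the compute node will display on your local screen
qlogin
```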
  - Tools to check and monitor disk usage have been updated and validated: disk-usage, quota+, parse-disk-quota-reports and check-disks-usage.
  - A new tool, chage+, is an LDAP-aware query tool similar to chage:
    - chage+ allows you to query LDAP entries, such as password expiration date, email, full name, affiliation, etc.
  - All the nodes with local SSDs have /ssd properly configured and are included in the uTSSD.tq queue:
    - contact us if your application can benefit from using /ssd.
- All the nodes with GPUs are properly configured and two GPU queues (batch and interactive) are available:
- contact us if you want to be granted access to the GPUs.
- The run-time license for IDL has been upgraded to the latest version:
  - any version of IDL later than 8.5 can now use one of the 128 run-time licenses,
- version 6.1 run-time is still available,
- GDL & FL are also available (IDL-compatible open-source packages).
  - The run-time environment for the latest Matlab version (R2019b) was installed.
- September 6, 2019 - Hydra has been upgraded as follows:
- OS upgrade CentOS 6.2 to CentOS 7.6 (software),
- Cluster management change from Rocks 6.2 to Bright Cluster Management 8.2 (software),
- Migrated to LDAP for user account management,
- Changed version of GridEngine from SGE to UGE 8.6.6 (aka UNIVA, software),
- Migrated and installed 81 nodes (3,960 CPUs) to “Hydra-5” and retired oldest nodes (hardware)
- Installed a new 1.5PB high performance parallel file system (GPFS aka Spectrum Scale - hardware)
- Migrated 500TB NetApp (/home /data /pool) and 950TB near-line disk storage (aka NAS, /store)
  - Moved /scratch from NetApp to GPFS.
- A lot has changed; although we tried to keep the system backward compatible, there are important differences.
Read the Hydra-5 Migration Notes
- Reminders
All public disks (/pool and /scratch only) are scrubbed once a week.
Reasonable requests to restore scrubbed files must be sent no later than the following Friday, by 5pm.
If you have any questions or encounter any problems, email:
- SI-HPC-Admin@si.edu for sys-admin related issues,
- SI-HPC@si.edu for Bioinformatics/Genomics questions,
- hpc@cfa.harvard.edu for SAO users who need help.
- Past news with more details can be found at the What's New page.
Quick Links
- Cluster monitoring page
- QSub Generator (only accessible within the SI network)
FYI
- References to Hydra (publications, proposals, etc.) should mention the cluster as the Smithsonian Institution High Performance Cluster (SI/HPC).
Last updated SGK/MPK.