- November 21, 2019 - Update
- The documentation on the Wiki has been updated to reflect the changes resulting from the most recent upgrade.
- Do not hesitate to contact us if you find errors.
- All the examples have been validated under Hydra-5; they can be found under `/home/hpc/examples`.
- Documentation on using the Interactive nodes is up to date: `qlogin` is an alternative to `qrsh` that allows X-tunneling (plotting using X11 apps).
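For example, a typical session might look like this (the hostname is a placeholder; use the actual login-node name from the Wiki):

```bash
# from your workstation: log in with X11 forwarding enabled
ssh -X username@hydra.example.edu   # placeholder hostname
# on the login node: request an interactive session
qlogin
# on the interactive node, X11 applications now display on your screen
xterm &
```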
- Tools to check and monitor disk usage have been updated and validated: `disk-usage`, `quota+`, `parse-disk-quota-reports` and `check-disks-usage`.
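These are command-line tools run from a login node; a minimal sketch of invoking them follows, where the comments reflect what the names suggest rather than documented behavior (see the Wiki for the exact options and output):

```bash
quota+                     # query your quotas and current usage
disk-usage                 # summarize disk usage
parse-disk-quota-reports   # parse the nightly disk-quota reports
check-disks-usage          # check usage against the quotas
```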
- New tool `chage+` is an LDAP-aware query tool, similar to `chage`:
- `chage+` allows you to query LDAP entries, such as when your password expires, your LDAP email, full name, affiliation, etc.; see the sketch below.
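By analogy with `chage -l`, which lists password-aging information, a query might look like this; the `-l` option for `chage+` is an assumption, check its help output or the Wiki:

```bash
chage -l $USER    # standard chage: local password-aging info
chage+ -l $USER   # LDAP-aware query: expiry, email, full name, affiliation
                  # (the -l flag is assumed by analogy with chage)
```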
- All the nodes with local SSDs have `/ssd` properly configured and are included in the `uTSSD.tq` queue:
- contact us if your application can benefit from using `/ssd` (see the sketch below).
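A minimal sketch of a job script that targets the SSD queue and stages data through the local SSD; the application name, file names and staging pattern are illustrative only:

```bash
#!/bin/sh
#$ -q uTSSD.tq                      # submit to the SSD queue
#$ -N ssd-job
#$ -cwd
# stage input onto the node-local SSD, run there, copy results back
mkdir -p /ssd/$USER/$JOB_ID
cp input.dat /ssd/$USER/$JOB_ID/
cd /ssd/$USER/$JOB_ID
./my_app input.dat > output.dat     # my_app is a placeholder
cp output.dat $SGE_O_WORKDIR/
rm -rf /ssd/$USER/$JOB_ID           # clean up the SSD when done
```

Submit it with `qsub ssd-job.sh`.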
- All the nodes with GPUs are properly configured and GPU queues (batch and interactive) are available:
- contact us if you want to be granted access to the GPUs (see the sketch below).
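The queue names below are hypothetical, since access is granted on request; contact us for the actual GPU queue names:

```bash
qsub -q gpu.q my_gpu_job.sh   # batch GPU job (hypothetical queue name)
qlogin -q gpu.iq              # interactive GPU session (hypothetical queue name)
```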
- The run-time license for IDL has been upgraded to the latest version:
- Any version of IDL later than 8.5 can now use one of the 128 run-time licenses,
- Version 6.1 run-time is still available,
- GDL & FL are also available (IDL-compatible open-source packages); see the example below.
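For instance, assuming IDL is provided as a module (the module name is an assumption; check `module avail`):

```bash
module load idl    # module name is an assumption
idl -rt=myapp.sav  # run a compiled .sav file under a run-time license
gdl                # GDL: open-source, IDL-compatible interpreter
```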
- The run-time environment for the latest Matlab version (R2019b) was installed.
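A MATLAB application compiled with `mcc` can then be run via its generated wrapper script against the R2019b run-time (MCR v9.7); the script name and run-time path below are placeholders:

```bash
# run_myapp.sh is the wrapper mcc generates for a compiled app "myapp"
./run_myapp.sh /path/to/MATLAB_Runtime/v97 arg1 arg2
```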
- September 6, 2019 - Hydra has been upgraded as follows:
- OS upgraded from CentOS 6.2 to CentOS 7.6 (software),
- Cluster management changed from Rocks 6.2 to Bright Cluster Manager 8.2 (software),
- Migrated to LDAP for user account management,
- Changed GridEngine from SGE to UGE 8.6.6 (aka UNIVA, software),
- Migrated and installed 81 nodes (3,960 CPUs and 29TB of shared memory) as “Hydra-5” and retired the oldest nodes (hardware),
- Installed a new 1.5PB high-performance parallel file system (GPFS, aka Spectrum Scale - hardware),
- Migrated the 500TB NetApp (`/home`, `/data`, `/pool`) and the 950TB near-line disk storage (aka NAS, `/store`),
- Moved `/scratch` from NetApp to GPFS.
- A lot has changed; we tried to keep the system backward compatible, except for some important details
...