  • November 21, 2019 - Update
    1. The documentation on the Wiki has been updated to reflect the changes resulting from the most recent upgrade.
      • Do not hesitate to contact us if you find errors.
    2. All the examples have been validated under Hydra-5:
      • they can be found under /home/hpc/examples.
    3. Documentation on using the Interactive nodes is up-to-date:
      • qlogin is an alternative to qrsh that allows X-tunneling (plotting using X11 apps); see the first example after this list.
    4. Tools to check and monitor disk usage have been updated and validated:
      • disk-usage, quota+, parse-disk-quota-reports and check-disks-usage.
    5. The new tool chage+ is an LDAP-aware query tool, similar to chage:
      • chage+ allows you to query LDAP entries, such as when your password expires, your LDAP email, full name, affiliation, etc.
    6. All the nodes with local SSDs have /ssd properly configured and are included in the uTSSD.tq:
      • contact us if your application can benefit from using /ssd; a job-script sketch follows this list.
    7. All the nodes with GPUs are properly configured and GPU queues (batch and interactive) are available:
      • contact us if you want to be granted access to the GPUs.
    8. The run-time license for IDL has been upgraded to the latest version:
      • Any version of IDL later than 8.5 can now use one of the 128 run-time licenses (see the last example after this list),
      • Version 6.1 run-time is still available,
      • GDL & FL are also available (IDL-compatible open-source packages).
    9. The run-time environment for the latest Matlab version (R2019b) was installed.
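
    For example (item 3 above), a minimal sketch of starting an interactive session; it assumes you connected with ssh -X (or -Y) so X11 can tunnel back, and that a default interactive queue is configured for your account:

        qlogin     # interactive session that supports X-tunneling (X11 plotting apps)
        qrsh       # interactive session without X-tunneling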

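    For item 6 above, a sketch of a job script that stages work on a node-local SSD; the queue name uTSSD.tq and the /ssd mount point come from this page, while the application name, the per-user layout under /ssd, and the embedded options are assumptions to adapt to your case:

        #$ -q uTSSD.tq                   # request a node with a local SSD
        #$ -cwd -j y
        mkdir -p /ssd/$USER              # per-user directory layout is an assumption
        cp big_input.dat /ssd/$USER/     # stage input onto the local SSD
        cd /ssd/$USER
        my_io_heavy_app big_input.dat    # hypothetical I/O-intensive application
        cp results.out $SGE_O_WORKDIR/   # copy results back to shared storage before the job ends

    Submit it as usual with qsub.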

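    For item 8 above, a sketch of using the IDL run-time licenses; the -rt flag is standard IDL, but the module name is an assumption (check module avail for the exact name):

        module load idl       # module name is an assumption
        idl -rt=myapp.sav     # run a compiled application (hypothetical .sav file) under a run-time license
        gdl                   # GDL, an IDL-compatible open-source alternative, needs no license
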
  • September 6, 2019 - Hydra has been upgraded as follows:
    1. OS upgraded from CentOS 6.2 to CentOS 7.6 (software),
    2. Cluster management changed from Rocks 6.2 to Bright Cluster Manager 8.2 (software),
    3. Migrated to LDAP for user account management,
    4. Changed the version of Grid Engine from SGE to UGE 8.6.6 (aka Univa, software),
    5. Migrated and installed 81 nodes (3,960 CPUs and 29TB of shared memory) to “Hydra-5” and retired the oldest nodes (hardware),
    6. Installed a new 1.5PB high-performance parallel file system (GPFS, aka Spectrum Scale; hardware),
    7. Migrated the 500TB NetApp (/home, /data, /pool) and the 950TB near-line disk storage (aka NAS, /store),
    8. Moved /scratch from NetApp to GPFS.
  • A lot has changed; we have tried to keep the system backward compatible, except for a few important details.

...