...
- Hardware
- Add 16 new nodes (40 cores/380GB each) for a total of 640 cores,
- Decommission oldest nodes (old compute-6* and compute-4-*),
- Add 1.5PB of high performance storage (parallel file system GPFS, aka Spectrum Scale),
- This is in addition to the 500TB of NetApp and the 950TB of near-line storage (NAS),
- The /scratch disk will be moved to the GPFS and increase in size from 100TB to 900TB (450TB for /scratch/genomics and 450TB for /scratch/sao),
- Move all the nodes to a 10Gb/s Ethernet connection (was 1Gb/s).
...
The upgrade is scheduled from 8/26 to 9/2/2019.
**Note:** Plan your use of Hydra accordingly.
...
- For biology and genomics software issues or incompatibilities that arise, please contact SI-HPC@si.edu,
- SAO users: please contact Sylvain at hpc@cfa.harvard.edu.
...
List of Software Changes
...
- Job Scheduler
- We are switching to UGE, or Univa Grid Engine, which
  - is backward compatible with SGE, and
  - offers some additional features.
- Note that the output of some commands will look different, and some commands have different options.
- The list of queues and their limits will not change,
- but a few complex values will change (to support local SSD and GPU usage).
One important change is that the memory reservation value will no longer be specified per slot, but per job. For example, the specification

```text
-pe mthread 10 -l mres=20G,h_data=20G,h_vmem=20G,himem
```

must be changed to

```text
-pe mthread 10 -l mres=200G,h_data=200G,h_vmem=200G,himem
```

to reserve 200GB of memory for this job (20GB per slot times 10 slots).
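The conversion is simple arithmetic: the new per-job value is the old per-slot value multiplied by the slot count. A small sketch, using the slot count and per-slot reservation from the example above (not a recommendation for your jobs):

```shell
#!/bin/sh
# Convert the old per-slot memory reservation to the new per-job value:
# per-job GB = per-slot GB x number of slots.
slots=10        # from "-pe mthread 10" in the example above
per_slot_gb=20  # old per-slot reservation (mres=20G)
per_job_gb=$((slots * per_slot_gb))

# Print the updated resource request for the job file.
echo "-pe mthread ${slots} -l mres=${per_job_gb}G,h_data=${per_job_gb}G,h_vmem=${per_job_gb}G,himem"
```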
- Compilers
- The default compiler versions will change, and
- the corresponding modules will have to be reloaded.
- For MPI jobs, the compiler/flavor/version combos will change.
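As a hedged sketch of what reloading might look like in a job script (the module names below are placeholders, not Hydra's actual module list; check `module avail` after the upgrade for the real names):

```shell
#!/bin/sh
# Hypothetical job-script fragment for after the upgrade.
# Module names are placeholders; list the real ones with "module avail".

module purge      # drop anything loaded under the old defaults
module load gcc   # reload the (new) default compiler module

# For MPI jobs, also load the matching compiler/flavor/version combo, e.g.:
# module load openmpi/gcc
# and then rebuild the application against the new stack:
# mpicc -o my_app my_app.c
```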
...