- How the changes will affect users
You still log in via hydra-login02.si.edu, but with the new OS install the RSA fingerprint has changed, so your next ssh will issue a warning. No need to email us; simply follow the instructions from your ssh client.
Note/Hint: you may need to delete a line in your known_hosts file on your own computer, which for most users is ~/.ssh/known_hosts.
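If you would rather not edit the file by hand, OpenSSH's ssh-keygen can remove the stale entry for you; this is a standard OpenSSH command, not something Hydra-specific:

```shell
# Remove the outdated host key for the Hydra login node from your local
# known_hosts file; ssh-keygen saves a backup as known_hosts.old.
ssh-keygen -R hydra-login02.si.edu
```

Your next ssh to the node will then prompt you to accept the new host key.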
- There is no reason for you to log on to the head node (hydra-5.si.edu), so don't; and logging on to hydra-4.si.edu has been disabled.
- With BCM we migrated to LDAP for user account management, so you will no longer need to change your password at two places. Instead, you will do it once on either login node with the passwd command.
- We only migrated the ~250 users who have logged on the cluster over the past year. If you have not used Hydra for over a year, you won’t have an account any longer, and you will need to contact us (Rebecca for Biology users and Sylvain for SAO users).
- We heard from a couple of users who did not receive announcement emails posted to the HPCC-listserv. If you did not receive these reminders and announcements, please let Rebecca know ASAP.
- Before you try to log on to the new Hydra, you MUST reset your password using the "email me a link" option on this web page: https://hydra-adm01.si.edu/ssp/?action=sendtoken. Enter your Hydra username in the "Login" box to receive via email a link to reset your password.
- The link will be emailed to your canonical email, i.e. your @si.edu or @cfa.harvard.edu email. If you do not have one of these, it will be your institutional email (ends in .edu) if we have one on file.
- This site enforces new password rules. Your password will still need to be changed every 90 days, but you can now do it yourself via the self-service password web page, which also lets you reset your password if you forget your credentials or let it expire, so you no longer need to email us for password resets.
- The other option on that web page, to use it to change your current password, is not yet working, but will be fixed soon.
- As mentioned above in the Accounts section, the password change process is simplified: you only need to run the passwd command on one of the login nodes; you no longer need to also run it on the head node.
Location of stuff and modules
We now use BCM instead of Rocks, hence the locations of things have changed: there is no more /opt, and software now lives under new paths.
If you load modules and use environment variables, everything should work. If you hardwired a system path, you need to do the right thing (use modules and environment variables).
Because we upgraded the OS version, the versions of many standard tools have also changed, so things might behave differently or new options may be available.
You still use qsub and the like, but it is now accessed via the uge module. That module is loaded for you, so unless you unload it or do something unusual, there is no need to do anything else.
The output of some GE commands will look different, and a few take different options.
The same queues are available, except that the interactive and I/O nodes are now different nodes (our newest ones), and the GPU and SSD queues are not yet available; these two resources have yet to be configured.
Also, you will notice that the compute node naming has slightly changed: the compute nodes are all called compute-NN-MM, where NN and MM are always 2-digit strings and NN is shorthand for the node model (all compute-64-MM nodes are Dell R640s). The fully qualified name has also changed.
While this may be confusing and/or annoying, we have decided to change how the memory reservation is computed: mres is no longer a per-slot (thread, CPU) value, but a per-job value. All your jobs MUST be adjusted to use the new value, which is the old value multiplied by the number of slots (threads, CPUs). h_data and h_vmem remain per-slot values.
- You will still use the same rule as on hydra-4 to determine whether your job should go to the high-compute or high-memory queues: if more than 6GB/slot is requested, the job needs to go to the himem queues.
QSubGen has yet to be adjusted to reflect this change (stay tuned)!
In other words, a:
-pe mthread 10 -l mres=20G,h_data=20G,h_vmem=20G,himem
must be replaced by:
-pe mthread 10 -l mres=200G,h_data=20G,h_vmem=20G,himem
to reserve 200GB for the job.
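The conversion is just a multiplication; here is a minimal shell sketch (the variable names are illustrative only):

```shell
# New per-job mres = old per-slot mres x number of slots.
slots=10          # number of slots (threads, CPUs) requested with -pe
per_slot_gb=20    # old per-slot mres value, in GB
job_mres_gb=$((slots * per_slot_gb))
echo "-pe mthread ${slots} -l mres=${job_mres_gb}G,h_data=${per_slot_gb}G,h_vmem=${per_slot_gb}G,himem"
# prints: -pe mthread 10 -l mres=200G,h_data=20G,h_vmem=20G,himem
```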
- The same 3 compilers are available (gcc, icc, pgi), although the default versions of the compilers have changed. You should not need to recompile your code (unless it is Fortran code compiled with GCC).
- For MPI jobs, the combinations of available compilers and MPI flavors have slightly changed; we hope to have this documented soon. Support for OpenMPI with GCC had to be dropped; contact Sylvain if this is a problem.
- Bioinformatics tools
- Please see the wiki page for details about the Bioinformatics Software and Modules: https://confluence.si.edu/display/HPC/Bioinformatics+Software+and+Module+Information
- As of 9/3/2019, the bioinformatics software packages have been newly installed on Hydra-5 (with a few exceptions: those transferred from Hydra-4 are noted). Note that the versions and/or module names may differ from what was on Hydra-4. An attempt was made to install the most up-to-date versions and to make the module names as standardized as possible.
- Job files will need to be adjusted to reflect these changes.
- Users looking for software not on this list can compile/install in their own space or request installation by emailing SI-HPC@si.edu.
- Note that, given our limited time, we prioritize installing software that many users need and that is well documented.
- Please contact SI-HPC@si.edu if you notice any issues with bioinformatics software and modules.
- Storage (disk space)
- The various disk locations are the same, except that /scratch is now 9x bigger and faster, and /scratch/sao is now separate from /scratch/genomics.
- Everyone now has a directory under /scratch.
- If your application is I/O intensive consider switching from using /pool to /scratch.
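A hypothetical job-script fragment for staging I/O-heavy work on /scratch; the exact directory under /scratch (genomics, sao, etc.) and the job name are assumptions, so adjust them to your own:

```shell
# Stage work files on the faster GPFS /scratch instead of /pool.
# SCRATCH_BASE is an assumed path: use your own directory under /scratch.
SCRATCH_BASE="/scratch/genomics/$USER"
WORKDIR="$SCRATCH_BASE/my_job"     # "my_job" is a placeholder name
mkdir -p "$WORKDIR"
cd "$WORKDIR"
```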
- If you already had stuff under /scratch, we have copied its content from the NetApp to the GPFS. Because we had a lot to copy (120TB), we started the copy on Aug 22 at 22:54 and then rsync'd the content. This means that any file that was under /scratch when we started copying and that you deleted afterwards might still have been copied, so check your stuff if you were actively using /scratch after Aug 22 at 22:54.
- /scratch is a GPFS file system, no longer an NFS one. It should be faster, but a few commands do not work the same way, like df.
Note: for the GPFS, we recommend that you use df -h --output=source,fstype,size,used,avail,pcent,file, not just df -h; and to get quota information, load the tools/local module and use quota+.
- The quota+ command takes options quite similar to quota (check the man page; it will also be explained on the Wiki once it is updated). Advanced users can load the mmfs module and run the GPFS-specific commands.
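For example, the recommended df invocation on a Hydra node would look like this (substitute any GPFS mount point for /scratch):

```shell
# Ask GNU df for explicit columns; on GPFS the default "df -h" layout
# can be misleading. The "file" column echoes the path being queried.
df -h --output=source,fstype,size,used,avail,pcent,file /scratch
```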
- We no longer allow the use of local disk space on the nodes (/state/partition1 no longer exists); use one of the shared file systems instead.
- For those who have project-specific disk space: we copied its content to /scratch, and the old /pool is mounted read-only. Contact us for more details.
- Since we freed over 100TB on the NetApp, we will reorganize the NetApp storage: larger volumes and bigger quotas are coming in the next few weeks.
- Local tools
- The tools accessible via the tools/local module have been reorganized: most have been renamed, and some have been dropped.
- What was loaded via tools/local has not only been moved to a different location (handled by the module) but also split into tools/local and tools/local+, with names shortened when they had an extension (like qstat+). A few poorly chosen names have been changed outright. Once the Wiki pages are revamped, this will be explained in detail there.
If you want things to be the same, load tools/local-bc (bc stands for backward compatible).
If you want to figure out what is what, try:
module help tools/local
module help tools/local+
module help tools/local-bc
The last one will show you what names have changed.
- Note that the names in some of the man pages have yet to be changed to the new names. All in due time.
- Cluster status page, scrubber and job monitoring
- The cluster status page is not yet up, but should be available within a week or so.
- We have suspended the scrubbing, but it will be resumed within a month or so.
- The job monitoring tools will be progressively restarted.