All the useful disk space available on the cluster is mounted off a dedicated device (aka appliance or server), a NetApp filer.
The available disk space is divided into several areas (aka partitions):

- the /home partition,
- the /data partitions,
- the /pool partitions,
- the /scratch partitions.

Note that:

- /home should not be used to keep large files; use /pool instead;
- /scratch is for temporary storage (i.e., while a job is running); it can be used if you need more disk space than what you can store under /pool.

While Hydra is highly reliable, none of the disks on the cluster are backed up. When copying files to Hydra, especially large ones, be sure to copy them to the appropriate disk (and not to /home or /tmp).
You can copy files to/from Hydra using scp, sftp or rsync:

- to Hydra: you can only copy from trusted hosts (computers on the SI or SAO/CfA trusted networks, or VPN'ed);
- from Hydra: to any host that allows external ssh connections (if you can ssh from Hydra to it, you can scp, sftp and rsync to it).

For large transfers, use rsync and limit the bandwidth to 20 MB/s (70 GB/h) with the "--bwlimit=" option:

rsync --bwlimit=20000 ...
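For example, a bandwidth-limited transfer of a directory to Hydra might look like the following sketch, where the login host name, username and destination path are placeholders to adjust to your own case:

% rsync -av --bwlimit=20000 my_data/ username@hydra-login01.si.edu:/pool/genomics/username/my_data/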
Note that rm, mv and cp can also create high I/O load, so use them with care when handling large numbers of files.

Access from the "outside" to SAO/CfA hosts (computers) is limited to the border control hosts (login.cfa.harvard.edu and pogoX.cfa.harvard.edu); instructions for tunneling via these hosts are documented separately.
A trusted or VPN'd computer running MacOS can use scp, sftp or rsync: open the Terminal application by going to /Applications/Utilities and finding Terminal. Alternatively you can use a GUI-based ssh/scp compatible tool like Cyberduck.
You will still most likely need to run VPN.
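For example, from the Terminal you could copy a file to your /pool area with something like the following, where the login host name, username and destination path are placeholders:

% scp results.tar.gz username@hydra-login01.si.edu:/pool/sao/username/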
On a trusted or VPN'd computer running Windows, you can use scp, sftp or rsync if you install Cygwin - note that Cygwin includes an X11 server.
Alternatively you can use a GUI-based ssh/scp compatible tool like FileZilla, WinSCP or Cyberduck.
You will still most likely need to run VPN.
(instructions missing)
Files can be exchanged with Dropbox using the script Dropbox-Uploader: load the tools/dropbox_uploader module and run the dropbox or dropbox_uploader.sh script. Running this script for the first time will print instructions on how to configure your Dropbox account and create a ~/.dropbox_uploader config file with the authentication information.
Using this method will not sync your Dropbox, but will allow you to upload/download specific files.
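For example, once configured, uploading and downloading a single file typically looks like the following (the file names are illustrative):

% module load tools/dropbox_uploader
% dropbox_uploader.sh upload results.tar.gz results.tar.gz
% dropbox_uploader.sh download results.tar.gz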
To prevent the disks from filling up and hosing the cluster, there is a limit (aka quota) on the amount of disk space and on the number of inodes (the sum of the number of files and the number of directories) each user can use. Each quota type has a soft limit (warning) and a hard limit (error) and is specific to each partition. In other words, exceeding the soft limit produces warnings, while exceeding the hard limit is not allowed and results in errors.
| Disk name | Maximum disk capacity | Disk space quota soft/hard | #inodes quota soft/hard | NetApp snapshots enabled? | Purpose |
|---|---|---|---|---|---|
| /home | | 50/100GB | 1.8/2M | yes | For your basic configuration files, scripts and job files - your limit is low but you can recover old stuff up to 4 weeks. |
| /data/sao | | | 1/2M | yes | For important but relatively small files like final results, etc. - your limit is medium, you can recover old stuff, but disk space is not released right away. For SAO users. |
| /data/genomics | | 1.0/2.0TB | 1/2M | yes | For important but relatively small files like final results, etc. - your limit is medium, you can recover old stuff, but disk space is not released right away. For non-SAO users. |
| /pool/sao | | 1.8/2.0TB | 4/5M | no | For the bulk of your storage - your limit is high, and disk space is released right away. For SAO users. |
| /pool/genomics | | 1.8/2.0TB | 1.8/2M | no | For the bulk of your storage - your limit is high, and disk space is released right away. For non-SAO users. |
| /pool/biology | | 1.8/2.0TB | 1.8/2M | no | For the bulk of your storage - your limit is high, and disk space is released right away. For non-SAO users. |
| /scratch | | 2.8/3.0TB | 1/1M | no | For temporary storage, if you need more than what you can keep in /pool - SAO/non-SAO users should use /scratch/sao or /scratch/genomics, respectively. |
| /pool/sao_atmos | 25TB | | 9M/10M | no | Project specific disk (SAO/ATMOS) |
| /pool/sao_rtdc | 10TB | 2.8T/3.0T | 2.5M/3.0M | no | Project specific disk (SAO/RTDC) |
| /pool/sylvain | 15TB | 14T/15T | 63M/65M | no | Project specific disk (SAO/Sylvain) |
Note that you can hit your inode quota before your space quota (too many small files); on /pool/sao, for instance, the hard limit is 5 million inodes. The quota reports (see below) show your inode usage. If %(inode) > %(space), your disk usage is inefficient, so consider packing small files into zip or tar-compressed sets: compress old files that you rarely use (with gzip, bzip2 or compress), and/or archive the content of a directory into a single zip or tar-compressed set, as follows:

% zip -r archive.zip dir/

or

% tar -czf archive.tgz dir/

This archives the content of dir/ into a single zip or tgz file. You can then delete the content of dir/ with

% rm -rf dir/

and later recover the archived content with

% unzip archive.zip

or

% tar xf archive.tgz
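Since every file and every directory counts against your inode quota, you can check how many inodes a directory tree uses with a standard command such as:

% find dir/ | wc -l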
Note that the size reported by df does not necessarily reflect the maximum size (quota) you can use. On /scratch, we will implement a FIFO (first in, first out) model, where old files are deleted without warning to make space; remove from /scratch any files that you do not need for active jobs.

The following tools can be used to monitor your disk usage.
You can use the following Un*x commands:
du | show disk use |
df | show disk free |
or
you can use Hydra-specific home-grown tools (these require that you load the tool/local module)
dus-report.pl | run du and parse its output in a more user friendly format |
disk-usage.pl | run df and parse its output in a more user friendly format |
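For example, to make these tools available in your session (using the module name given above):

% module load tool/local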
You can also view the disk status at the cluster status web pages.
The output of du
can be very long and confusing. It is best used with the option "-hs
" to show the sum ("-s
") and to print it in a human readable format ("-h
").
If there are a lot of files/directories, du can take a while to complete.
For example:
% du -sh dir/
136M    dir/
The output of df
can be very long and confusing.
You can use it to query a specific partition and get the output in a human readable format ("-h
"), for example:
% df -h /pool/sao
Filesystem           Size  Used Avail Use% Mounted on
10.61.10.1:/vol_sao   20T   15T  5.1T  75% /pool/sao
You can compile the output of du
into a more useful report with the dus-report.pl
tool. This tool will run du
for you (can take a while) and parse its output to produce a more concise/useful report.
For example, to see the directories that hold the most stuff in /pool/sao/hpc
:
% dus-report.pl /pool/sao/hpc
612.372 GB /pool/sao/hpc capac. 20.000 TB (75% full), avail. 5.088 TB
447.026 GB  73.00 % /pool/sao/hpc/rtdc
308.076 GB  50.31 % /pool/sao/hpc/rtdc/v4.4.0
138.950 GB  22.69 % /pool/sao/hpc/rtdc/vX
137.051 GB  22.38 % /pool/sao/hpc/rtdc/vX/M100-test-oob-2
120.198 GB  19.63 % /pool/sao/hpc/rtdc/v4.4.0/test2
120.198 GB  19.63 % /pool/sao/hpc/rtdc/v4.4.0/test2-2-9
 83.229 GB  13.59 % /pool/sao/hpc/c7
 83.229 GB  13.59 % /pool/sao/hpc/c7/hpc
 65.280 GB  10.66 % /pool/sao/hpc/sw
 64.235 GB  10.49 % /pool/sao/hpc/rtdc/v4.4.0/test1
 49.594 GB   8.10 % /pool/sao/hpc/sw/intel-cluster-studio
 46.851 GB   7.65 % /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X54.ms
 46.851 GB   7.65 % /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X54.ms/SUBMSS
 43.047 GB   7.03 % /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X220.ms
 43.047 GB   7.03 % /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X220.ms/SUBMSS
 42.261 GB   6.90 % /pool/sao/hpc/c7/hpc/sw
 36.409 GB   5.95 % /pool/sao/hpc/c7/hpc/tests
 30.965 GB   5.06 % /pool/sao/hpc/c7/hpc/sw/intel-cluster-studio
 23.576 GB   3.85 % /pool/sao/hpc/rtdc/v4.4.0/test2/X54.ms
 23.576 GB   3.85 % /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X54.ms
 23.576 GB   3.85 % /pool/sao/hpc/rtdc/v4.4.0/test2/X54.ms/SUBMSS
 23.576 GB   3.85 % /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X54.ms/SUBMSS
 22.931 GB   3.74 % /pool/sao/hpc/rtdc/v4.4.0/test2/X220.ms
 22.931 GB   3.74 % /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X220.ms
report in /tmp/dus.pool.sao.hpc.hpc
You can rerun dus-report.pl
with different options on the same intermediate file, like
% dus-report.pl -n 999 -pc 1 /tmp/dus.pool.sao.hpc.hpc
to get a different report, for instance one that lists directories down to 1%. Use
% dus-report.pl -help
to see how else you can use it.
The tool disk-usage.pl
runs df
and presents its output in a more friendly format:
% disk-usage.pl
Filesystem                          Size     Used    Avail Capacity  Mounted on
NetApp.1:/vol_home/hpc             2.70T    1.02T    1.68T  38%/17%  /home/hpc
NetApp.1:/vol_sao                 20.00T   14.91T    5.09T  75%/34%  /pool/sao
NetApp.1:/vol_sao_atmos           18.00T   13.80T    4.20T  77%/7%   /pool/sao_atmos
NetApp.1:/vol_sao_rtdc             1.00T  167.51G  856.49G  17%/1%   /pool/sao_rtdc
NetApp.1:/vol_genomics            20.00T   17.12T    2.88T  86%/57%  /pool/genomics
NetApp.1:/vol_data_genomics        1.90T   10.78G    1.89T   1%/3%   /data/genomics
NetApp.1:/vol_data/sao             3.60T    3.27T  336.17G  91%/37%  /data/sao
NetApp.1:/vol_data/admin           3.60T    3.27T  336.17G  91%/37%  /data/admin
NetApp.1:/vol_old_pools/cluster0  17.00T   15.60T    1.40T  92%/22%  /pool/cluster0
NetApp.1:/vol_old_pools/cluster1  17.00T   15.60T    1.40T  92%/22%  /pool/cluster1
NetApp.1:/vol_old_pools/cluster2  17.00T   15.60T    1.40T  92%/22%  /pool/cluster2
NetApp.1:/vol_old_pools/cluster3  17.00T   15.60T    1.40T  92%/22%  /pool/cluster3
NetApp.1:/vol_old_pools/cluster4  17.00T   15.60T    1.40T  92%/22%  /pool/cluster4
NetApp.1:/vol_old_pools/cluster5  17.00T   15.60T    1.40T  92%/22%  /pool/cluster5
NetApp.1:/vol_old_pools/cluster7  17.00T   15.60T    1.40T  92%/22%  /pool/cluster7
NetApp.1:/vol_sylvain             14.18T   12.75T    1.43T  90%/51%  /pool/sylvain
NetApp.1:/vol_scratch/genomics    25.00T   19.53T    5.47T  79%/5%   /scratch/genomics
NetApp.1:/vol_scratch/sao         25.00T   19.53T    5.47T  79%/5%   /scratch/sao
NetApp.5:/vol/a2v1/genomics01     20.00T    0.05G   20.00T   1%/1%   /scratch/genomics01
NetApp.5:/vol/a2v1/sao01          20.00T    0.05G   20.00T   1%/1%   /scratch/sao01
Use
% disk-usage.pl -help
to see how else to use it. You can, for instance, get the disk quotas and the max size with:
% disk-usage.pl -quotas
                                                                    quotas:  disk usage  #(inodes)   max
Filesystem                          Size     Used    Avail Capacity          soft/hard   soft/hard  size  Mounted on
NetApp.1:/vol_home/hpc             2.70T    1.02T    1.68T  38%/17%           50G/100G   1.8M/2.0M    8T  /home/hpc
NetApp.1:/vol_sao                 20.00T   14.91T    5.09T  75%/34%          1.8T/2.0T   4.0M/5.0M   60T  /pool/sao
NetApp.1:/vol_sao_atmos           18.00T   13.80T    4.20T  77%/7%           8.0T/10.T   9.0M/10.M   22T  /pool/sao_atmos
NetApp.1:/vol_sao_rtdc             1.00T  167.51G  856.49G  17%/1%           2.8T/3.0T   2.5M/3.0M   11T  /pool/sao_rtdc
NetApp.1:/vol_genomics            20.00T   17.12T    2.88T  86%/57%          1.8T/2.0T   1.8M/2.0M   50T  /pool/genomics
NetApp.1:/vol_data_genomics        1.90T   10.78G    1.89T   1%/3%           1.0T/2.0T   1.0M/2.0M   10T  /data/genomics
NetApp.1:/vol_data/*               3.60T    3.27T  336.17G  91%/37%          2.8T/3.0T   35.M/40.M   20T  /data/sao:admin
NetApp.1:/vol_old_pools/cluster*  17.00T   15.60T    1.40T  92%/22%                                       /pool/cluster0:1:2:3:4:5:7
NetApp.1:/vol_sylvain             14.18T   12.75T    1.43T  90%/51%          14.T/15.T   63.M/65.M   15T  /pool/sylvain
NetApp.1:/vol_scratch/*           25.00T   19.53T    5.47T  79%/5%           2.8T/3.0T   1.0M/1.0M   50T  /scratch/genomics:sao
NetApp.5:/vol/a2v1/*              20.00T    0.05G   20.00T   1%/1%           14.T/15.T   2.0M/2.0M   20T  /scratch/genomics01:sao01
The Linux command quota works (as of August 2016) with the NetApp filers (old and new). For example, quota -s reports your quotas:
% quota -s
Disk quotas for user hpc (uid 7235):
     Filesystem                      blocks   quota   limit   grace   files   quota   limit   grace
10.61.10.1:/vol_home                  2203M  51200M    100G           46433   1800k   2000k
10.61.10.1:/vol_sao                   1499G   1946G   2048G           1420k   4000k   5000k
10.61.10.1:/vol_scratch/genomics     48501M   2048G   4096G            1263   9000k  10000k
10.61.200.5:/vol/a2v1/genomics01       108M  14336G  15360G             613  10000k  12000k
10.61.10.1:/vol_home/hydra-2/dingdj   2203M  51200M    100G           46433   1800k   2000k
The -s option stands for --human-readable, hence the 'k' and 'G'. The command
% quota -q
will print only information on the filesystems where your usage is over the quota (see man quota).
We compile a daily quota report and provide tools to parse the quota report.
The daily quota report is written around 3:25am in a file called quota_report_MMDDYY, located in /share/apps/adm/reports
.
The string MMDDYY corresponds to the date of the report: e.g., "012016" for the Jan 20, 2016 report.
The format of this file is not very user friendly and users are listed by their user ID.
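For example, to find the most recent report file (path and naming convention as described above):

% ls -t /share/apps/adm/reports/quota_report_* | head -1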
The Hydra-specific home-grown tool parse-quota-report.pl (this requires that you load the tool/local module) will parse the quota report file and produce a more concise report.
For example:
% parse-quota-report.pl
Disk quota report: show usage 75% above quota
as of Wed Jan 20 03:43:05 2016

volume = /pool/genomics/ (vol_genomics)  hard quota = 2.0TB or 5M files
                            -- disk --    -- #files --
name                       usage %quota   usage %quota
------------------------- ------- ------ ------ ------
/pool/genomics/savagea      3.34TB 167.0% 23.40M 468.0% *** Anna Savage - NZP
/pool/genomics/tsuchiyam    1.67TB  83.5%  0.12M   2.4%     Mirian Tsuchiya,NMNH/Botany
/pool/genomics/sigelem      1.61TB  80.5%  0.00M   0.0%     Erin Sigel - NMNH

volume = /home/ (vol_home)  hard quota = 100.0GB or 2M files
                            -- disk --    -- #files --
name                       usage %quota   usage %quota
------------------------- ------- ------ ------ ------
/home/tonylee              131.5GB 131.5%  0.16M   8.1% *** Tony Lee - SAO/AMP
/home/zhangn                99.4GB  99.4%  1.06M  52.8% *** Ning Zhang,NMNH/VZ

volume = /pool/sao/ (vol_sao)  hard quota = 2.0TB or 5M files
                            -- disk --    -- #files --
name                       usage %quota   usage %quota
------------------------- ------- ------ ------ ------
/pool/sao/gchen             3.67TB 183.5%  0.20M   4.0% *** Guo-Xin Chen - SAO/SSP-AMP
/pool/sao/afoster           2.00TB 100.0%  0.33M   6.7% *** Adam Foster - SAO/HEA
/pool/sao/atripathi         1.87TB  93.5%  0.60M  11.9%     Anjali Tripathi - SAO/AST

volume = /pool/ (vol_sylvain)  hard quota = 15.0TB or 65M files
                            -- disk --    -- #files --
name                       usage %quota   usage %quota
------------------------- ------- ------ ------ ------
/pool/sylvain              12.74TB  84.9% 33.03M  50.8%     Sylvain G. Korzennik,SAO/SSP
reports the disk usage of users who are at or above 75% of their quota.
Or you can check usage for a specific user (like yourself) with
% parse-quota-report.pl -u <username>
for example:
% parse-quota-report.pl -u hpc
Disk quota report: show usage 0% above quota only for hpc
as of Wed Jan 20 03:43:05 2016
                            -- disk --    -- #files --  -hard quota-
name                       usage %quota   usage %quota  disk sp   #f
------------------------- ------- ------ ------ ------ -------- ---
/data/sao/hpc              191.0MB   0.0%  0.00M   0.0%   3.00TB 40M
/home/hpc                    1.6GB   1.6%  0.04M   2.0%  100.0GB  2M
/pool/sao/hpc              606.9GB  29.6%  0.55M  11.0%   2.00TB  5M
/scratch/genomics/hpc       47.2GB   0.3%  0.00M   0.0%  15.00TB 10M
Use
% parse-quota-report.pl -h
for the complete usage info.
Users whose quotas are above the 75% threshold will receive a warning email once a week (issued on Monday mornings). This is only a warning; as long as you are below 100% you are OK.
Users won't be able to write on disks on which they have exceeded their hard limits.
Some of the disks on the NetApp filer have the so-called "snapshot mechanism" enabled (see the quota table above): snapshots let you recover an old version of a file, or a deleted file.

To recover an old version or a deleted file, foo.dat, that was (for example) in /data/genomics/frandsen/important/results/, restore it under its original name with

% cd /data/genomics/.snapshot/XXXX/frandsen/important/results
% cp -pi foo.dat /data/genomics/frandsen/important/results/foo.dat

or restore it under a different name (here old-foo.dat) with

% cd /data/genomics/.snapshot/XXXX/frandsen/important/results
% cp -pi foo.dat /data/genomics/frandsen/important/results/old-foo.dat
The "-p" option preserves the file's time stamps and the "-i" option prevents overwriting an existing file. "XXXX" is to be replaced by either:

hourly.YYYY-MM-DD_HHMM
daily.YYYY-MM-DD_0010
weekly.YYYY-MM-DD_0015

where YYYY-MM-DD is a date specification (i.e., 2015-11-01).

Files under .snapshot are read-only: they can be copied (with cp, tar or rsync), but they cannot be moved (mv) or deleted (rm).
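To see which snapshots are currently available (i.e., the valid values of XXXX), you can simply list the .snapshot directory of the corresponding partition, for example:

% ls /data/genomics/.snapshot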
In order to maintain free disk space on the public disks, we are about to implement disk scrubbing: removing old files and old empty directories.
We will remove old files and old empty directories from a set of disks on a regular basis.
Old empty directories will be deleted; old files will be, at first, moved away to a staging location, then deleted.
Since the scrubber moves old files away at first and deletes them later, there will be a grace period between the move and the deletion, to allow users to request that some scrubbed files be restored.
Requests to restore scrubbed files should be rare and reasonable, and can only be granted while the scrubbed files have not yet been deleted.
Past the grace period, the files are no longer available.
Users who want their scrubbed files restored will have to act promptly.
The following instructions explain how to find out what was scrubbed and how to request that some scrubbed files be restored.
The disks that will be scrubbed are:
/pool/biology - 180 days
/pool/genomics - 180 days
/pool/sao - 180 days
/scratch/genomics - 90 days
/scratch/genomics01 - 90 days
/scratch/sao - 90 days
/scratch/sao01 - 90 days
Before we actually scrub any old files, we will run the scrubber in notification-only mode: it will only tell users which old files would have been scrubbed. Users should adjust their disk usage accordingly.
To use the scrubber tools, load the module:
module load tools/scrubber
To get the list of tools, use:
module help tools/scrubber
To get further information on each tool, use:
man <tool-name>
To view the scrubber report for, e.g., /pool/genomics/frandsenp:
show-scrubber-report /pool/genomics/frandsenp 160721
To list the scrubbed directories:
list-scrubbed-dirs /pool/genomics/frandsenp 160721 [<RE>]
where <RE> is an optional regular expression to limit the printout; without an RE you get the complete list.
To list the scrubbed files:
list-scrubbed-files [-long] /pool/genomics/frandsenp 160721 [<RE>]
where again <RE> is an optional regular expression to limit the printout; without an RE you get the complete list;
the -long option will produce a list that includes the files' age and size.
As a reminder on regular expressions: . means any character, .* means any string of characters, [a-z] means any single character between a and z, ^ means the start of the match, $ means the end of the match, etc. (see gory details here). For example, '^/pool/genomics/blah/project/.*\.log$' matches all the files that end in '.log' under '/pool/genomics/blah/project/'.
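For instance, combining such an expression with the command above, something like the following would list only the scrubbed '.log' files under a (hypothetical) directory of yours:

% list-scrubbed-files /pool/genomics/frandsenp 160721 '^/pool/genomics/frandsenp/.*\.log$'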
If you want some scrubbed files to be restored, for example files that were under /pool/genomics/frandsenp/big-project, you can:
run
list-scrubbed-files /pool/genomics/frandsenp 160721 /pool/genomics/frandsenp/big-project > restore.list
to list all the scrubbed files under 'big-project/' and save the list in restore.list;
edit the file 'restore.list' to trim it, with any text editor;
then verify the list with
verify-restore-list /pool/genomics/frandsenp 160721 restore.list
or with the -d option:
verify-restore-list -d /pool/genomics/frandsenp 160721 restore.list
TBD, since the files are not yet scrubbed
We are in the process of adding local SSDs (solid state disks) to some compute nodes (not all - stay tuned), and
for special cases it may be OK to use disk space local to the compute node.
Until we post detailed instructions here, you should contact us if your jobs can benefit from either SSDs or local disk space.
Last Updated SGK/PBF.