...

The available disk space is divided into several areas (a.k.a. volumes, filesets, or partitions):

  • a small partition for basic configuration files and small storage, the /home partition,
  • a set of medium-size partitions, one for SAO users, one for non-SAO users, the /data partitions,
  • a set of large partitions, one for SAO users, one for non-SAO users, the /pool partitions,
  • a second set of large partitions for temporary storage, the /scratch partitions.
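
Since each area is a separate partition, you can check its size and free space with df. A minimal sketch (the mount points shown are examples; which ones you see depends on your group affiliation):

   % df -h /home /data/sao /pool/sao /scratch/sao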

Note

...

  • we impose quotas: limits on how much can be stored on each disk (partition/volume/fileset) by each user, and
  • we monitor disk usage;
  • /home should not be used to keep large files; use /pool instead;
  • /pool is for active temporary storage (i.e., while a job is running).
  • If you need more disk space or your job(s) perform(s) a lot of I/O, use /scratch.
    • Both partitions (/pool and /scratch) are scrubbed (see below): old stuff is deleted to make sure there is space for active users.
  • None of the disks on the cluster are for long term storage:
    • please copy your results back to your "home" computer (see the sketch after this note) and
    • delete what you don't need any longer.
  • While the disk system on Hydra is highly reliable, none of the disks on the cluster are backed up.
  • Once you reach your quota you won't be able to write anything on that partition until you delete stuff.
  • A few nodes have local SSDs (solid state disks); contact us if your jobs can benefit from SSDs or more local disk space.
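
As a sketch of the copy-back-and-delete step mentioned above (the remote host name and paths are placeholders, not real systems):

   % scp -r /pool/sao/$USER/results myname@my-desktop.example.org:results/
   % rm -rf /pool/sao/$USER/results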

...

  • You can use the following Un*x commands (see the combined example after this list):

    du - show disk use
    df - show disk free

    or

  • you can use the Hydra-specific home-grown tools (these require that you load the tools/local or tools/local+ module):

    dus-report - run du and parse its output into a more user-friendly format
    disk-usage - run df and parse its output into a more user-friendly format


  • You can also view the disk status at the cluster status web pages, either

    • here (at cfa.harvard.edu)
      or
    • here (at si.edu).
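
For example, here is a minimal sketch combining both approaches (the directory and partition names are placeholders, and the two Hydra-specific tools may take options not shown here; use their -h flag for actual usage):

   % du -sh ~/my-project           # total disk use of one directory
   % df -h /scratch/sao            # free space on a given partition
   % module load tools/local
   % dus-report                    # like du, but more readable output
   % disk-usage                    # like df, but more readable output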

...

The Hydra-specific tools (these require that you load the tools/local module):

  • quota+ - show quota values
  • parse-disk-quota-reports - parse quota reports

Examples

...

  • quota+ - show quota values:
% quota+
Limited to user=sylvain (uid 10541):
Mounted on                             Used   Quota   Limit   Grace   Files   Quota   Limit   Grace
----------                          ------- ------- ------- ------- ------- ------- ------- -------
/home                                11.00G  50.00G  100.0G       0  73.13k   2.00M   2.00M       0
/data/sao                             1.92T   7.60T   8.00T       0  37.53M  78.00M  80.00M       0
/pool/sylvain                         8.79T  12.50T  14.00T       0  57.93M  71.00M  75.00M       0
/scratch/sao                         10.00G  11.00T  12.00T       0  24.11M  25.17M  26.21M       0
/scratch/sylvain                      6.63T  50.00T  50.00T       0  94.89M  99.61M  104.9M       0
/store/admin                          1.00G    none    none
/store/sylvain                        8.39T    none    none

Use quota+ -h, or read the man page (man quota+), for the complete usage info.

  • parse-disk-quota-reports will parse the disk quota report file and produce a more concise report:
% parse-disk-quota-reports
Disk quota report: show usage above 85% of quota, (warning when quota > 95%), as of Wed Nov 20 21:00:05 2019.

Volume=NetApp:vol_data_genomics, mounted as /data/genomics
                     --  disk   --     --  #files --     default quota: 512.0GB/1.25M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/data/genomics       512.0GB 100.0%     0.17M  13.4% *** Paul Frandsen, OCIO - frandsenp

Volume=NetApp:vol_data_sao, mounted as /data/admin or /data/nasm or /data/sao
                     --  disk   --     --  #files --     default quota:  2.00TB/5M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/data/admin:nasm:sao  1.88TB  94.0%     0.01M   0.1%     uid=11599

Volume=NetApp:vol_home, mounted as /home
                     --  disk   --     --  #files --     default quota: 100.0GB/2M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/home                 96.5GB  96.5%     0.41M  20.4% *** Roman Kochanov, SAO/AMP - rkochanov
/home                 96.3GB  96.3%     0.12M   6.2% *** Sofia Moschou, SAO/HEA - smoschou
/home                 95.2GB  95.2%     0.11M   5.6% *** Cheryl Lewis Ames, NMNH/IZ - amesc
/home                 95.2GB  95.2%     0.26M  12.8% *** Yanjun (George) Zhou, SAO/SSP - yjzhou
/home                 92.2GB  92.2%     0.80M  40.1%     Taylor Hains, NMNH/VZ - hainst

Volume=NetApp:vol_pool_genomics, mounted as /pool/genomics
                     --  disk   --     --  #files --     default quota:  2.00TB/5M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/pool/genomics        1.71TB  85.5%     1.23M  24.6%     Vanessa Gonzalez, NMNH/LAB - gonzalezv
/pool/genomics        1.70TB  85.0%     1.89M  37.8%     Ying Meng, NMNH - mengy
/pool/genomics        1.45TB  72.5%     4.56M  91.3%     Brett Gonzalez, NMNH - gonzalezb
/pool/genomics       133.9GB   6.5%     4.56M  91.2%     Sarah Lemer, NMNH - lemers

Volume=NetApp:vol_pool_kistlerl, mounted as /pool/kistlerl
                     --  disk   --     --  #files --     default quota: 21.00TB/52M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/pool/kistlerl       18.35TB  87.4%     0.88M   1.7%     Logan Kistler, NMNH/Anthropology - kistlerl

Volume=NetApp:vol_pool_nmnh_ggi, mounted as /pool/nmnh_ggi
                     --  disk   --     --  #files --     default quota: 15.75TB/39M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/pool/nmnh_ggi       14.78TB  93.8%     8.31M  21.3%     Vanessa Gonzalez, NMNH/LAB - gonzalezv

Volume=NetApp:vol_pool_sao, mounted as /pool/nasm or /pool/sao
                     --  disk   --     --  #files --     default quota:  2.00TB/5M
Disk                 usage   %quota    usage  %quota     name, affiliation - username (indiv. quota)
-------------------- ------- ------    ------ ------     -------------------------------------------
/pool/nasm:sao        1.78TB  89.0%     0.16M   3.2%     Guo-Xin Chen, SAO/SSP-AMP - gchen

It reports disk usage only when it is above 85% of the quota.

Use parse-disk-quota-reports -h, or read the man page (man parse-disk-quota-reports), for the complete usage info.

Note

  • Users whose quotas are above the 85% threshold will receive a warning email once a week (issued on Monday mornings).
    • This is only a warning; as long as you are below 100% you are OK.
    • Users won't be able to write on disks on which they have exceeded their hard limits (the sketch below shows one way to find what to delete).
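
If you do exceed a quota, here is a minimal sketch of one way to find the biggest items to delete (the path is a placeholder):

   % cd /pool/sao/$USER
   % du -sh * | sort -rh | head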

...