
  1. What Disks to Use
  2. How to Copy Files to/from Hydra
  3. Disk Quotas
  4. Disk Configuration
  5. Disk Usage Monitoring
  6. NetApp Snapshots: How to Recover Old or Deleted Files
  7. Public Disks Scrubber
  8. SSD and Local Disk Space

1. What Disks to Use

All the useful disk space available on the cluster is mounted off a dedicated device (aka appliance or server), a NetApp filer.

The available disk space is divided into several areas (aka partitions):

  • a small partition for basic configuration files and small storage, the /home partition,
  • a set of medium-size partitions, one for SAO users and one for non-SAO users, the /data partitions,
  • a set of large partitions, one for SAO users and one for non-SAO users, the /pool partitions,
  • a second set of large partitions for temporary storage, the /scratch partitions.

Note that:

  • we impose quotas: limits on how much can be stored on each partition by each user, and
  • we monitor disk usage;
  • /home should not be used to keep large files, use /pool instead;
  • /scratch is for temporary storage (i.e., while a job is running), it can be used if you need more disk space than what you can store under /pool.
    We are about to implement an automatic scrubber: old stuff will be deleted to make space.
  • None of the disks on the cluster are for long-term storage; please copy your results back to your "home" computer and
    delete what you don't need any longer.
  • While the disk system on Hydra is highly reliable, none of the disks on the cluster are backed up.
  • Once you reach your quota you won't be able to write anything on that partition until you delete stuff.
  • We are in the process of adding local SSDs (solid state disks) to some compute nodes (not all - stay tuned), and
    for special cases it may be OK to use disk space local to the compute node.

    Contact us if your jobs can benefit from SSDs or local disk space.

2. How to Copy Files to/from Hydra

(warning) When copying files to Hydra, especially large ones, be sure to copy them to the appropriate disk (and not to /home or /tmp).

2a. To/From Another Linux Machine

  • You can copy files to/from hydra using scp, sftp or rsync:
    • to Hydra you can only copy from trusted hosts (computers on SI or SAO/CfA trusted network, or VPN'ed),
    • from Hydra to any host that allows external ssh connections (if you can ssh from Hydra to it, you can scp, sftp and rsync to it).

  • For large transfers (over 70GB, sustained), we ask users to use rsync and to limit the bandwidth to 20 MB/s (70 GB/h) with the "--bwlimit=" option:
    • rsync --bwlimit=20000 ...
      (see the sketch after this list). If this poses a problem, contact us (Sylvain or Paul).

    • Baseline transfer rate from SAO to HDC (the Herndon data center) is around 300 Mbps, single thread, i.e., ~36 MB/s or ~126 GB/h (as of Aug. 2016).
      The link saturates near 500 Mbps (50% of 1 Gbps), i.e., ~62 MB/s or ~220 GB/h.

  • Remember that rm, mv and cp can also create a high I/O load, so
    • limit your concurrent I/Os: do not start a slew of I/Os at the same time, and
    • serialize your I/Os as much as possible: run them one after the other.
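
The following is a minimal sketch of such a transfer; the host name (hydra.si.edu), user name and paths are placeholders, so substitute the login node you normally ssh to and your own directories:

    # push a local directory to your /pool area on Hydra, limiting bandwidth to 20 MB/s
    rsync -av --bwlimit=20000 my_results/ username@hydra.si.edu:/pool/genomics/username/my_results/

    # pull a single file back from Hydra
    scp username@hydra.si.edu:/pool/genomics/username/my_results/final.dat .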

NOTE for SAO Users:

(lightbulb) Access from the "outside" to SAO/CfA hosts (computers) is limited to the border control hosts (login.cfa.harvard.edu and pogoX.cfa.harvard.edu); instructions for tunneling via these hosts are documented separately.

2b. From a Computer Running MacOS

A trusted or VPN'd computer running MacOS can use scp, sftp or rsync:

  • Open the Terminal application by going to /Applications/Utilities and finding Terminal.app
  • At the prompt, use scp, sftp or rsync, after cd'ing to the right place.
  • For large transfers, limit the bandwidth with "rsync --bwlimit=4000", as shown below.
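
For example, a minimal sketch of pulling results from Hydra to your Mac (the host name, user name and paths are placeholders):

    # in Terminal, after cd'ing to the destination folder
    rsync -av --bwlimit=4000 username@hydra.si.edu:/pool/sao/username/results/ ./results/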


Alternatively you can use a GUI based ssh/scp compatible tool like Cyberduck.

You will still most likely need to run VPN.

2c. From a Computer Running Windows

(grey lightbulb) You can use scp, sftp or rsync if you install Cygwin - note that Cygwin includes an X11 server.

Alternatively you can use a GUI based ssh/scp compatible tool like FileZilla, WinSCP or Cyberduck.

You will still most likely need to run VPN.

2d. Using Globus

(instructions missing)

2e. Using Dropbox

Files can be exchanged with Dropbox using the Dropbox-Uploader script: load the tools/dropbox_uploader module and run the dropbox or dropbox_uploader.sh script. Running this script for the first time will give instructions on how to configure your Dropbox account and create a ~/.dropbox_uploader config file with the authentication information.

Using this method will not sync your Dropbox, but will allow you to upload/download specific files.
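
A minimal sketch, assuming you only need to move individual files (the file name and the Dropbox-side path are placeholders):

    module load tools/dropbox_uploader
    # the first run prompts you to link your Dropbox account and creates ~/.dropbox_uploader
    dropbox_uploader.sh upload   results.tgz          hydra/results.tgz
    dropbox_uploader.sh download hydra/results.tgz    results.tgz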

3. Disk Quotas

To prevent the disks from filling up and hosing the cluster, there is a limit (aka quota) on

  • how much disk space and
  • how many files (in fact "inodes": the sum of the number of files and the number of directories)
    each user can keep.

Each quota type has a soft limit (warning) and a hard limit (error) and is specific to each partition. In other words, exceeding the soft limit produces warnings, while exceeding the hard limit is not allowed and results in errors.

4. Disk Configuration

 

                  Maximum    Quotas per user (soft/hard)   NetApp
Disk name         capacity   disk space    no. of files    snapshots enabled?  Purpose
----------------  ---------  ------------  --------------  ------------------  -----------------------------------------
/home             10TB       50/100GB      1.8/2M          yes: 4 weeks        For your basic configuration files,
                                                                               scripts and job files - your limit is
                                                                               low, but you can recover old stuff up
                                                                               to 4 weeks back.

/data/sao         10TB       2.8/3.0TB     1/2M            yes: 2 weeks        For important but relatively small files,
                                                                               like final results - your limit is medium,
                                                                               you can recover old stuff, but disk space
                                                                               is not released right away. For SAO users.

/data/genomics    10TB       1.0/2.0TB     1/2M            yes: 2 weeks        Same as /data/sao, but for non-SAO users.

/pool/sao         40TB       1.8/2.0TB     4/5M            no                  For the bulk of your storage - your limit
                                                                               is high and disk space is released right
                                                                               away. For SAO users.

/pool/genomics    40TB       1.8/2.0TB     1.8/2M          no                  Same as /pool/sao, but for non-SAO users.

/pool/biology     5TB        1.8/2.0TB     1.8/2M          no                  Same as /pool/sao, but for non-SAO users.

/scratch          50TB       2.8/3.0TB     1/1M            no, FIFO model      For temporary storage, if you need more
                                                                               than what you can keep in /pool - SAO and
                                                                               non-SAO users should use /scratch/sao or
                                                                               /scratch/genomics, respectively.

/pool/sao_atmos   25TB       8.0/10.0TB    9/10M           no                  Project-specific disk (SAO/ATMOS).

/pool/sao_rtdc    10TB       2.8/3.0TB     2.5/3.0M        no                  Project-specific disk (SAO/RTDC).

/pool/sylvain     15TB       14/15TB       63/65M          no                  Project-specific disk (SAO/Sylvain).

Notes

  • The notation
    • 2.8/3.0TB means that the soft limit is 2.8TB and the hard limit is 3.0TB of disk space, while
    • 4/5M means that the soft limit is 4 million inodes and the hard limit is 5 million.

  • It is inefficient to store a slew of small files; if you do, you may reach your inode quota before your space quota.
    • Some of the disk monitoring tools show the inode usage.
    • If your inode usage percentage is higher than your disk space usage percentage, your disk usage is inefficient:
      consider archiving your files into zip or tar-compressed sets (see the sketch after these notes).

  • While some of the tools you use may force you to be inefficient while jobs are running, you should remember to
    • remove useless files when jobs have completed,
    • compress files that can benefit from compression (with gzip, bzip2 or compress), and
    • archive a slew of files into a zip or a tar-compressed set, as follows:
         % zip -r archive.zip dir/
      or
         % tar -czf archive.tgz dir/
      Both examples archive the content of the directory dir/ into a single zip or tgz file. You can then delete the content of dir/ with
         % rm -rf dir/
  • You can unpack each type of archive with
       % unzip archive.zip
    or
       % tar xf archive.tgz

  • The sizes of the partitions (aka the various disks) on the NetApp will "auto-grow" until they reach the listed maximum capacity,
    so the size shown by traditional Un*x commands like df does not necessarily reflect the maximum size.

  • Once we secure more space for /scratch, we will implement a FIFO (first in first out) model, where old files are deleted without warning to make space.
    • There will be a minimum age limit, meaning that only files older than (let's say) 3 months will be deleted.
    • Older files will be deleted before newer ones (FIFO), and
    • we will run the scrubber at regular intervals.
  • In the meantime, we ask you to remove from /scratch files that you do not need for active jobs.

  • For projects that want dedicated disk space, such space can be secured with project-specific funds when we expand the disk farm (contact us).
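
As a minimal sketch of the inode-vs-space check mentioned in the notes above (the partition name is just an example), compare the usage percentages reported by df with and without -i:

    # space usage (Use%) vs. inode usage (IUse%) on a given partition
    df -h /pool/genomics
    df -i /pool/genomics
    # if IUse% is much higher than Use%, you are storing many small files:
    # archive them (zip -r or tar -czf, as shown above) and delete the originals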

5. Disk Monitoring

The following tools can be used to monitor your disk usage.

  • You can use the following Un*x commands:

    du    show disk use
    df    show disk free

    or

  • you can use Hydra-specific home-grown tools (these require that you load the tool/local module):

    dus-report.pl    run du and parse its output into a more user-friendly format
    disk-usage.pl    run df and parse its output into a more user-friendly format
  • You can also view the disk status at the cluster status web pages, hosted either

    • at cfa.harvard.edu,
      or
    • at si.edu.
    Each site shows the disk usage and a quota report, compiled 4x and 1x a day respectively, and has links to plots of disk usage vs. time.
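
For example, to make the home-grown tools available in your session, load the module first (the -help option prints each tool's usage):

    module load tool/local
    dus-report.pl -help
    disk-usage.pl -help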

Disk usage

The output of du can be very long and confusing. It is best used with the option "-hs" to show the sum ("-s") and to print it in a human readable format ("-h").

(warning) If there are a lot of files/directories, du can take a while to complete.

(lightbulb) For example:

% du -sh dir/
136M    dir/

 

The output of df can be very long and confusing.

(lightbulb) You can use it to query a specific partition and get the output in a human readable format ("-h"), for example:

% df -h /pool/sao
Filesystem           Size  Used Avail Use% Mounted on
10.61.10.1:/vol_sao   20T   15T  5.1T  75% /pool/sao

 

You can compile the output of du into a more useful report with the dus-report.pl tool. This tool will run du for you (can take a while) and parse its output to produce a more concise/useful report.

For example, to see the directories that hold the most stuff in /pool/sao/hpc:

% dus-report.pl /pool/sao/hpc
 612.372 GB            /pool/sao/hpc
                       capac.   20.000 TB (75% full), avail.    5.088 TB
 447.026 GB  73.00 %   /pool/sao/hpc/rtdc
 308.076 GB  50.31 %   /pool/sao/hpc/rtdc/v4.4.0
 138.950 GB  22.69 %   /pool/sao/hpc/rtdc/vX
 137.051 GB  22.38 %   /pool/sao/hpc/rtdc/vX/M100-test-oob-2
 120.198 GB  19.63 %   /pool/sao/hpc/rtdc/v4.4.0/test2
 120.198 GB  19.63 %   /pool/sao/hpc/rtdc/v4.4.0/test2-2-9
  83.229 GB  13.59 %   /pool/sao/hpc/c7
  83.229 GB  13.59 %   /pool/sao/hpc/c7/hpc
  65.280 GB  10.66 %   /pool/sao/hpc/sw
  64.235 GB  10.49 %   /pool/sao/hpc/rtdc/v4.4.0/test1
  49.594 GB   8.10 %   /pool/sao/hpc/sw/intel-cluster-studio
  46.851 GB   7.65 %   /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X54.ms
  46.851 GB   7.65 %   /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X54.ms/SUBMSS
  43.047 GB   7.03 %   /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X220.ms
  43.047 GB   7.03 %   /pool/sao/hpc/rtdc/vX/M100-test-oob-2/X220.ms/SUBMSS
  42.261 GB   6.90 %   /pool/sao/hpc/c7/hpc/sw
  36.409 GB   5.95 %   /pool/sao/hpc/c7/hpc/tests
  30.965 GB   5.06 %   /pool/sao/hpc/c7/hpc/sw/intel-cluster-studio
  23.576 GB   3.85 %   /pool/sao/hpc/rtdc/v4.4.0/test2/X54.ms
  23.576 GB   3.85 %   /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X54.ms
  23.576 GB   3.85 %   /pool/sao/hpc/rtdc/v4.4.0/test2/X54.ms/SUBMSS
  23.576 GB   3.85 %   /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X54.ms/SUBMSS
  22.931 GB   3.74 %   /pool/sao/hpc/rtdc/v4.4.0/test2/X220.ms
  22.931 GB   3.74 %   /pool/sao/hpc/rtdc/v4.4.0/test2-2-9/X220.ms
report in /tmp/dus.pool.sao.hpc.hpc

You can rerun dus-report.pl with different options on the same intermediate file, like

   % dus-report.pl -n 999 -pc 1 /tmp/dus.pool.sao.hpc.hpc

to get a different report, e.g., to see the list down to 1%. Use

   % dus-report.pl -help 

to see how else you can use it.

 

The tool disk-usage.pl runs df and presents its output in a more friendly format:

% disk-usage.pl 
Filesystem                              Size     Used    Avail Capacity  Mounted on
NetApp.1:/vol_home/hpc                 2.70T    1.02T    1.68T  38%/17%  /home/hpc
NetApp.1:/vol_sao                     20.00T   14.91T    5.09T  75%/34%  /pool/sao
NetApp.1:/vol_sao_atmos               18.00T   13.80T    4.20T  77%/7%   /pool/sao_atmos
NetApp.1:/vol_sao_rtdc                 1.00T  167.51G  856.49G  17%/1%   /pool/sao_rtdc
NetApp.1:/vol_genomics                20.00T   17.12T    2.88T  86%/57%  /pool/genomics
NetApp.1:/vol_data_genomics            1.90T   10.78G    1.89T   1%/3%   /data/genomics
NetApp.1:/vol_data/sao                 3.60T    3.27T  336.17G  91%/37%  /data/sao
NetApp.1:/vol_data/admin               3.60T    3.27T  336.17G  91%/37%  /data/admin
NetApp.1:/vol_old_pools/cluster0      17.00T   15.60T    1.40T  92%/22%  /pool/cluster0
NetApp.1:/vol_old_pools/cluster1      17.00T   15.60T    1.40T  92%/22%  /pool/cluster1
NetApp.1:/vol_old_pools/cluster2      17.00T   15.60T    1.40T  92%/22%  /pool/cluster2
NetApp.1:/vol_old_pools/cluster3      17.00T   15.60T    1.40T  92%/22%  /pool/cluster3
NetApp.1:/vol_old_pools/cluster4      17.00T   15.60T    1.40T  92%/22%  /pool/cluster4
NetApp.1:/vol_old_pools/cluster5      17.00T   15.60T    1.40T  92%/22%  /pool/cluster5
NetApp.1:/vol_old_pools/cluster7      17.00T   15.60T    1.40T  92%/22%  /pool/cluster7
NetApp.1:/vol_sylvain                 14.18T   12.75T    1.43T  90%/51%  /pool/sylvain
NetApp.1:/vol_scratch/genomics        25.00T   19.53T    5.47T  79%/5%   /scratch/genomics
NetApp.1:/vol_scratch/sao             25.00T   19.53T    5.47T  79%/5%   /scratch/sao
NetApp.5:/vol/a2v1/genomics01         20.00T    0.05G   20.00T   1%/1%   /scratch/genomics01
NetApp.5:/vol/a2v1/sao01              20.00T    0.05G   20.00T   1%/1%   /scratch/sao01

Use

   % disk-usage.pl -help

to see how else to use it. You can, for instance, get the disk quotas and the max size with:

% disk-usage.pl -quotas
                                                                 quotas: disk usage #(inodes)  max 
Filesystem                              Size     Used    Avail Capacity  soft/hard  soft/hard size Mounted on
NetApp.1:/vol_home/hpc                 2.70T    1.02T    1.68T  38%/17%   50G/100G  1.8M/2.0M   8T /home/hpc
NetApp.1:/vol_sao                     20.00T   14.91T    5.09T  75%/34%  1.8T/2.0T  4.0M/5.0M  60T /pool/sao
NetApp.1:/vol_sao_atmos               18.00T   13.80T    4.20T  77%/7%   8.0T/10.T  9.0M/10.M  22T /pool/sao_atmos
NetApp.1:/vol_sao_rtdc                 1.00T  167.51G  856.49G  17%/1%   2.8T/3.0T  2.5M/3.0M  11T /pool/sao_rtdc
NetApp.1:/vol_genomics                20.00T   17.12T    2.88T  86%/57%  1.8T/2.0T  1.8M/2.0M  50T /pool/genomics
NetApp.1:/vol_data_genomics            1.90T   10.78G    1.89T   1%/3%   1.0T/2.0T  1.0M/2.0M  10T /data/genomics
NetApp.1:/vol_data/*                   3.60T    3.27T  336.17G  91%/37%  2.8T/3.0T  35.M/40.M  20T /data/sao:admin
NetApp.1:/vol_old_pools/cluster*      17.00T   15.60T    1.40T  92%/22%                            /pool/cluster0:1:2:3:4:5:7
NetApp.1:/vol_sylvain                 14.18T   12.75T    1.43T  90%/51%  14.T/15.T  63.M/65.M  15T /pool/sylvain
NetApp.1:/vol_scratch/*               25.00T   19.53T    5.47T  79%/5%   2.8T/3.0T  1.0M/1.0M  50T /scratch/genomics:sao
NetApp.5:/vol/a2v1/*                  20.00T    0.05G   20.00T   1%/1%   14.T/15.T  2.0M/2.0M  20T /scratch/genomics01:sao01

Monitoring Quota Usage

The Linux command quota works (as of August 2016) with the NetApp filers (old and new).

For example:

% quota -s
Disk quotas for user hpc (uid 7235): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
10.61.10.1:/vol_home
                  2203M  51200M    100G           46433   1800k   2000k        
10.61.10.1:/vol_sao
                  1499G   1946G   2048G           1420k   4000k   5000k        
10.61.10.1:/vol_scratch/genomics
                 48501M   2048G   4096G            1263   9000k  10000k        
10.61.200.5:/vol/a2v1/genomics01
                   108M  14336G  15360G             613  10000k  12000k        
10.61.10.1:/vol_home/hydra-2/dingdj
                  2203M  51200M    100G           46433   1800k   2000k        

reports your quotas. The -s stands for --human-readable, hence the 'k' and 'G'. While

    % quota -q

will print information only for filesystems where your usage is over quota (see man quota).

Other Tools

We compile a daily quota report and provide tools to parse the quota report.

The daily quota report is written around 3:25am in a file called quota_report_MMDDYY, located in /share/apps/adm/reports.

The string MMDDYY corresponds to the date of the report: "012016" for Jan 20 2016 report.

The format of this file is not very user friendly and users are listed by their user ID.

 

The Hydra-specific home-grown tool parse-quota-report.pl (it requires that you load the tool/local module) will parse the quota report file and produce a more concise report.

For example:

% parse-quota-report.pl
Disk quota report: show usage 75% above quota as of Wed Jan 20 03:43:05 2016

volume = /pool/genomics/ (vol_genomics) hard quota =  2.0TB or 5M files
                          --  disk   --     --  #files --
name                      usage   %quota    usage  %quota 
------------------------- ------- ------    ------ ------
/pool/genomics/savagea     3.34TB 167.0%    23.40M 468.0% *** Anna Savage - NZP
/pool/genomics/tsuchiyam   1.67TB  83.5%     0.12M   2.4%     Mirian Tsuchiya,NMNH/Botany
/pool/genomics/sigelem     1.61TB  80.5%     0.00M   0.0%     Erin Sigel - NMNH

volume = /home/ (vol_home) hard quota = 100.0GB or 2M files
                          --  disk   --     --  #files --
name                      usage   %quota    usage  %quota 
------------------------- ------- ------    ------ ------
/home/tonylee             131.5GB 131.5%     0.16M   8.1% *** Tony Lee - SAO/AMP
/home/zhangn               99.4GB  99.4%     1.06M  52.8% *** Ning Zhang,NMNH/VZ

volume = /pool/sao/ (vol_sao) hard quota =  2.0TB or 5M files
                          --  disk   --     --  #files --
name                      usage   %quota    usage  %quota 
------------------------- ------- ------    ------ ------
/pool/sao/gchen            3.67TB 183.5%     0.20M   4.0% *** Guo-Xin Chen - SAO/SSP-AMP
/pool/sao/afoster          2.00TB 100.0%     0.33M   6.7% *** Adam Foster - SAO/HEA
/pool/sao/atripathi        1.87TB  93.5%     0.60M  11.9%     Anjali Tripathi - SAO/AST

volume = /pool/ (vol_sylvain) hard quota = 15.0TB or 65M files
                          --  disk   --     --  #files --
name                      usage   %quota    usage  %quota 
------------------------- ------- ------    ------ ------
/pool/sylvain             12.74TB  84.9%    33.03M  50.8%     Sylvain G. Korzennik,SAO/SSP

reports disk usage that is above 75% of the quota.

Or you can check the usage for a specific user (like yourself) with

   % parse-quota-report.pl -u <username>

for example:

% parse-quota-report.pl -u hpc
Disk quota report: show usage 0% above quota only for hpc as of Wed Jan 20 03:43:05 2016
                          --  disk   --     --  #files --    -hard quota-
name                      usage   %quota    usage  %quota    disk sp  #f
------------------------- ------- ------    ------ ------    -------- ---
/data/sao/hpc             191.0MB   0.0%     0.00M   0.0%      3.00TB 40M
/home/hpc                   1.6GB   1.6%     0.04M   2.0%     100.0GB  2M
/pool/sao/hpc             606.9GB  29.6%     0.55M  11.0%      2.00TB  5M
/scratch/genomics/hpc      47.2GB   0.3%     0.00M   0.0%     15.00TB 10M

Use

   % parse-quota-report.pl -h

for the complete usage info.

 

Users whose usage is above the 75% threshold will receive a warning email once a week (issued on Monday mornings). This is only a warning; as long as you are below 100% you are OK.

Users won't be able to write on disks on which they have exceeded their hard limits.

6. NetApp Snapshots: How to Recover Old or Deleted Files.

Some of the disks on the NetApp filer have the so-called "snapshot mechanism" enabled:

  • This allows users to recover deleted files or access an older version of a file.
  • Indeed, the NetApp filer makes a "snapshot" copy of the file system (the content of the disk) every so often and keeps these snapshots up to a given age.
  • So if we enable hourly snapshots and set a two-week retention, you can recover a file as it was hours ago, days ago or weeks ago, but only up to two weeks ago.
  • The drawback of snapshots is that when files are deleted, the disk space is not freed until the deleted files age out of the snapshots, i.e., 2 or 4 weeks later.

How to Use the NetApp Snapshots:

To recover an old version or a deleted file, foo.dat, that was (for example) in /data/genomics/frandsen/important/results/:

  • If the file was deleted:
   % cd /data/genomics/.snapshot/XXXX/frandsen/important/results
   % cp -pi foo.dat /data/genomics/frandsen/important/results/foo.dat
  • If you want to recover an old version:
   % cd /data/genomics/.snapshot/XXXX/frandsen/important/results
   % cp -pi foo.dat /data/genomics/frandsen/important/results/old-foo.dat
  • The "-p" will preserve the file creation date and the "-i" will prevent overwriting an existing file. 
  • The "XXXX" is to be replaced by either:
    • hourly.YYYY-MM-DD_HHMM
    • daily.YYYY-MM-DD_0010
    • weekly.YYYY-MM-DD_0015
      where YYYY-MM-DD is a date specification (e.g., 2015-11-01)
  • The files under .snapshot are read-only:
    • they can be recovered using cp, tar or rsync; but
    • they cannot be moved (mv) or deleted (rm).
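
For example, assuming a daily snapshot from Aug 15 2016 exists (the snapshot name below is illustrative; list the .snapshot directory to see which snapshots are actually available):

   % ls /data/genomics/.snapshot
   % cd /data/genomics/.snapshot/daily.2016-08-15_0010/frandsen/important/results
   % cp -pi foo.dat /data/genomics/frandsen/important/results/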

7. Public Disks Scrubber

In order to maintain free disk space on the public disks, we are about to implement disk scrubbing: removing old files and old empty directories.

What is Scrubbing?

We will remove old files and old empty directories from a set of disks on a regular basis.

Old empty directories will be deleted; old files will at first be moved to a staging location, and later deleted.

 

NOTE

  • At first, we will email users what files & directories would have been scrubbed, but we will not do any scrubbing.
  • This will provide a transition period for users to adjust their disk usage.
  • Eventually we will notify the users and turn on the scrubber: at that point these old files and directories will go away!

 

Since the scrubber first moves old files away and deletes them later, there will be a grace period between the move and the deletion during which users can request that some scrubbed files be restored.

Requests to restore scrubbed files should be rare and reasonable, and can only be granted while the scrubbed files have not yet been deleted.

Past the grace period, the files are no longer available.

Users who want their scrubbed files restored will have to act promptly.

 

The following instructions explain

  • What disks will be scrubbed.
  • What to do to access the scrubber's tools.
  • How to
    • look at the scrubber's report;
    • find out which old empty directories were scrubbed;
    • find out which old files were scrubbed;
    • create a recovery request.

What disks will be scrubbed?

The disks that will be scrubbed are:

  • /pool/biology       - 180 days
  • /pool/genomics      - 180 days
  • /pool/sao           - 180 days
  • /scratch/genomics   -  90 days
  • /scratch/genomics01 -  90 days
  • /scratch/sao        -  90 days
  • /scratch/sao01      -  90 days

(warning) Before we actually scrub old files, we will run the scrubber only to notify users which old files would have been scrubbed. Users should adjust their disk usage accordingly. (warning)

How to access the scrubber's tools

  • load the module:

      module load tools/scrubber

  • to get the list of tools, use:

      module help tools/scrubber 

  • to get the man page, accessible after loading the module, use:

       man <tool-name>

How to look at the scrubber's results

  • To look at the report for what was scrubbed on Jul 21 2016 under /pool/genomics/frandsenp:

       show-scrubber-report /pool/genomics/frandsenp 160721

  • To find out which old empty directories were scrubbed:

       list-scrubbed-dirs  /pool/genomics/frandsenp 160721 [<RE>]

 where the <RE> is an optional regular expression to limit the printout; without an RE you get the complete list.

  • To find out which old files were scrubbed:

       list-scrubbed-files [-long] /pool/genomics/frandsenp 160721 [<RE>]

 where again the <RE> is an optional regular expression to limit the printout; without an RE you get the complete list;

 the -long option will produce a list that includes the files' age and size.

  • (lightbulb) The <RE> (regular expressions) are Perl-style REs:
    • .     means any character,
    • .*    means any sequence of characters,
    • [a-z] means any single character between a and z,
    • ^     means start of match,
    • $     means end of match, etc. (see the Perl regular expression documentation for the gory details).
  • for example:

       '^/pool/genomics/blah/project/.*\.log$'  

means all the files that end in '.log' under '/pool/genomics/blah/project/'
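
For example, combining the -long option with an RE (using the same location and date as above; the RE itself is illustrative):

       list-scrubbed-files -long /pool/genomics/frandsenp 160721 '^/pool/genomics/frandsenp/big-project/.*\.log$'

lists, with age and size, only the scrubbed files ending in '.log' under big-project/.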

How to produce a list of files to restore

  • To restore some of the files scrubbed under /pool/genomics/frandsenp/big-project, produce a list of the files to restore as follows:
  1. create a list with
    list-scrubbed-files /pool/genomics/frandsenp 160721 /pool/genomics/frandsenp/big-project > restore.list
     this lists all the scrubbed files under 'big-project/' and saves the list in restore.list

  2. edit the file 'restore.list' with any text editor to trim it,
     
  3. verify with:
    verify-restore-list /pool/genomics/frandsenp  160721 restore.list
    or use
    verify-restore-list -d /pool/genomics/frandsenp  160721 restore.list
      if the verification produced an error.

  4. Only then, and if the verification produced no error, submit your scrubbed-file restoration request as follows:
    •  TBD, since the files are not yet scrubbed

8. SSD and Local Disk Space

We are in the process of adding local SSDs (solid state disks) to some compute nodes (not all - stay tuned), and
for special cases it may be OK to use disk space local to the compute node.

Until we post detailed instructions here, you should contact us if your jobs can benefit from either SSDs or local disk space.


Last Updated SGK/PBF.
