We have added SSDs (solid-state disks) to a few compute nodes:
- These are fast local disks that can speed up applications that perform a lot of intensive I/O operations, hence such jobs should complete faster when using SSDs.
- Jobs that do not perform intensive I/O should not use the SSDs: these are a scarce, shared resource.
- Since these disks are local to the compute nodes:
  - you cannot see the SSDs from either login node,
  - your job will be able to use the SSD only while the job is running, hence
  - you need to prepare the files needed by a job prior to submitting it, using /pool or /scratch, and
  - request the right amount of SSD disk space: this is the maximum amount of disk space your job will need on the SSD at run-time (similar to the maximum amount of memory it will need).
- Your job script will have to:
  - copy those files to the SSD before processing them,
  - be adapted to use the SSD,
  - upon completion, copy the results from the SSD elsewhere (/pool or /scratch), and
  - delete what you wrote on the SSD.
- If you exceed the amount of disk space you requested, i.e., your quota, your job won't be able to write to the SSD any longer, hence your job script should stop when encountering such an error.
Since we have only a few of them, the SSDs should be used only if your application greatly benefits from using one.
The SSDs are currently only available for the uTxlM.rq queue, a restricted queue. Contact Paul or Sylvain if you want to be authorized to use it.
Since you can't access the SSDs from a login node, you must prepare the data the job will need somewhere else, like on /pool (or on /scratch), before submitting a job.
Like for memory, you need to guesstimate how much SSD space your job will need; you will not be able to use more SSD space than you requested.
Remember, your job will still be able to access the /home, /data, /pool and /scratch disks, hence you don't have to copy everything to the SSD: only the I/O-intensive part of the analysis should use the SSD.
How to Prepare my Data
- Create a subdirectory in /pool (or /scratch) and move or copy the data you will need there, for example (as user smart1):
cd /pool/genomics/smart1
mkdir -p great/project/wild-cat
You now have a directory for this case, into which you copy the I/O-intensive part of the required data set.
- While not required, you can pack these data in a compressed tar-ball:
cd /pool/genomics/smart1/great/project/wild-cat
tar cfz ../wild-cat.tgz .
/pool/genomics/smart1/great/project/wild-cat.tgz now holds your input data set; being compressed, it is likely to be smaller than the content of the directory.
That directory can then be deleted, unless you will need it later.
- Ancillary data and/or configuration files that do not cause intensive I/O can stay in a location under /pool (or /scratch).
How to Adjust my Job Script to Use the SSD
1 Copy the Data to the SSD
- At the top of your job script, load the tools/ssd module and copy or extract your data set as follows:
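For example, a minimal sketch, assuming that the tools/ssd module defines the environment variable $SSD_DIR, pointing to the space allocated to your job on the SSD:
```
# $SSD_DIR is assumed to be set by the tools/ssd module
module load tools/ssd
cd $SSD_DIR
# extract the compressed tar-ball prepared earlier directly onto the SSD
tar xfz /pool/genomics/smart1/great/project/wild-cat.tgz
# or, if you did not create a tar-ball, copy the directory content instead:
#   cp -pR /pool/genomics/smart1/great/project/wild-cat/. $SSD_DIR/
```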
The advantage of the compressed tar-ball is that the .tgz file is likely to be smaller than the content of the directory, hence there is less I/O transfer from the /pool disk, while un-compressing and writing to the SSD is fast.
2 Adjust the Script or a Configuration File
- You need to replace all references to the original location of your data by $SSD_DIR.
- This is easily done at the shell-script level (i.e., for flags/options), but not in a configuration file. For example:
execute -o /pool/genomics/smart1/great/project/wild-cat/result.dat
is replaced by
execute -o $SSD_DIR/result.dat
- Here is a simple trick to modify a configuration file.
Let's assume that your analysis uses a file wow.conf, where, for instance, the full path of some files must be listed.
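For instance (the content and file names below are hypothetical):
```
input_file  = /pool/genomics/smart1/great/project/wild-cat/reads.dat
output_file = /pool/genomics/smart1/great/project/wild-cat/result.dat
```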
You replace the wow.conf file by a wow.gen file as follows:
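Here the full path is replaced by the placeholder XXXX (same hypothetical content as above):
```
input_file  = XXXX/reads.dat
output_file = XXXX/result.dat
```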
Then create the wow.conf file from the wow.gen file at run-time by adding the following to the job script:
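A one-line sketch, using = as the sed delimiter since $SSD_DIR contains slashes:
```
sed "s=XXXX=$SSD_DIR=g" wow.gen > wow.conf
```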
As long as XXXX is not used for anything else, this will replace every occurrence of XXXX by the value of the environment variable $SSD_DIR.
3 Run the Analysis
- With your data copied to the SSD and with your commands and configuration files adjusted to use the SSD, run your analysis.
4 Copy the Results from the SSD
At the end of the job script, you must add instructions to copy the results of your analysis back to /pool (or /scratch).
If/when the results are easily identifiable, you can use the commands mv and find; here are a few examples:
- Move the directory where all the results are stored and the log file, delete the rest:
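For example (a sketch: the results/ directory and log-file names are hypothetical):
```
# move the results directory and the log file back to /pool, delete everything else
mv $SSD_DIR/results      /pool/genomics/smart1/great/project/
mv $SSD_DIR/wild-cat.log /pool/genomics/smart1/great/project/
rm -rf $SSD_DIR/*
```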
- Move the directory where all the results are stored and the log file, delete the input (conservative approach, in case you missed something):
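Or, more conservatively (same hypothetical names):
```
# move the results directory and the log file, delete only the known input files
mv $SSD_DIR/results      /pool/genomics/smart1/great/project/
mv $SSD_DIR/wild-cat.log /pool/genomics/smart1/great/project/
rm -rf $SSD_DIR/reads.dat   # remove only what you know is input, check the rest
```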
- Move using the mv --update option:
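For example:
```
# move back only the files that are newer on the SSD than their copy under /pool
mv --update $SSD_DIR/* /pool/genomics/smart1/great/project/wild-cat/
```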
Note: you can use mv --update on an explicit list (of files, directories, or a file specification), not just * (everything), and you do not have to remove the rest; you can remove only what you know can safely be removed (conservative approach).
- Find newer files and move them: the trick is to create a 'timestamp' file before starting the analysis. That file can be used later to find any newer file with the find command:
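A sketch of that approach:
```
# before starting the analysis, on the SSD:
cd $SSD_DIR
touch timestamp
# ... run the analysis ...
# afterwards, move every file newer than the timestamp back to /pool
# (note: this flattens any sub-directory structure)
find . -type f -newer timestamp \
     -exec mv {} /pool/genomics/smart1/great/project/ \;
```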
- Find newer files and tar them, using the timestamp file and the find command: see the previous comments for what to tar and what to remove; once you've tar'd the new stuff into a .tgz file, you can delete it from the SSD:
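A sketch, combining find and tar (GNU tar reads the list of files from stdin with -T -):
```
# write every file newer than the timestamp into a compressed tar-ball on /pool,
# then clean up the SSD
cd $SSD_DIR
find . -type f -newer timestamp | \
     tar cfz /pool/genomics/smart1/great/project/wild-cat-new.tgz -T -
rm -rf $SSD_DIR/*
```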
BTW, the advantage of writing a .tgz file, rather than moving files, is two-fold, assuming your stuff is compressible:
- you write less in the .tgz file, so it should be done faster (reading and compressing should be fast, writing is the slow step), and
- you need less disk space for your output (since it is compressed).
The drawback is that you need to know how to handle/view/deal with a .tgz file. There are many more ways to accomplish this ...
A Pseudo Example
Here is what a job script might look like:
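This is a sketch only: the queue name and the tools/ssd module come from this page, but the SSD space resource name (ssd_space), the file names, and the program name (execute) are hypothetical; check with us for the exact submission syntax.
```
#!/bin/sh
# pseudo example of a job script using an SSD
#$ -q uTxlM.rq            # restricted queue where the SSDs are available
#$ -l ssd_space=100G      # hypothetical resource name: request 100 GB of SSD space
#$ -N wild-cat -cwd -j y
#
# load the module that defines $SSD_DIR
module load tools/ssd
#
# copy the input data to the SSD and create the timestamp file
cd $SSD_DIR
tar xfz /pool/genomics/smart1/great/project/wild-cat.tgz
touch timestamp
#
# create the configuration file from its template
# (wow.gen is assumed to be part of the extracted data set)
sed "s=XXXX=$SSD_DIR=g" wow.gen > wow.conf
#
# run the I/O-intensive analysis on the SSD ('execute' stands for your program)
execute -o $SSD_DIR/result.dat
#
# save the new files in a compressed tar-ball and clean up the SSD
find . -type f -newer timestamp | \
     tar cfz /pool/genomics/smart1/great/project/wild-cat-new.tgz -T -
rm -rf $SSD_DIR/*
echo job done
```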
Last Updated SGK