  1. Introduction
  2. Matrix of Queues
    1. Notes
    2. How to Specify a Queue
    3. Examples
    4. Interactive Queue
    5. Memory Reservation
    6. Formats
    7. Host Groups
    8. CPU architecture
    9. Queue Selection Validation and/or Verification
  3. Hardware Limits
    1. Notes

1. Introduction

Every job running on the cluster is started in a queue.

  • The GE will select a queue based on the resources requested and the current usage in each queue.
  • If you don't specify the right queue(s) or the right resource(s), your job will either
    • not get queued,
    • wait forever and never run, or
    • start and get killed when it exceeds one of the limits of the queue it was started in.

All jobs run in batch mode; the default Rocks queue, all.q, is disabled and not available.

2. Matrix of Queues

The set of available queues is a matrix of queues:

  • Five sets of queues:
    1. a high-cpu set, and
    2. a high-memory set, complemented by
    3. a very-high-memory restricted queue,
    4. an interactive queue, and
    5. a test queue.

  • The high-cpu and high-memory sets of queues have different time limits: short, medium, long and unlimited.
  • The high-cpu queues are for serial or parallel jobs that do not need a lot of memory (less than 6GB per CPU),
  • the high-memory queues are for serial or multi-threaded parallel jobs that require a lot of memory (more than 6GB, but limited to 450GB, per CPU),

  • the very-high-memory queue is reserved for jobs that need a very large amount of memory (over 450GB), and
  • a special queue is reserved for special projects that need special resources (lots of memory, or that benefit from SSDs).

  • There is also an interactive queue, to run interactively on a compute node, although it too has limits.

Here is the list of queues and their characteristics (time versus memory limits):

Memory     Time limit (soft CPU time)               Parallel
per CPU    T<7h     T<6d     T<30d    no limit      environments           Type of jobs
6 GB       sThC.q   mThC.q   lThC.q   uThC.q        mpich, orte, mthread   serial or parallel, that need less than 6GB of memory per CPU
450 GB     sThM.q   mThM.q   lThM.q   uThM.q        mthread                serial or multi-threaded, 6GB < memory needed < 450GB
1 TB       -        -        -        uTxlM.rq      mthread                serial or multi-threaded, memory needed > 450GB, restricted
1 TB       -        -        -        uTSSD.tq      mthread                test queue to access SSDs, restricted
256 GB     -        -        -        uTGPU.tq      mthread                test queue to access GPUs, restricted
8 GB       qrsh.iq                                  mthread                interactive queue, use qrsh or qlogin instead of qsub; 12h of CPU, 24h of wallclock
8 GB       lTIO.sq                                  -                      I/O queue, to access /store; 12h of CPU, 72h of wallclock
256 GB     qGPU.iq                                  mthread                interactive queue to access GPUs, restricted; 12h of CPU, 24h of wallclock


  • the listed time limit is the soft CPU limit (s_cpu),
    • the soft elapsed-time limit (aka real time, s_rt) is twice the soft CPU limit for all the queues, except for the medium-T ones,
    • for the medium-T queues, the elapsed-time limit is 9 days (1.5 times the CPU limit), and
    • the hard time limits (h_cpu and h_rt) are 15 minutes longer than the soft ones.
      Namely, in the short-time-limit queues,
      • a serial job is warned once it has consumed 7 hours of CPU, and killed after it has consumed 7 hours and 15 minutes of CPU, or
      • warned once it has spent 14 hours in the queue, and killed after spending 14 hours and 15 minutes in the queue.
      • For parallel jobs the consumed CPU time is scaled by the number of allocated slots,
      • the elapsed time is not.
  • memory limits are per CPU,
    • so a parallel job in a high-cpu queue can use up to NSLOTS x 6 GB, where NSLOTS is the number of allocated slots (CPUs), and
    • parallel jobs in the other queues are limited to multi-threaded jobs (no multi-node jobs);
  • memory usage is also limited by the available memory on a given node.
  • If you believe that you need access to the restricted or test queue, contact us.

How to Specify a Queue

  • By default jobs are most likely to run in the short high-cpu queue (sThC.q).
  • To select a different queue you need to either
    • specify the name of the queue (via -q <name>), or
    • pass a requested time limit (via -l s_cpu=<value> or -l s_rt=<value>).
  • To use a high-memory queue, you need to
    • specify the memory requirement (with -l mres=X,h_data=X,h_vmem=X)
    • confirm that you need a high-memory queue (-l himem)
    • select the time limit either by
      • specifying the queue name (via -q <name>), or
      • passing a requested time limit (via -l s_cpu=<value> or -l s_rt=<value>).
  • To use the unlimited queues, i.e., uThC.q or uThM.q, you need to confirm that you request a low priority queue (-l lopri)

(grey lightbulb) Why do I need to add -l himem or -l lopri?

  • This prevents the GE from sending to the high-memory or unlimited queues a job that requested no or few resources, just because one of these queues happens to be less used.
    It prevents the scheduler from "wasting" valuable resources.


qsub flags                                           Meaning of the request
-l s_cpu=48:00:00                                    48 hours of consumed CPU (per slot)
-l s_rt=200:00:00                                    200 hours of elapsed time
-q mThC.q                                            use the mThC.q queue
-l mres=120G,h_data=12G,h_vmem=12G -pe mthread 10    12GB of memory per CPU; for a 10-CPU parallel job this reserves 120GB
-q mThM.q -l mres=12G,h_data=12G,h_vmem=12G,himem    run in the medium-time high-memory queue;
                                                     this is a correct, i.e., complete, specification (memory use specs and himem)
-q uThC.q -l lopri                                   run in the unlimited high-cpu queue; note the -l lopri
-q uTxlM.rq -l himem                                 unlimited-time, extra-large-memory queue, restricted to a subset of users

All jobs that use more than 2GB of memory (per CPU) should include a memory reservation and requirement with -l mres=X,h_data=X,h_vmem=X.

  • If you do not, your job(s) may not be able to grab the memory they need at run time and crash, or crash the node.
  • Memory reservation is for serial and mthread jobs, not MPI; MPI jobs can specify the h_data=X and h_vmem=X resources.
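For illustration, a complete submission for a multi-threaded high-memory job can be written as a job script (a sketch: the script name, job name, and executable my_program are hypothetical; the flags follow the table above):

```shell
# hypothetical job script (e.g. himem_demo.job), submitted with: qsub himem_demo.job
#$ -q mThM.q                                   # medium-time high-memory queue
#$ -pe mthread 10                              # 10 slots (CPUs)
#$ -l mres=120G,h_data=12G,h_vmem=12G,himem    # 120GB per job = 12GB per CPU x 10 slots
#$ -cwd -j y -N himem_demo -o himem_demo.log
#
./my_program    # hypothetical executable
```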

Interactive Queue

You can start an interactive session on a compute node using the command qrsh or qlogin (not qsub, nor qsh).

  • Some compute nodes are set aside for interactive use,
    • the corresponding queue is named qrsh.iq.
  • To start an interactive session, use qrsh or qlogin:
    • qrsh will start an interactive session on one of the interactive nodes,
      • it takes both options and arguments (like qsub);
    • qlogin is similar to qrsh, although
      • it will propagate the $DISPLAY variable, so you can use X-enabled applications (like plotting to the screen) if you've enabled X-forwarding, and
      • it does not take any arguments, but will take options.
  • Unless you need X-forwarding, use qrsh.

Limits on Interactive Queues

  • Like any other queue, the interactive queue has its own limits:

    CPU             12h per slot (CPU/core)
    Elapsed time    24h per session
    Memory          8GB per slot (CPU/core)

  • Like for qsub, you can request more than one slot (CPU/thread) with the -pe mthread N option,
    • where N is a number between 2 and 16, as in:

      qrsh -pe mthread 4

    • requesting more slots also allows you to use more memory (4 slots means up to 4 x 8GB = 32GB).
  • Each user is limited to one concurrent interactive session, and up to 16 slots (CPUs/cores).
  • The overall limit of 512 slots/user includes all the queues (so if you have 512 slots in use in batch mode, you won't be able to get an interactive session).

The NSLOTS Variable

  • As of Feb 3, 2020, $NSLOTS is properly propagated by qrsh, but not by qlogin.
    • There is no mechanism to propagate $NSLOTS with qlogin and enable X-forwarding.

(warning) Remember, Hydra is a shared resource, do not "waste" interactive slots by keeping your qlogin or qrsh session idle.

Memory Reservation

  • We have implemented a memory reservation mechanism (via -l mres=XX).
    • This allows the job scheduler to guarantee that the requested amount of memory is available for your job on the compute node(s),
      by keeping track of the reserved memory and not scheduling jobs that reserve more memory than is available.
    • Hence reserving more than you will use prevents others (including your own other jobs) from accessing the available memory and,
      indirectly, the available CPUs (as when you use one, or just a few, CPUs but grab most of the memory of a given compute node).

  • We have at least 2GB/CPU, and more often 4GB/CPU, on the compute nodes; still,
    (warning) it is recommended to reserve memory if your job will use more than 2GB/CPU, and to
    set h_data=X and h_vmem=X consistently with the value used in mres=XX.

  • (warning) Remember:
    • The memory specification is
      • per JOB in mres=XX - it is no longer scaled by the number of allocated slots/CPUs/threads, and
      • per CPU in h_data=X and h_vmem=X - it should be XX divided by the number of requested slots.
    • Memory is a scarce and expensive resource, compared to CPU, as we have fewer nodes with a lot of memory.
    • Try to guesstimate your memory usage the best you can, and
    • monitor the memory use of your job(s) (see Monitoring your Jobs).
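The per-JOB vs per-CPU rule above can be checked with simple shell arithmetic (a sketch; the 120G total and 10 slots are just example numbers):

```shell
#!/bin/sh
# Derive consistent per-CPU h_data/h_vmem values from the per-JOB mres total
# and the slot count requested with -pe mthread.
total_gb=120   # per-JOB reservation, goes in mres
nslots=10      # number of slots requested with -pe mthread
per_cpu_gb=$((total_gb / nslots))   # per-CPU value, goes in h_data and h_vmem
echo "-l mres=${total_gb}G,h_data=${per_cpu_gb}G,h_vmem=${per_cpu_gb}G -pe mthread ${nslots}"
```

This prints "-l mres=120G,h_data=12G,h_vmem=12G -pe mthread 10", matching the example in the qsub-flags table.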

  • (warning) Note:
    • Do not hesitate to re-queue a job if it uses a lot more, or a lot less, memory than your initial guess, esp. if you plan to queue a slew of jobs;
    • memory usage scales with the problem in most often predictable ways, so
      consider running some test cases to help you guesstimate, and
      trim down your memory reservation whenever possible: a job that requests oodles of memory may wait a long time in the queue for that resource to free up.
    • Consider breaking down a long task into separate jobs if different steps need different types of resources.

Format Specifications

  • The format for
    • a memory specification is a positive (decimal) number followed by a unit (aka a multiplier), like 13.4G for 13.4 GB;
    • a CPU or RT time specification is h:m:s, like 100:00:00 for 100 hours (or "100::", while "100" alone means 100 seconds);
    see man sge_types.
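For example, the h:m:s form can be converted to seconds with a quick awk one-liner (an illustration of the format, not a GE tool):

```shell
# Convert an h:m:s time specification to seconds.
hms_to_sec() {
  echo "$1" | awk -F: '{ print $1*3600 + $2*60 + $3 }'
}
hms_to_sec 100:00:00   # 100 hours   -> prints 360000
hms_to_sec 0:0:100     # 100 seconds -> prints 100
```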

Host Groups

The GE supports the concept of a host group, i.e., a list of hosts (compute nodes).

  • You can request that a job run only on computers in a given host group, and
    we use these host groups to restrict queues to specific lists of hosts.
  • To request computers of a given host group, use something like "-q mThC.q@@ib-hosts" (yes, there is a double "@").
  • In fact, the queue specification passed to qsub can be a regular expression, so -q '?ThC.q@@ib-hosts' means any high-cpu queue, but only on hosts on the IB.
  • You can get the list of all the host groups with (show host group list)
       % qconf -shgrpl
  • and get the list of hosts for a specific host group with
       % qconf -shgrp <host-group-name>
  • Here is the list of host groups:

                     all the hosts
                     high CPU hosts
                     high memory hosts (512GB/host)
                     extra large memory hosts (1TB/host)
                     SM hosts (for special projects)
                     hosts on the IB
                     hosts with 24 CPUs
                     hosts with 40 CPUs
                     hosts with 64 CPUs
                     hosts with AVX-capable CPUs
    @mlx4-hosts      hosts with MLX4 IB
    @mlx5-hosts      hosts with MLX5 IB

CPU Architecture

  • The cluster is composed of compute nodes with different CPU architectures.
  • You can tell the scheduler to run your job on specific CPU architecture(s) using the cpu_arch resource.


  • The cluster's CPU architecture composition is as follows:
Compute Nodes   CPU Arch.     CPU Model Name
compute-00-xx   haswell       Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz
compute-43-xx   haswell       Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
compute-64-xx   skylake       Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
compute-73-xx   haswell       Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz
compute-79-xx   skylake       Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
compute-81-xx   piledriver    AMD Opteron(tm) Processor 6274
compute-81-xx   piledriver    AMD Opteron(tm) Processor 6376
compute-82-xx   sandybridge   Intel(R) Xeon(R) CPU E5-4617 0 @ 2.90GHz
                ivybridge     Intel(R) Xeon(R) CPU E5-4640 v2 @ 2.20GHz
compute-84-xx   skylake       Intel(R) Xeon(R) Platinum 8280 CPU @ 2.70GHz
compute-93-xx   haswell       Intel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz
                broadwell     Intel(R) Xeon(R) CPU E7-8860 v4 @ 2.20GHz


Grid Engine/Qsub

  • You can use the cpu_arch resource to request a specific architecture, a set of architectures, or to avoid specific architecture(s), like in

qsub -l cpu_arch=haswell

such a job will only run on nodes with the haswell CPU architecture.

  • Alternatively, you can use logic constructs as follows:

qsub -l cpu_arch='!piledriver'          - any CPU, except piledriver (AMD)

qsub -l cpu_arch='haswell|skylake'      - either haswell OR skylake CPUs


  • You can retrieve the node's CPU architecture with the command get-cpu_arch, accessible after loading the module tools/local.
  • You can query the cpu_arch list within a queue with, for example,

qstat -F cpu_arch -q sThC.q

  • You can set the environment variable cpu_arch by loading the module tools/cpu_arch.


  • Each compiler allows you to target specific processors (aka CPU architectures).
  • The syntax is different for each compiler, so read the compiler's user guide carefully.


  • Run only on two types of architectures:
Restrict to some type of arch
#$ -cwd -j y -N demo1 -o demo1.log
#$ -l cpu_arch='haswell|skylake'
  • Run a different executable depending on the node's CPU architecture, and tell the scheduler to avoid AMD processors:
Run different code for different arch
#$ -cwd -j y -N demo2 -o demo2.log
#$ -l cpu_arch='!piledriver'
module load tools/cpu_arch
./my_code_$cpu_arch    # run the executable built for this architecture (illustrative name)

Queue Selection Validation and/or Verification

  • You can submit a job script and verify if the GE can run it, i.e.,
    (lightbulb) can the GE find the adequate queue and allocate the requested resources?

  • The qsub flag "-w v" will run a verification, while "-w p" will poke whether the job can run; in either case the job will not be submitted:
       % qsub -w v my_test.job
       % qsub -w p my_test.job
    The difference is that "-w v" checks against an empty cluster, while "-w p" validates against the cluster in its current state.

  • (warning) By default all jobs are submitted with "-w e", producing an error for invalid requests.
    Overriding it with "-w w" or "-w n" can result in jobs that are queued but will never run
    as they request more resources than will ever be available.

  • (lightbulb) You can also use the -verify flag to print detailed information about the would-be job, as though qstat -j were used,
    including the effects of command-line parameters and the external environment, instead of submitting the job:
      % qsub -verify my_test.job

3. Hardware Limits

The following table shows the current hardware limits (to be reviewed):

Queue       Number of   Number of   CPUs        Available
name        nodes       slots       per node    memory             Comment
?ThC.q      70          3376        32 to 64    >4GB/CPU           high-CPU queues
?ThM.q      19          1144        24 to 112   >512GB per node    high-memory queues
uTxlM.rq    3           176         40 to 96    >1TB per node      extra-large-memory queue, restricted
uTSSD.tq    5           312         n/a         n/a                local SSDs, restricted test queue
qrsh.iq     2           40          n/a         256GB per node     use qrsh or qlogin (not qsh or qsub)
lTIO.sq     2           8           n/a         n/a                I/O queue to access /store
uTGPU.tq    3           n/a         n/a         n/a                GPU batch queue, restricted test queue
qGPU.iq     3           n/a         n/a         n/a                GPU interactive queue, restricted

The values in this table change as we modify the hardware configuration; you can verify them with either

   % qstat -g c

or

   % module load tools/local
   % qstat+ -gc

or

   % qhost


  • We also impose software limits, namely how much of the cluster a single user can grab (see the resource limits in Submitting Jobs).
  • If your pending requests exceed these limits, your queued jobs will wait.
  • If you request inconsistent or unavailable resources, you will get the following error message:
    Unable to run job: error: no suitable queues.
    You can use "-w v" or "-verify" to track down why the GE can't find a suitable queue, as described elsewhere on this page.

Last updated    SGK
