Your Disk Space


From the Compute Canada documentation:

The Guillimin cluster features a global parallel storage system with a current capacity of 3.7 PB. As a user, you have access to four conceptually different storage areas where you can keep your source code, executables, permanent and temporary data, calculation results, etc. Depending on its type, your data should be kept in one of the following four areas:

Home directories

Home directories are mainly for storing your source code, working on your programs, compiling, and possibly for keeping other small but important files. Here are the key features of your home directory:

  • Location: /home/<username>
  • Size quota is applied: 10 GB
  • Your home directory is by default readable by any user in the same group. You can change these access rights.
  • Backed up on a daily basis


  • Do not run large-scale calculations from your home directory, unless it is a small test run that does not take much disk space. Remember, the disk space available in your home directory is limited (10 GB)! Please use your Project Space (see below) to run calculations.
  • Never use your home directory as scratch space in your job scripts! Use a Scratch Space (see below) for that.
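Since the group-readable default mentioned above can be changed, here is a minimal sketch of tightening directory permissions with chmod. It is demonstrated on a throwaway directory; on Guillimin you would apply the same commands to /home/<username>. The octal modes are the standard POSIX ones.

```shell
# Shown on a temporary directory; substitute /home/<username> on the cluster.
dir=$(mktemp -d)
chmod 750 "$dir"       # owner full access, group may read/enter, others none
stat -c '%a' "$dir"    # prints: 750
chmod 700 "$dir"       # owner-only access
stat -c '%a' "$dir"    # prints: 700
rmdir "$dir"
```

Mode 750 keeps the group access the default provides while shutting out everyone else; 700 removes group access entirely.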

Project space

The purpose of the project space directory is to host all your computational projects. Here you can compile your code, keep all necessary input data, store the results of your calculations, etc. This is the directory you should normally submit your jobs from. Here are the key features of your project space directory:

  • The project directory is created automatically not per user, but per Resource Allocation Project (RAP) Identifier (rapID), which you obtain when registering on the Compute Canada website. Therefore, all users who are part of the same allocation project share the same project directory.
  • Location: /gs/project/rapID/ or /sb/project/rapID/
  • A size quota of 1 TB applies to the whole rapID directory, so it is the responsibility of the users sharing the project to keep its size below the limit.
  • The project directory is readable and writable by all members of the RAP, but you can impose different access rules on your own subdirectories.
  • There is NO backup policy for project directories

NOTICE: If the results of your computations are extremely valuable and recalculating them would take a lot of time, it is a good idea to copy them to your home directory (if the data is of reasonable size), or to make a backup on your own external disk.
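As a sketch of such a backup, rsync is a reasonable tool for copying result directories. The example below runs on temporary directories so it is self-contained; on Guillimin the source would be something like /gs/project/rapID/results and the destination your home directory.

```shell
# Illustrative backup of results; substitute real project/home paths on the cluster.
src=$(mktemp -d); dst=$(mktemp -d)
echo "precious result" > "$src/out.dat"
rsync -av "$src/" "$dst/"      # trailing slash: copy the contents, not the dir itself
cat "$dst/out.dat"             # prints: precious result
rm -rf "$src" "$dst"
```

Re-running the same rsync command later copies only files that changed, which keeps repeated backups cheap.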

Global scratch space

The scratch directory is mostly used by running programs to keep temporary files during a run. The philosophy of the scratch directory is that information stored there is no longer needed once your job terminates. Many third-party software packages read an environment variable that points to a directory for storing temporary data; this is exactly what you should use the scratch space for. Here are the key features of the scratch space directory:

  • Location: $SCRATCH (/gs/scratch/<username>)
  • Per user size quota of 1 TB (New from 2013-12-17)
  • Read/Write access by user only
  • No backup policy
  • Auto-clean policy: All files that have not been modified in 45 days will be automatically deleted.
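A sketch of a job script fragment that creates a per-job working directory under $SCRATCH and points TMPDIR at it. The per-job subdirectory layout is our own convention, not a site requirement, and fallback values are set so the snippet also runs outside the scheduler (where $SCRATCH and $PBS_JOBID would otherwise be unset):

```shell
#!/bin/bash
# Fall back to sane defaults when not running under the PBS scheduler
: "${SCRATCH:=/tmp}"
: "${PBS_JOBID:=demo-job}"

workdir="$SCRATCH/$PBS_JOBID"      # hypothetical per-job subdirectory
mkdir -p "$workdir"
export TMPDIR="$workdir"           # many programs honour TMPDIR for temp files

# Stand-in for a real computation that writes temporary data:
printf 'b\na\nc\n' | sort > "$workdir/sorted.txt"
cat "$workdir/sorted.txt"          # prints a, b, c (one per line)

rm -rf "$workdir"                  # tidy up; auto-clean would do it eventually
```

Deleting the working directory at the end of the script is optional but polite: it frees your quota immediately instead of waiting for the auto-clean pass.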

Regarding the auto-clean policy: the cleaning process runs monthly, on the 15th day of each month. Users whose scratch files have been identified for deletion receive an email notification one week before the removal date (i.e., on the 8th of the month). The email includes a reference to a list of the files that will be deleted. The user can then preserve any important files by copying them to their group project space or to external systems at their home institute before the 15th of the month.
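To see in advance which of your files fall under the 45-day rule, a find invocation like the following should work (this assumes GNU find, as found on the cluster's Linux nodes; it is not an official part of the auto-clean tooling):

```shell
# List files under $SCRATCH last modified more than 45 days ago --
# the candidates for the next auto-clean pass.
find "$SCRATCH" -type f -mtime +45
```

Pipe the output through `wc -l` for a quick count, or into a file to review before copying anything to project space.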

NOTICE: You can also use the scratch space for short-term storage of any data files, but keep in mind that by definition the scratch space is subject to the auto-clean policy. The Guillimin support team reserves the right to update and modify the scratch space auto-clean policy as necessary to enable efficient utilization of the storage resources.

In brief: the scratch folder is intended only for temporary files. Longer-term storage of files in your scratch folder is not permitted. Please use either your project space or external systems at your home institute to store any files needed for longer than 45 days.

Local scratch space

Each computing node of the Guillimin cluster has a scratch partition on its local hard drive, available to users. This scratch directory is local to each node, and is therefore accessible only by code running on that particular node. We recommend using this space for temporary storage of data generated by, and needed only by, the process running on that node; be aware that the data will not be accessible to processes running on other computing nodes. Local scratch is a good choice if your program generates a lot of data that is needed only during the run and only by that particular process: by saving it locally you avoid sending large amounts of data through the network. Here are the key features of the local scratch:

  • Location based on your job ID: $LSCRATCH (/localscratch/$PBS_JOBID)
  • Size of /localscratch on every node: 300 GB
  • No backup policy
  • The $PBS_JOBID directory is deleted once your job is finished
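A common pattern with node-local scratch is stage in, compute, stage out before the job ends. The sketch below uses a toy computation in place of a real program, and gives $LSCRATCH a fallback value so the snippet runs standalone (on a node it is already set to /localscratch/$PBS_JOBID):

```shell
#!/bin/bash
: "${LSCRATCH:=/tmp/lscratch-demo}"   # set per job on Guillimin

mkdir -p "$LSCRATCH"

# Stage in: in a real job this would copy input from project space
printf 'sample input\n' > "$LSCRATCH/input.dat"

# Compute locally: toy transformation standing in for the real program
tr 'a-z' 'A-Z' < "$LSCRATCH/input.dat" > "$LSCRATCH/output.dat"

# Stage out: copy results off the node *before* the job finishes,
# because the /localscratch/$PBS_JOBID directory is deleted at job end
cp "$LSCRATCH/output.dat" ./output.dat
cat ./output.dat                      # prints: SAMPLE INPUT

rm -rf "$LSCRATCH" ./output.dat
```

The stage-out step is the one that matters: any result still sitting in local scratch when the job terminates is lost.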

Writing and Reading Files on Local Shared Memory

Each computing node of Guillimin has a special space in memory, which is very fast, where you can write and read small temporary files. Here are the key features of the shared-memory space:

  • Location based on your job ID: $RAMDISK (/dev/shm/$PBS_JOBID)
  • The /dev/shm space is half the size of the node's total memory. Since it resides in memory, any space used in /dev/shm is unavailable to other processes; if the node runs out of memory, the job consuming the most memory will be cancelled. The sizes differ by node type:
    • Size of /dev/shm on sw nodes : 18 GB
    • Size of /dev/shm on sw2 nodes : 32 GB
    • Size of /dev/shm on hb nodes : 12 GB
    • Size of /dev/shm on lm nodes : 36 GB
    • Size of /dev/shm on lm2 nodes : 64 GB
  • No backup policy
  • The $PBS_JOBID directory is deleted once your job is finished
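A brief sketch of using the in-memory space. A fallback path is used so the snippet runs anywhere; on a compute node $RAMDISK already points at /dev/shm/$PBS_JOBID:

```shell
#!/bin/bash
: "${RAMDISK:=/tmp/ramdisk-demo}"     # on a node: /dev/shm/$PBS_JOBID
mkdir -p "$RAMDISK"

# Write a small 1 MiB temporary file; keep files here small, since
# everything in /dev/shm counts against the node's memory
dd if=/dev/zero of="$RAMDISK/small.bin" bs=1M count=1 status=none
stat -c '%s' "$RAMDISK/small.bin"     # prints: 1048576

rm -rf "$RAMDISK"
```

Removing files as soon as they are no longer needed is especially important here, since every byte left behind shrinks the memory available to your own computation.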

How to check your QUOTA

For your home and scratch directories, please use the command "myquota". For your project directory, please use the command "prquota". These two tools print out a quota report and help you find your quota on our global file system. "prquota" provides not only project quota information but also the user quota for each project space you belong to. Please try "prquota --help" for further information.

Details of the quota report:

Update Time: Indicates when the quota information was last updated.

Soft: The soft quota limit. Unlike the hard limit, there is no physical restriction preventing users from saving their data; however, once the soft limit is reached, a 7-day grace period is triggered.

Hard: The hard quota limit prevents users from writing any data to the GPFS disk space.

Expired: Once the soft limit is reached, this field shows the remaining days of the grace period. The file system counts down from 7 days until the field reads 'Expired'; after that, writes are blocked as if the hard limit had been reached.

File Number: With "prquota -u", the report shows the user quota for each project space you belong to. The File Number field shows the total number of files each user has in each project space.