Storage mounted at /scratch/work/user/ will be decommissioned very soon.
- will be remounted read-only on 2020-01-31
- will be removed from cluster on 2020-02-29
All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, RAM disk, and tmp file systems.
## Policy (In a Nutshell)
Do not use for archiving!
Do not use the shared file systems as a backup for large amounts of data or as a long-term archiving solution. Academic staff and students of research institutions in the Czech Republic can use the CESNET storage service, which is available via SSHFS.
## Shared File Systems
The Salomon cluster provides two main shared file systems: the HOME file system and the SCRATCH file system. The SCRATCH file system is partitioned into the WORK and TEMP workspaces. The HOME file system is realized as tiered NFS disk storage; the SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both the HOME and SCRATCH file systems for sharing data with other users with fine-grained control.
## HOME File System
The HOME file system is realized as a tiered file system, exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The file system is available on all login and computational nodes. The HOME file system hosts the HOME workspace.
## SCRATCH File System
The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the WORK and TEMP workspaces.
Configuration of the SCRATCH Lustre storage
- SCRATCH Lustre object storage
  - Disk array SFA12KX
  - 540 x 4 TB SAS 7.2 krpm disks
  - 54 x OST of 10 disks in RAID6 (8+2)
  - 15 x hot-spare disks
  - 4 x 400 GB SSD cache
- SCRATCH Lustre metadata storage
  - Disk array EF3015
  - 12 x 600 GB SAS 15 krpm disks
## Understanding the Lustre File Systems
A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
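The stripe layout described above can be inspected and tuned with the `lfs` utility; a minimal sketch (the paths below are hypothetical):

```console
$ lfs getstripe /scratch/temp/myfile            # show stripe count, stripe size, and OST layout
$ lfs setstripe -c 8 -S 1m /scratch/temp/mydir  # files created in mydir will be striped over 8 OSTs in 1 MB chunks
```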
## Disk Usage and Quota Commands
Disk usage and user quotas can be checked and reviewed using the following command:
```console
$ it4i-disk-usage -h
# Using human-readable format
# Using power of 1024 for space
# Using power of 1000 for entries

Filesystem:    /home
Space used:    110GiB
Space limit:   238GiB
Entries:       40K
Entries limit: 500K
# based on filesystem quota

Filesystem:    /scratch
Space used:    377GiB
Space limit:   93TiB
Entries:       14K
Entries limit: 10M
# based on Lustre quota

Filesystem:    /scratch
Space used:    377GiB
Entries:       14K
# based on Robinhood

Filesystem:    /scratch/work
Space used:    377GiB
Entries:       14K
Entries limit: 1.0M
# based on Robinhood

Filesystem:    /scratch/temp
Space used:    12K
Entries:       6
# based on Robinhood
```
In this example, we view the current size limits and the space occupied on the /home and /scratch file systems, for the particular user executing the command. Note that limits are also imposed on the number of objects (files, directories, links, etc.) that a user is allowed to create.
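The underlying Lustre quota can also be queried directly (a sketch; substitute your own login for username):

```console
$ lfs quota -u username /scratch    # per-user space and inode usage and limits on the Lustre file system
```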
To better understand where exactly the space is used, you can use the following command:
$ du -hs dir
Example for your HOME directory:
```console
$ cd /home
$ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
258M    cuda-samples
15M     .cache
13M     .mozilla
5,5M    .eclipse
2,7M    .idb_13.0_linux_intel64_app
```
This lists all directories consuming megabytes or gigabytes of space in your current directory (HOME in this example). The list is sorted in descending order, from the largest to the smallest files/directories.
For a better understanding of these commands, read the man pages:
$ man lfs
$ man du
## Extended Access Control List (ACL)
Extended ACLs provide an additional security mechanism beyond the standard POSIX permissions, which are defined by three entries (for owner, group, and others). Extended ACLs have more than the three basic entries; in addition, they contain a mask entry and may contain any number of named user and named group entries.
ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
```console
[email@example.com ~]$ umask 027
[email@example.com ~]$ mkdir test
[email@example.com ~]$ ls -ld test
drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
[email@example.com ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
group::r-x
other::---
[email@example.com ~]$ setfacl -m user:johnsm:rwx test
[email@example.com ~]$ ls -ld test
drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
[email@example.com ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
user:johnsm:rwx
group::r-x
mask::rwx
other::---
```
The default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (the -d flag to setfacl) causes the ACL permissions to be inherited by any newly created file or subdirectory within that directory. For more information on Linux ACLs, refer to the Red Hat guide.
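For example, continuing the session above, a default ACL could grant the user johnsm access to everything later created inside the test directory (a minimal sketch):

```console
$ setfacl -d -m user:johnsm:rwx test   # default entry, inherited by new files and subdirectories
$ getfacl test | grep default
default:user::rwx
default:user:johnsm:rwx
default:group::r-x
default:mask::rwx
default:other::---
```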
## Shared Workspaces

### HOME

Users' home directories /home/username reside on the HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. Should 250 GB prove insufficient for a particular user, contact support; the quota may be lifted upon request.
The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.
The HOME should not be used to archive data of past Projects or other unrelated data.
The files on HOME will not be deleted until the end of the user's lifecycle.
The workspace is backed up, so that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
| HOME workspace    |        |
|-------------------|--------|
| User space quota  | 250 GB |
| User inodes quota | 500 k  |
### WORK

The SCRATCH file system is realized as a Lustre parallel file system and is available from all login and computational nodes. There are 54 OSTs dedicated to the SCRATCH file system.

Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 M inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. Should 100 TB of space or 10 M inodes prove insufficient for a particular user, contact support; the quota may be lifted upon request.
The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the /scratch/work/user/username and /scratch/work/project/projectid directories. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in the project projectid.

The WORK workspace is intended to store users' project data as well as for high-performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.

Files on the WORK file system are persistent (not automatically deleted) throughout the duration of the project.
### TEMP

The TEMP workspace resides on the SCRATCH file system. The TEMP workspace access point is /scratch/temp. Users may freely create subdirectories and files in the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.
The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
Files on the TEMP file system that are not accessed for more than 90 days will be automatically deleted.
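A common pattern, consistent with the advice above, is to run the calculation in a per-job subdirectory of TEMP and save the results at the end of the jobscript; a minimal sketch (the program and file names are hypothetical):

```bash
#!/bin/bash
SCRDIR=/scratch/temp/$USER/$PBS_JOBID    # per-job scratch directory (assumed layout)
mkdir -p $SCRDIR
cd $SCRDIR
cp $PBS_O_WORKDIR/input.dat .            # stage the input in
./myprogram input.dat > output.dat       # I/O-intensive work runs on the Lustre scratch
cp output.dat $PBS_O_WORKDIR/.           # save the results to the submit directory
cd ..
rm -rf $SCRDIR                           # clean up the scratch files
```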
| WORK and TEMP workspaces |        |
|--------------------------|--------|
| User space quota         | 100 TB |
| User inodes quota        | 10 M   |
| Number of OSTs           | 54     |
## Local RAM Disk
Every computational node is equipped with a file system realized in memory, the so-called RAM disk.
The local RAM disk is mounted as /ramdisk and is accessible to the user at the /ramdisk/$PBS_JOBID directory.
The RAM disk is private to a job and local to a node; it is created when the job starts and deleted when the job ends.
The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
The local RAM disk file system is intended for temporary scratch data generated during the calculation, as well as for high-performance access to input and output files. The size of the RAM disk file system is limited; it is not recommended to allocate a large amount of memory and use a large amount of data on the RAM disk file system at the same time.
Be very careful: use of the RAM disk file system is at the expense of operational memory.
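For illustration, a minimal jobscript sketch using the local RAM disk (the program and file names are hypothetical):

```bash
#!/bin/bash
cd /ramdisk/$PBS_JOBID                   # job-private, node-local RAM disk
cp $PBS_O_WORKDIR/input.dat .            # stage the input into memory
./myprogram input.dat > output.dat       # all I/O happens in memory
cp output.dat $PBS_O_WORKDIR/.           # save the results before the RAM disk is purged
```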
| Local RAM disk |  |
|----------------|--|
| Throughput     | over 1.5 GB/s write, over 5 GB/s read (single thread); over 10 GB/s write, over 50 GB/s read (16 threads) |
## Global RAM Disk
The Global RAM disk spans the local RAM disks of all the nodes within a single job.
The Global RAM disk deploys BeeGFS On Demand parallel filesystem, using local RAM disks as a storage backend.
The Global RAM disk is mounted at /mnt/global_ramdisk.
The Global RAM disk is created on demand. It has to be activated by setting global_ramdisk=true in the qsub command.
$ qsub -q qprod -l select=4,global_ramdisk=true ./jobscript
This command submits a 4-node job to the qprod queue. Once the job is running, a 440 GB RAM disk shared across the 4 nodes will be created. The RAM disk will be accessible at /mnt/global_ramdisk; files written to it will be visible on all 4 nodes.
The file system is private to a job and shared among the nodes, created when the job starts and deleted at the job end.
The Global RAM disk will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
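Because the Global RAM disk is shared by all nodes of the job, input staged once is visible everywhere; a minimal sketch for use inside a jobscript (the program and file names are hypothetical):

```bash
cp $PBS_O_WORKDIR/input.dat /mnt/global_ramdisk/.     # stage once, visible on all nodes
mpirun ./myprogram /mnt/global_ramdisk/input.dat      # all ranks read the same copy
cp /mnt/global_ramdisk/output.dat $PBS_O_WORKDIR/.    # save the results before the job ends
```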
The files on the Global RAM disk will be equally striped across all the nodes, using 512k stripe size. Check the Global RAM disk status:
```console
$ beegfs-df -p /mnt/global_ramdisk
$ beegfs-ctl --mount=/mnt/global_ramdisk --getentryinfo /mnt/global_ramdisk
```
Use Global RAM disk in case you need very large RAM disk space. The Global RAM disk allows for high performance sharing of data among compute nodes within a job.
Be very careful: use of the Global RAM disk file system is at the expense of operational memory.
| Global RAM disk |  |
|-----------------|--|
| Throughput      | 3*(N+1) GB/s; 2 GB/s for a single POSIX thread |
N = number of compute nodes in the job.
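For example, a 4-node job can reach an aggregate throughput of about 3*(4+1) = 15 GB/s.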
## Summary

| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Space/Inodes quota | Access | Service |
|---|---|---|---|---|---|---|---|
| /home | home directory | NFS, 2-Tier | 0.5 PB | 6 GB/s | 250 GB / 500 k | Compute and login nodes | backed up |
| /scratch/work | large project files | Lustre | 1.69 PB | 30 GB/s | 100 TB / 10 M | Compute and login nodes | none |
| /scratch/temp | job temporary data | Lustre | 1.69 PB | 30 GB/s | 100 TB / 10 M | Compute and login nodes | files older than 90 days removed |
| /ramdisk | job temporary data, node local | tmpfs | 110 GB | 90 GB/s | none / none | Compute nodes, node local | purged after job ends |
| /mnt/global_ramdisk | job temporary data | BeeGFS | (N*110) GB | 3*(N+1) GB/s | none / none | Compute nodes, job shared | purged after job ends |
N = number of compute nodes in the job.
## CESNET Data Storage
Do not use the shared file systems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.
IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the CESNET Storage service.
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
Users of the CESNET data storage (DU) may be organizations or individuals who are in a current employment relationship (employees) or a current study relationship (students) with a legal entity (organization) that meets the "Principles for access to CESNET Large infrastructure (Access Policy)".
Users may only use the CESNET data storage for the transfer and storage of data associated with activities in science, research, development, the spread of education, culture, and prosperity. For details, see the "Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)".
The procedure to obtain the CESNET access is quick and trouble-free.
## CESNET Storage Access
### Understanding CESNET Storage
It is very important to understand the CESNET storage before uploading data. Read first.
Once registered for CESNET Storage, you may access the storage in a number of ways. We recommend the SSHFS and Rsync methods.
SSHFS: The storage will be mounted like a local hard drive
SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
First, create the mount point
$ mkdir cesnet
Mount the storage. Note that you can choose among ssh.du1.cesnet.cz (Plzen), ssh.du2.cesnet.cz (Jihlava), and ssh.du3.cesnet.cz (Brno). Mount tier1_home (only 5120 MB!):
$ sshfs firstname.lastname@example.org:. cesnet/
For easy future access from Anselm, install your public key
$ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
Mount tier1_cache_tape for the Storage VO:
$ sshfs email@example.com:/cache_tape/VO_storage/home/username cesnet/
View the archive, copy the files and directories in and out
```console
$ ls cesnet/
$ cp -a mydir cesnet/.
$ cp cesnet/myfile .
```
Once done, remember to unmount the storage
$ fusermount -u cesnet
Rsync provides delta transfer for best performance and can resume interrupted transfers.
Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
More about Rsync can be found in its man page (man rsync).
Transfer large files to/from CESNET storage, assuming membership in the Storage VO
```console
$ rsync --progress datafile firstname.lastname@example.org:VO_storage-cache_tape/.
$ rsync --progress email@example.com:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO
```console
$ rsync --progress -av datafolder firstname.lastname@example.org:VO_storage-cache_tape/.
$ rsync --progress -av email@example.com:VO_storage-cache_tape/datafolder .
```
Transfer rates of about 28 MB/s can be expected.