This page briefly describes the storage options available to you, who supports them, and how you can best use them as part of your research computing.
NFS Mounted Storage
NFS storage includes your home directory and any leased storage you have access to. You can recognize it by its path: if the path starts with /nv, you are using an NFS-mounted storage location.
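For example, a script can check whether a given path falls under NFS-mounted storage. The helper below is illustrative and relies only on the /nv path-prefix convention described above (the example paths and user ID are hypothetical):

```shell
# is_nfs: report whether a path is on NFS-mounted storage, judged
# purely by the /nv path-prefix convention.
is_nfs() {
  case "$1" in
    /nv/*) echo "NFS-mounted" ;;
    *)     echo "not NFS-mounted" ;;
  esac
}

is_nfs /nv/vol0/mygroup    # prints "NFS-mounted" (hypothetical leased path)
is_nfs /scratch/mst3k      # prints "not NFS-mounted" (hypothetical scratch path)
```

On most Linux systems, `df -T <path>` will also show the filesystem type directly, which is a more definitive check than the path prefix.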
NFS-mounted storage has some key advantages: it is very stable, and it is the only type of storage easily accessible from all compute nodes. If your jobs do not do much input/output during a run and/or do not create or read very large quantities of data, then NFS-mounted storage may be the best option for you.
The disadvantages of NFS-mounted storage are cost and performance. Your home directory space is freely provided but small, and it may not be a trivial expense to lease the amount of disk space you actually need (please review your options for leasing space from ITS and determine whether your needs can be affordably met by leasing additional space). If your jobs perform heavy I/O, or long and large reads/writes, job performance will be very poor.
NFS-mounted storage is not managed by ARCS or the ITS-HPC support group (however, the ITS-HPC group does define the import definitions on Rivanna and should be contacted when adding existing leased space so that Rivanna will know how to mount it). Please contact the help desk if you have any questions or require assistance with your NFS storage.
Lustre Mounted Storage
All Rivanna nodes with a connection to the InfiniBand network (and the login node) mount Rivanna's Lustre filesystem; on those nodes you can access /scratch/<your userID>.
Your scratch space is freely provided and is one of the most versatile storage locations available to you -- for certain use cases, and with the right tuning, /scratch performs extremely well: upwards of 10x faster than NFS-mounted storage. Quotas are imposed but not strictly enforced: being over quota usually just means you cannot submit jobs until you are back under quota, and in many cases you can have your quota adjusted simply by asking.
While /scratch is usually the best and recommended option, it does have a few major disadvantages. The biggest is that /scratch is not accessible from the economy-queue nodes. Jobs that use /scratch heavily also often show noticeable performance inconsistencies between runs; it is possible to tune for a lower but consistent rate, still superior to NFS-mounted performance, and this option is recommended for researchers who are sensitive to run times.
If you wish to tune your scratch folder for performance specific to your job activity, please let us know.
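For jobs that can tolerate a copy step, a common pattern is to stage inputs into /scratch, run there, and copy results back to slower storage at the end. Below is a minimal sketch of that pattern; the helper name, paths, and the stand-in computation are illustrative, not Rivanna-specific:

```shell
# stage_run: copy an input file into a fast scratch directory, do the
# work there, then copy the result back to durable storage.
stage_run() {
  input="$1"; scratch="$2"; results="$3"
  mkdir -p "$scratch" "$results"
  cp "$input" "$scratch/"
  # Stand-in for the real computation -- run your program here instead.
  ( cd "$scratch" && tr 'a-z' 'A-Z' < "$(basename "$input")" > output.dat )
  cp "$scratch/output.dat" "$results/"
}

# Typical call from a batch script (illustrative paths):
# stage_run "$HOME/input.dat" "/scratch/$USER/$SLURM_JOB_ID" "$HOME/results"
```

This keeps the heavy I/O on the fast filesystem and touches the slower storage only twice, once per direction.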
The Lustre-mounted storage is managed by the ITS-HPC support group. The /share/apps folder is also located on the Lustre storage system (the economy-queue nodes use a copy of it instead). Please contact firstname.lastname@example.org if you have any questions or require assistance with /scratch or /share/apps. If you wish to request a very large increase in your /scratch quota, please request a Supplemental Storage Allocation.
UVA Box
All UVA users have free access to UVA Box, a large cloud-based storage service. After using FastX to access Rivanna, you can open your UVA Box account in Firefox and transfer data directly to and from your Box account. You can just as easily access the same Box account from home or on the go, wherever you have Internet access. There are many other advantages to using UVA Box, which are best described on its website.
The disadvantages of UVA Box are that transfers are very slow (much slower than NFS) and that running jobs cannot directly read or write data in your UVA Box space.
UVA Box is not managed by ARCS or the ITS-HPC support group. Please contact the help desk if you have any questions or require assistance with your UVA Box account.
Shared Memory
There is an example batch script in the /share/apps/templates/submission_scripts/ folder called economy_using_shm that demonstrates how to substitute shared memory for /scratch on the economy queue. It may not be trivial for you to take advantage of /dev/shm, or even applicable, but if it is, you will benefit greatly by using it.
While most Rivanna compute nodes have a great deal of physical RAM, some of the codes that run on them ignore that capacity and instead push enormous amounts of input/output through an NFS storage location. This leads to extremely poor job performance; in cases where all of a job's data fits into shared memory (RAM), the performance improvement from using /dev/shm can be dramatic.
There are some disadvantages of using shared memory to be mindful of:
- /dev/shm is local to a compute node: parallel codes running across multiple nodes will have a very difficult time making good use of it.
- /dev/shm is shared memory: files written there consume RAM, so using too much of it as a filesystem can starve your job (and the node) of memory.
- Shared memory is volatile: its contents are wiped in the event of power loss or a node reboot.
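A minimal sketch of the basic pattern: create a private work area in /dev/shm, make sure it is cleaned up on exit, and fall back to ordinary temporary space when shared memory is unavailable. The helper name and the fallback are illustrative; the /share/apps template mentioned above shows the full economy-queue workflow:

```shell
# shm_workdir: make a private work directory, preferring RAM-backed
# /dev/shm when it exists and is writable, else ordinary /tmp.
shm_workdir() {
  base="/dev/shm"
  if [ ! -d "$base" ] || [ ! -w "$base" ]; then
    base="${TMPDIR:-/tmp}"          # fallback: not RAM-backed
  fi
  mktemp -d "$base/job.XXXXXX"
}

WORK=$(shm_workdir)
trap 'rm -rf "$WORK"' EXIT   # files in /dev/shm consume RAM -- always clean up
# cp inputs into "$WORK", run the program there, copy results back out
```

The `trap ... EXIT` is the important part: because shared memory is RAM, anything left behind keeps consuming memory on the node until it is deleted or the node reboots.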
Please contact email@example.com for more information.