
Overview & Getting Started

Cluster Overview

The Statistics cluster provides an environment for running large computational tasks in parallel across many computers. It contains 82 compute nodes, with a total of 3352 CPU cores, 23 TB of RAM, and 4 GPUs.
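Once you have access, the node and partition inventory can be inspected with Slurm's `sinfo` command; the commands below are a generic sketch, and the partition names you see will be specific to this cluster:

```shell
# Summarize partitions and node availability (one line per partition).
sinfo --summarize

# Show per-node detail, including CPU counts and memory.
sinfo --Node --long
```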

Users interact with these compute nodes through a job scheduler called Slurm. Slurm distributes jobs across the compute nodes according to priority, and queues them when no resources are immediately available.
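A batch job is described by a short shell script containing `#SBATCH` directives, then handed to the scheduler with `sbatch`. The following is a minimal sketch; the job name, resource amounts, and time limit are illustrative placeholders, not site requirements:

```shell
#!/bin/bash
# example-job.sh -- minimal Slurm batch script (values below are illustrative).
#SBATCH --job-name=example       # name shown in the queue
#SBATCH --cpus-per-task=4        # request 4 CPU cores on one node
#SBATCH --mem=8G                 # request 8 GB of RAM
#SBATCH --time=01:00:00          # wall-clock limit of one hour

# Replace this line with your own program.
echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK cores"
```

Submit the script with `sbatch example-job.sh` and check its status with `squeue -u $USER`.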

A large collection of research computing software, optimized for our cluster, is available through the Module system.
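Modules are browsed and loaded from the command line; a typical session looks like the sketch below. The module name `R` is an assumption for illustration, as the exact names and versions available are site-specific:

```shell
module avail      # list all software modules available on the cluster
module load R     # load a module into your environment (name is illustrative)
module list       # show which modules are currently loaded
module unload R   # remove a module from your environment
```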

Getting Started

This documentation is intended to be read in the order listed in the left-hand index. Please review it carefully before using the cluster.

The Tutorial listed at the end of the index walks step by step through accessing the cluster and running a simple R program in parallel. We recommend that new users try the tutorial after reading the documentation.

Need Help?

Contact us at help@stat.washington.edu with any questions or suggestions for improving this documentation. Our primary support method is email, but we are also happy to meet over Zoom or in person.