Job Submission
Overview / Slurm Basics¶
Anvil uses the Slurm Workload Manager for job scheduling and management. With Slurm, a user requests resources and submits a job to a queue. The system takes jobs from queues, allocates the necessary compute nodes, and executes them.
SSHing into Anvil lands on login node
Users typically SSH to Anvil (<username>@anvil.rcac.purdue.edu), which lands you on a login node. Always submit work as a job through Slurm rather than running it directly on a login node.
On Anvil, you do not run programs directly on the system. Instead, you submit jobs to a queue. A queue is simply a waiting line for computing resources. When you submit a job, you tell the scheduler:
- How many resources you need (cores, GPUs, memory, etc.)
- How long the job will run
- What type of hardware you need
Running jobs on login node is against Anvil policy
All users share the login nodes, and running anything but the smallest test job will negatively impact everyone's ability to use Anvil.
The scheduler places your job in the appropriate queue and runs it when the requested resources become available. Different queues exist because different types of jobs have different needs. For example, some jobs need GPUs, some need large memory, and some only run for a short time. Separating these helps the system run efficiently and fairly for everyone.
ACCESS users with allocations can submit jobs to several types of queues:
- CPU queues – Standard computing jobs
- GPU queues – Jobs that require GPUs
- AI queues – Specialized hardware for AI workloads
- Large-memory queues – Jobs that need very large RAM
Anvil Queues
You can choose which queues you want access to by exchanging your Service Units (credits) for the corresponding queue type (CPU, GPU, AI). You can have access to multiple queues by dividing your credits among them.
Other important queue considerations
- Anvil provides a debug queue for testing and debugging codes.
- Anvil supports shared-node jobs (more than one job on a single node). Many applications are serial or can only scale to a few cores. Allowing shared nodes improves job throughput, provides higher overall system utilization and allows more users to run on Anvil.
- Anvil supports long-running jobs - run times can be extended to four days for jobs using up to 16 full nodes.
- The maximum allowable job size on Anvil is 7,168 cores. To run larger jobs, submit a consulting ticket to discuss with Anvil support.
- Shared-node queues will be utilized for managing jobs on the GPU and large memory nodes.
Anvil Queues (Partitions)¶
| Queue Name | Node Type | Max Nodes per Job | Max Cores per Job | Max Duration | Max Running Jobs per User | Max Running + Submitted Jobs | Charging Factor |
|---|---|---|---|---|---|---|---|
| debug | regular | 2 nodes | 256 cores | 2 hrs | 1 | 2 | 1 |
| gpu-debug | gpu | 1 node | 2 GPUs | 0.5 hrs | 1 | 2 | 1 |
| wholenode | regular | 16 nodes | 2,048 cores | 96 hrs | 64 | 2500 | 1 (node-exclusive) |
| wide | regular | 56 nodes | 7,168 cores | 12 hrs | 5 | 10 | 1 (node-exclusive) |
| shared | regular | 1 node | 128 cores | 96 hrs | 1280 cores | – | 1 |
| highmem | large-memory | 1 node | 128 cores | 48 hrs | 2 | 4 | 4 |
| gpu | gpu | – | – | 48 hrs | – | – | 1 |
| ai | ai | – | – | 48 hrs | – | – | 1 |
GPU and AI Queues
- Maximum of 12 GPUs in use per user
- Maximum of 32 GPUs in use per allocation
Default Partition
Make sure to specify the desired partition when submitting your jobs (for example, -p wholenode). If no partition is specified, the job will be directed into the default partition (shared).
Charges for Whole-Node Partitions
If the partition is node-exclusive (e.g., wholenode and wide), even if you request only one core, the job will be allocated an entire node. The job will be charged for 128 cores, and squeue will reflect this allocation. See SU accounting for more details.
To display all Slurm partitions and their current usage, run:
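For example, using the showpartitions utility available on Anvil:

```shell
$ showpartitions
```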
Running Jobs¶
For interactive jobs, navigate to interactive jobs.
Job Submission Script¶
To submit work to a Slurm queue, you must first create a job submission file. This job submission file is essentially a simple shell script. It will set any required environment variables, load any necessary modules, create or modify files and directories, and run any applications that you need:
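A minimal sketch of such a script is shown below; the allocation name (myallocation), module name, and program name are placeholders to replace with your own:

```shell
#!/bin/bash
# FILENAME: myjobsubmissionfile
#SBATCH -A myallocation     # allocation account (placeholder; see mybalance)
#SBATCH -p shared           # partition/queue to submit to
#SBATCH --nodes=1           # number of nodes
#SBATCH --ntasks=1          # number of tasks
#SBATCH --time=00:30:00     # wall time (HH:MM:SS)
#SBATCH -J myjobname        # job name

# Load any modules your application needs (names are site-specific)
module purge
module load gcc

# Run your application
./myprogram
```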
Mandatory SBATCH fields
You must at a minimum specify:
- Account (-A or --account): this is your allocation account. Run $ mybalance to see your allocation accounts.
- Partition (-p): run $ showpartitions to view all available partitions.
Once your script is prepared, you are ready to submit your job.
The standard Slurm environment variables that can be used in the job submission file are listed in the table below:
Job Script Environment Variables
| Name | Description |
|---|---|
| SLURM_SUBMIT_DIR | Absolute path of the current working directory when you submitted this job |
| SLURM_JOBID | Job ID number assigned to this job by the batch system |
| SLURM_JOB_NAME | Job name supplied by the user |
| SLURM_JOB_NODELIST | Names of nodes assigned to this job |
| SLURM_SUBMIT_HOST | Hostname of the system where you submitted this job |
| SLURM_JOB_PARTITION | Name of the original queue to which you submitted this job |
Submitting a Job¶
Once you have a job submission file, you may submit this script to Slurm using the sbatch command. Slurm will find, or wait for, available resources matching your request and run your job there.
To submit your job to one compute node with one task:
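For example (myjobsubmissionfile is a placeholder for the name of your script):

```shell
$ sbatch --nodes=1 --ntasks=1 myjobsubmissionfile
```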
Overriding #SBATCH
If you use the command line to specify resources, such as --nodes=1 above, that will override the #SBATCH --nodes configuration value in the job submission file.
Job Defaults
- time: 30 minutes of wall time (elapsed clock time)
- nodes: 1
Multi-Node Jobs
Each compute node in Anvil has 128 processor cores. In some cases, you may want to request multiple nodes. To use multiple nodes effectively, your program must be written to run across nodes, for example with MPI; simply requesting more nodes will not make your work go faster. To request 2 compute nodes with 256 tasks:
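A sketch of such a request (the script name is a placeholder):

```shell
$ sbatch --nodes=2 --ntasks=256 myjobsubmissionfile
```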
If more convenient, you may also specify any command line options to sbatch from within your job submission file, using the #SBATCH keyword:
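For example, the same resource request expressed as #SBATCH directives inside the script (allocation and partition names are placeholders):

```shell
#!/bin/bash
#SBATCH -A myallocation
#SBATCH -p wholenode
#SBATCH --nodes=2
#SBATCH --ntasks=256
#SBATCH --time=00:30:00
```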
Command-line vs. #SBATCH
If an option is present in both your job submission file and on the command line, the option on the command line will take precedence.
After you submit your job with sbatch, it may wait in the queue for minutes, hours, or even days.
Job queue times
How long it takes for a job to start depends on the specific queue, the available resources, the time requested, and other jobs already waiting in that queue. It is impossible to say for sure when any given job will start. For best results, request no more resources than your job requires.
Once your job is submitted, you can monitor the job status, wait for the job to complete, and check the job output.
Interactive jobs¶
In addition to the ThinLinc and OnDemand interfaces, users can also choose to run interactive jobs on compute nodes to obtain a shell that they can interact with. This gives users the ability to type commands or use a graphical interface as if they were on a login node.
To submit an interactive job, use sinteractive to run a login shell on allocated resources.
sinteractive accepts most of the same resource requests as sbatch, so to request a login shell in the compute queue while allocating 2 nodes and 256 total cores, you might do:
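A sketch of such a request (the allocation name is a placeholder):

```shell
$ sinteractive -N 2 -n 256 -A myallocation --time=01:00:00
```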
Wait times
You can check the predicted wait time for a queued job by running wait_time -j {your_job_id}
To quit your interactive job:
Type exit or press Ctrl-D.
Redirecting Job Output¶
It is possible to redirect job output to somewhere other than the default location with the --error and --output directives:
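For example, in your job submission file (the paths are placeholders):

```shell
#SBATCH --output=/path/to/myjob.out
#SBATCH --error=/path/to/myjob.err
```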
Holding a Job¶
Sometimes you may want to submit a job but not have it run just yet. For example, you might want to let lab mates' jobs start ahead of yours: hold your job until their jobs have started, then release yours.
To place a hold on a job before it starts running, use the scontrol hold job command:
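For example (myjobid is a placeholder for your job's ID number):

```shell
$ scontrol hold myjobid
```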
Once a job has started running, it cannot be placed on hold.
To release a hold on a job, use the scontrol release job command:
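For example (myjobid is a placeholder for your job's ID number):

```shell
$ scontrol release myjobid
```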
Monitoring Jobs¶
Canceling a Job¶
To stop a job before it finishes or remove it from a queue, use the scancel command:
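For example (myjobid is a placeholder for your job's ID number):

```shell
$ scancel myjobid
```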
Cancelling all your jobs
Use $ scancel -u $USER to cancel all jobs you currently have in the queue and running.
Checking Job Status¶
Once a job is submitted, there are several commands you can use to monitor the progress of the job. To see your jobs, use the squeue -u $USER command:
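For example:

```shell
$ squeue -u $USER
```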
To retrieve useful information about your queued or running job, use the scontrol show job command with your job's ID number.
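For example (myjobid is a placeholder for the job ID number):

```shell
$ scontrol show job myjobid
```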
- JobState lets you know if the job is Pending, Running, Completed, or Held.
- RunTime and TimeLimit show how long the job has run and its maximum time.
- SubmitTime is when the job was submitted to the cluster.
- The job's number of Nodes, Tasks, Cores (CPUs) and CPUs per Task are shown.
- WorkDir is the job's working directory.
- StdOut and StdErr are the locations of stdout and stderr of the job, respectively.
- Reason will show why a PENDING job isn't running.
For historic (completed) jobs, you can use the $ jobinfo <jobid> command. While not as detailed as scontrol output, it can also report information on jobs that are no longer active. The $ jobscript <jobid> command outputs the full Slurm script used to launch the job.
Checking Job Output¶
Once a job is submitted and has started, it will write its standard output and standard error to files that you can read.
SLURM catches output written to standard output and standard error - what would be printed to your screen if you ran your program interactively. Unless you specified otherwise, SLURM will put the output in the directory where you submitted the job in a file named slurm- followed by the job id, with the extension out. For example slurm-3509.out.
stderr & stdout
Both stdout and stderr will be written into the same file, unless you specify otherwise.
If your program writes its own output files, those files will be created as defined by the program. This may be in the directory where the program was run, or may be defined in a configuration or input file. You will need to check the documentation for your program for more details.
Job Dependencies¶
Dependencies are an automated way of holding and releasing jobs. Jobs with a dependency are held until the condition is satisfied; only then do they become eligible to run, and they must still queue as normal.
Job dependencies may be configured to ensure jobs start in a specified order. Jobs can be configured to run after other job state changes, such as when the job starts or the job ends.
These examples illustrate setting dependencies in several ways. Typically dependencies are set by capturing and using the job ID from the last job submitted.
To run a job after job myjobid has started:
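A sketch (myjobid and myjobsubmissionfile are placeholders):

```shell
$ sbatch --dependency=after:myjobid myjobsubmissionfile
```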
To run a job after job myjobid ends without error:
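A sketch (myjobid and myjobsubmissionfile are placeholders):

```shell
$ sbatch --dependency=afterok:myjobid myjobsubmissionfile
```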
To run a job after job myjobid ends with errors:
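A sketch (myjobid and myjobsubmissionfile are placeholders):

```shell
$ sbatch --dependency=afternotok:myjobid myjobsubmissionfile
```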
To run a job after job myjobid ends with or without errors:
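A sketch (myjobid and myjobsubmissionfile are placeholders):

```shell
$ sbatch --dependency=afterany:myjobid myjobsubmissionfile
```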
To set more complex dependencies on multiple jobs and conditions:
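For example, to wait until two jobs complete successfully and a third terminates in any state (all job IDs are placeholders); a comma between conditions means all must be satisfied:

```shell
$ sbatch --dependency=afterok:myjobid1:myjobid2,afterany:myjobid3 myjobsubmissionfile
```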
Job Accounting¶
Current balance
You can use the $ mybalance command to check your current allocation usage.
CPU vs. GPU charges
On Anvil, the CPU nodes and GPU nodes are charged separately.
CPU Nodes¶
The charge unit for Anvil is the Service Unit (SU). This corresponds to the equivalent use of one compute core utilizing less than approximately 2G of data in memory for one hour.
Charges are based on resource request
Keep in mind that your charges are based on the resources that are tied up by your job and do not necessarily reflect how the resources are used.
For example, if you explicitly request --mem-per-cpu=2G, SLURM may allocate more resources than expected, since the default memory per core on Anvil is slightly less than 2GB (approximately 1896 MB). By requesting exactly 2GB per core, SLURM may allocate additional cores to meet the memory requirement, which can lead to higher SU charges.
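As an illustrative calculation (the ~1896 MB figure is taken from the paragraph above; the 16-core request is hypothetical):

```shell
# Hypothetical: request 16 cores with --mem-per-cpu=2G (2048 MB).
# With roughly 1896 MB of memory per core, Slurm must allocate enough
# cores to cover the total memory request, rounding up.
requested_cores=16
mem_per_cpu_mb=2048
default_mem_per_core_mb=1896

total_mem_mb=$((requested_cores * mem_per_cpu_mb))   # 32768 MB total
charged_cores=$(( (total_mem_mb + default_mem_per_core_mb - 1) / default_mem_per_core_mb ))
echo "$charged_cores"   # 18 cores charged, not 16
```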
Charges on jobs submitted to the shared queues are based on the number of cores and the fraction of the memory requested, whichever is larger. Jobs submitted as node-exclusive will be charged for all 128 cores, whether the resources are used or not.
Jobs submitted to the large memory nodes will be charged 4 SU per compute core (4x wholenode node charge).
GPU Nodes¶
1 SU corresponds to the equivalent use of one GPU utilizing less than or equal to approximately 120G of data in memory for one hour.
Each GPU node on Anvil has 4 GPUs and all GPU nodes are shared.
Filesystem¶
Filesystem storage is not charged.
Extended Examples¶
Python Jobs¶
To run a Python script as a batch job, wrap it in a job submission script:
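A minimal sketch of a Python batch job; the allocation, partition, module, and script names are placeholders to replace with your own:

```shell
#!/bin/bash
# FILENAME: python_job.sh
#SBATCH -A myallocation
#SBATCH -p shared
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH -J python_job

# Module name is site-specific; check 'module avail'
module purge
module load anaconda

python myscript.py
```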
R Jobs¶
To run an R script as a batch job, wrap it in a job submission script:
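A minimal sketch of an R batch job; the allocation, partition, module, and script names are placeholders to replace with your own:

```shell
#!/bin/bash
# FILENAME: r_job.sh
#SBATCH -A myallocation
#SBATCH -p shared
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH -J r_job

# Module name is site-specific; check 'module avail'
module purge
module load r

Rscript myscript.R
```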