Saluki Compute Cluster

Saluki Compute Cluster: Empowering Collaboration Through Computing

A Data Science and Machine Learning Computing Cluster with SLURM Scheduler Access

Uniting Research, Powering Innovation

Introduction

Welcome to the Saluki Compute Cluster, the School of Computing’s high-performance computing resource, inspired by the renowned agility and endurance of our Saluki mascot. This name traces back to the legacy of Leland "Doc" Lingle, a celebrated Southern Illinois cross country and track coach inducted into the Saluki Hall of Fame in 1978, who helped propose "Salukis" as the school's nickname. As Doc's daughter, Dede Ittner, recounted to NCAA.com, the name captures a defining moment: “One day, Dad took our Saluki to the foot of the hill where he ran cross country. Dad started the car, and the dog ran alongside, keeping pace at 40 to 50 miles per hour. That’s what Salukis do.”

In the same spirit, the Saluki Compute Cluster is built to keep pace with your research, accelerating data science and machine learning projects and powering breakthroughs across cutting-edge computational tasks.

Request Access to Saluki Compute Cluster

To gain access to the Saluki Compute Cluster computing resources, please follow these steps:

  1. Visit the SSH Key Registration Page to submit your public SSH key. If you need to generate a key, see the example commands after these steps.
  2. You will receive a confirmation email. Please reply to this email specifying which servers you would like access to.
  3. Once your request is processed, you will receive detailed instructions on how to connect to the appropriate servers.
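If you do not already have an SSH key pair, the commands below sketch one common way to generate one and print the public key to register. The key type is only a suggestion, and the address in the final command is purely illustrative; use the server address provided in your confirmation instructions.

$ ssh-keygen -t ed25519 -C "your_email@siu.edu"   # generate a key pair (accept the default file path)
$ cat ~/.ssh/id_ed25519.pub                       # this is the public key to submit
$ ssh your_username@<server-address>              # connect once access is granted, using the address from your instructions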

Available Machines

  • DeepMindX – Equipped with dual NVIDIA A100 GPUs, ideal for deep learning tasks.
  • DataCrux – Equipped with dual NVIDIA Tesla V100S GPUs.
  • NeuroNet – Equipped with dual NVIDIA Tesla V100S GPUs.
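Once you have shell access, SLURM itself can report what each node advertises. The commands below use standard SLURM tools; the node names and GRES strings in the output depend on how the cluster is configured.

$ sinfo -o "%20N %10c %10m %25G"   # node names, CPUs, memory, and generic resources (GPUs)
$ scontrol show node <nodename>    # detailed information for a single node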

SLURM Scheduler User Guide

This guide will help you get started with the SLURM scheduler, which powers the Saluki Compute Cluster. It includes essential commands for submitting jobs, monitoring tasks, and managing your computational workloads.

To download the complete user guide, click here.
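For quick reference, the standard SLURM commands below cover most of the day-to-day workflow. All of them are stock SLURM utilities; <jobid> is a placeholder for the ID printed when you submit a job.

$ sbatch job.sbatch     # submit a batch script
$ squeue -u $USER       # list your queued and running jobs
$ scancel <jobid>       # cancel a job
$ sacct -j <jobid>      # accounting and history for a completed job
$ sinfo                 # show partition and node status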

Running Interactive Jobs

To run an interactive bash shell job, use the srun command:

$ srun --pty --gres=gpu:1 --cpus-per-task=4 --mem=16G --time=04:00:00 bash

This command uses the following options:
  • --gres=gpu:1 – number of GPUs to allocate
  • --cpus-per-task=4 – number of CPU cores per task
  • --mem=16G – amount of memory for the job
  • --time=04:00:00 – time limit, as HH:MM:SS or D-HH:MM:SS
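After the interactive shell starts, it is worth confirming that the requested resources were actually granted. The checks below rely on nvidia-smi (shipped with the NVIDIA driver) and on environment variables that SLURM sets for the job.

$ nvidia-smi                    # the allocated GPU should be listed
$ echo $SLURM_CPUS_PER_TASK     # CPU cores granted to this task
$ echo $SLURM_JOB_ID            # job ID, useful with scancel and sacct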

Running Non-Interactive Jobs

Submit non-interactive jobs using the sbatch command:

$ mkdir ~/slurmjob
$ cd ~/slurmjob
$ cp /tutorials/run_jupyter_shared.sbatch ~/slurmjob
$ sbatch ~/slurmjob/run_jupyter_shared.sbatch
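For writing your own batch scripts, the sketch below shows the general shape of an sbatch file. It is not the contents of run_jupyter_shared.sbatch; the job name, resource values, and the final command are placeholders to adapt to your own workload.

#!/bin/bash
#SBATCH --job-name=example-job      # name shown in squeue
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --cpus-per-task=4           # CPU cores for the task
#SBATCH --mem=16G                   # memory for the job
#SBATCH --time=04:00:00             # time limit (HH:MM:SS)
#SBATCH --output=%x-%j.out          # write logs to jobname-jobid.out

# Everything below runs on the allocated compute node.
python my_script.py                 # placeholder: replace with your actual workload

Submit the script with sbatch and check its status with squeue -u $USER.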

Useful Links

For assistance, please contact Michael Allen Barkdoll, Computer Systems Architecture Specialist, at admin@cs.siu.edu.