SuperPod Cluster FAQ

How do I get a SuperPod account?

Please refer to HKUST SUPERPOD - Account Applications

How do I log in to the SuperPod Cluster?

Please refer to HKUST SUPERPOD - HOW TO LOGIN

Why am I unable to connect to the SuperPod Cluster?

Before using the SuperPod, make sure you are connected to the campus network (either the wired connection or the "eduroam" Wi-Fi service). If you are off campus, use Secure Remote Access (VPN). For more information on setting up VPN, please visit the dedicated webpage.

How do I check the available modules in the SuperPod Cluster?

Perform "module available" command in  slogin-* nodes in SuperPod.

How can I access GPU tools and CUDA in the SuperPod Cluster?

Please note that GPU resources are not available on the slogin-* nodes in SuperPod. Before running your Slurm job, ensure that all required CUDA modules are loaded. A minimal sketch of such a job script is shown below.
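
The sketch below assumes a CUDA module simply named "cuda" and a placeholder executable; substitute the module name shown by "module avail" and your own program:

#!/bin/bash
#SBATCH -p normal
#SBATCH --gpus=1
module load cuda           # load CUDA inside the job, not on the login node
nvidia-smi                 # confirm the allocated GPU is visible to the job
./your_cuda_program        # replace with your own executable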

How do I get started with Slurm?

Please refer to HKUST SUPERPOD - USE OF SLURM JOB SCHEDULING SYSTEM

How do I use Apptainer (Singularity) in the SuperPod Cluster?

Please refer to HKUST SUPERPOD - APPTAINER (SINGULARITY)

How can I reserve multiple GPUs in a partition?

For an interactive session, execute the following command:

srun -p normal --gpus=2 --pty $SHELL

This command opens an interactive shell session in which you can work with the allocated GPU resources. Alternatively, if you submit your job with sbatch, define the partition and GPU count as follows:

#SBATCH -p normal
#SBATCH --gpus=2

This ensures that your job is submitted to the "normal" partition with two GPUs allocated.

Scheduling tip: Specifying the time required for the job with --time=<time> helps Slurm perform backfill scheduling and can shorten the time your job waits in the queue.
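
Putting these pieces together, a complete batch script might look like the sketch below. The job name, output file name, one-hour time limit, and the "cuda" module name are illustrative assumptions only:

#!/bin/bash
#SBATCH -p normal            # submit to the "normal" partition
#SBATCH --gpus=2             # request two GPUs
#SBATCH --time=01:00:00      # estimated run time; enables backfill scheduling
#SBATCH -J gpu_job           # job name (illustrative)
#SBATCH -o gpu_job_%j.out    # output file, %j expands to the job ID (illustrative)

module load cuda             # assumed CUDA module name; check "module avail"
srun ./your_gpu_program      # replace with your own executable

Submit the script with "sbatch your_script.sh" and check its status with "squeue -u $USER".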

How does the Slurm scheduler determine the job priority?

Please refer to HKUST SUPERPOD - JOB PRIORITY

How can I check disk quota?

For checking home or project directories:

df -h /home/your_net_id

df -h /project/your_prj_group

For checking scratch space:

lfs quota -h /scratch/your_prj_name
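
If you check these regularly, a small helper script such as the sketch below can run all three checks at once; the paths are the same placeholders used above:

#!/bin/bash
# Report disk usage for the home, project, and scratch areas.
# Replace the placeholders with your own NetID, project group, and project name.
df -h /home/your_net_id
df -h /project/your_prj_group
lfs quota -h /scratch/your_prj_name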