The NLM users you have authorized can simply add the “gpunlm2025.2” QOS to batch jobs submitted to the GPU partition, and these jobs will receive higher priority, placing them ahead of all non-buy-in jobs in the Biowulf job queue, as described in the MOU. The QOS is specified by adding the following argument to the sbatch command or batch script:

--qos=gpunlm2025.2

Note that this argument works only for batch jobs on Biowulf's gpu partition, not for interactive jobs; interactive jobs already run at higher priority. When using this QOS, NLM DIR users also have a slightly higher limit on the number of GPUs they can use simultaneously (64 rather than 56), in addition to the greatly increased priority. We will adjust the limit on the number of GPUs that can be used at high priority based on the impact to the system and in coordination with you, as described in the MOU.
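If you would like to confirm the QOS priority and per-user GPU cap from a login node, standard Slurm accounting tools can show them; for example (a sketch using stock Slurm commands, so the exact fields displayed may vary with the Slurm version and site configuration):

# Show the QOS's priority value and per-user TRES limit (e.g. gres/gpu)
sacctmgr show qos gpunlm2025.2 format=Name,Priority,MaxTRESPU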

FYI, NLM DIR users currently have access to 22,360,000 GPU compute units that were funded in FY2025. All jobs submitted with the gpunlm2025.2 QOS will be charged GPU compute units in accordance with the terms of the MOU (see Appendix A of the MOU for the charge rate for each type of GPU). CIT HPC will provide a weekly report showing per-user utilization and the overall used and remaining balances.
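As an illustration of how the balance draws down (the rate here is hypothetical; the actual per-GPU-type rates are in Appendix A of the MOU): if a given GPU type were charged at R compute units per GPU-hour, a job that used one GPU for 36 hours would draw 1 × 36 × R = 36R units from the balance.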

Example:

sbatch --time=36:00:00 \
       --partition=gpu \
       --gres=gpu:a100:1 \
       --cpus-per-task=4 \
       --qos=gpunlm2025.2 \
       --mem=100g \
       --error=${OUTPUT_DIR}/${EID}_log.err \
       --output=${OUTPUT_DIR}/${EID}_log.out \
       --job-name=${EID}_shap \
       ${OUTPUT_DIR}/args.sh
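
Equivalently, the QOS (and the other options) can be embedded in the batch script itself as #SBATCH directives, as in the sketch below. Note that #SBATCH lines are read by Slurm, not by the shell, so options built from shell variables such as ${OUTPUT_DIR} above must remain on the sbatch command line:

#!/bin/bash
#SBATCH --time=36:00:00
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1
#SBATCH --cpus-per-task=4
#SBATCH --qos=gpunlm2025.2
#SBATCH --mem=100g

# ... commands to run on the allocated GPU node ...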