RUNNING TRANSP ON THE FLUX CLUSTER (PPPL)
PREREQUISITES
Users must have:
- A valid UNIX login account at PPPL
- A valid TRANSP account
- SSH access to the FLUX cluster
Log in to FLUX:
ssh flux.pppl.gov
LOAD THE REQUIRED MODULE
After logging in, load the NTCC module required for TRANSP:
module load mod_ntcc ntcc/gcc/13.2.0/24.8.0_flux
Note:
At the time of writing, ntcc/gcc/13.2.0/24.8.0_flux is the only module that supports running TRANSP on the FLUX cluster.
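As a sketch, a typical login-and-setup sequence (module avail and module list are standard Environment Modules commands for inspecting what is installed and loaded):

```shell
ssh flux.pppl.gov

# List the installed NTCC modules to confirm the FLUX build is present
module avail ntcc

# Load the NTCC environment required for TRANSP
module load mod_ntcc ntcc/gcc/13.2.0/24.8.0_flux

# Verify that both modules are now loaded
module list
```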
JOB SUBMISSION AND PARTITIONS
When a TRANSP job is started on FLUX, the system automatically detects the FLUX environment and submits the job to the appropriate partition.
| Option | Partition | Maximum Walltime |
|---|---|---|
| (default) | all | up to 24 hours |
| fast | short | up to 4 hours |
| big | lowpri | up to 96 hours |
- Default runs are submitted to the long partition
- Use the fast option for short test runs
- Use the big option for long, low-priority runs
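Assuming the partition option is simply appended to the tr_start command line, as in the workflow example at the end of this page, the three choices would look like this (<RUNID> stands for an actual TRANSP run ID):

```shell
tr_start <RUNID>         # default: all partition, walltime up to 24 hours
tr_start <RUNID> fast    # short partition, walltime up to 4 hours
tr_start <RUNID> big     # lowpri partition, walltime up to 96 hours
```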
RECOMMENDED ENVIRONMENT VARIABLES
To avoid interactive prompts from tr_send, users are strongly encouraged to set:
export MDS_TRANSP_SERVER=TRANSPGRID.PPPL.GOV
export TR_EMAIL=[USERNAME]@pppl.gov
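To make these settings persistent across sessions, one option is to append them to ~/.bashrc; a minimal sketch (the quoted heredoc delimiter keeps ${USER} unexpanded until login):

```shell
# Append the TRANSP environment variables to ~/.bashrc;
# ${USER} is expanded at login, not when this command runs
cat >> ~/.bashrc <<'EOF'
export MDS_TRANSP_SERVER=TRANSPGRID.PPPL.GOV
export TR_EMAIL="${USER}@pppl.gov"
EOF
```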
SCHEDULING NOTES
- Job priority on the FLUX cluster differs from the former McCune cluster
- Users are encouraged to explicitly set walltime
- Use the fast or big options when appropriate
MONITORING JOBS
Web interface:
https://transpgridmonitor.pppl.gov
Command line (on FLUX login node):
squeue -u "$USER"
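To poll the queue repeatedly rather than run squeue by hand, the standard watch utility can wrap it; a sketch:

```shell
# Refresh the listing of your jobs every 30 seconds (Ctrl-C to stop)
watch -n 30 'squeue -u "$USER"'
```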
JOB LOGS
While a job is running, the TRANSP log file is available at:
/scratch/shared/$(whoami)/transp_compute/<TOK>/<RUNID>/<RUNID>tr.log
Replace <TOK> with the tokamak name and <RUNID> with the TRANSP run ID.
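The log path can be assembled in the shell as below (NSTX and 12345A01 are hypothetical placeholder values chosen for illustration, not real runs):

```shell
TOK=NSTX          # hypothetical tokamak name; substitute your own
RUNID=12345A01    # hypothetical TRANSP run ID; substitute your own
LOG="/scratch/shared/$(whoami)/transp_compute/${TOK}/${RUNID}/${RUNID}tr.log"
echo "$LOG"
```

While the job is running, the file can then be followed with tail -f "$LOG".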
JOB COMPLETION AND FAILURES
- If a job fails, the user will receive an email notification
- For PPPL users, completed runs are stored in the MDSplus tree on the TRANSPGRID server at PPPL
EXAMPLE WORKFLOW
- Start a TRANSP run using the fast partition with interactive walltime selection:
  tr_start <RUNID> fast walltime
  Follow the interactive prompts to complete the setup.
- Submit the run to the queue:
  tr_send <RUNID>
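Putting the steps above together, a complete session might look like the following sketch (<RUNID> is a placeholder; tr_start will still prompt interactively for the walltime when the walltime keyword is given):

```shell
# 1. Load the TRANSP environment
module load mod_ntcc ntcc/gcc/13.2.0/24.8.0_flux

# 2. Avoid interactive prompts from tr_send
export MDS_TRANSP_SERVER=TRANSPGRID.PPPL.GOV
export TR_EMAIL="${USER}@pppl.gov"

# 3. Start the run on the short (fast) partition with interactive walltime selection
tr_start <RUNID> fast walltime

# 4. Submit the run to the queue
tr_send <RUNID>

# 5. Check its status in the queue
squeue -u "$USER"
```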