Tags: cluster-computing, cpu, slurm, sbatch

SLURM - How can I determine what specific CPUs a job is using?


I'm working on a tool for monitoring the jobs currently running on a cluster (19 nodes, 40 cores). Is there any way to determine which specific CPUs each job in the Slurm queue is using? I'm collecting data with 'pidstat', 'mpstat', and 'ps -eFj', which tells me what processes are running on a particular core, but I have no way to relate those process IDs to the job IDs that Slurm uses. 'scontrol show job' gives a lot of information, but not the specific CPU allocation. Is there any way to do this?

Here's the code that collects the data:

#!/usr/bin/env python

import subprocess
import threading
import time

def scan():
  # data[node - 1] holds the raw mpstat, pidstat and ps output for that node
  data = [[None, None, None] for i in range(19)]
  def mpstat(node):
    # node 1 is the local machine; the other nodes are reached over ssh
    if node == 1:
      output = subprocess.check_output(['mpstat', '-P', 'ALL', '1', '1'])
    else:
      output = subprocess.check_output(['ssh', 'node' + str(node), 'mpstat', '-P', 'ALL', '1', '1'])
    data[node - 1][0] = output
  def pidstat(node):
    if node == 1:
      output = subprocess.check_output(['pidstat', '1', '1'])
    else:
      output = subprocess.check_output(['ssh', 'node' + str(node), 'pidstat', '1', '1'])
    data[node - 1][1] = output
  def ps(node):
    if node == 1:
      output = subprocess.check_output(['ps', '-eFj'])
    else:
      output = subprocess.check_output(['ssh', 'node' + str(node), 'ps', '-eFj'])
    data[node - 1][2] = output
  # start the three collectors for every node in parallel
  threads = [[None, None, None] for i in range(19)]
  for node in range(1, 19 + 1):
    threads[node - 1][0] = threading.Thread(target=mpstat, args=(node,))
    threads[node - 1][0].start()
    threads[node - 1][1] = threading.Thread(target=pidstat, args=(node,))
    threads[node - 1][1].start()
    threads[node - 1][2] = threading.Thread(target=ps, args=(node,))
    threads[node - 1][2].start()
  # poll until every collector thread has finished
  # (is_alive() replaces the old isAlive() alias, which was removed in Python 3.9)
  while True:
    finished = [not t.is_alive() for n in threads for t in n]
    if all(finished):
      break
    time.sleep(1.0)
  return data

Solution

  • By passing the -d flag to scontrol show job you can get the CPU_IDs allocated to the job on each node, as shown below.

    $ scontrol show job -d $SLURM_JOBID
    JobId=1 JobName=bash
       UserId=USER(UID) GroupId=GROUP(GID) MCS_label=N/A
       Priority=56117 Nice=0 Account=account QOS=interactive
       JobState=RUNNING Reason=None Dependency=(null)
       Requeue=1 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
       DerivedExitCode=0:0
       RunTime=00:00:10 TimeLimit=02:00:00 TimeMin=N/A
       SubmitTime=2019-04-12T17:34:11 EligibleTime=2019-04-12T17:34:11
       StartTime=2019-04-12T17:34:12 EndTime=2019-04-12T19:34:12 Deadline=N/A
       PreemptTime=None SuspendTime=None SecsPreSuspend=0
       Partition=defq AllocNode:Sid=node2:25638
       ReqNodeList=(null) ExcNodeList=(null)
       NodeList=node1
       BatchHost=node2
       NumNodes=1 NumCPUs=2 NumTasks=1 CPUs/Task=2 ReqB:S:C:T=0:0:*:*
       TRES=cpu=2,mem=17600M,node=1
       Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
         Nodes=node1 CPU_IDs=12-13 Mem=17600 GRES_IDX=
       MinCPUsNode=2 MinMemoryCPU=8800M MinTmpDiskNode=0
       Features=(null) DelayBoot=00:00:00
       Gres=(null) Reservation=(null)
       OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
       Command=bash
       WorkDir=/home/USER
       Power=
    

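    To feed this into the monitoring tool, the CPU_IDs line can be parsed per node and matched against the per-core data from mpstat. Below is a minimal sketch; the job_cpu_ids helper is my own, and it assumes 'Nodes=... CPU_IDs=...' lines in the form shown above (the exact format can vary between Slurm versions and for jobs spanning several nodes).

    import re
    import subprocess

    def job_cpu_ids(jobid):
        # Returns {node_spec: [cpu_id, ...]} parsed from 'scontrol show job -d'.
        # Assumes lines such as 'Nodes=node1 CPU_IDs=12-13 Mem=17600 GRES_IDX='.
        output = subprocess.check_output(
            ['scontrol', 'show', 'job', '-d', str(jobid)]).decode()
        allocation = {}
        for match in re.finditer(r'Nodes=(\S+)\s+CPU_IDs=(\S+)', output):
            nodes, cpu_spec = match.group(1), match.group(2)
            cpus = []
            for part in cpu_spec.split(','):
                if '-' in part:
                    lo, hi = part.split('-')
                    cpus.extend(range(int(lo), int(hi) + 1))
                else:
                    cpus.append(int(part))
            allocation[nodes] = cpus
        return allocation

    For the job above, job_cpu_ids(1) would return {'node1': [12, 13]}.
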
    If this information is not enough, you may find the output of scontrol pidinfo PID useful:

    $ scontrol pidinfo 43734
    Slurm job id 21757758 ends at Fri Apr 12 20:15:49 2019
    slurm_get_rem_time is 6647
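
    To relate the process IDs collected with pidstat/ps back to job IDs, one option is to run scontrol pidinfo for each PID on the node that owns the process and parse the job id out of the first line. A rough sketch, following the same node-numbering/ssh convention as the scan() function in the question (the pid_to_jobid helper and the regex are my own assumptions based on the output above):

    import re
    import subprocess

    def pid_to_jobid(node, pid):
        # 'scontrol pidinfo' must run on the node where the process lives;
        # node 1 is local, the rest are reached over ssh, as in scan().
        cmd = ['scontrol', 'pidinfo', str(pid)]
        if node != 1:
            cmd = ['ssh', 'node' + str(node)] + cmd
        try:
            output = subprocess.check_output(cmd).decode()
        except subprocess.CalledProcessError:
            return None    # PID does not belong to a Slurm job (or the call failed)
        match = re.search(r'Slurm job id (\d+)', output)
        return int(match.group(1)) if match else None

    Combining the two gives a per-core picture: job_cpu_ids tells you which CPUs Slurm allocated to each job, and pid_to_jobid lets you tag the per-process CPU usage you already collect with the owning job ID.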