
error: _slurm_rpc_node_registration node=xxxxx: Invalid argument


I am trying to set up Slurm - I have only one login node (called ctm-login-01) and one compute node (called ctm-deep-01). My compute node has several CPUs and 3 GPUs.

My compute node keeps going into the drain state and I cannot for the life of me figure out where to start...


Login node

sinfo

ctm-login-01:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1  drain ctm-deep-01

The reason?

sinfo -R

ctm-login-01:~$ sinfo -R
REASON               USER      TIMESTAMP           NODELIST
gres/gpu count repor slurm     2020-12-11T15:56:55 ctm-deep-01
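
The reason string is truncated in the sinfo -R output; as far as I know, scontrol shows the full reason together with the node's Gres fields:

ctm-login-01:~$ scontrol show node ctm-deep-01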

Indeed, I keep getting these error messages in /var/log/slurm-llnl/slurmctld.log:

/var/log/slurm-llnl/slurmctld.log

[2020-12-11T16:17:39.857] gres/gpu: state for ctm-deep-01
[2020-12-11T16:17:39.857]   gres_cnt found:0 configured:3 avail:3 alloc:0
[2020-12-11T16:17:39.857]   gres_bit_alloc:NULL
[2020-12-11T16:17:39.857]   gres_used:(null)
[2020-12-11T16:17:39.857] error: _slurm_rpc_node_registration node=ctm-deep-01: Invalid argument

(Notice that I have set the debug level to verbose in slurm.conf and also set DebugFlags=Gres for more details on the GPUs.) The gres_cnt found:0 configured:3 line seems to say that the node is registering zero GPUs even though slurm.conf expects three.
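
For reference, the logging settings I mean are roughly these (parameter names as in the slurm.conf man page; my exact values may differ):

SlurmctldDebug=verbose
SlurmdDebug=verbose
DebugFlags=Gres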

These are the configuration files I have on all nodes, and some of their contents...

/etc/slurm-llnl/* files

ctm-login-01:/etc/slurm-llnl$ ls
cgroup.conf  cgroup_allowed_devices_file.conf  gres.conf  plugstack.conf  plugstack.conf.d  slurm.conf
ctm-login-01:/etc/slurm-llnl$ tail slurm.conf 
#SuspendTime=
#
#
# COMPUTE NODES
GresTypes=gpu
NodeName=ctm-deep-01 Gres=gpu:3 CPUs=24 Sockets=1 CoresPerSocket=12 ThreadsPerCore=2 State=UNKNOWN
PartitionName=debug Nodes=ctm-deep-01 Default=YES MaxTime=INFINITE State=UP

# default
SallocDefaultCommand="srun --gres=gpu:1 $SHELL"
ctm-deep-01:/etc/slurm-llnl$ cat gres.conf 
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia0 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia1 CPUs=0-23
NodeName=ctm-login-01 Name=gpu File=/dev/nvidia2 CPUs=0-23
ctm-login-01:/etc/slurm-llnl$ cat cgroup.conf 
CgroupAutomount=yes 
CgroupReleaseAgentDir="/etc/slurm-llnl/cgroup" 

ConstrainCores=yes 
ConstrainDevices=yes
ConstrainRAMSpace=yes
#TaskAffinity=yes
ctm-login-01:/etc/slurm-llnl$ cat cgroup_allowed_devices_file.conf 
/dev/null
/dev/urandom
/dev/zero
/dev/sda*
/dev/cpu/*/*
/dev/pts/*
/dev/nvidia*
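
To double-check that the controller actually picked up these settings, I believe the live values can be inspected with scontrol show config, e.g.:

ctm-login-01:~$ scontrol show config | grep -i -e grestypes -e debugflags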

Compute node

The logs in my compute node are the following.

/var/log/slurm-llnl/slurmd.log

ctm-deep-01:~$ sudo tail /var/log/slurm-llnl/slurmd.log 
[2020-12-11T15:54:35.787] Munge credential signature plugin unloaded
[2020-12-11T15:54:35.788] Slurmd shutdown completing
[2020-12-11T15:55:53.433] Message aggregation disabled
[2020-12-11T15:55:53.436] topology NONE plugin loaded
[2020-12-11T15:55:53.436] route default plugin loaded
[2020-12-11T15:55:53.440] task affinity plugin loaded with CPU mask 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ffffff
[2020-12-11T15:55:53.440] Munge credential signature plugin loaded
[2020-12-11T15:55:53.441] slurmd version 19.05.5 started
[2020-12-11T15:55:53.442] slurmd started on Fri, 11 Dec 2020 15:55:53 +0000
[2020-12-11T15:55:53.443] CPUs=24 Boards=1 Sockets=1 Cores=12 Threads=2 Memory=128754 TmpDisk=936355 Uptime=26 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)

That CPU affinity mask looks weird... although the trailing ffffff is 24 set bits, which does at least match the 24 CPUs.
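
As an extra sanity check on the compute node, slurmd -C is supposed to print the hardware that slurmd detects (CPUs, sockets, cores, memory), which should match the NodeName line in slurm.conf:

ctm-deep-01:~$ slurmd -C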

Notice that I have already run sudo nvidia-smi --persistence-mode=1. Notice also that the CPU ranges in the aforementioned gres.conf file seem correct:

nvidia-smi topo -m

ctm-deep-01:/etc/slurm-llnl$ sudo nvidia-smi topo -m
        GPU0  GPU1  GPU2  CPU Affinity  NUMA Affinity
GPU0     X    SYS   SYS   0-23          N/A
GPU1    SYS    X    PHB   0-23          N/A
GPU2    SYS   PHB    X    0-23          N/A
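
Since gres.conf points at /dev/nvidia0 through /dev/nvidia2, I also checked that those device files exist on the compute node:

ctm-deep-01:~$ ls -l /dev/nvidia*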

Any other log or configuration I should take a clue from? Thanks!


Solution

  • It was all because of a typo!

    ctm-deep-01:/etc/slurm-llnl$ cat gres.conf 
    NodeName=ctm-login-01 Name=gpu File=/dev/nvidia0 CPUs=0-23
    NodeName=ctm-login-01 Name=gpu File=/dev/nvidia1 CPUs=0-23
    NodeName=ctm-login-01 Name=gpu File=/dev/nvidia2 CPUs=0-23
    

    Obviously, that should be NodeName=ctm-deep-01, which is my compute node! Jeez...
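
    After correcting the NodeName and copying gres.conf to the nodes, something along these lines should restart the daemons, clear the drain state and test a GPU allocation (assuming the Debian/Ubuntu systemd unit names; adapt as needed):

    sudo systemctl restart slurmd                            # on ctm-deep-01
    sudo systemctl restart slurmctld                         # on ctm-login-01
    sudo scontrol update NodeName=ctm-deep-01 State=RESUME   # clear the drain state
    srun --gres=gpu:1 nvidia-smi                             # quick test that a GPU gets allocated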