I was recently trying to bind SR-IOV VF PCI devices to DPDK applications running in Docker containers. The expected scenario: each Docker container runs a DPDK application that takes charge of exactly one SR-IOV VF. However, each DPDK application can see all of the SR-IOV VF ports, so one container can end up handling a VF that should be owned by another container.
The steps are:
(1) Enable SR-IOV; the Virtual Functions are set up correctly:
#lspci
04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
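(One common way to create the VFs is through sysfs on the PF, for example; the PF address below is only illustrative and may differ:)
#echo 4 > /sys/bus/pci/devices/0000:04:00.1/sriov_numvfs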
(2) Run two Docker containers:
#docker run --privileged --name="sriov_test" -v /mnt/huge:/mnt/huge -itd centos:latest
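(The second container is started the same way; the name below is just illustrative:)
#docker run --privileged --name="sriov_test2" -v /mnt/huge:/mnt/huge -itd centos:latest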
(3) Bind the VF ports to the igb_uio driver:
#./dpdk-devbind.py -s
Network devices using DPDK-compatible driver
============================================
0000:04:10.1 '82599 Ethernet Controller Virtual Function 10ed' drv=igb_uio unused=ixgbevf,vfio-pci
0000:04:10.3 '82599 Ethernet Controller Virtual Function 10ed' drv=igb_uio unused=ixgbevf,vfio-pci
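(The binding can be done with dpdk-devbind.py, along the lines of:)
#./dpdk-devbind.py -b igb_uio 0000:04:10.1 0000:04:10.3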
(4) Run the DPDK application my_basicfwd:
#./my_basicfwd -l 1 --log-level 8 -- -p 1
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:1f.6 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:15b7 net_e1000_em
EAL: PCI device 0000:04:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:10.1 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device 0000:04:10.3 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device 0000:04:10.5 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
EAL: PCI device 0000:04:10.7 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:10ed net_ixgbe_vf
timer period 33120229810
debug nb_ports 2
Port 0 MAC: 02 09 c0 4b b4 a7
Port 1 MAC: 02 09 c0 3c ce 0f
The output above shows that DPDK can see both VF ports; the DPDK application in the other container shows the same result. I tried moving the VF ports into the container's netns with ip link set xxx netns xxx, but that did not help.
I have been searching on the net for a long time without success. Please help or give some ideas on how to achieve this, thanks a lot.
Note: based on the clarification in the comments, this answer sets the expectation for Docker/container instances versus Virtual Machine instances.
When a DPDK application runs inside a Docker container, it has access to hugepages, /dev/, and drivers just like a standalone application. One has to restrict its access to physical devices using the allow/block options in DPDK 20.11 onwards, or whitelist/blacklist in 20.08 and below.
Hence the following are the possible solutions for running DPDK inside a container:
1. Use -a or -b for DPDK 20.11 onwards, or -w or -b for DPDK 20.08 and below, so that each application only probes the VF it owns (see the example after this list).
2. Use chown to change the ownership of the DPDK device files to the desired user account: devices bound to uio_pci_generic/igb_uio appear under /dev/uio*, and devices bound to vfio-pci under /dev/vfio/. This limits access to the devices when DPDK is run in non-sudo mode.
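For example, with DPDK 20.11 onwards each container's application can be limited to its own VF with the allow option. The command lines below simply reuse the my_basicfwd invocation from the question and are only a sketch:
#./my_basicfwd -l 1 -a 0000:04:10.1 --log-level 8 -- -p 1    (container 1)
#./my_basicfwd -l 1 -a 0000:04:10.3 --log-level 8 -- -p 1    (container 2)
With DPDK 20.08 and below, replace -a with -w. Each EAL instance then probes only the PCI device it was given, so the two applications no longer see each other's VF.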
Note: please first make sure the application runs in non-sudo mode on the host, then identify the changes needed for environment variables, hugepage access, and /dev/. Then start the Docker container with the right user in the desired namespace (see the sketch below).
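A rough sketch of that flow for a device bound to igb_uio, assuming the VF shows up as /dev/uio0 and a non-root account dpdkuser (uid 1000) exists (both the device node and the user are assumptions):
#chown dpdkuser:dpdkuser /dev/uio0
#docker run --user 1000:1000 --name="sriov_test" -v /mnt/huge:/mnt/huge --device=/dev/uio0 -itd centos:latest
For vfio-pci the node to pass in would instead be the corresponding IOMMU group file under /dev/vfio/; depending on the setup, additional sysfs paths for the device may also need to be made available inside the container.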