Tags: dpdk, dpdk-pmd

Running l2fwd in two containers fails


I want to run l2fwd in two containers on the same host. Starting l2fwd in container1 works fine, but as soon as I start l2fwd in container2, both instances crash with a segmentation fault. Has anyone seen this error? Thanks.

Host: 4 SR-IOV VFs enabled, driver: vfio-pci
container1: docker run --privileged --name="vhost_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
container2: docker run --privileged --name="virtio_user" -v /dev:/dev -v /tmp:/tmp -itd centos-cu:v3
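
Both containers bind-mount the same host /dev and /tmp, so they share whatever hugepage mount the host exposes under those paths. A quick check (assuming the host mounts hugepages under /dev/hugepages; the exact mount point is not shown above) can be run from inside either container:

# mount | grep hugetlbfs     (shows the shared hugetlbfs mount)
# ls /dev/hugepages          (DPDK's rtemap_* backing files appear here once l2fwd runs)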

l2fwd logs:
container1:
#  ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.5 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10ed net_ixgbe_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.7 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 2: RX port 0
Lcore 3: RX port 1
Initializing port 0... done: 
Port 0, MAC address: 02:09:C0:11:47:97

Initializing port 1... done: 
Port 1, MAC address: 02:09:C0:00:2C:47


Checking link statusdone
Port0 Link Up. Speed 10000 Mbps - full-duplex
Port1 Link Up. Speed 10000 Mbps - full-duplex
L2FWD: entering main loop on lcore 3
L2FWD:  -- lcoreid=3 portid=1
L2FWD: entering main loop on lcore 2
L2FWD:  -- lcoreid=2 portid=0

Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent:                        0
Packets received:                    0
Packets dropped:                     0
Statistics for port 1 ------------------------------
Packets sent:                        0
Packets received:                    5
Packets dropped:                     0
Aggregate statistics ===============================
Total packets sent:                  0
Total packets received:              5
Total packets dropped:               0
====================================================

Port statistics ====================================
Statistics for port 0 ------------------------------
Packets sent:                       23
Packets received:                   16
Packets dropped:                     0
Statistics for port 1 ------------------------------
Packets sent:                       16
Packets received:                   26
Packets dropped:                     0
Aggregate statistics ===============================
Total packets sent:                 39
Total packets received:             42
Total packets dropped:               0
====================================================

(at this point, l2fwd is started in container2)
./run_l2fwd.sh: line 3:   116 Segmentation fault      (core dumped) ./l2fwd -l 2-3 -n 2 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3

container2:

# ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:10.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10ed net_ixgbe_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:04:10.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:10ed net_ixgbe_vf
MAC updating enabled
Lcore 0: RX port 0
Lcore 1: RX port 1
Initializing port 0... ./run_l2fwd.sh: line 3:    90 Segmentation fault      (core dumped) ./l2fwd -l 0-1 -n 2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3

Solution

  • Mapping hugepages from files in hugetlbfs is essential for multi-process support, because secondary processes need to map the same hugepages. EAL creates backing files such as rtemap_0 in the directory given by the --huge-dir option (or in the mount point for a specific hugepage size). The rte prefix of these files can be changed with --file-prefix, and a distinct prefix is required when running multiple primary processes that share a hugetlbfs mount point. In this setup, both containers bind-mount the same host directories, so both l2fwd primaries try to use the same rtemap_* files, which is the likely cause of the segmentation faults; giving each container its own --file-prefix (or its own hugetlbfs mount via --huge-dir) avoids the collision. Note that each backing file corresponds to one hugepage by default and stays open and locked for as long as the hugepage is in use, which can exhaust the open-file limit (NOFILE).
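
A minimal sketch of the fix, reusing the PCI addresses and core lists from the logs above: give each primary process its own EAL file prefix so the two containers create separate rtemap_* backing files. The prefix names container1/container2 are arbitrary; --file-prefix is a standard EAL option.

container1:
# ./l2fwd -l 2-3 -n 2 --file-prefix container1 -w 0000:04:10.7 -w 0000:04:10.5 -- -p 0x3

container2:
# ./l2fwd -l 0-1 -n 2 --file-prefix container2 -w 0000:04:10.3 -w 0000:04:10.1 -- -p 0x3

Alternatively, mount a separate hugetlbfs directory into each container and point EAL at it with --huge-dir, which keeps the backing files fully isolated.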