Tags: dpdk, dpdk-pmd

dpdk-testpmd command executes and then hangs


I set up a DPDK-compatible environment and tried to send packets using dpdk-testpmd, expecting to see them received on another server. I am using the vfio-pci driver in no-IOMMU (unsafe) mode. I ran

$./dpdk-testpmd -l 11-15 -- -i

which had output like

EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:01:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_1>: n=179456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_0>: n=179456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: E4:43:4B:4E:82:00
Checking link statuses...
Done

then

testpmd> set nbcore 4
Number of forwarding cores set to 4
testpmd> show config fwd
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 12 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=BE:A6:27:C7:09:B4

My nbcore setting is not taking effect, and even 'txonly' mode was not being applied until I set the eth-peer address, though some parameters do work. Moreover, if I don't change the burst delay, my server crashes as soon as I start transmitting, even though it has a 10G Ethernet port (80MBps available bandwidth by my calculation). Consequently, I am not seeing packets at the receiving server when tailing tcpdump on the corresponding interface. What is happening here, and what am I doing wrong?


Solution

  • Based on the question and the answers in the comments, the real intention is to send packets from DPDK testpmd, using an Intel Fortville NIC (net_i40e), to a remote server. The reason no traffic is generated is that neither the application command line nor any interactive option has been set to make dpdk-testpmd create packets.

    In order to generate packets there are 2 options in testpmd:

    1. start tx_first: this sends out an initial burst of 32 packets (the default) as soon as forwarding is started.
    2. forward mode txonly: this puts the ports under dpdk-testpmd into transmit-only mode; once forwarding is started, they transmit packets of the default packet size.
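
    A minimal sketch of both options in interactive mode (the core list and PCI address are carried over from the question's log, as an assumption):

      # option 1: send an initial burst when forwarding starts
      ./dpdk-testpmd -l 11-15 -a 0000:01:00.1 -- -i
      testpmd> start tx_first

      # option 2: transmit-only forwarding mode
      ./dpdk-testpmd -l 11-15 -a 0000:01:00.1 -- -i
      testpmd> set fwd txonly
      testpmd> start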

    Since neither of these options is used in the question, my suggestions are:

    1. please walk through the DPDK documentation on testpmd and its configuration
    2. make use of either --tx-first or --forward-mode=txonly, as per DPDK Testpmd Command-line Options
    3. make use of start tx_first, set fwd txonly, or set fwd flowgen in interactive mode, as per Testpmd Runtime Functions

    With this, traffic will be generated from testpmd and sent to the device (remote server). A quick example of the same:

      dpdk-testpmd --file-prefix=test1 -a 81:00.0 -l 7,8 --socket-mem=1024 -- --burst=128 --txd=8192 --rxd=8192 --mbcache=512 --rxq=1 --txq=1 --nb-cores=2 -a --forward-mode=io --rss-udp --enable-rx-cksum --no-mlockall --no-lsc-interrupt --enable-drop-en --no-rmv-interrupt -i

    From the above example, the relevant config parameters are:

    • the number of packets per RX-TX burst is set by --burst=128
    • the number of RX-TX queues is configured by --rxq=1 --txq=1
    • the number of cores to use for RX-TX is set by --nb-cores=2
    • the forwarding mode (flowgen, txonly, rxonly, or io) is selected by --forward-mode=io
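
    Adapting the same idea to the setup in the question (cores 11-15 and PCI address 0000:01:00.1 are taken from the log above, and the peer MAC is the one testpmd printed, so treat this as a sketch rather than a verified command line):

      ./dpdk-testpmd -l 11-15 -a 0000:01:00.1 -- --burst=32 --rxq=4 --txq=4 --nb-cores=4 --forward-mode=txonly --eth-peer=0,BE:A6:27:C7:09:B4 -i

    then type start at the testpmd> prompt to begin transmitting.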

    Hence, as noted in the comments, neither set nbcore 4 nor anything else in the testpmd arguments or interactive session configures the application for TX-only operation.
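
    On the set nbcore 4 observation specifically: testpmd can only spread forwarding across as many streams as there are queues, which is why show config fwd above still reports cores=1 - streams=1 despite the setting. A sketch of reconfiguring the port for 4 queues so that 4 forwarding cores are actually used (runtime commands from the Testpmd Runtime Functions guide):

      testpmd> port stop all
      testpmd> port config all rxq 4
      testpmd> port config all txq 4
      testpmd> port start all
      testpmd> set nbcore 4
      testpmd> set fwd txonly
      testpmd> start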

    The second part of the query is confusing, as it states:

    Moreover, if I don't change the burst delay, my server crashes as soon as I start transmitting, even though it has a 10G Ethernet port (80MBps available bandwidth by my calculation). Consequently, I am not seeing packets at the receiving server when tailing tcpdump on the corresponding interface. What is happening here, and what am I doing wrong?

    Assuming "my server" refers to the remote server to which packets are being sent by dpdk-testpmd, since there is mention of checking for packets with tcpdump (an Intel Fortville X710, once bound to a UIO/VFIO driver, no longer exposes a kernel network interface, so tcpdump must be running on the remote side).

    The figure of 80MBps is strange, since it is well below the port's 10Gbps line rate. If the remote interface is set to promiscuous mode and an AF_XDP or raw-socket application is configured to receive traffic, it can keep up at line rate (10Gbps). Since there are no logs or crash dumps from the remote server, and it is highly unlikely that any traffic was actually generated from testpmd, this looks more like a configuration or setup issue on the remote server.
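
    To rule out a capture-side problem, the remote interface can be checked directly; a sketch, assuming the receiving interface is named eno2 (the actual name and counter names will differ per server and driver):

      # enable promiscuous mode and watch for the testpmd traffic
      sudo ip link set eno2 promisc on
      sudo tcpdump -i eno2 -nn -e -c 20

      # cross-check with NIC-level counters
      ethtool -S eno2 | grep -i rx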

    [EDIT-1] Based on the live debug session, it is confirmed that:

    1. DPDK is not installed - fixed by running ninja install (see the sketch after this list).
    2. The DPDK NIC port eno2 is not connected to the remote server directly.
    3. The DPDK NIC port eno2 is connected through a switch.
    4. The DPDK application testpmd is not crashing - confirmed with pgrep testpmd.
    5. Instead, when run with set fwd txonly, packets flood the switch, and SSH packets from the other port are dropped.
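
    For reference, the install fix from point 1 is the standard meson/ninja flow (build directory path assumed):

      cd dpdk/build
      ninja
      sudo ninja install
      sudo ldconfig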

    Solution: please use a separate switch for data-path testing, or connect the DPDK port directly to the remote server.