Tags: build, dpdk, meson-build, dpdk-pmd

Cannot allocate memory: Failed to create packet memory pool (rte_pktmbuf_pool_create failed) for port_id 0


I have upgraded DPDK from 17.02 to 21.11. The RPM was built and installed successfully. While running the custom application, I see the following error:

Cannot allocate memory#012ms_dpdk::port::port: Failed to create packet memory pool (rte_pktmbuf_pool_create failed) - for port_id

Function call parameters: rte_pktmbuf_pool_create(port-0, 267008, 32, 0, 2176, 0)
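
For reference, a minimal sketch of what this call path looks like, with the rte_strerror diagnostic from item 1 below folded in. The surrounding function and variable names are illustrative, not the actual proxy code; only the pool parameters are taken from the logged call above:

    #include <stdint.h>
    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Illustrative only: pool parameters copied from the logged call. */
    static struct rte_mempool *
    create_port_pool(uint16_t port_id, int socket_id)
    {
        struct rte_mempool *pool;

        pool = rte_pktmbuf_pool_create("port-0",  /* pool name           */
                                       267008,    /* number of mbufs     */
                                       32,        /* per-core cache size */
                                       0,         /* private data size   */
                                       2176,      /* data room size      */
                                       socket_id);
        if (pool == NULL)
            fprintf(stderr,
                    "Failed to create packet memory pool - for port_id %u: %s\n",
                    (unsigned int)port_id, rte_strerror(rte_errno));
        return pool;
    }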

  1. I have added std::string msg = rte_strerror(rte_errno); to the error logging, and it reports "Cannot allocate memory".

  2. The ldd output shows the libraries are linked properly; there are no "not found" entries.

    ldd /opt/NETAwss/proxies/proxy | grep "buf"
            librte_mbuf.so.22 => /lib64/librte_mbuf.so.22 (0x00007f795873f000)
    
    ldd /opt/NETAwss/proxies/proxy | grep "pool"
            librte_mempool_ring.so.22 => /lib64/librte_mempool_ring.so.22 (0x00007f7a1da3f000)
            librte_mempool.so.22 => /lib64/librte_mempool.so.22 (0x00007f7a1da09000)
    
  3. igb_uio is also loaded successfully.

    lsmod | grep uio
    
    igb_uio                 4190  1
    uio                     8202  3 igb_uio
    
  4. cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages returns 512.

    grep Huge /proc/meminfo
    AnonHugePages:    983040 kB
    ShmemHugePages:        0 kB
    HugePages_Total:     512
    HugePages_Free:      511
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    
  5. When I run dpdk-testpmd, it seems to work fine. Below is the output of the test application.

      ./dpdk-testpmd
     EAL: Detected CPU lcores: 2
     EAL: Detected NUMA nodes: 1
     EAL: Detected static linkage of DPDK
     EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
     EAL: Selected IOVA mode 'PA'
     EAL: Probe PCI driver: net_vmxnet3 (15ad:7b0) device: 0000:13:00.0 (socket 0)
     TELEMETRY: No legacy callbacks, legacy socket not created
     testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
     testpmd: preferred mempool ops selected: ring_mp_mc
    
     Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
    
     Configuring Port 0 (socket 0)
     Port 0: 00:50:56:88:9A:43
     Checking link statuses...
     Done
     No commandline core given, start packet forwarding
     io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
     Logical Core 1 (socket 0) forwards packets on 1 streams:
       RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
    
       io packet forwarding packets/burst=32
       nb forwarding cores=1 - nb forwarding ports=1
       port 0: RX queue number: 1 Tx queue number: 1
         Rx offloads=0x0 Tx offloads=0x0
         RX queue: 0
           RX desc=0 - RX free threshold=0
           RX threshold registers: pthresh=0 hthresh=0  wthresh=0
           RX Offloads=0x0
         TX queue: 0
           TX desc=0 - TX free threshold=0
           TX threshold registers: pthresh=0 hthresh=0  wthresh=0
           TX offloads=0x0 - TX RS bit threshold=0
     Press enter to exit
    
     Telling cores to stop...
     Waiting for lcores to finish...
    
       ---------------------- Forward statistics for port 0  ----------------------
       RX-packets: 2              RX-dropped: 0             RX-total: 2
       TX-packets: 2              TX-dropped: 0             TX-total: 2
       ----------------------------------------------------------------------------
    
       +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
       RX-packets: 2              RX-dropped: 0             RX-total: 2
       TX-packets: 2              TX-dropped: 0             TX-total: 2
       ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    
     Done.
    
     Stopping port 0...
     Stopping ports...
     Done
    
     Shutting down port 0...
     Closing ports...
     Port 0 is closed
     Done
    
     Bye...
    

I am not able to figure out the root cause of this error. Any help is appreciated. Thanks.


Solution

    1. The memory allocation failure after moving from DPDK 17.02 to 21.11 is expected, given the fixed 512 * 2 MB hugepage budget and the custom application's memory requirements.
    2. DPDK 21.11 introduces new features such as telemetry, fbarray-based memory management, multi-process communication sockets, and service cores, which require more internal memory allocation (much of it taken from hugepages rather than the heap).
    3. rte_pktmbuf_pool_create tries to allocate 267008 mbufs of 2176 bytes each, plus additional per-mbuf and mempool overhead, which comes to roughly 0.8 GB.

    Hence, with the new memory model and these services, the total hugepage-backed (mmapped) memory requirement shoots past 1 GB, while the system currently has only 512 * 2 MB = 1 GB of hugepages allocated.
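
    A rough back-of-the-envelope check of that arithmetic (plain C, no DPDK calls needed). It only counts the raw data rooms; the per-mbuf struct rte_mbuf and mempool header/trailer overhead, which push the total toward the ~0.8 GB estimate above, are not included:

        #include <stdio.h>

        int main(void)
        {
            /* Parameters from the failing call:
             * rte_pktmbuf_pool_create(port-0, 267008, 32, 0, 2176, 0) */
            const unsigned long n_mbufs   = 267008;
            const unsigned long data_room = 2176;                /* bytes of data room per mbuf */

            unsigned long raw_bytes       = n_mbufs * data_room; /* data rooms only, no overhead */
            unsigned long hugepage_budget = 512UL * 2048 * 1024; /* 512 x 2 MB hugepages         */

            printf("mbuf data rooms alone: %.2f GB\n", raw_bytes / 1e9);        /* ~0.58 GB */
            printf("hugepage budget:       %.2f GB\n", hugepage_budget / 1e9);  /* ~1.07 GB */

            /* On top of the ~0.58 GB of data rooms, each mbuf also carries
             * struct rte_mbuf plus mempool object header/trailer, and the pool
             * needs its ring and bookkeeping. That leaves too little of the
             * 1 GB hugepage budget for DPDK 21.11's own hugepage-backed
             * allocations (fbarray, telemetry, multi-process sockets,
             * service cores, driver queues). */
            return 0;
        }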

    Solutions:

    1. Reduce the number of mbufs from 267008 to a lower value, such as 200000, to fit within the available memory.
    2. Increase the number of available hugepages from 512 to 600.
    3. Use EAL options to shrink the memory footprint: legacy memory mode, no telemetry, no multi-process, no service cores (see the sketch below).
    4. Use the EAL arguments --socket-mem or -m to fix the memory allocation.
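
    As an illustration of options 3 and 4, a minimal sketch of an EAL initialization with a trimmed memory footprint. The flag selection here is an assumption for this example; --legacy-mem, --no-telemetry, --no-shconf and -m are standard EAL parameters in 21.11, but which of them your deployment needs (and their mutual compatibility) should be verified against the EAL documentation:

        #include <stdio.h>
        #include <rte_eal.h>
        #include <rte_errno.h>

        int main(int argc, char **argv)
        {
            /* Illustrative EAL arguments only -- adjust to your deployment. */
            char *eal_args[] = {
                argv[0],
                "--legacy-mem",    /* pre-18.05 static memory model                  */
                "--no-telemetry",  /* skip the telemetry socket and its allocations  */
                "--no-shconf",     /* no shared config files for secondary processes */
                "-m", "1024",      /* cap DPDK memory usage at 1024 MB               */
            };
            int eal_argc = (int)(sizeof(eal_args) / sizeof(eal_args[0]));

            if (rte_eal_init(eal_argc, eal_args) < 0) {
                fprintf(stderr, "EAL init failed: %s\n", rte_strerror(rte_errno));
                return 1;
            }

            /* ... create a smaller mbuf pool here, e.g. 200000 mbufs
             * instead of 267008 (solution 1) ... */

            rte_eal_cleanup();
            return 0;
        }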

    Note: the RPM package did not initially include libdpdk.pc, which is required for obtaining the platform-specific CFLAGS and LDFLAGS via pkg-config.