I have a question about how XDP kernel code works in the Linux kernel.

My question is: does XDP kernel code work like a hook, or like a process?

I am not sure about this.

If I write an XDP kernel program that sleeps for 100 ms and then returns XDP_PASS, does this cause a bottleneck for other packets, or does the CPU just spend cycles on the sleep?

If it works like a hook and runs asynchronously, there should be no problem processing more than 10 packets per second, right? The Linux network stack would just receive packets with a delay?

Or, if it works like a process, is the maximum throughput limited to 10 packets per second?
Thanks.
I tried to find XDP documentation in the Linux kernel docs, but it is hard to find.
> My question is: does XDP kernel code work like a hook, or like a process?
It works like a hook. When an XDP program is attached to a given network device, it is registered with the network driver (if the driver has XDP support). The driver makes the packet data of incoming traffic available and then calls the XDP program. This can happen concurrently for different packets on different CPUs.
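To make that concrete, here is a minimal sketch of an XDP program; the file name and the function name `xdp_prog` are my own choices, not anything mandated by the kernel. The driver invokes the attached program once per received packet, and the return code tells the driver what to do with that packet:

```c
// xdp_pass.c -- minimal XDP program (illustrative sketch)
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
    // ctx->data and ctx->data_end bound the raw packet bytes,
    // should you want to inspect them.
    return XDP_PASS; // hand the packet on to the normal network stack
}

char LICENSE[] SEC("license") = "GPL";
```

Such a program is typically compiled with clang for the BPF target (`clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o`) and attached with iproute2, e.g. `ip link set dev eth0 xdp obj xdp_pass.o sec xdp`.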
> If I write an XDP kernel program that sleeps for 100 ms and then returns XDP_PASS, does this cause a bottleneck for other packets, or does the CPU just spend cycles on the sleep?
eBPF programs have no way to "sleep"; there is no helper or mechanism to do so. This is deliberate, since it is in the kernel's best interest to always finish packet processing as quickly as possible. You could try to mimic a sleep by making the program burn a lot of CPU cycles, but the verifier puts a hard limit on how much work a program can do. More CPU time spent in XDP does decrease throughput if you are CPU bound; on slower network devices the impact is small.
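As an illustration (my own sketch, not something from the kernel docs): on kernels new enough to accept bounded loops, the closest you can get to a delay is a busy loop whose bound the verifier can prove, and while it spins, the CPU it runs on cannot pick up further packets from that receive queue:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_busy(struct xdp_md *ctx)
{
    volatile __u64 sink = 0;

    // The bound must be provable at verification time; the verifier
    // rejects loops it cannot prove terminate and caps the total number
    // of instructions it will analyze, so an arbitrary 100 ms delay is
    // simply not expressible.
    for (int i = 0; i < 10000; i++)
        sink += i; // burn cycles; this stalls the RX path on this CPU

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```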
> If it works like a hook and runs asynchronously, there should be no problem processing more than 10 packets per second, right? The Linux network stack would just receive packets with a delay? Or, if it works like a process, is the maximum throughput limited to 10 packets per second?
The kernel internally does not have the concept of a process; processes only exist in userspace. The kernel does have threads. The scheduling logic is out of scope here; just know that there are more threads than logical CPUs. For tasks like networking, the kernel typically has one thread per logical CPU, pinned to that CPU to take advantage of CPU caches and the like.
XDP programs execute in the context of one of these kernel threads, the ones dedicated to reading packets from the network device and handing them to the network stack. On older kernels an XDP program cannot be interrupted; on newer kernels it can, but as soon as the scheduler switches back to the kthread, execution continues on the same CPU.
This means that at most as many packets are being processed at the same time as there are logical CPU cores in the system. However, both the network card and the network stack have buffers in which additional pending packets can wait.
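This per-CPU execution model is easy to observe. Below is a sketch (the map and function names are my own) that counts packets in a BPF_MAP_TYPE_PERCPU_ARRAY: each logical CPU increments its own slot without any locking, because at most one instance of the program runs per CPU at a time, while different CPUs run the program concurrently:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_count(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);

    if (val)
        (*val)++; // safe without atomics: this slot belongs to this CPU

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

Reading the map from userspace returns one value per possible CPU; summing them gives the total packet count, and the per-CPU breakdown shows how many CPUs were handling packets in parallel.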