Tags: redis, benchmarking, scheduler, cpu-usage, epoll

How to lower CPU usage of finish_task_switch(), called by epoll_wait?


I've written a simple epoll-driven server to benchmark network/IO performance. The server simply receives a request and sends a response immediately. It is slower than redis-server's GET: 38k req/s vs 40k req/s. Both use redis-benchmark as the load runner, and both run at >99% CPU.

bench redis-server: redis-benchmark -n 1000000 -c 20 -t get -p 6379

bench myserver : redis-benchmark -n 1000000 -c 20 -t get -p 6399

I've profiled both using Linux perf and eliminated the per-request epoll_ctl in myserver (matching what redis-server does). Now the problem is that the function finish_task_switch() takes too much CPU time, about 10%-15% (versus ~3% for redis-server and redis-benchmark on the same machine).

The call flow (read it top-down) is
-> epoll_wait(25%)
-> entry_SYSCALL_64_after_hwframe(23.56%)
-> do_syscall_64(23.23%)
-> sys_epoll_wait(22.36%)
-> ep_poll(21.88%)
-> schedule_hrtimeout_range(12.98%)
-> schedule_hrtimeout_range_clock(12.74%)
-> schedule(11.30%)
-> _schedule(11.30%)
-> finish_task_switch(10.82%)

I've tried writing the server against the raw epoll API and against redis's event-loop API in redis/src/ae.c; nothing changed.
I've examined how redis-server and redis-benchmark use epoll; no tricks found.
myserver is built with the same CFLAGS as redis, just like redis-benchmark.
The CPU usage is unaffected by level- vs edge-triggered mode, blocking vs non-blocking client fds, and whether epoll_wait's timeout is set or not.

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#include <stdio.h>
#include <stdlib.h> // exit
#include <string.h> // memset

#include "anet.h"

#define MAX_EVENTS 32

typedef struct {
    int fd;
    char querybuf[256];
} client;
client *clients;
char err[256];

#define RESPONSE_REDIS "$128\r\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\r\n"


static int do_use_fd(client *c)
{
    int n = read(c->fd, c->querybuf, sizeof(c->querybuf));
    if (n == 0) { printf("Client Closed\n"); return n; }
    if (n < 0) return n; /* e.g. EAGAIN on a non-blocking fd */
    n = write(c->fd, RESPONSE_REDIS, sizeof(RESPONSE_REDIS)-1);
    return n;
}

int main()
{
    struct epoll_event ev, events[MAX_EVENTS];
    int listen_sock, conn_sock, nfds, epollfd;

    epollfd = epoll_create(MAX_EVENTS);

    listen_sock = anetTcpServer(err, 6399, NULL, MAX_EVENTS);

    ev.events = EPOLLIN;
    ev.data.fd = listen_sock;

    epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev);

    /* Indexed by fd, so this must cover the highest expected fd value. */
    clients = (client *)malloc(sizeof(client) * MAX_EVENTS);
    memset(clients, 0, sizeof(client) * MAX_EVENTS);

    for (;;) {
        int n;
        struct sockaddr addr;
        socklen_t addrlen = sizeof(addr);

        nfds = epoll_wait(epollfd, events, MAX_EVENTS, 100);

        for (n = 0; n < nfds; ++n) {
            if (events[n].data.fd == listen_sock) {
                conn_sock = accept(listen_sock,
                                   (struct sockaddr *) &addr, &addrlen);
                if (conn_sock == -1) continue;
                anetNonBlock(err, conn_sock);
                ev.events = EPOLLIN;
                //ev.events = EPOLLIN | EPOLLET;
                ev.data.fd = conn_sock;
                epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock, &ev);
                clients[conn_sock].fd = conn_sock;
            } else {
                client *c = &clients[events[n].data.fd];
                int ret = do_use_fd(c);
                if (ret == 0) {
                    /* Client closed: remove from epoll and close the fd. */
                    epoll_ctl(epollfd, EPOLL_CTL_DEL, c->fd, &ev);
                    close(c->fd);
                }
            }
        }
    }
}

Solution

  • The server's listening fd is blocking. Making it non-blocking lowers the CPU usage of finish_task_switch() to <2%.