Tags: performance, docker, config, cpu, haproxy

How is cpu config for haproxy handled within docker?


I'm wondering about haproxy performance from within a container. To keep things simple: if I have a vm running haproxy with this cpu config, I know what to expect:

  nbproc 1
  nbthread 8
  cpu-map auto:1/1-8 0-7

If I want to port the (whole) config to docker for testing purposes, without any fancy swarm magic or setup, just docker, so that I can understand how things map, I'd imagine that the cpu config gets simpler and that the haproxy instance is meant to scale instead. I guess I have two questions:

Would you even bother configuring cpu from within an haproxy docker container, or would you scale the container behind a service? Maybe you need both.

Can a single container utilise the above config as though it were running on the system as a daemon? Would docker / containerd even care about this config?
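For what it's worth, I know docker exposes its own cpu controls at run time (--cpus and --cpuset-cpus on docker run), which seems to overlap with what cpu-map does. Something like this, with purely illustrative values and the config path taken from the official haproxy image:

  # pin the container to host cores 0-7 and cap it at 8 cpus' worth of time
  docker run -d --name haproxy \
    --cpuset-cpus="0-7" \
    --cpus="8" \
    -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
    haproxy:latest

So part of my question is really whether cpu-map inside the container fights with, duplicates, or complements these flags.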

I know that having 4 containers, each with its own config and the cpu evenly mapped like so, wouldn't scale or make any sense:

  # container 1
  nbproc 1
  nbthread 2
  cpu-map auto:1/1-2 0-1

  # container 2
  nbproc 1
  nbthread 2
  cpu-map auto:1/3-4 2-3

  # container 3
  nbproc 1
  nbthread 2
  cpu-map auto:1/5-6 4-5

  # container 4
  nbproc 1
  nbthread 2
  cpu-map auto:1/7-8 6-7

But it's this sort of saturation that I'm wondering about. How exactly do haproxy and docker handle this sort of cpu nuance?


Solution

  • I've confirmed that there's little to no perceivable impact to service when running haproxy under containerd versus under systemd, using the image provided by haproxy. Running a single container with -d, --network host, and no limits on cpu or memory, the worst I've seen is a 2-3% impact on external web latency, with live traffic peaking at about 50-60 MB/sec; that figure itself depends on throughput and the type of requests. On an 8-core vm with 4GB of memory (host cpu is a Xeon Gold 6130) and a gigabit interface, memory utilisation is almost identical, and cpu performance also remains stable with perhaps a 3-5% increase in utilisation. These tests are private and unpublished.
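    To be clear about the shape of that test, the single-container run was along these lines (the image tag and config mount path here are assumptions based on the official haproxy image, not my exact command):

      # detached, host networking, no cpu or memory limits
      docker run -d --network host \
        -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
        haproxy:latest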

    As far as cpu configuration goes:

      nbproc 1
      nbthread 8
      cpu-map auto:1/1-8 0-7
      master-worker


    This config maps 1:1 between containerd and systemd and yields the results already mentioned. The proc and threads start up under containerd and function as you'd expect. At peak this takes about 80-90% cpu out of the 800% total, i.e. less than one fully loaded core. So in theory this container could be scaled a further 8 times with this configuration; 5 or 6 times to leave some headroom.
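    If you do scale it out, you can also push the core pinning down to docker instead of cpu-map, e.g. one copy per pair of host cores. This is a sketch, not something I tested; the names, core numbers and config paths are illustrative, and each config would need to bind different frontend ports since both copies share the host network:

      docker run -d --name hap1 --network host --cpuset-cpus="0,1" \
        -v "$(pwd)/hap1.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy:latest
      docker run -d --name hap2 --network host --cpuset-cpus="2,3" \
        -v "$(pwd)/hap2.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy:latest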

    Also note that any fluctuations in these performance data are likely down to my environment: these tests were taken from a real environment across multiple sites, not a test bed where I controlled every aspect. Depending on your host cpu and load, your results will vary.