Tags: kubernetes, resources, microk8s

Limiting microk8s maximum memory usage


We are using a self-hosted microk8s cluster (single-node for now) for our internal staging workloads. From time to time, the server becomes unresponsive and I can't even ssh into it. The only way out is a restart.

I can see that before the server crashes, its memory usage goes to the limit and the CPU load shoots up to over 1000. So running out of resources is likely to blame.

That brings me to the question: how can I set global limits so that microk8s does not consume all of the server's resources?


I know there are resource limits that can be assigned to Kubernetes pods, and ResourceQuotas to limit aggregate namespace resources. But that has the downside of low resource utilization (if I understand those right). For simplicity, let's say:

  • each pod is the same
  • its real memory needs can go from 50 MiB to 500 MiB
  • each pod is running in its own namespace
  • there are 30 pods
  • the server has 8 GiB of RAM
  1. I assign request: 50 Mi and limit: 500 Mi to each pod (see the snippet after this list). As long as the node has at least 50 Mi * 30 = 1500 Mi of memory, it can schedule all the requested pods. But there is nothing stopping all of the pods from using 450 Mi of memory each, which stays under the individual limits, yet adds up to 450 Mi * 30 = 13500 Mi in total, which is more than the server can handle. And I suspect this is what leads to the server crash in my case.

  2. I assign request: 500 Mi and limit: 500 Mi to the pod to ensure the total memory usage never goes above what I anticipate. This of course allows me to schedule only 16 pods. But when the pods run with no real load, using just 50 Mi of memory each, the RAM is severely underutilized.

  3. I am looking for a third option. Something to let me schedule pods freely and only start evicting/killing them when the total memory usage goes above a certain limit. And that limit needs to be configurable and lower than the total memory of the server, so that it does not die.
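
To be concrete, option 1 translates to roughly this resources block in each pod spec (option 2 would simply raise the request to 500 Mi):

resources:
  requests:
    memory: "50Mi"     # what the scheduler reserves when placing the pod
  limits:
    memory: "500Mi"    # hard cap; the container is OOM-killed above this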


We are using microk8s but I expect this is a problem all self-hosted nodes face, as well as something AWS/Google/Azure have to deal with too.

Thanks


Solution

  • Since microk8s runs directly on the host machine, all of the host's resources are available to it. So if you want to keep your cluster's resource consumption within bounds, you have to manage it in one of the ways below:

    1. Set up a LimitRange policy for pods in a namespace (a sample manifest is sketched after the list of constraints below).

    A LimitRange provides constraints that can:

    • Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
    • Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
    • Enforce a ratio between request and limit for a resource in a namespace.
    • Set default request/limit for compute resources in a namespace and automatically inject them into Containers at runtime.
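
    For example, a LimitRange that caps memory for every container in a namespace might look roughly like this; the namespace name and the figures are only placeholders:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: memory-limit-range     # placeholder name
      namespace: staging           # placeholder namespace
    spec:
      limits:
      - type: Container
        min:
          memory: 50Mi             # smallest request a container may declare
        max:
          memory: 500Mi            # largest limit a container may declare
        defaultRequest:
          memory: 50Mi             # injected when a container sets no request
        default:
          memory: 500Mi            # injected when a container sets no limit
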
    2. Use Resource Quotas per namespace.

    A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
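
    A quota that bounds the total memory a single namespace may request and consume could look roughly like this; again, the namespace and the figures are only examples:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: memory-quota           # placeholder name
      namespace: staging           # placeholder namespace
    spec:
      hard:
        requests.memory: 1Gi       # sum of all memory requests in the namespace
        limits.memory: 2Gi         # sum of all memory limits in the namespace
        pods: "10"                 # optional cap on the number of pods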

    3. Assign appropriate requests and limits to each pod.

    When you specify the resource request for Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use.
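
    Putting it together, a minimal pod manifest with such requests and limits might look like this; the name, image, and CPU figures are purely illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app            # placeholder name
    spec:
      containers:
      - name: app
        image: nginx:1.25          # any image; nginx is only an example
        resources:
          requests:
            memory: 50Mi           # used by the scheduler to pick a node
            cpu: 100m
          limits:
            memory: 500Mi          # exceeding this gets the container OOM-killed
            cpu: 500m              # CPU above this is throttled, not killed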