
Running the same service in a GKE container, compared to a GCE VM


This is a general question about GKE compared to GCE. If one is running a lightweight service on a single small GCE VM, is it reasonable to try running that same service from a single GKE container on the same size instance? Or does the overhead of cluster management make this infeasible?

Specifics: I'm serving a low-traffic website from a tiny (f1-micro) GCE VM. For various reasons I thought I'd try moving it to serve from an apache/nginx container, with the same hardware underneath. In practice though, I find that GKE won't even let you create a cluster of f1-micro instances unless it has at least 3 nodes - the release notes say this is so there will be enough memory to manage pods.

I'd assumed that the same service would take up similar resources whether it runs in a VM or a container, but GKE's 3-node restriction makes it sound like simply managing the cluster eats more memory than serving my site does in the first place. Is that the case, or is the restriction meant for much heavier services than mine? (For reference, you can actually create a 3-node cluster of f1-micro instances and then resize it to 1 node, and it seems to run normally, but I haven't tried actually running a service this way.)
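For concreteness, the create-then-resize workaround looks roughly like this (the cluster name and zone are placeholders, and the exact gcloud flag names may vary by version):

```
# Create the 3-node f1-micro cluster that GKE insists on...
gcloud container clusters create micro-cluster \
    --machine-type=f1-micro \
    --num-nodes=3 \
    --zone=us-central1-a

# ...then shrink the node pool back down to a single node.
gcloud container clusters resize micro-cluster \
    --num-nodes=1 \
    --zone=us-central1-a
```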

Thanks!


Solution

  • GKE enables logging and monitoring by default, which runs Fluentd and Heapster pods in your cluster. These eat up a good chunk of memory. Even if you disable logging/monitoring, you still have to run Docker, Kubelet, and the DNS pod. That chews through the f1-micro's 600MB pretty quickly.

    I'd suggest a 1-node g1-small cluster over a 3-node (or 1-node) f1-micro cluster. The per-node cluster-management overhead is proportionally smaller, so your service would still have the same (or a larger) footprint to run in. But if the resize-to-1 workaround is working for you, it seems fine to just roll with that.
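    For example, a single-node g1-small cluster with logging and monitoring turned off could be created along these lines (the cluster name and zone are placeholders, and the exact flags depend on your gcloud version):

    ```
    # Sketch: 1-node g1-small cluster with Cloud Logging/Monitoring disabled,
    # which avoids running the Fluentd and Heapster pods mentioned above.
    gcloud container clusters create small-cluster \
        --machine-type=g1-small \
        --num-nodes=1 \
        --no-enable-cloud-logging \
        --no-enable-cloud-monitoring \
        --zone=us-central1-a
    ```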