How do I get a Cloud Run container that uses more than 2 GB run on custom GKE cluster?
Since Cloud Run uses Knative, I wonder if it is possible to tweak the deployment descriptor with a higher allocated/allowed memory limit to run it on GKE.
apiVersion: serving.knative.dev/v1alpha1
kind: Revision
metadata:
...
How do I get a Cloud Run container that uses more than 2 GB run on custom GKE cluster?
The maximum memory that you could allocate to a container on Cloud Run Managed was originally 2 GB.
[UPDATE 2]
Cloud Run now supports up to 32 GiB, but the larger memory sizes are in preview. The memory limits documentation (https://cloud.google.com/run/docs/configuring/memory-limits) provides the details.
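Per those docs, the larger memory sizes also require a proportionally larger CPU allocation. A hedged sketch of deploying at the 32 GiB preview limit (the service name my-service and the region are illustrative placeholders):

gcloud beta run deploy my-service \
  --image gcr.io/cloudrun/hello \
  --memory=32Gi \
  --cpu=8 \
  --region us-central1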
[UPDATE]
For Cloud Run on Kubernetes, you can request more memory:
gcloud beta run deploy --image gcr.io/cloudrun/hello --memory=4G --cluster ha-cluster-1
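If you would rather set this in the deployment descriptor itself, the same limit can be expressed in the Knative service manifest that Cloud Run on GKE consumes. A minimal sketch, assuming the current serving.knative.dev/v1 API (the v1alpha1 API in the question has since graduated); the service name hello is a placeholder:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                  # placeholder service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/cloudrun/hello
          resources:
            limits:
              memory: 4Gi      # same limit as the --memory flag above

Applying this with kubectl apply -f creates a new revision with the raised limit; on the managed platform the same field is still subject to the imposed service limits.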
Since Cloud Run uses Knative, I wonder if it is possible to tweak the deployment descriptor with a higher allocated/allowed memory limit to run it on GKE.
Cloud Run Managed does not run on Knative; it runs on gVisor. I wrote an article that describes the Cloud Run infrastructure and the Knative API that both Cloud Run Managed and Cloud Run on Kubernetes expose. However, even with direct access to the Cloud Run Knative API, you cannot get around the imposed service limits.
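You can see that shared API surface on a GKE cluster by querying the Knative resources directly; a short sketch (the revision name hello-00001 is a placeholder):

# List the Knative services and revisions that Cloud Run on GKE manages
kubectl get services.serving.knative.dev
kubectl get revisions.serving.knative.dev
# Inspect the memory limit actually set on a revision
kubectl describe revisions.serving.knative.dev hello-00001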
The purpose of Cloud Run is to simplify deployments by abstracting away the implementation details of the underlying infrastructure. When you need to go beyond its limits, as in this case, you should deploy directly to Kubernetes instead.