I am deploying a Spring Boot application in Kubernetes using Jib. When the service starts, memory usage is around 300 MB, but it grows to about 1.3 GB over time. How can I avoid this increase when there is no usage? The application is up and running, the API gateway is not yet open to users, and still the memory keeps increasing over time.
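One way to see whether the growth is heap or native memory is HotSpot's Native Memory Tracking; a minimal sketch, assuming a HotSpot-based image that ships jcmd (the pod name is a placeholder):

    # Sketch: enable Native Memory Tracking via the env var, then query
    # the running JVM (PID 1 inside the container)
    JAVA_TOOL_OPTIONS="-XX:NativeMemoryTracking=summary -Dspring.profiles.active=prod"
    kubectl exec -it <login-pod> -- jcmd 1 VM.native_memory summary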
Kubernetes deployment configuration:
# Source: services/charts/login/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: login
    app.kubernetes.io/version: 1.16.0
  name: login
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/name: login
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/name: login
    spec:
      containers:
        - env:
            - name: APP_NAME
              value: login-release-name
            - name: JAVA_TOOL_OPTIONS
              value: -Dspring.profiles.active=prod
          image: dockerregistry.com/login:1.0.0
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  - sh
                  - -c
                  - sleep 10
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
          name: login
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 30
          resources:
            limits:
              cpu: 2000m
              memory: 1Gi
            requests:
              cpu: 100m
              memory: 1Gi
      imagePullSecrets:
        - name: regcred
      terminationGracePeriodSeconds: 60
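For reference, the heap can be capped relative to the container's memory limit instead of relying on JVM defaults; a sketch of the env entry, assuming JDK 8u191+ (the 50% figure is an assumption, not a tested value):

    # Sketch: cap the heap at half the 1Gi container limit, leaving room
    # for metaspace, thread stacks, and other native memory
    - name: JAVA_TOOL_OPTIONS
      value: -Dspring.profiles.active=prod -XX:MaxRAMPercentage=50.0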
Spring Boot configuration for Kubernetes:
server.port=8080
server.shutdown=graceful
spring.lifecycle.timeout-per-shutdown-phase=45s
server.tomcat.accept-count=100
server.tomcat.max-connections=8000
server.tomcat.connection-timeout=10000
server.tomcat.max-threads=200
server.tomcat.min-spare-threads=10
spring.datasource.url=jdbc:postgresql://${DB_HOST:#{"postgres"}}/postgres
spring.datasource.username=${DB_USER:#{"postgres"}}
spring.datasource.password=${DB_PASSWORD:#{"na"}}
spring.datasource.type=org.springframework.jdbc.datasource.DriverManagerDataSource
spring.datasource.driver-class-name=org.postgresql.Driver
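A side note on the datasource: org.springframework.jdbc.datasource.DriverManagerDataSource opens a new connection on every call and does no pooling; without the spring.datasource.type override, Spring Boot defaults to HikariCP. A sketch of a pooled setup (the pool sizes are assumptions, not recommendations):

    # Sketch: drop the spring.datasource.type line to get the HikariCP
    # default, then bound the pool explicitly
    spring.datasource.hikari.maximum-pool-size=10
    spring.datasource.hikari.minimum-idle=2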
Do we need to configure anything to keep memory usage under the 1 GB limit? Right now Kubernetes kills the pod if it goes beyond 1 GB.
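As a rough budget of why the pod can pass 1 Gi even with a bounded heap (the numbers are illustrative assumptions, not measurements):

    heap (-Xmx1G)                                        ~1024 MB
    metaspace + code cache                               ~100-250 MB
    thread stacks (200 Tomcat threads @ ~1 MB default)   ~200 MB
    direct buffers, GC structures, etc.                  extra
    ---------------------------------------------------------------
    total                                                well above 1 Gi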
I am creating the image using Jib:
mvn compile com.google.cloud.tools:jib-maven-plugin:3.3.0:dockerBuild -Dimage=login -DskipTests
Update 05 02 2023: Changing the JVM runtime to OpenJ9 reduced the memory footprint dramatically. I also changed to a JRE base image instead of a JDK one.
Updated command:
mvn compile com.google.cloud.tools:jib-maven-plugin:3.3.0:dockerBuild -Djib.from.image=ibm-semeru-runtimes:open-8-jre -Dimage=$imageName -DskipTests
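The same build settings can also live in the pom.xml instead of on the command line; a sketch of the equivalent jib-maven-plugin configuration (the jvmFlags entry is an optional assumption, not part of my actual build):

    <!-- Sketch: equivalent Jib setup in pom.xml -->
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>3.3.0</version>
      <configuration>
        <from>
          <image>ibm-semeru-runtimes:open-8-jre</image>
        </from>
        <container>
          <!-- example flag only; tune per workload -->
          <jvmFlags>
            <jvmFlag>-XX:MaxRAMPercentage=50.0</jvmFlag>
          </jvmFlags>
        </container>
      </configuration>
    </plugin>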
OLD Answer
After a long analysis I found that there is no memory leak in the application.
It was due to the base image, so I switched to openjdk:8-jdk-alpine using the -Djib.from.image= option in Jib.
First I added some JVM options to bound the heap, but it behaved the same:
-XX:+UseG1GC -Xms100M -Xmx1G
(changed the garbage collector and set the max heap to 1 GB; since the max heap equals the whole container limit, the JVM's native overhead can still push the pod past 1 Gi)
Using VisualVM, as suggested by @Mihai Pasca, I analyzed the memory usage in the container.
Updated Jib command:
mvn compile com.google.cloud.tools:jib-maven-plugin:3.3.0:dockerBuild -Djib.from.image=openjdk:8-jdk-alpine -Dimage=$imageName -DskipTests -Djib.container.environment=JAVA_TOOL_OPTIONS="-XX:+UseG1GC -Xms100M -Xmx1G -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=9010 -Djava.rmi.server.hostname=localhost" -Djib.container.ports=8080,9010
Then I connected to port 9010 from VisualVM using its Add JMX Connection option.
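If the pod is running in the cluster rather than in local Docker, a port-forward can expose the JMX port (the pod name is a placeholder):

    # Sketch: forward local 9010 to the pod, then point VisualVM at localhost:9010
    kubectl port-forward pod/<login-pod> 9010:9010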
I found no overuse, and the GC works fine.
Memory usage by the container is also fine and no longer grows continuously: it goes up and comes back down when idle, whereas previously it grew continuously.
Now I have started the same in Kubernetes and it is working fine.