kubernetes · microservices · event-driven

IPC microservices AMQP and resilience


I am designing the architecture for my microservices platform running on Kubernetes. My current architecture looks like this: Current Architecture

Description:

  • I use Flask to build a RESTful API.
  • For my IPC mechanism and event-driven messaging, I use RabbitMQ.
  • Each microservice contains code for a RabbitMQ producer and consumer.
  • When the Flask app starts, a consumer is instantiated within a child process (using the multiprocessing library). The process is not joined and is not killed for the entire lifetime of the main Flask app.
  • A producer is instantiated only when a POST/PUT request is handled.
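A minimal sketch of this setup, assuming the pika client and an illustrative queue name "events" (the real broker URL, queue, and DB write are placeholders):

```python
import json
import multiprocessing


def handle_event(body: bytes) -> dict:
    """Decode a message body; the real service would write the event to the DB."""
    return json.loads(body)


def run_consumer(amqp_url: str = "amqp://guest:guest@rabbitmq:5672/%2F") -> None:
    """Blocking consumer loop; it dies if the broker connection fails."""
    import pika  # third-party RabbitMQ client; imported lazily so the sketch stays importable

    connection = pika.BlockingConnection(pika.URLParameters(amqp_url))
    channel = connection.channel()
    channel.queue_declare(queue="events", durable=True)

    def on_message(ch, method, properties, body):
        handle_event(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="events", on_message_callback=on_message)
    channel.start_consuming()  # raises if RabbitMQ goes away


if __name__ == "__main__":
    # The Flask app spawns the consumer in a detached child process:
    proc = multiprocessing.Process(target=run_consumer, daemon=True)
    proc.start()
    # ... Flask app.run() would follow here ...
```

Note that nothing restarts `run_consumer` if `start_consuming()` raises, which is exactly the failure mode described below.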

And I was wondering: what happens if RabbitMQ crashes?

  • The consumer in the microservice will no longer be alive, because it raises an exception when connecting to RabbitMQ
  • The microservice API (Flask) will continue to live

So my question is the following:
Is it good practice to separate the consumer process into an independent container?

-> The container will run alongside the main app in the same pod.
-> The sidecar consumer will have a liveness endpoint, so if RabbitMQ crashes again, Kubernetes will restart only this container.
-> The sidecar consumer will have access to the database to write events.
-> The producer can stay in the main Flask app, which is resilient enough for me.
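The proposed layout could look roughly like this pod spec. All names, image references, and probe values here are illustrative, not taken from the actual platform:

```yaml
# Hypothetical pod with the Flask API and the consumer as a sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: my-microservice
spec:
  containers:
    - name: api                        # main Flask app; the producer lives here
      image: my-registry/flask-api:latest
      ports:
        - containerPort: 5000
    - name: consumer                   # sidecar RabbitMQ consumer
      image: my-registry/consumer:latest
      livenessProbe:                   # on failure, the kubelet restarts only this container
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
```

The key property is that a failing liveness probe restarts the `consumer` container without touching the `api` container in the same pod.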

Is it correct to do that?
Next Architecture


Solution

  • Yes, I think that's a good approach, because you rely on Kubernetes to check whether that container is up (via the liveness probe), instead of doing that from the application.

    You can also monitor and alert on events like that (containers going down) using the cluster's observability stack.