Tags: spring, performance, spring-boot, rocketmq

How can a Spring Boot application handle tons of requests?


I am using the Spring Boot / Spring Cloud framework to develop a web application. The system mainly processes HTTP RESTful requests from the client side and then saves them to a MySQL database.

I plan to make it more scalable: it should be possible to start more instances of each service so that the system can handle more incoming requests.

I'm not sure that what I'm doing is right. Could anyone check whether my current approach is reasonable, or point out any potential risks in it?

What I'm doing is:

  1. Service A receives requests in its controller and then asynchronously writes them to RocketMQ. RocketMQ is used for peak shaving, i.e. absorbing traffic spikes.

  2. Service B subscribes to the RocketMQ topic that Service A writes to and caches the messages in Redis as a list (steps 1 and 2 are sketched in the code after this list).

  3. Service C starts a daemon thread that checks the number of messages in Redis. When the cached list reaches a certain size, it pulls all the messages, saves them to MySQL, and then flushes the Redis cache.
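
A rough, hedged sketch of steps 1 and 2, assuming the rocketmq-spring-boot-starter and Spring Data Redis are on the classpath; the topic, consumer group, endpoint and key names (REQUEST_TOPIC, request-cache-group, /requests, requests:pending) are made up for illustration:

```java
import org.apache.rocketmq.client.producer.SendCallback;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.spring.annotation.RocketMQMessageListener;
import org.apache.rocketmq.spring.core.RocketMQListener;
import org.apache.rocketmq.spring.core.RocketMQTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// --- Service A: accept the request and hand it off to RocketMQ asynchronously ---
@RestController
class IngestController {

    private final RocketMQTemplate rocketMQTemplate;

    IngestController(RocketMQTemplate rocketMQTemplate) {
        this.rocketMQTemplate = rocketMQTemplate;
    }

    @PostMapping("/requests")
    public ResponseEntity<Void> ingest(@RequestBody String payload) {
        // asyncSend returns immediately, so the HTTP thread is not blocked by the broker
        rocketMQTemplate.asyncSend("REQUEST_TOPIC", payload, new SendCallback() {
            @Override
            public void onSuccess(SendResult sendResult) { /* optionally log the msgId */ }

            @Override
            public void onException(Throwable e) { /* log and decide whether to retry */ }
        });
        return ResponseEntity.accepted().build();
    }
}

// --- Service B (separate application): consume the topic and append to a Redis list ---
@Service
@RocketMQMessageListener(topic = "REQUEST_TOPIC", consumerGroup = "request-cache-group")
class RequestCacheListener implements RocketMQListener<String> {

    private final StringRedisTemplate redisTemplate;

    RequestCacheListener(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Override
    public void onMessage(String message) {
        // RPUSH keeps arrival order; "requests:pending" is a made-up key name
        redisTemplate.opsForList().rightPush("requests:pending", message);
    }
}
```

The callback only reports the outcome of the asynchronous send; if losing a request is unacceptable, a synchronous or transactional send would be the safer choice.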


Solution

  • As always, there can be more than one solution to a single problem. The following suggestions are based on my daily work and experience as a software architect.

    Facts

    Your system consists of three (micro)services (A, B and C), a message broker (RocketMQ), a cache (Redis) and a database (MySQL). In the comments you also mention that you plan to run it on F5 hardware and Docker.

    Suggestions

    Service A is exposed at the front end to handle HTTP requests. Asynchronous processing is used to manage the load; however, throughput is still limited by Service A's performance, so Service A should be scalable to allow higher throughput. The performance of a single instance must be evaluated (take a look at performance testing, stress testing, ...) to determine how far to scale.

    To enable automated scaling of Docker containers you will need an orchestration tool (such as Kubernetes) that scales your system based on configured metrics. Also think about the system resources the scaled system is allowed to use.

    Services B and C can also be scaled easily. Evaluate whether the features of Service B and Service C could be combined into a single service: instead of Service B only putting new data into Redis, it could also store it in MySQL. That depends on how much fragmentation you need and how you will manage the extra complexity that comes with it. Service B already reacts to published messages, while Service C seems to constantly poll the Redis cache for the number of entries; this polling could be replaced with keyspace notifications, sketched below.
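
    For example, Service C could subscribe to Redis keyspace notifications through Spring Data Redis instead of polling. A rough sketch, assuming the Redis server is configured with notify-keyspace-events including "Kl" (keyspace events for list commands) and the made-up key name requests:pending:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.listener.PatternTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

@Configuration
class PendingRequestsNotificationConfig {

    @Bean
    RedisMessageListenerContainer keyspaceListenerContainer(RedisConnectionFactory connectionFactory) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        // The channel __keyspace@0__:requests:pending publishes the command name
        // (e.g. "rpush") whenever the key changes, so Service C can react to writes
        // instead of polling the list size in a loop.
        container.addMessageListener(
                (message, pattern) -> {
                    String event = new String(message.getBody());
                    if ("rpush".equals(event) || "lpush".equals(event)) {
                        // check the list size here and flush to MySQL if the threshold is reached
                    }
                },
                new PatternTopic("__keyspace@0__:requests:pending"));
        return container;
    }
}
```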

    Be careful when you read data from Redis, store it in MySQL and then flush the cache. If you use a single Redis key for all instances of the services that write to it, you can easily lose or flush data that has not yet been stored in MySQL; one way to drain the list atomically is sketched below.
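
    One way to make the "read then flush" step safe is to drain the list atomically with a small Lua script executed through Spring Data Redis. A minimal sketch, again using the made-up key requests:pending:

```java
import java.util.Collections;
import java.util.List;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;
import org.springframework.stereotype.Component;

@Component
class PendingRequestsDrainer {

    // LRANGE + DEL run inside one Lua script, so they execute atomically on the Redis
    // server: nothing can be pushed between the read and the flush and then be lost.
    @SuppressWarnings("rawtypes")
    private static final DefaultRedisScript<List> DRAIN_SCRIPT = new DefaultRedisScript<>(
            "local items = redis.call('LRANGE', KEYS[1], 0, -1) " +
            "redis.call('DEL', KEYS[1]) " +
            "return items",
            List.class);

    private final StringRedisTemplate redisTemplate;

    PendingRequestsDrainer(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @SuppressWarnings("unchecked")
    public List<String> drain() {
        List<String> items = redisTemplate.execute(
                DRAIN_SCRIPT, Collections.singletonList("requests:pending"));
        // persist 'items' into MySQL here; if that insert fails, the items should be
        // pushed back (or written to a dead-letter key) rather than silently dropped
        return items;
    }
}
```

    Renaming the key to a per-instance "processing" key (RENAME) before reading achieves a similar effect and keeps the batch around until the MySQL insert succeeds.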

    When dealing with asynchronous processing you often have to accept eventual consistency, meaning that the data Service A handles will not be available right away to other services that might want to read it from MySQL (just a thought for the wider picture; its importance varies from case to case).