c# · rest · microservices

How to scale a RESTful API across several workers with one entry point?


I have a C# microservice (.NET Core 2.x) with a RESTful API on board, published on <someaddress>:<someport> (e.g. server:9999). It works well, but it has become slow under a heavy query load, and I need to make it faster.

The first idea is to create several instances of the microservice (i.e. workers) on different servers (e.g. node1:9999, node2:9999), but I don't see how I can keep the original entry point (server:9999).

Do I need third-party software like a load balancer, or is there an easier way to make this work?


Solution

  • You'll definitely need something like a load balancer. A few options:

    • If your DNS server supports it, you could use DNS round robin: simply create multiple DNS entries for the same hostname (e.g. two A records for server, one pointing at node1's IP and one at node2's). This is an extremely basic form of load balancing and only spreads traffic if you have enough different clients (because of DNS caching). It also does nothing when one of the nodes goes down: roughly half of the calls will still be routed to the dead node and fail.
    • You could use a software load balancer such as nginx or HAProxy, installed on one of the nodes or on a third node (see the sketch after this list). Most of these load balancers can detect nodes that are down and stop routing to them, but the node running the load balancer itself remains a single point of failure.
    • A hardware load balancer: very expensive, but stable and fast.

    The second option is probably the best, if you only have one service with two nodes and fault tolerance is not your primary concern.
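    For illustration, here is a minimal sketch of the nginx approach, assuming nginx runs on the machine that currently answers server:9999 and the two worker instances (the hypothetical node1:9999 and node2:9999 from the question) host the API:

        # /etc/nginx/conf.d/myservice.conf -- minimal reverse-proxy sketch
        upstream myservice_workers {
            # The two worker instances; nginx distributes requests round robin
            # and temporarily skips a server after repeated connection failures.
            server node1:9999;
            server node2:9999;
        }

        server {
            # Keep the original entry point: clients still call server:9999.
            listen 9999;

            location / {
                proxy_pass http://myservice_workers;
                # Pass the original host and client address on to the workers.
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    HAProxy can do the same with a frontend/backend pair; in either case the clients keep calling server:9999 while the proxy fans requests out to the workers and, via its failure detection, stops sending traffic to a worker that no longer responds.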