.net · security · jwt · microservices

Microservice communication security


Just a pet project I'm doing at home now that I've mostly locked myself indoors. That said, I'm trying to put it together with all the bells and whistles, mostly so I can learn some new stuff. I implemented version 1 of this back in 2015, and it has been running like a champ on my Raspberry Pi for over 5 years now.

For the updated version (Angular 9 / .NET Core 3.1), I'm thinking of using Auth0 as my auth provider (though I'm also considering AWS Cognito). I've identified microservice boundaries for the app and will be using RabbitMQ for inter-service messaging whenever possible.

However, there are a number of scenarios where one MS will need to get data from another MS to complete a request: essentially something that would have been a DB join in a monolith. I did some reading on this over the weekend and found that you can do that join on the front end, in the API gateway, or by having MS A perform a REST query to MS B. The first feels a bit silly, because the browser then has to perform multiple requests, and the second would require me to build a gateway (I was just planning on using NGINX as a reverse proxy). So I settled on the third option... REST requests from MS A to MS B.
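For reference, the cross-service call itself would just be a plain REST request from MS A to MS B, roughly like the sketch below (the OrdersClient name, route, and DTO are made up purely for illustration):

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class OrderDto
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

// MS A: typed HttpClient wrapper for data owned by MS B.
// Class name, route and DTO are placeholders for illustration.
public class OrdersClient
{
    private readonly HttpClient _http;

    public OrdersClient(HttpClient http) => _http = http;

    public async Task<OrderDto> GetOrderAsync(Guid orderId)
    {
        // Plain REST call from MS A to MS B; the open question is
        // what credentials (if any) to attach to this request.
        var response = await _http.GetAsync($"/api/orders/{orderId}");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<OrderDto>(
            json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}
```

The plan is to register this as a typed client with services.AddHttpClient<OrdersClient>(...), pointed at MS B's internal address behind NGINX.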

My question...

How do you handle security for inter-service communication? The options I've considered, in order of ease:

  1. Forward the Auth header from MS A to MS B when making the request
  2. Host an internal instance of MS B which isn’t exposed to the internet via the reverse proxy, and only exposes functionality which will be needed in cross microservice communication. (Not sure if that explanation makes sense)
  3. Have MS A request its own token from the auth provider (Auth0, AWS Cognito, etc.) and call MS B with that token.

Is there a standard way of doing this?


Solution

  • These are good questions to be asking, but unfortunately there is not a "standard" way of doing this. I'd say your options would be between 1 and 2, and the reasons for each would be as follows:

    Option 1 - Forward the Auth header from MS A to MS B when making the request

    If you were to apply this practice at scale, with multiple teams each owning different services, this would be the best route. The creator of MS B won't necessarily know who is calling them long-term, so the easiest way for them to expose their service safely is to require a verifiable JWT on every request. This prevents someone from accidentally consuming their service inappropriately.
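    As a rough sketch of what that can look like in ASP.NET Core (the handler name and wiring are just for illustration, nothing Auth0-specific): MS A copies the caller's bearer token onto the outgoing request with a DelegatingHandler, and MS B validates it with the same JWT middleware it already uses for browser traffic.

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// MS A: copies the incoming Authorization header onto outbound calls to MS B,
// so MS B receives the same verifiable JWT the browser originally sent.
public class ForwardAuthHeaderHandler : DelegatingHandler
{
    private readonly IHttpContextAccessor _contextAccessor;

    public ForwardAuthHeaderHandler(IHttpContextAccessor contextAccessor)
        => _contextAccessor = contextAccessor;

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var authHeader = _contextAccessor.HttpContext?
            .Request.Headers["Authorization"].ToString();

        if (!string.IsNullOrEmpty(authHeader))
        {
            // Forward the caller's token verbatim; MS B verifies it independently.
            request.Headers.TryAddWithoutValidation("Authorization", authHeader);
        }

        return base.SendAsync(request, cancellationToken);
    }
}
```

    Register the handler as a transient service along with services.AddHttpContextAccessor(), attach it to the typed client via .AddHttpMessageHandler<ForwardAuthHeaderHandler>(), and MS B keeps its normal AddJwtBearer configuration and treats MS A like any other caller.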

    Option 2 - Host an internal instance of MS B which isn’t exposed to the internet via the reverse proxy, and only exposes functionality which will be needed in cross microservice communication.

    You can think of this method kind of like TLS termination at a gateway. You can terminate TLS in each service (like option 1 for the JWTs), or you can terminate it at the gateway to ensure traffic from the outside world is encrypted before entry while allowing it to flow freely inside the walls. A gateway can also be the sole responsible party for verifying and decoding JWTs and then allowing traffic to flow freely internally once that check has passed.

    This option is very practical for applications maintained by a single team or developer because it's very easy to orchestrate. If each "internal" service simply works off a user ID/subject to fulfill requests, trusting that the ID was extracted from a verified JWT, you can build other internal systems much more easily. This solution is the easiest to get started with, but can be tricky if you cannot trust the other services or developers in your app or team.
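    A minimal sketch of that trust model, assuming the gateway has already verified the JWT and forwards the subject in a header (the X-User-Id name is just a convention invented for this example):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Internal-only MS B: trusts that the reverse proxy / gateway has already
// verified the JWT and passed the subject claim along in a header.
// "X-User-Id" is an assumed header name for this sketch.
public class TrustedUserIdMiddleware
{
    private readonly RequestDelegate _next;

    public TrustedUserIdMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var userId = context.Request.Headers["X-User-Id"].ToString();

        if (string.IsNullOrEmpty(userId))
        {
            // Nothing upstream vouched for this request; reject it.
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }

        // Downstream handlers key everything off this id and assume it came
        // from a JWT that the gateway already validated.
        context.Items["UserId"] = userId;
        await _next(context);
    }
}
```

    Wired up with app.UseMiddleware<TrustedUserIdMiddleware>(), this only holds together if the internal instance genuinely isn't reachable from outside; otherwise anyone could forge the header.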

    Option 3 - Have MS A request its own token from the auth provider (Auth0, AWS Cognito, etc.) and call MS B with that token

    I wouldn't recommend each service being issued its own OAuth token. JWTs from OAuth providers like Auth0 are generally issued on behalf of users, not machines.

    That being said, a heavy addition to cloud security would be to credential and whitelist the traffic flowing between all services. This could be Kubernetes NetworkPolicies enforced by tools like Cilium, or unique SSL certs that broker communication for each specific relationship between services in your stack. This kind of added security doesn't really have any bearing on your question above, but it's additional food for thought when protecting inter-service communication in a cloud environment.
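    If you ever wanted to go down the per-relationship certificate route on the .NET side, it would look roughly like the sketch below; the certificate path, password, and hostname are placeholders, and in practice you'd pull them from a secret store.

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Https;

public static class MutualTlsSketch
{
    // MS A: an HttpClient that presents a client certificate issued
    // specifically for the A -> B relationship (path/password are placeholders).
    public static HttpClient CreateClientForMsB()
    {
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(
            new X509Certificate2("ms-a-to-ms-b.pfx", "certificate-password"));

        return new HttpClient(handler)
        {
            BaseAddress = new Uri("https://ms-b.internal")
        };
    }

    // MS B: require a client certificate on every TLS handshake (called from Program.cs).
    public static void RequireClientCertificates(IWebHostBuilder webBuilder)
    {
        webBuilder.ConfigureKestrel(kestrel =>
        {
            kestrel.ConfigureHttpsDefaults(https =>
            {
                https.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
            });
        });
    }
}
```

    Deciding which client certificates MS B actually accepts (by thumbprint, issuer, etc.) would still be on you, which is part of why this tends to be overkill for a single-developer project.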