scala · playframework · playframework-2.0 · akka · blocking

In Play, does every request spawn an Akka actor?


I've read that Play is built on Akka, so I'd like to know whether an actor is spawned to serve every incoming request.

Take this controller action for example:

def upload = Action(parse.multipartFormData) { implicit request =>
  request.body.file("picture").map { picture =>
    val client = new AmazonS3Client
    client.putObject("my-bucket", picture.filename, picture.ref.file)
  }.getOrElse {
    BadRequest("File missing")
  }
}

The upload happens synchronously, and I have often seen examples that wrap a block of code like this in a Future. I think that if this request is being served by an Akka actor, doing so is not needed.

Please let me know if I am right or wrong, and your advice on consuming blocking services.


Solution

  • I don't think Play spawns a new actor for each request; rather, it uses a pool of actors to handle them. Spawning millions of actors for millions of requests, each existing only to process a single one, would defeat the purpose of leveraging an actor's message queue. At some point there has to be an upper limit.

    Whether Play actually does that or not is irrelevant, though. By default, all Actions are handled asynchronously. The only difference between your code and your code wrapped in a Future via Action.async is that the latter uses a convenience method to handle the Future. In the end, both are functions of type Request => Future[Result].
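    To see why both forms reduce to the same shape, here is a minimal sketch. The Request and Result case classes and the two wrapper functions are simplified stand-ins for illustration, not Play's actual classes:

    ```scala
    import scala.concurrent.{ExecutionContext, Future}
    import scala.concurrent.ExecutionContext.Implicits.global

    // Simplified stand-ins for Play's types, for illustration only
    case class Request(body: String)
    case class Result(status: Int, body: String)

    // Action { ... }: wraps a synchronous Request => Result into a Future
    def action(block: Request => Result): Request => Future[Result] =
      req => Future.successful(block(req))

    // Action.async { ... }: takes a Request => Future[Result] directly
    def actionAsync(block: Request => Future[Result]): Request => Future[Result] =
      block

    val sync  = action(req => Result(200, req.body.toUpperCase))
    val async = actionAsync(req => Future(Result(200, req.body.toUpperCase)))
    // Both end up with the same type: Request => Future[Result]
    ```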

    I think that if this request is being served by an Akka actor, doing so is not needed.

    This isn't true, partly for the reasons above. Play uses a configurable thread pool (which the actors share) to handle requests; by default it is sized at one thread per core. This means that with 4 cores / 4 threads and, say, 100 actors, 4 blocking uploads will block all 4 threads. Regardless of the number of actors, you still hit thread starvation: the other 96 actors are useless, and worse, your server can no longer process any requests until one of the uploads finishes and a thread is freed.
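    You can demonstrate this starvation with a plain fixed-size pool standing in for Play's default one (a sketch; the 2-thread pool and the 500 ms sleeps are arbitrary stand-ins for cores and blocking uploads):

    ```scala
    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration._

    // A fixed 2-thread pool stands in for Play's default pool on a 2-core box
    val pool = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))

    // Two blocking "uploads" occupy both threads for ~500 ms each
    val uploads = Seq.fill(2)(Future { Thread.sleep(500); "uploaded" }(pool))

    // This cheap request is queued behind them and starved of a thread
    val start = System.nanoTime()
    val cheap = Future { "fast" }(pool)
    Await.result(cheap, 5.seconds)
    val starvedMs = (System.nanoTime() - start) / 1000000L
    // starvedMs is close to 500: the cheap task had to wait for a blocked thread
    pool.shutdown()
    ```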

    You can mitigate this by wrapping your code in a Future and using a separate ExecutionContext to block in. Wrapping in a Future isn't enough by itself, because the code still blocks. You have to block somewhere; just don't do it on Play's default ExecutionContext, which is used to handle requests. Instead, you can configure one specifically for uploads.


    application.conf

    # Configures a pool with 10 threads per core, with a maximum of 40 total
    upload-context {
      fork-join-executor {
        parallelism-factor = 10.0
        parallelism-max = 40
      }
    }
    

    Usage:

    import play.api.Play.current
    import play.api.libs.concurrent.Akka
    import scala.concurrent.{ExecutionContext, Future}

    implicit val uploadContext: ExecutionContext = Akka.system.dispatchers.lookup("upload-context")
    
    def upload = Action.async(parse.multipartFormData) { implicit request =>
      request.body.file("picture").map { picture =>
        val client = new AmazonS3Client
        Future(client.putObject("my-bucket", picture.filename, picture.ref.file))(uploadContext)
      }.getOrElse {
        Future.successful(BadRequest("File missing"))
      }
    }
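    As a side note, Akka's documentation recommends a dedicated thread-pool-executor for blocking I/O, since blocked threads undermine a fork-join pool's work-stealing. On recent Akka versions a fixed-size alternative could look like this (the pool size of 40 is an assumption, mirroring the fork-join maximum above):

    ```
    upload-context {
      executor = "thread-pool-executor"
      thread-pool-executor {
        fixed-pool-size = 40
      }
    }
    ```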
    

    You can read more in the Play documentation on understanding thread pools.