I have a fork-join dispatcher configured for a service that only uses the client side of Akka HTTP (via a host connection pool):
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 256
    parallelism-factor = 128.0
    parallelism-max = 2048
  }
}
The only thing the service logic does is make a request to an external source, unmarshal the response with jawn, and then transform the jawn AST into a case class:
def get[T](uri: Uri): Future[T] = {
  for {
    response <- request(uri)
    json     <- Unmarshal(response.entity).to[Try[JValue]]
  } yield json.transformTo[T]
}
Would it be more efficient to use a fixed thread pool for this kind of workload? The service handles around 150 req/s, and I'd like to keep CPU usage under one core (it currently hovers around 1.25-1.5 cores).
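For reference, the fixed-size alternative I have in mind would be a thread-pool-executor dispatcher along these lines (assuming Akka's standard thread-pool-executor settings; the dispatcher name and pool size are just placeholders, not tuned values):

my-fixed-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # fixed number of threads instead of a parallelism range
    fixed-pool-size = 16
  }
}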
According to the wisdom of the ancients, your workload is I/O-bound, so you should pick an execution context backed by a CachedThreadPool; however, if you want throttling, back your context with a FixedThreadPool instead.
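As a rough sketch, both variants can be built from plain java.util.concurrent executors wrapped as a Scala ExecutionContext (the pool size of 16 is arbitrary):

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// I/O-bound work: a cached pool creates threads on demand and reuses idle ones
val cachedEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

// Throttled variant: a fixed pool caps concurrency at a set number of threads
val fixedEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(16))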
Also, depending on the deployment environment, you may be able to pin the Java process to a single core at the OS level by setting CPU affinity (e.g. with taskset on Linux).