Scala has its execution context as
import scala.concurrent.ExecutionContext.Implicits.global
And Play has its own execution context
import play.api.libs.concurrent.Execution.Implicits.defaultContext
What is the main difference, and which one should we use in which scenario?
scala.concurrent.ExecutionContext.Implicits.global (the Scala standard library execution context) is the execution context provided by the standard Scala library. It is a special ForkJoinPool that uses the blocking method to handle potentially blocking code and spawn new threads in the pool. You should not use this inside a Play application, because Play has no control over it.

play.api.libs.concurrent.Execution.Implicits.defaultContext (the Play execution context) uses the application's Akka actor dispatcher. This is what you should use in Play applications. It is also good practice to offload blocking calls to a separate execution context rather than the Play execution context; that way the Play application avoids running into thread starvation.
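As a rough sketch of that offloading pattern (assuming Play 2.5-style controllers; the dispatcher name blocking-io-dispatcher and the ReportController are illustrative, not Play defaults), you can configure a dedicated dispatcher and look it up from the injected ActorSystem:

// application.conf (hypothetical dispatcher definition):
//   blocking-io-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 32 }
//   }

import javax.inject.Inject
import akka.actor.ActorSystem
import play.api.mvc._
import scala.concurrent.{ExecutionContext, Future}

class ReportController @Inject() (system: ActorSystem) extends Controller {

  // Dedicated context for blocking work, kept separate from Play's default dispatcher
  val blockingContext: ExecutionContext =
    system.dispatchers.lookup("blocking-io-dispatcher")

  def report = Action.async {
    Future {
      // a blocking JDBC or file-system call would go here
      Ok("report generated")
    }(blockingContext)
  }
}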
Play execution context implementation (play.api.libs.concurrent.Execution.Implicits.defaultContext):
val appOrNull: Application = Play._currentApp
appOrNull match {
  case null => common
  case app: Application => app.actorSystem.dispatcher
}

private val common = ExecutionContext.fromExecutor(new ForkJoinPool())
When the app is not null, it uses actorSystem.dispatcher; otherwise it falls back to the common ForkJoinPool-backed context.
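For ordinary non-blocking work inside a Play application, you simply import Play's implicit context and compose futures on it. A minimal sketch (assuming the same Play 2.5-style API; HomeController and its values are illustrative):

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.mvc._
import scala.concurrent.Future

class HomeController extends Controller {
  def index = Action.async {
    val userF  = Future.successful("alice")   // stand-in for a non-blocking service call
    val countF = Future.successful(42)
    for {
      user  <- userF
      count <- countF
    } yield Ok(s"$user has $count items")     // map/flatMap run on the actor dispatcher
  }
}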
The Scala standard library execution context (scala.concurrent.ExecutionContext.Implicits.global) picks its executor like this:
val executor: Executor = es match {
  case null => createExecutorService
  case some => some
}
This method creates the executor service, taking into account the number of available processors and reading configuration from the scala.concurrent.context.* system properties (a small example follows the code).
def createExecutorService: ExecutorService = {

  def getInt(name: String, default: String) = (try System.getProperty(name, default) catch {
    case e: SecurityException => default
  }) match {
    case s if s.charAt(0) == 'x' => (Runtime.getRuntime.availableProcessors * s.substring(1).toDouble).ceil.toInt
    case other => other.toInt
  }

  def range(floor: Int, desired: Int, ceiling: Int) = scala.math.min(scala.math.max(floor, desired), ceiling)

  val desiredParallelism = range(
    getInt("scala.concurrent.context.minThreads", "1"),
    getInt("scala.concurrent.context.numThreads", "x1"),
    getInt("scala.concurrent.context.maxThreads", "x1"))

  val threadFactory = new DefaultThreadFactory(daemonic = true)

  try {
    new ForkJoinPool(
      desiredParallelism,
      threadFactory,
      uncaughtExceptionHandler,
      true) // Async all the way baby
  } catch {
    case NonFatal(t) =>
      System.err.println("Failed to create ForkJoinPool for the default ExecutionContext, falling back to ThreadPoolExecutor")
      t.printStackTrace(System.err)
      val exec = new ThreadPoolExecutor(
        desiredParallelism,
        desiredParallelism,
        5L,
        TimeUnit.MINUTES,
        new LinkedBlockingQueue[Runnable],
        threadFactory
      )
      exec.allowCoreThreadTimeOut(true)
      exec
  }
}
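In other words, the default parallelism is one thread per available processor ("x1"), tunable through the scala.concurrent.context.* system properties. A small standalone check of the "x<factor>" parsing rule used by getInt above (the Parallelism object is mine, not part of the standard library):

// After compiling, run with e.g.:
//   scala -Dscala.concurrent.context.numThreads=x2 Parallelism
object Parallelism extends App {
  val cores = Runtime.getRuntime.availableProcessors
  // Same rule as getInt: "x2" means 2 * cores (rounded up); a plain number is used as-is
  def parse(s: String): Int =
    if (s.charAt(0) == 'x') (cores * s.substring(1).toDouble).ceil.toInt
    else s.toInt
  println("available processors = " + cores)
  println("desired parallelism  = " + parse(System.getProperty("scala.concurrent.context.numThreads", "x1")))
}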
This code is responsible for managed blocking: it asks the ForkJoinPool to create a new thread when blocking is encountered in the code.
// Implement BlockContext on FJP threads
class DefaultThreadFactory(daemonic: Boolean) extends ThreadFactory with ForkJoinPool.ForkJoinWorkerThreadFactory {

  def wire[T <: Thread](thread: T): T = {
    thread.setDaemon(daemonic)
    thread.setUncaughtExceptionHandler(uncaughtExceptionHandler)
    thread
  }

  def newThread(runnable: Runnable): Thread = wire(new Thread(runnable))

  def newThread(fjp: ForkJoinPool): ForkJoinWorkerThread = wire(new ForkJoinWorkerThread(fjp) with BlockContext {
    override def blockOn[T](thunk: =>T)(implicit permission: CanAwait): T = {
      var result: T = null.asInstanceOf[T]
      ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker {
        @volatile var isdone = false
        override def block(): Boolean = {
          result = try thunk finally { isdone = true }
          true
        }
        override def isReleasable = isdone
      })
      result
    }
  })
}
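To actually take the blockOn path on the global context, wrap the blocking call in scala.concurrent.blocking. A minimal, self-contained sketch (Thread.sleep stands in for real blocking I/O):

import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingDemo extends App {
  // blocking { ... } goes through BlockContext.blockOn, so the ForkJoinPool can
  // compensate by spawning an extra worker thread instead of starving the pool.
  val slow = Future {
    blocking {
      Thread.sleep(1000)   // stand-in for blocking I/O
      "done"
    }
  }
  println(Await.result(slow, 5.seconds))
}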