I have been using the experimental versions of Kotlin coroutines under high concurrency for a long time, and the performance has always been excellent. The main logic can be simplified to the code below:
// works fine in kotlin 1.2 at 3000+ QPS on a 40-core host
launch {
    // running in ForkJoinPool.commonPool() by default
    // non-blocking IO function
    val result = suspendFunction()
    doSomething(result)
}
However, after I updated Kotlin to 1.3 and migrated to the stable release of coroutines, like this:
// kotlin 1.3 version
GlobalScope.launch {
    // running in the DefaultDispatcher
    // non-blocking IO function
    val result = suspendFunction()
    doSomething(result)
}
The CPU usage rises from 2% to 50%, without any exception or error being thrown. The only difference I notice is that the coroutines are no longer executed in ForkJoinPool.commonPool() as before; instead, they run on DefaultDispatcher threads such as DefaultDispatcher-worker-30.
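For reference, this is roughly how I check which thread a coroutine runs on (just an illustrative snippet, not my production code; the sleep is only there to keep the demo JVM alive):

import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

fun main() {
    GlobalScope.launch {
        // prints e.g. "DefaultDispatcher-worker-30" on kotlin 1.3,
        // versus "ForkJoinPool.commonPool-worker-N" on the experimental versions
        println("running on: ${Thread.currentThread().name}")
    }
    Thread.sleep(100)
}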
My questions are:
- Why does it cost so much CPU usage with DefaultDispatcher?
- Why does kotlin 1.3 use DefaultDispatcher in place of ForkJoinPool.commonPool() by default?
- How to keep the behavior of coroutines just like before 1.3?
- Why does it cost so much CPU usage with DefaultDispatcher?
It's a completely different implementation that optimizes for several performance targets, for example communication via a channel. It is subject to future improvements.
- Why does kotlin 1.3 use DefaultDispatcher in place of ForkJoinPool.commonPool() by default?
Actually it has been using the Default dispatcher all the time, but the resolution of Default changed. In the experimental phase it was equal to the CommonPool, but now it prefers the custom implementation.
- How to keep the behavior of coroutines just like before 1.3?
Set the kotlinx.coroutines.scheduler system property to off, as in the sketch below.
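A minimal sketch of both ways to set it, assuming a kotlinx.coroutines release from around that era that still ships the CommonPool fallback (newer releases may have removed it):

import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

// Option 1: pass the property as a JVM flag, so it is set before any coroutine class loads:
//   java -Dkotlinx.coroutines.scheduler=off -jar app.jar
//
// Option 2: set it programmatically, strictly before the first use of Dispatchers.Default,
// because the property is read only once, when the default dispatcher is initialized.
fun main() {
    System.setProperty("kotlinx.coroutines.scheduler", "off")
    GlobalScope.launch {
        // with the scheduler switched off, this should land on a
        // ForkJoinPool.commonPool-worker thread again
        println("running on: ${Thread.currentThread().name}")
    }
    Thread.sleep(100) // keep the JVM alive long enough for the coroutine to run
}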