http4s-ember-server throws ClosedChannelException from time to time

I'm using http4s-ember-server in one of my applications, and from time to time I notice these exceptions appearing in my server's logs.

What is odd about them is that they don't follow my logger's appender or formatting style. They are just plain text, and each stack frame is emitted on its own line, so Kubernetes receives them as separate log messages. My normal application logs, including exceptions, are formatted as JSON and come through nicely in one block, so this must be coming from something low level.

I also noticed that only 1 of my 3 very similar applications throws these, so it can't be the meta/health, ready, or buildinfo endpoints.

    at java.base/
    at java.base/
    at delay @$AsyncSocket.$anonfun$readChunk$2(SocketPlatform.scala:120)
    at delay @$AsyncSocket.$anonfun$readChunk$1(SocketPlatform.scala:120)
    at async @$AsyncSocket.readChunk(SocketPlatform.scala:114)
    at flatMap @$BufferedReads.$anonfun$read$1(SocketPlatform.scala:82)
    at delay @$BufferedReads.withReadBuffer(SocketPlatform.scala:52)
    at flatten$extension @ org.http4s.ember.server.internal.Shutdown$$anon$1.<init>(Shutdown.scala:71)
    at onError @ com.c1999.n1999.w1999.NWApp$.run(NWApp.scala:15) <- this is the only line that refers to my own code
    at getAndSet @ org.typelevel.keypool.KeyPool$.destroy(KeyPool.scala:120)

Has anyone encountered such exceptions with http4s servers?

The source code around line 15 looks like this:

    val program =
      (for {
        twoConfigs <- NWAppConfig
          .onError(error => logger.error(error)(show"failed to load config with: $error")) // line 15 here
        (appConfig, requestContextConfig) = twoConfigs
        components <- NWComponents(appConfig, requestContextConfig)
        _ <-"started").toResource
        _ <- components.streamsJoined.compile.drain
          .onError(error => logger.error(error)(s"error in nw: $error"))
      } yield ()).use_


  • I ran into this same issue today and it took me a while to track down. What's happening here is that a finaliser is throwing a (benign) exception, which is being printed directly to the console via `def reportFailure` in the `IOApp` you're using to evaluate your program. This can be redirected to your preferred logging library by overriding that method. If you're in an `IOApp`, it's as simple as

    override protected def reportFailure(err: Throwable): IO[Unit] = myLogImpl(err)

    Or, if you're evaluating with an `unsafeFoo` method, instead of using the default global runtime you can construct your own `IORuntime` with `val myRuntime = IORuntimeBuilder.apply().setFailureReporter(t => myLogImpl(t)).build()` and pass it to the unsafe method, e.g. `.unsafeRunSync()(myRuntime)`.
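
    To make the first option concrete, here is a minimal sketch of an `IOApp` that routes these unhandled fiber errors through SLF4J instead of stderr. The `MyApp` object, logger setup, and log message are illustrative, not from the original post; the `reportFailure` override is the cats-effect `IOApp` hook the answer describes (available in cats-effect 3.4+):

    ```scala
    import cats.effect.{ExitCode, IO, IOApp}
    import org.slf4j.LoggerFactory

    object MyApp extends IOApp {
      private val log = LoggerFactory.getLogger(getClass)

      // Unhandled fiber failures (such as benign finaliser exceptions from
      // ember's socket cleanup) land here instead of being printed raw to
      // the console, so they pick up the application's log formatting.
      override protected def reportFailure(err: Throwable): IO[Unit] =
        IO(log.warn("unhandled fiber error", err))

      def run(args: List[String]): IO[ExitCode] =
        IO.pure(ExitCode.Success) // your program goes here
    }
    ```

    With this in place, the plain-text stack traces should show up as regular log events in whatever appender/format your logging backend is configured with.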