When working with matplotlib widgets and k3d to visualize 3D data, the requests are constructed such that the graphical data is packed into the HTTP header. The header size is limited in the web server underlying Jupyter Server (Tornado). If a client request exceeds that limit (e.g. when visualizing a large plot), the following exception shows up in the Jupyter log:
[I 2021-11-30 15:42:35.323 ServerApp] Unsatisfiable read, closing connection: delimiter re.compile(b'\r?\n\r?\n') not found within 65536 bytes
So the plot has exceeded the 64 KiB buffer size.
My question is: how can I permanently set a larger header size within Jupyter? I already tried the following in
jupyter_server_config.py:
c.ServerApp.tornado_settings = {
    "max_header_size": 500*1024**2,
    "max_buffer_size": 1024**3,
}
Unfortunately, this had no effect.
Here is a related bug report with the same error message: https://github.com/codota/TabNine/issues/255, which suggests that the reporter
"...changed the Tornado package's code because there does not seem to be a directive to pass options to the HTTPServer object in Jupyter's configuration."
I hope this does not hold true and that the limit can be raised in a sane way.
Well, monkey patching is always an option in dynamic languages such as Python. We're going to patch Tornado's HTTP connection parameters and override max_header_size.
Put this code in your jupyter_server_config.py file:
from tornado import http1connection

def init_patch(
    self,
    no_keep_alive=False,
    chunk_size=None,
    max_header_size=None,
    header_timeout=None,
    max_body_size=None,
    body_timeout=None,
    decompress=False,
):
    self.no_keep_alive = no_keep_alive
    self.chunk_size = chunk_size or 65536
    self.max_header_size = 500*1024**2  # <- custom value: 500 MiB instead of the 64 KiB default
    self.header_timeout = header_timeout
    self.max_body_size = max_body_size
    self.body_timeout = body_timeout
    self.decompress = decompress
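
# Replace the original constructor so every HTTP1ConnectionParameters
# object created by the Jupyter server uses the larger header limit.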
http1connection.HTTP1ConnectionParameters.__init__ = init_patch
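To check that the override actually takes effect, here is a minimal sketch (assuming init_patch is defined exactly as above) that applies the patch in a throwaway Python session and inspects a freshly created parameters object:

from tornado import http1connection

# Apply the patch, then create a default parameters object and read back the limit.
http1connection.HTTP1ConnectionParameters.__init__ = init_patch
params = http1connection.HTTP1ConnectionParameters()
print(params.max_header_size)  # 524288000 bytes, i.e. 500 MiB

Since jupyter_server_config.py is executed while the server starts up, the patch should be in place before Tornado accepts any connections; just restart Jupyter after editing the config file.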