Tags: google-chrome, internet-explorer, http, firefox, http-pipelining

Why is pipelining disabled in modern browsers?


Many, if not all, modern browsers do not use pipelined HTTP requests. In theory, pipelining should speed up requests by reducing the number of round trips required to fetch a website.

According to the HTTP standard, all servers must handle pipelined requests, so the problem should not be a lack of support on the server side.
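
For concreteness, this is roughly what pipelining means on the wire: the client writes several requests back-to-back on one TCP connection before reading any response, and the server answers them in order on that same connection. A minimal sketch in Python (using example.com as a placeholder host):

    # Two pipelined HTTP/1.1 requests on a single TCP connection (illustrative only).
    # Both requests are written before any response is read; a server that supports
    # pipelining returns both responses, in request order, on the same connection.
    import socket

    HOST = "example.com"  # placeholder; any HTTP/1.1 server that honours keep-alive

    requests = (
        "GET / HTTP/1.1\r\nHost: {h}\r\nConnection: keep-alive\r\n\r\n"
        "GET /robots.txt HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
    ).format(h=HOST)

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(requests.encode("ascii"))   # second request sent before the first response arrives
        raw = b""
        while chunk := sock.recv(4096):          # read until the server closes the connection
            raw += chunk

    print(raw.decode("latin-1", errors="replace")[:500])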

I have seen some security concerns raised, such as a layer-7 DoS attack in which a client pushes as many pipelined requests as possible at a URL that is expensive for the server to handle, while ignoring any responses it receives.

That would be a reason to turn pipelining support off on the server (violating the standard), but I cannot find any reason to turn it off on the clients.

It is, however, turned on by default in Android browsers and in mobile Chrome.

Why do Chrome, Firefox, IE, Opera, and Safari not use pipelined HTTP requests in their desktop (and sometimes mobile) versions? What is their reasoning behind turning it off?


Solution

  • Pipelining is disabled for the following reasons:

    • Firefox:

    The bigger issue has frankly been head of line blocking and its impact on performance and robustness. Naïve pipelines simply make performance worse.

    • Chrome:

    The option to enable pipelining has been removed from Chrome, as there are known crashing bugs and known front-of-queue blocking issues. There are also a large number of servers and middleboxes that behave badly and inconsistently when pipelining is enabled. Until these are resolved, it's recommended nobody uses pipelining. Doing so currently requires a custom build of Chromium.

    In general:

    Buggy proxies are still common and these lead to strange and erratic behaviors that Web developers cannot foresee and diagnose easily.

    Pipelining is complex to implement correctly: the size of the resource being transferred, the effective RTT, and the effective bandwidth all have a direct bearing on the improvement pipelining provides. Without knowing these, important messages may be delayed behind unimportant ones. The notion of what is important even evolves during page layout! HTTP pipelining therefore brings only a marginal improvement in most cases.

    Pipelining is subject to the head-of-line (HOL) blocking problem.
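
    To make the HOL problem concrete, here is a toy model with made-up service times: with pipelining, responses must be returned in request order, so one slow response delays everything queued behind it, whereas independent (or multiplexed) delivery lets the fast responses through immediately.

        # Toy illustration of head-of-line blocking on a pipelined connection.
        # Service times are invented for the example.
        service_times = {"big-report.html": 2.00, "style.css": 0.05, "logo.png": 0.05}

        # Pipelined: responses are delivered strictly in request order, so each one
        # waits for everything queued ahead of it.
        finished = 0.0
        for name, t in service_times.items():
            finished += t
            print(f"{name:16s} delivered at {finished:.2f}s (pipelined)")

        # Independent delivery (separate connections or HTTP/2 multiplexing): the
        # small responses are not stuck behind the slow one.
        for name, t in service_times.items():
            print(f"{name:16s} delivered at {t:.2f}s (independent)")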

    HTTP/2 offers an alternative:

    With HTTP/1.x, the browser has limited ability to leverage priority data: the protocol does not support multiplexing, and there is no way to communicate request priority to the server. Instead, it must rely on the use of parallel connections, which enables limited parallelism of up to six requests per origin. As a result, requests are queued on the client until a connection is available, which adds unnecessary network latency. In theory, HTTP pipelining tried to partially address this problem, but in practice it has failed to gain adoption.

    HTTP/2 resolves these inefficiencies: request queuing and head-of-line blocking are eliminated because the browser can dispatch all requests the moment they are discovered, and the browser can communicate its stream prioritization preferences via stream dependencies and weights, allowing the server to further optimize response delivery.
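
    As a rough sketch of what that enables on the client side, the snippet below dispatches several requests at once over a single HTTP/2 connection using the third-party httpx library (installed with its optional HTTP/2 extra, e.g. pip install "httpx[http2]"); the URL is a placeholder:

        # Several concurrent requests multiplexed over one HTTP/2 connection.
        import asyncio
        import httpx

        URLS = ["https://example.com/"] * 6   # placeholder URLs on one origin

        async def main():
            async with httpx.AsyncClient(http2=True) as client:
                # All requests are issued immediately instead of being queued behind
                # a six-connection-per-origin limit as in HTTP/1.x.
                responses = await asyncio.gather(*(client.get(u) for u in URLS))
                for r in responses:
                    print(r.http_version, r.status_code, r.url)

        asyncio.run(main())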

    A proxy can be used as well:

    You can try something I did to speed up Konqueror in KDE3. I was dissatisfied that Konqueror did not have HTTP pipelining, so after some searching, I installed Polipo as a local HTTP/HTTPS/FTP proxy and set Konqueror to use it (localhost on port 8123 if I remember correctly). In addition to HTTP pipelining, Polipo also provided improved caching, and since it was a proxy, I could set every browser to use it and the caching would be shared between the browsers. (This also means that it is a good idea to disable each browser's independent caching.)
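
    The same idea can be exercised from a script rather than a browser. The sketch below (Python standard library) routes requests through a local caching proxy assumed to be listening on 127.0.0.1:8123, as in the Konqueror/Polipo setup described above:

        # Route requests through a local proxy (address and port assumed from the setup above).
        import urllib.request

        proxy = urllib.request.ProxyHandler({
            "http": "http://127.0.0.1:8123",
            "https": "http://127.0.0.1:8123",
        })
        opener = urllib.request.build_opener(proxy)

        # Every request made through this opener goes via the proxy, which can pipeline
        # upstream and share its cache with every other client configured the same way.
        with opener.open("http://example.com/") as resp:
            print(resp.status, len(resp.read()), "bytes fetched via the proxy")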

    Salesforce uses the following process:

    Salesforce has a powerful and field-tested approach for mitigating HOLB at the TCP layer: we decouple the relation between an HTTP request and a TCP connection. Think about your transport as composed of multiple TCP connections (as many as the network context would need). Any part of the HTTP request can go over any TCP connection. So if you hit the HOLB in one connection, it not only helps in mitigating affected requests, it also minimizes impact to other application requests using healthy connections. The result is an ability to enjoy the benefits of multiplexing and pipelining at the HTTP layer while minimizing risks of HOLB.
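
    A simplified sketch of that idea (placeholder host and paths): a small pool of worker threads, each holding its own TCP connection, so a response stalled on one connection does not hold up requests dispatched to the others.

        # Spread requests across several independent connections to contain HOL blocking.
        import http.client
        import threading
        from concurrent.futures import ThreadPoolExecutor

        HOST = "example.com"                     # placeholder host
        PATHS = ["/", "/robots.txt", "/favicon.ico", "/", "/robots.txt", "/"]

        _local = threading.local()

        def fetch(path):
            # One persistent connection per worker thread; any request may use any of them.
            if not hasattr(_local, "conn"):
                _local.conn = http.client.HTTPSConnection(HOST)
            _local.conn.request("GET", path)
            resp = _local.conn.getresponse()
            body = resp.read()                   # drain so the connection can be reused
            return path, resp.status, len(body)

        with ThreadPoolExecutor(max_workers=3) as pool:   # three independent connections
            for path, status, size in pool.map(fetch, PATHS):
                print(f"{path:15s} {status} {size} bytes")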
