What it is, how it’s different, and where it’s going
Today we’ll be learning about HTTP/3 — where we came from, how it’s different from previous protocols, and where it’s going. I learned most of this information from an excellent talk given by Daniel Stenberg, the inventor of curl. This High-Performance Programming video updated my knowledge on some more recent HTTP/3 stuff.
The main takeaway is that HTTP/3 operates over a new transport layer (QUIC) and, as a result, improves on legacy HTTP in a few fundamental ways. It is meant to be faster, more secure, and more reliable.
Before we can discuss how HTTP/3 improves on prior versions, we should understand the history of the protocol.
The original HTTP came out in 1991 without a version (it is now referred to as HTTP/0.9). It was quite simple — there were no headers, responses were a single HTML file, and the only method was GET.
HTTP/1.0 came out in 1996 to make the protocol more extensible. It brought versioning information, status codes, and headers for transmitting metadata. It also expanded beyond simple HTML file returns by adding the Content-Type header we all know and love today.
HTTP/1.0 left something to be desired, however. Each request/response between a client and server required a unique TCP connection, which could add significant latency to page loads, especially when multiple resources were required.
Clients got around this by using multiple TCP connections in parallel, but this was resource-intensive. Each connection also carried significant latency from the TLS handshake between client and server.
HTTP/1.1 came out the next year with the ability to use many parallel TCP connections. Connections could also be reused (avoiding HTTP/1.0's overhead of continually closing and reopening them), and pipelining and chunked transfer encoding were added to boost performance further.
However, this came with its own flavor of head-of-line (HOL) blocking. As websites grew more complex, clients quickly ran out of available TCP connections, so new request/response cycles had to wait for an existing request to complete before they could use a connection.
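HTTP/1.1's connection reuse is easy to see with Python's standard library. Here is a minimal sketch (a throwaway local server with hypothetical paths, not a real site) in which two requests ride the same TCP connection, avoiding a second handshake:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # persistent connections by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length lets the client know where the body ends,
        # so the connection can stay open for the next request.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Both requests reuse one TCP connection instead of opening a new one each time.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
for path in ("/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    print(path, resp.status, resp.read().decode())
conn.close()
server.shutdown()
```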
HTTP/2 came out in 2015 to improve performance. It switched from transmitting text to sending binary and introduced the concept of streams to improve parallel request performance. By multiplexing a single TCP connection, clients could send many parallel requests without running out of connections and facing the HOL blocking from HTTP/1.1. HOL blocking was still a problem, however. It had just moved from the application layer to the transport layer.
If data is lost over TCP, the protocol re-sends the packets containing the missing data. As a result, other data (read: other streams) on the TCP connection must wait for those packets to be successfully delivered. Since streams are multiplexed over a single TCP connection, if one stream loses data, all streams must pause and wait for the retransmission. Over a lossy network, this can cause real problems.
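A toy model makes the transport-layer HOL problem concrete. TCP hands bytes to the application strictly in sequence order, so one lost segment stalls every multiplexed stream queued behind it, regardless of which stream the segment belonged to:

```python
# Toy model: TCP delivers segments strictly in sequence order, so a single
# lost segment blocks everything behind it, across all HTTP/2 streams.
def deliverable(received, next_seq):
    """Count how many consecutive segments the application can read."""
    count = 0
    while next_seq + count in received:
        count += 1
    return count

# Segments 0..5 interleave streams A and B; segment 2 (stream A) was lost.
arrived = {0, 1, 3, 4, 5}
print(deliverable(arrived, 0))  # only segments 0-1 reach the app; 3-5 are stuck

arrived.add(2)                  # the retransmission finally lands
print(deliverable(arrived, 0))  # now all six segments are readable
```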
HTTP/2 has generally served us well, but it suffers from head-of-line blocking and carries a fair amount of overhead from the HTTPS (TLS) handshake. This adds latency and can make a website feel unresponsive. As more software moves to the web, it becomes increasingly important to deliver a responsive, fast, almost native-application-like experience to the user.
Further, these problems are exacerbated anywhere with lossy networks or large distances between client and server. Lossy networks will lose more data, requiring more packet redelivery, intensifying the HOL problem. Long communication hops delay the back-and-forth handshakes that HTTPS requires.
On top of this, we’re human. We have an insatiable need to build and improve and advance.
HTTP/3 swaps out TCP/TLS as the transport layer and replaces it with QUIC (“Quick UDP Internet Connections”). QUIC gives us streams “for free” (it’s part of the transport layer, not the application layer), and it allows us to build any protocol on top (HTTP is the first example). The HTTP request/response flow is generally the same, but the transport over the wire differs. It is binary over multiplexed QUIC (built on top of UDP), whereas previously, we had binary multiplexed over TCP.
It also makes HTTP/3 much faster! Handshakes are significantly quicker, and clients can start sending requests and receiving responses faster than they could previously. This significantly cuts down startup latency and makes the webpage feel more responsive. Encryption and authentication are provided by default, and establishing a connection takes one round trip instead of multiple.
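Back-of-the-envelope arithmetic shows why the single round trip matters. A full TCP handshake costs one round trip and a full TLS 1.2 handshake two more, while QUIC folds transport and crypto setup into one round trip. The 200 ms figure below is a hypothetical long-distance round-trip time, chosen only for illustration:

```python
rtt_ms = 200  # hypothetical round-trip time for a distant client/server pair

# Round trips needed before the first HTTP request can be sent.
setups = {
    "HTTP/1.1 + TLS 1.2": 1 + 2,  # TCP handshake + full TLS 1.2 handshake
    "HTTP/2 + TLS 1.3":   1 + 1,  # TCP handshake + TLS 1.3 handshake
    "HTTP/3 (QUIC)":      1,      # transport and crypto setup combined
}
for name, round_trips in setups.items():
    print(f"{name}: {round_trips * rtt_ms} ms before the first request")
```

On resumed connections QUIC can even use 0-RTT, sending the first request alongside the handshake itself.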
Since HTTP/3 is now “binary over multiplexed QUIC,” the streams are independent of each other. This largely alleviates the head-of-line blocking problem, as an individual stream cannot delay other streams if it suffers packet loss. If a stream loses data, the lost packet will be recovered only in that stream. QUIC can do this since it’s built on top of UDP, which does not force packet re-delivery at the protocol layer.
As a result, QUIC can be smarter about how and when it chooses to resend data. This also allows for future improvements, such as lossy and non-lossy independent streams on the same connection!
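Revisiting the earlier toy model with per-stream sequence numbers shows the difference: loss on one stream no longer stalls delivery on another, because each stream is reassembled independently:

```python
# Toy model: QUIC reassembles each stream independently, so loss on one
# stream never blocks delivery on another.
def deliverable(received, next_seq):
    """Count how many consecutive segments of one stream the app can read."""
    count = 0
    while next_seq + count in received:
        count += 1
    return count

# Stream A lost its segment 1; stream B arrived intact.
streams = {"A": {0, 2}, "B": {0, 1, 2}}
for name, arrived in streams.items():
    print(name, deliverable(arrived, 0))
# Stream B delivers all three segments while A waits for its retransmission.
```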
This adds up to solid performance improvements under ideal network conditions and massive improvements in compromised ones. The following benchmarks show how the gains currently scale with website size/complexity and client/server distance.
Tests were done with a small site (“small”), a content-heavy site (“medium”), and a single-page application (“large”) (more details in the link above). They were done over three distances: “short” being New York to Minnesota, “medium” being New York to London, England, and “long” being New York to Bangalore, India. Below are the speed improvements seen when moving from HTTP/2 to HTTP/3:
As client/server distance increases, the speed-ups can get pretty mega. We can also see that content-heavy sites (arguably more representative of the modern web than “small”) benefit the most.
HTTP/3 doesn’t come without its challenges, which we will briefly discuss here. Note that some of these may have been resolved as this information primarily comes from Daniel’s 2019 talk.
Some portion of QUIC requests fail
This is largely due to various entities blocking UDP traffic. Since QUIC uses UDP under the hood, its traffic can look like a denial-of-service (DoS) attack, which many network administrators block. This will likely be worse on hyper-vigilant enterprise networks that block UDP to avoid DDoS (distributed DoS) attacks.
Clients must be able to fall back on previous HTTP versions
QUIC denial will likely be location-based — it may not be a problem at home, but the local coffee shop might block UDP traffic. As a result, clients must be able to fall back on older HTTP versions to communicate successfully.
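In practice, servers advertise HTTP/3 support through the Alt-Svc response header (RFC 7838), and clients that can't reach the QUIC endpoint simply keep using the existing HTTP/1.1 or HTTP/2 connection. A minimal sketch of the discovery step (deliberately simplified, not spec-complete parsing):

```python
# Simplified check of an Alt-Svc header value for an h3 advertisement.
# Real clients also parse the port, the ma= lifetime, and draft versions.
def h3_advertised(alt_svc: str) -> bool:
    """Return True if the Alt-Svc header offers HTTP/3 ("h3")."""
    return any(entry.strip().startswith("h3=") for entry in alt_svc.split(","))

print(h3_advertised('h3=":443"; ma=86400, h2=":443"'))  # True: try QUIC
print(h3_advertised('h2=":443"'))  # False: stay on the current connection
```

A client would attempt QUIC only when this returns True, and fall back to the already-working TCP connection if the UDP attempt times out.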
HTTP/3 is CPU intensive
As of 2019, HTTP/3 consumes 2-3x the CPU to serve the same amount of traffic as previous protocols. This is partly because networking libraries are less optimized for UDP (they’ve had decades to finely tune for TCP) and partly due to a lack of dedicated hardware (think ASICs).
UDP stacks are unoptimized
This is partly responsible for the aforementioned increased CPU utilization. No one has optimized software and hardware stacks for UDP with the same intensity that they have for TCP. This will improve over time!
“Funny” TLS layer
We need all relevant TLS libraries to update to support the new standard and new API. This has likely made significant progress since 2019 but is not an easy task.
All QUIC stacks are user land
QUIC is built on UDP, which means all packets are copied to user space. This increased copying comes with increased resource utilization. Perhaps it could be integrated into the kernel (read: multiple kernels, everywhere), but that could be very difficult.
Tooling needs to catch up
Wireshark has this down pat, but we need other tools as well. We’ve gotten used to TCP segment numbers and specific window sizes, all of which are replaced by QUIC mechanics.
According to caniuse, Firefox, Edge, and Chrome all support HTTP/3 by default. Safari doesn’t yet, but it can be enabled in the experimental features. Likewise, various browsers support it by default on Android, and it’s behind a feature flag on iOS. Currently, about a quarter of websites support HTTP/3, and we can see that trending up.
HTTP/3 is already making real-world, human-noticeable differences, which will likely improve as UDP stacks are optimized. It brings performance, reliability, and security upgrades but is not without challenges. It’s still early days, but I’m excited for a more secure, performant internet that is more resilient to lossy networks. I’m also excited to see where QUIC goes and the future applications that will be built on top of it.
Thanks for reading. Stay tuned for more!