Base64 Encode Your Headers
Tl;dr: Base64 encoding HTTP/2 headers doesn’t really increase their size.
The Setup
HTTP has a problem: headers. Headers in HTTP are a way to send bits of metadata about a resource in both requests and responses. They include stuff like what webpage to load, how big it is, how it’s encoded, and caching information. In the 90s, they didn’t account for much time on the wire, and were pretty limited in size. They were just plain ASCII text strings that accounted for a handful of bytes before the actual HTML was sent.
But those days are over. Websites send more and more in the headers section of the request: notably multi-kilobyte cookies. A few kilobytes can’t add up to much, right? I did a quick test by looking at the network usage for loading cnn.com. There were about 350 requests made, each with ~1KB of request and response headers. That’s roughly 350KB of header traffic for a single page load! It is enough to be noticeable on lower end devices like cellphones and tablets, which scarcely have the bandwidth to indulge this data deluge.
There are two answers that software engineers frequently reach for to address the problem of size: caching and compression. Caching saves the browser from having to re-download the webpage and associated resources each time the page is loaded. However, caching is not free. First, it adds a huge amount of complexity to the protocol and all devices involved. Try looking at RFC 7232, or at least the size of the scroll bar after loading the page. Additionally, caching only works in one direction: from server to client. A client that doesn’t want to send duplicate headers on every request is sadly out of luck. Caching is implemented using headers too, consuming valuable header space.
Caching does work pretty well, since the fastest request is the one you don’t have to make. But when that isn’t enough, the next best thing is compression. Lossless compression is a way of squashing a bunch of bytes into a more compact form that can later be reversed exactly. Compression works by reducing redundancy in the uncompressed message. HTTP already uses this trick for requested documents, typically sending HTML and CSS with gzip encoding. However, like caching, compression is negotiated by means of the headers, so it is not feasible to compress the headers themselves.
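To get a feel for how much redundancy typical headers carry, here’s a quick sketch in Python (the header block below is made up for illustration) that gzips a plausible request:

```python
import gzip

# A made-up but plausible request-header block, for illustration only.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept-Language: en-US,en;q=0.5\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101\r\n"
    "\r\n"
).encode("ascii")

compressed = gzip.compress(headers)
print(len(headers), len(compressed))  # repetitive ASCII text squeezes well
```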
This is where HTTP/2 comes in. In the HTTP/2 spec, headers are encoded in a compressed format called HPACK. This technique uses a combination of Huffman encoding and references to previously sent headers to squeeze down the overall request. It works in both directions, allowing clients and servers to save each other’s time. However, there is one snag that comes up when using HPACK: backwards compatibility.
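To see the “referencing previously sent headers” part in action, here is a minimal sketch using the third-party hpack package from the python-hyper project (pip install hpack); the header values are invented:

```python
from hpack import Encoder

encoder = Encoder()
headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    ("cookie", "session=abc123def456; theme=dark"),
]

first = encoder.encode(headers)
second = encoder.encode(headers)  # same headers, later in the connection

# The second block is far smaller: the cookie is now just a short
# reference into HPACK's dynamic table instead of a full literal.
print(len(first), len(second))
```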
HTTP/2 was designed to be as backwards compatible as possible with HTTP/1. It defines how HTTP/2 messages can be translated back to HTTP/1. It even goes so far as to omit the version number from requests entirely, since it is all just “HTTP”. One consequence of this compatibility is a set of restrictions on headers.
HPACK was designed for use with HTTP/2, but it isn’t a perfect match. Header compression works on arbitrary bytes, allowing any character to be compressed and sent. HTTP, because of its original ASCII roots, forbids most characters from appearing in header names and values. So, in order to use HPACK in a standards-compliant way, you still need to stick to ASCII characters. (RFC 7230, for the curious.)
But the HPACK designers knew this and planned accordingly. HPACK allows headers to be Huffman compressed using a static Huffman table. This table was designed with two major goals in mind: security when used over TLS (protection from compression side-channel attacks like CRIME), and near-optimal compression for typical HTTP traffic. Static Huffman tables have no known attacks against them.
A brief aside: Huffman encoding works by looking at the frequency of each character in a message. Each character is then assigned a variable-length sequence of bits: characters that occur more often get shorter codes, which leads to an overall shrinkage of the message. This technique, while not size optimal, is well understood and does a pretty good job. It is widely used, notably in GZIP, JPEG, and PNG.
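To make that concrete, here is a toy Huffman coder in Python. It is purely illustrative: unlike HPACK, it derives its codes from the one message it is given rather than from a pre-agreed static table.

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Assign bit codes to characters: the more frequent, the shorter."""
    # Heap entries: (frequency, unique tiebreaker, {char: code-so-far}).
    heap = [(freq, i, {ch: ""})
            for i, (ch, freq) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two rarest subtrees...
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))  # ...merge into one
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("accept-encoding: gzip, deflate")
for ch, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
    print(repr(ch), code)  # frequent characters come first, with short codes
```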
Because of Google’s involvement in the creation of HTTP/2, a wealth of header information was available at the time the Huffman table was created. The table was generated from header data that Google had sent and received, leading to a table that is well tuned to real-world traffic. An abbreviated version of the table from the HPACK spec, RFC 7541:
'a' |00011 [ 5]
'b' |100011 [ 6]
'c' |00100 [ 5]
'd' |100100 [ 6]
'e' |00101 [ 5]
From this table, we can see that the letter ‘e’ is encoded with only 5 bits rather than the typical 8. This is good, since most header names and values use the letter ‘e’ often, as in the Content-Encoding header. One thing to note, however: the shortest code in the whole table is 5 bits long. That means at best, we can only compress down to a ratio of 5⁄8, or 62.5% of the original message.
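As a quick sanity check on that best case, a header value made entirely of ‘e’s costs exactly 5 bits per character under the table above:

```python
import math

n = 1000                    # a header value of 1000 'e' characters
bits = 5 * n                # 'e' is a 5-bit code in the static table
print(math.ceil(bits / 8))  # 625 bytes: 62.5% of the original 1000
```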
Binary Data
Where does this come into play with ASCII? Suppose you wanted to encode binary data into a header but were limited to ASCII printable characters. You only have about 90ish usable characters to represent the full byte gamut of 256 values. The common approach is to base64 encode the values, using only alphabetic and numeric characters plus a couple of symbols. These fit nicely into the HTTP-valid characters. (I’m simplifying here a little; the details vary depending on who you ask.) For the privilege of using headers to store binary data, you pay a 2-bit tax on every byte you send. Since 64 values account for only 6 bits of entropy, you are running at about 6⁄8, or 75% efficiency.
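The 75% figure is easy to verify with Python’s standard library:

```python
import base64
import os

raw = os.urandom(300)            # 300 bytes of binary data
encoded = base64.b64encode(raw)  # base64: 4 ASCII chars per 3 bytes

print(len(raw), len(encoded))    # 300 -> 400: the 2-bit-per-byte tax
print(len(raw) / len(encoded))   # 0.75 efficiency
```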
Here is where the serendipity happens. When you need to send binary data (such as in a Cookie header) you have to base64 encode the value. This is wasteful, and inflates the size of the message. But because HPACK is heavily tailored to commonly used HTTP headers and characters, on average you only pay around 6.39 bits per base64 character. (Add up the Huffman code lengths of all 64 characters in the base64 alphabet and divide by 64.) The end result is only about a ~6% increase in header size when sending binary data. Thus, an inefficient encoding followed by an efficient one gets you pretty much back to where you started.
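A back-of-the-envelope check, taking that 6.39 bits-per-character average as a given:

```python
avg_bits = 6.39         # average HPACK Huffman cost per base64 character
chars_per_byte = 4 / 3  # base64 emits 4 characters for every 3 raw bytes
bits_per_raw_byte = avg_bits * chars_per_byte

print(bits_per_raw_byte)      # ~8.52 bits on the wire per original byte
print(bits_per_raw_byte / 8)  # ~1.065: roughly the 6% size increase
```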
One other interesting effect is that if you didn’t base64 encode your binary data, it would actually explode in size. The Huffman table uses very long codes (up to 30 bits) for rarer bytes, so compressing raw binary at all would be counterproductive. In order to get a sane size, you have to base64 encode first.
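You can watch both effects with the same third-party hpack package as before. One caveat: whether an encoder always applies Huffman coding to literals is an implementation detail, so treat this as a sketch rather than a guarantee:

```python
import base64
import os
from hpack import Encoder

raw = os.urandom(64)  # binary cookie material
b64 = base64.b64encode(raw).decode()

as_b64 = Encoder().encode([("cookie", b64)])
as_raw = Encoder().encode([(b"cookie", raw)])  # not standards-compliant!

# The base64'd value Huffman-compresses back to roughly its raw size,
# while the raw bytes hit the table's long codes and balloon instead.
print(len(as_b64), len(as_raw))
```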
I don’t know if this is a happy accident or clairvoyance on the part of the HTTP/2 designers. Cookies stand to gain the most from this, since they are notorious for taking up space. I actually suspect that the enormously large, base64 encoded cookies of the past influenced the generation of the Huffman table.
Minor note: it is possible to send headers uncompressed in HPACK, but any standards-compliant HTTP receiver could still reject them. The semantics of HTTP have not changed, just the encoding.