This post is part one of a two-part series detailing HTTP/2 best practices. Part two—Implementing HTTP/2 in Production Environments—covers what’s required to implement and debug HTTP/2-capable Web applications in production environments.
The Hypertext Transfer Protocol (HTTP) underpins the World Wide Web and cyberspace. If that sounds dated, consider that the version of the protocol most commonly in use, HTTP 1.1, is nearly 20 years old. When it was ratified back in 1997, floppy drives and modems were must-have digital accessories and Java was a new, up-and-coming programming language. Ratified in May 2015, HTTP/2 was created to address some significant performance problems with HTTP 1.1 in the modern Web era. Adoption of HTTP/2 has increased in the past year as browsers, Web servers, commercial proxies, and major content delivery networks have committed to or released support.
Unfortunately for people who write code for the Web, transitioning to HTTP/2 isn't always straightforward and a speed boost isn't guaranteed. The new protocol challenges some common wisdom about building performant Web applications, and many existing tools, such as debugging proxies, don't support it yet. This post is an introduction to HTTP/2 and how it changes Web performance best practices.
Binary frames: The ‘fundamental unit’ of HTTP/2
One benefit of HTTP 1.1 (over non-secure connections, at least) is that it supports interacting with Web servers using plain text in a telnet session on port 80: typing GET / HTTP/1.1, followed by a Host header and a blank line, returns an HTML document from most Web servers. Because it's a text protocol, debugging is relatively straightforward.
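For example, a minimal session against a hypothetical server might look like this (HTTP/1.1 requires the Host header and a blank line to terminate the request; the response is truncated):

```
$ telnet example.com 80
GET / HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

<!doctype html>
...
```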
Instead of text, requests and responses in HTTP/2 are represented by a stream of binary frames, described as a “basic protocol unit” in the HTTP/2 RFC. Each frame has a type that serves a different purpose. The authors of HTTP/2 realized that HTTP 1.1 would exist indefinitely (the Gopher protocol is still out there, after all), so the binary frames of an HTTP/2 request map onto an HTTP 1.1 request to ensure backwards compatibility.
There are some new features in HTTP/2 that don’t map to HTTP 1.1, however. Server push (also known as “cache push”) and stream reset are features that correspond to new types of binary frames. Frames can also carry priority information, which lets clients hint to servers that some assets are more important than others.
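To make server push concrete, here is a minimal sketch using Go’s standard net/http package (Go 1.8 and later), where the ResponseWriter implements the http.Pusher interface on HTTP/2 connections. The /static/site.css path and the cert.pem and key.pem certificate files are placeholders, not part of the original post:

```go
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// On an HTTP/2 connection, the ResponseWriter also implements
		// http.Pusher; over HTTP 1.1 the type assertion simply fails.
		if pusher, ok := w.(http.Pusher); ok {
			// Send a PUSH_PROMISE frame so the stylesheet is on its way
			// before the browser parses the HTML and discovers it.
			if err := pusher.Push("/static/site.css", nil); err != nil {
				log.Printf("push failed: %v", err)
			}
		}
		io.WriteString(w, `<html><head><link rel="stylesheet" href="/static/site.css"></head></html>`)
	})
	http.Handle("/static/", http.FileServer(http.Dir(".")))
	// Go serves HTTP/2 automatically over TLS; cert.pem and key.pem are
	// placeholder certificate files.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```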
Aside from using Wireshark 2.0, one of the easiest ways to actually see the individual binary frames is the net-internals tab of Google Chrome (type chrome://net-internals/#http2
into the address bar). The data can be hard to read for large Web pages, so Rebecca Murphey wrote a useful tool for displaying it visually in the command line.
Additionally, the protocol used to fetch assets can be displayed in the Chrome Web developer tools—right click on the column header and select “Protocol”:
[Screenshot: the Protocol column enabled in the Chrome developer tools Network panel]
All of the HTTP/2 requests in this listing use a secure connection over Transport Layer Security (TLS). All major browsers require HTTP/2 connections to be secure. This is done for a practical reason: an extension of TLS called Application-Layer Protocol Negotiation (ALPN) lets servers know the browser supports HTTP/2 (among other protocols) and avoids an additional round-trip. This also helps services that don’t understand HTTP/2, such as proxies—they see only encrypted data over the wire.
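To see ALPN at work from the client side, a program can offer h2 and http/1.1 during the TLS handshake, just as a browser does, and then check which protocol the server selected. A minimal Go sketch (example.com stands in for any HTTPS host):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Offer h2 and http/1.1 during the TLS handshake, as a browser would;
	// the server picks one, so no extra round-trip is needed.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
		NextProtos: []string{"h2", "http/1.1"},
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	// Prints "h2" when the server supports HTTP/2.
	fmt.Println("negotiated:", conn.ConnectionState().NegotiatedProtocol)
}
```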
Reducing latency with multiplexing
A key performance problem with HTTP 1.1 is latency, or the time it takes to make a request and receive a response. The issue has become more pronounced as the number of images and the amount of JavaScript and CSS on a typical Web page continue to increase. Every time an asset is fetched, a new TCP connection is generally needed. This is costly for two reasons: browsers limit the number of simultaneous open TCP connections per host, and there’s a performance penalty for establishing each new connection. If a physical Web server is far away from its users (for example, a user in Singapore requesting a page hosted at a data center on the U.S. East Coast), latency also increases. This scenario is not uncommon; one recent report says that more than 70% of global Internet traffic passes through the unmarked data centers of Northern Virginia.
HTTP 1.1 offers workarounds for these latency issues, including pipelining and the Keep-Alive header. However, pipelining was never widely implemented, and persistent connections using the Keep-Alive header still suffer from head-of-line blocking: the current request must complete before the next one can be sent.
In HTTP/2, multiple asset requests can reuse a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the request and response binary frames in HTTP/2 are interleaved, so head-of-line blocking does not happen at the HTTP level. The cost of establishing a connection (the well-known “three-way handshake”) is paid only once per host. Multiplexing is especially beneficial for secure connections because of the performance cost of multiple TLS negotiations.
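One way to observe multiplexing from a client is Go’s net/http/httptrace package, which reports whether a request reused an existing connection. In this sketch (https://example.com/ stands in for any HTTP/2-capable host), a warm-up request establishes the connection; against an HTTP/2 server, the concurrent requests that follow should all report it as reused because they travel as interleaved streams on the same connection:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"sync"
)

// get issues a GET request and reports whether it reused an existing
// TCP connection instead of dialing a new one.
func get(url, label string) {
	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Printf("%s: reused connection: %v\n", label, info.Reused)
		},
	}
	req, _ := http.NewRequest("GET", url, nil)
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(label, err)
		return
	}
	resp.Body.Close() // closing the body frees the stream for reuse
}

func main() {
	const url = "https://example.com/" // stand-in for an HTTP/2-capable host
	get(url, "warm-up")                // pays the TCP and TLS setup cost once

	// Over HTTP/2, these concurrent requests become interleaved streams on
	// the single connection opened above, so each reports reused=true.
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			get(url, fmt.Sprintf("request %d", n))
		}(i)
	}
	wg.Wait()
}
```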

Implications for Web performance: goodbye inlining, concatenation, and image sprites?
HTTP/2 multiplexing has broad implications for frontend Web developers. It removes the need for several long-standing workarounds that aim to reduce the number of connections by bundling related assets, including:
- Concatenating JavaScript and CSS files: Combining smaller files into a larger file to reduce the total number of requests.
- Image spriting: Combining multiple small images into one larger image.
- Domain sharding: Spreading requests for static assets across several domains to increase the total number of open TCP connections allowed by the browser.
- Inlining assets: Bundling assets with the HTML document source, including base64-encoding images or writing JavaScript code directly inside <script> tags.
With unbundled assets, there is greater opportunity to aggressively cache smaller pieces of a Web application. It’s easiest to explain this with an example:

A common concatenation pattern has been to bundle stylesheet files for different pages in an application into a single CSS file to reduce the number of asset requests. This large file is then fingerprinted with an MD5 hash of its contents in the filename so it can be aggressively cached by browsers. Unfortunately, this approach means that a very small change to the visual layout of the site, like changing the font style for a header, requires the entire concatenated file to be downloaded again.
When smaller asset files are fingerprinted individually, the large portions of JavaScript and CSS that change infrequently stay cached in browsers; a small refactor of a single function no longer invalidates a massive amount of JavaScript application code or CSS.
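As a sketch of the fingerprinting half of that workflow (the nav.css file name is hypothetical), a build step might compute the hash like this:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"os"
	"path/filepath"
)

// fingerprint derives a cache-busting filename, e.g. "nav-9a0364b9e99b.css",
// from an MD5 hash of the file's contents. Any edit to the file produces a
// new name, so the old version can be cached indefinitely.
func fingerprint(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	sum := md5.Sum(data)
	ext := filepath.Ext(path)
	return fmt.Sprintf("%s-%x%s", path[:len(path)-len(ext)], sum[:6], ext), nil
}

func main() {
	name, err := fingerprint("nav.css") // hypothetical per-component stylesheet
	if err != nil {
		fmt.Println(err)
		return
	}
	// Serve the renamed file with a far-future Cache-Control header, such as
	// "Cache-Control: max-age=31536000". Editing nav.css then invalidates
	// only this one small file rather than a site-wide bundle.
	fmt.Println(name)
}
```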
Lastly, deprecating concatenation can reduce the complexity of frontend build infrastructure. Instead of several build steps that concatenate assets, the smaller files can be referenced directly in the HTML document.
Potential downsides of using HTTP/2 in the real world
Optimizing only for HTTP/2 clients potentially penalizes browsers that don’t yet support it. Older browsers still prefer bundled assets to reduce the number of connections. As of February 2016, caniuse.com reports global browser support of HTTP/2 at 71%. Much like dropping Internet Explorer 8.0 support, the decision to adopt HTTP/2 or go with a hybrid approach must be made using relevant data on a per-site basis.
As described in a post by Khan Academy Engineering that analyzed HTTP/2 traffic on its site, unbundling a large number of assets can actually increase the total number of bytes transferred. With zlib, compressing a single large file is more efficient than compressing many small files separately. The effect can be significant on an HTTP/2 site that has unbundled hundreds of assets.
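The effect is easy to reproduce with Go’s compress/gzip package, which uses the same DEFLATE algorithm as zlib. This sketch (the repeated JavaScript line is synthetic stand-in content) compresses twenty small “files” individually and then as one concatenated bundle; the bundle comes out smaller because the compression dictionary spans file boundaries:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"strings"
)

// gzipSize returns the compressed size of data in bytes.
func gzipSize(data []byte) int {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write(data) // writes to a bytes.Buffer cannot fail
	zw.Close()
	return buf.Len()
}

func main() {
	// Twenty small "files" with similar content, standing in for typical
	// JavaScript modules that share patterns and identifiers.
	file := strings.Repeat("function handler(evt) { console.log(evt); }\n", 10)

	separate := 0
	var bundle bytes.Buffer
	for i := 0; i < 20; i++ {
		separate += gzipSize([]byte(file)) // each file compressed on its own
		bundle.WriteString(file)           // the concatenated equivalent
	}

	fmt.Printf("20 files compressed separately: %d bytes total\n", separate)
	fmt.Printf("1 concatenated file compressed: %d bytes\n", gzipSize(bundle.Bytes()))
}
```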
Using HTTP/2 in browsers also requires assets to be delivered over TLS. Setting up TLS certificates can be cumbersome for the uninitiated. Fortunately, open source projects such as Let’s Encrypt are working on making certificate registration more accessible.
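As one example of that trend, Go’s golang.org/x/crypto/acme/autocert package can obtain and renew Let’s Encrypt certificates automatically. The domain and cache directory below are placeholders, and this is a sketch rather than a production configuration:

```go
package main

import (
	"io"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("www.example.com"), // your domain
		Cache:      autocert.DirCache("certs"),                // on-disk cert cache
	}
	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(),
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			io.WriteString(w, "served over "+r.Proto)
		}),
	}
	// Empty cert and key paths tell the server to use TLSConfig's
	// GetCertificate, which fetches and renews certificates on demand.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```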
A work in progress
Most users don’t care what application protocol your site uses; they just want it to be fast and work as expected. Although HTTP/2 has been officially ratified for almost a year, developers are still learning the best practices for building faster websites on top of it. The benefits of switching to HTTP/2 depend largely on the makeup of the particular website and what percentage of its users have modern browsers. Moreover, debugging the new protocol is challenging, and easy-to-use developer tools are still under construction.
Despite these challenges, HTTP/2 adoption is growing. According to researchers scanning popular Web properties, the number of top sites that use HTTP/2 is increasing, especially after CloudFlare and WordPress announced their support in late 2015. When considering a switch, it’s important to carefully measure and monitor asset- and page-load time in a variety of environments. As vendors and Web professionals educate themselves on the implications of this massive change, making decisions from real user data is critical. In the midst of a website obesity crisis, now is a great time to cut down on the total number of assets regardless of the protocol.
In Part 2 of this series on HTTP/2, we’ll focus on practical implementation details of HTTP/2 and how to enable it on your server and debug real traffic.
Additional resources
- Forgo JS packaging? Not so fast
- HTTP/2 for Web Developers
- HTTP/2 Explained
- Building for HTTP/2
- List of HTTP/2 Tools
Be sure to read part two of this two-part series: Implementing HTTP/2 in Production Environments
Jeff Martens, Product Manager for New Relic Browser, and Web performance expert Andy Davies contributed to this post with technical feedback and invaluable suggestions.