Original article: Enabling HTTP/2 for Dropbox web services: experiences and observations

HTTP/2's request header compression significantly reduces ingress bandwidth but has little effect on egress bandwidth. The increased latency for POST requests stems from the implementation in Nginx 1.9.15. Use Nginx error logs and verification to uncover as many potential problems as possible. HTTP/2 is still a relatively new protocol with compatibility issues, so enable it with care.

At Dropbox, our traffic team recently upgraded the front-end Nginx servers to enable HTTP/2 for our web services. In this article, we would like to share our experiences and findings during the HTTP/2 transition. The overall upgrade was smooth for us, although there are also a couple of caveats that might be helpful to others.

Background: HTTP/2 and Dropbox web service infrastructure

HTTP/2 (RFC 7540) is the new major version of the HTTP protocol. It is based on SPDY and provides several performance optimizations compared to HTTP/1.1. These optimizations include more efficient header compression, server push, stream multiplexing over the same connection, etc. As of today, HTTP/2 is supported by major browsers.

Dropbox uses open-source Nginx to terminate SSL connections and perform layer-7 load balancing for web traffic. Before the upgrade, our front-end servers ran Nginx 1.7-based software and supported SPDY. Another motivation for the upgrade was that Chrome supported both SPDY and HTTP/2 but was scheduled to drop SPDY support on May 15th; if we did not support HTTP/2 by then, our Chrome clients would fall back from SPDY to HTTP/1.1.

The HTTP/2 upgrade process

The HTTP/2 upgrade was a straightforward and smooth transition for us. Nginx 1.9.5 added the HTTP/2 module (co-sponsored by Dropbox) and dropped SPDY support. In our case, we decided to upgrade to Nginx 1.9.15, which was the latest stable version at the time.

The Nginx upgrade involves making simple changes in configuration files. To enable HTTP/2, the http2 modifier needs to be added to the listen directive. In our case, because SPDY was previously enabled, we simply replaced spdy with http2.

Before (SPDY): listen A.B.C.D:443 ssl spdy;
After (HTTP/2): listen A.B.C.D:443 ssl http2;

Of course, you will probably want to go through the complete set of Nginx HTTP/2 configuration options to optimize for your specific use cases.
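For illustration, a minimal server block with HTTP/2 enabled might look like the sketch below. The address, hostname, certificate paths, and upstream are placeholders rather than our actual configuration, and http2_max_concurrent_streams is simply shown at its default value.

    server {
        listen 203.0.113.10:443 ssl http2;    # the http2 parameter on listen enables HTTP/2 for this socket
        server_name www.example.com;          # placeholder hostname

        ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder certificate paths
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Optional tuning knob provided by the HTTP/2 module (128 is the default).
        http2_max_concurrent_streams 128;

        location / {
            proxy_pass http://127.0.0.1:8080;   # placeholder upstream
        }
    }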

As for deployments, we first enabled HTTP/2 on canary machines for about a week while we were still using SPDY in production. After verifying the correctness and evaluating the performance, HTTP/2 was enabled across the fleet for our web services.

[Figure: Smooth transition from SPDY to HTTP/2 (60 minutes of traffic)]

The figure above shows the smooth transition from SPDY to HTTP/2 (the remaining HTTP/1.1 connections are not shown). We gradually enabled HTTP/2 across all front-end web servers around minutes 23, 36, and 50. Before that, the connections included both HTTP/2 traffic on the canary machines and SPDY traffic on the production machines. As the figure shows, essentially all SPDY clients eventually migrated to HTTP/2.
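One lightweight way to track such a transition from the server side is to record the negotiated protocol in the access log. The sketch below (illustrative names and paths, placed in the http{} context) uses the $http2 variable exposed by the HTTP/2 module ("h2" for HTTP/2 over TLS, empty otherwise) together with $server_protocol and $request_time:

    # Log which protocol each request used, plus its total request time.
    log_format protostats '$remote_addr "$request" $status '
                          'proto=$server_protocol h2=$http2 rt=$request_time';
    access_log /var/log/nginx/access.log protostats;

Aggregating the proto= field over time then gives a per-protocol view similar to the figure above.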

Observations

We closely monitored performance after enabling HTTP/2 on the canary machines. Our observations include performance data that demonstrates the effectiveness of HTTP/2, as well as a couple of caveats, since most HTTP/2 implementations are still relatively new.

Performance improvement

We have seen a significant reduction in the ingress traffic bandwidth, which is due to more efficient header compression (HPACK).

[Figure: Reduced ingress traffic bandwidth (24 hours of traffic)]

The figure above shows the ratio of average (per-machine) traffic bandwidth between the canary and production machines, where HTTP/2 was enabled only on the canary machines. Every canary and production machine received approximately the same amount of traffic from the load balancers. As can be seen, ingress traffic bandwidth dropped significantly (close to 50%) after we enabled HTTP/2. It is worth noting that although we had previously enabled SPDY on all canary and production machines, we did not turn on SPDY header compression because of the related security issues (CVE-2012-4929, aka CRIME). As for egress traffic, there was no significant change, because headers typically contribute only a small fraction of the response traffic.

A couple of caveats

Increased latency for POST requests. When we enabled HTTP/2 on the canary machines, we noticed an increase in median latency. The figure below shows the ratio of P50 request latencies between the canary and production machines. We investigated this issue and found that the increased latency was introduced by POST requests. After further study, this behavior appeared to be caused by the specific implementation in Nginx 1.9.15; related discussion can be found in the Nginx mailing list thread.

[Figure: Increased P50 request latency (24 hours of traffic)]

Note that the increased P50 request latency ratio we saw here (approximately 1.5x) depends on the specific traffic workload. In most cases the overhead was about one additional round-trip time, and it did not noticeably impact our key performance metrics. However, if your workload consists of many small, latency-sensitive POST requests, the increased latency is an important factor to consider when upgrading to Nginx 1.9.15.
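To quantify this effect for your own workload, one option is to log POST request timings separately and compare their distribution before and after the upgrade. The sketch below uses illustrative names, reuses the protostats log format from the earlier sketch, and combines a map with the if= parameter of access_log:

    # In the http{} context: flag POST requests.
    map $request_method $is_post {
        default 0;
        POST    1;
    }

    # In the server{} context: write POST timings to their own log file.
    access_log /var/log/nginx/post_timing.log protostats if=$is_post;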

Be careful with enabling HTTP/2 for everything, especially when you do not control the clients. As HTTP/2 is still relatively new, in our experience some client libraries and server implementations are not fully compatible yet. For example, because our API users may employ a variety of third-party HTTP libraries, we need to perform more extensive testing before enabling HTTP/2 support for our APIs.
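Because the http2 flag is a property of the listening socket, HTTP/2 can be rolled out selectively as long as the endpoints use distinct address:port pairs. The sketch below uses placeholder addresses, hostnames, and certificate paths and is not our actual topology:

    # HTTP/2 enabled for the browser-facing endpoint.
    server {
        listen 203.0.113.10:443 ssl http2;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/www.crt;
        ssl_certificate_key /etc/nginx/ssl/www.key;
        location / { proxy_pass http://127.0.0.1:8080; }
    }

    # API endpoint left on HTTP/1.1 until client libraries are validated.
    # Note: this only works because the API listens on a different address:port
    # than the web endpoint; server blocks sharing a socket share its http2 setting.
    server {
        listen 203.0.113.20:443 ssl;
        server_name api.example.com;
        ssl_certificate     /etc/nginx/ssl/api.crt;
        ssl_certificate_key /etc/nginx/ssl/api.key;
        location / { proxy_pass http://127.0.0.1:8081; }
    }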

Debugging tools

CloudFlare has presented a nice summary of HTTP/2 debugging tools. In addition, we found the Chrome net-internals tool (available at chrome://net-internals/#http2 in Chrome) to be helpful. The figure below is a screenshot of frame exchanges reported by net-internals when opening a new HTTP/2 session to www.dropbox.com.

[Figure: Screenshot of Chrome net-internals when opening a new HTTP/2 session to www.dropbox.com]
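On the server side, temporarily raising the error-log verbosity on a few canary machines can also help surface protocol-level problems from incompatible clients, since Nginx logs malformed client behavior at the info level. The path and level below are illustrative:

    error_log /var/log/nginx/error.log info;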

Overall, we made a smooth transition to HTTP/2. A few takeaways from this post:

- HTTP/2 header compression (HPACK) significantly reduced our ingress traffic bandwidth, while egress bandwidth was largely unchanged.
- With Nginx 1.9.15, POST requests saw roughly one additional round trip of latency, which matters for workloads with many small, latency-sensitive POST requests.
- HTTP/2 implementations are still relatively new, so be careful about enabling HTTP/2 everywhere, especially when you do not control the clients.

We hope this post is helpful for those who are interested in enabling HTTP/2 for their services or those interested in networking in general. We would also like to hear your feedback in the comments below.