Analyzing HTTPS Performance Overhead
In this post we analyze the performance overhead of HTTPS and hopefully clear up some doubts you may have had in the past. With best practices in place such as early termination, Cache-Control, and HTTP/2, factors like the latency of the TLS handshake and the extra roundtrips are becoming things of the past. Newer protocols, better hardware, and faster connections are making up for these delays.
Costs and overhead when switching to HTTPS
There will always be some performance overhead and possibly some cost when switching to HTTPS. The question is: with today's hardware, connectivity, and the new HTTP/2 protocol, are the overhead and cost even worth considering anymore?
Price
Price was always a factor when users debated whether they should migrate to HTTPS, but the cost of purchasing SSL certificates is changing. Our recent integration with Let's Encrypt now lets KeyCDN customers deploy HTTPS with a custom Zone Alias for free! And web hosts are following suit. Over the course of the next year you can expect many of the bigger names to let you deploy an SSL certificate for free, with one click of a button.
For a lot of users, this takes the price factor entirely out of the equation. There will still be some exceptions, however, as Let's Encrypt currently only issues domain validation (DV) certificates. Enterprise customers will most likely still be purchasing organization validation (OV) and extended validation (EV) certificates. Will this change in the future? Only time will tell.
TLS overhead - SSL performance impact
Some latency is added when you switch to HTTPS because the initial TLS handshake requires two extra roundtrips before the connection is established, compared to the single roundtrip needed to open an unencrypted HTTP connection. For example, with a 50 ms roundtrip time to the server, those two extra roundtrips add roughly 100 ms before the first request can even be sent.
CPU load
There is also an encryption cost: during the handshake the browser and the server exchange key information using asymmetric encryption. However, tests comparing encrypted and unencrypted connections show a difference of only about 5 ms and a peak increase in CPU usage of only 2%. In January 2010, Gmail switched to using HTTPS for everything by default. Google didn't deploy any additional machines or special hardware, and on their frontend machines SSL/TLS accounted for less than 1% of the CPU load.
Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that.
- Adam Langley, Google "Overclocking SSL"
Operational costs
If you are running HTTP/2 (over HTTPS), only a single connection per origin is required, which means fewer sockets, memory buffers, and TLS handshakes. Because of this, you might be able to handle more users with fewer resources than over HTTP/1.1.
HTTP vs HTTPS
We decided to run some tests of HTTP vs HTTPS with our case study site and see what the results were. The site runs on a small Vultr VPS with Nginx, PHP 7, and HTTP/2 (over HTTPS). Our images are converted to WebP format using Optimus and served via the WordPress Cache Enabler plugin, and the content is accelerated by KeyCDN.
HTTP
WebPageTest
We ran 5 tests on WebPageTest to get a median result.
| | Load time | First byte | Start render | DOC complete | Fully loaded |
|---|---|---|---|---|---|
| Median test first view | 1.128 s | 0.136 s | 1.092 s | 1.128 s | 1.192 s |
| Median test repeat view | 0.797 s | 0.109 s | 0.790 s | 0.797 s | 0.869 s |
HTTPS
We then added an SSL certificate to our web host and deployed the certificate with KeyCDN via the new Let's Encrypt integration. We forced HTTPS in our Nginx config (a 302 redirect to the new URLs) and rewrote all the old hard-coded WordPress URLs in the database.
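For reference, a minimal sketch of the kind of redirect block we used is shown below. The domain is a placeholder, and the temporary 302 would typically be changed to a 301 once everything checks out:

```nginx
# Hypothetical example: redirect all plain HTTP traffic to HTTPS.
# A temporary (302) redirect is safer while testing; switch to 301 once verified.
server {
    listen 80;
    server_name example.com www.example.com;
    return 302 https://$host$request_uri;
}
```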
WebPageTest
We ran 5 tests on WebPageTest to get a median result.
| | Load time | First byte | Start render | DOC complete | Fully loaded |
|---|---|---|---|---|---|
| Median test first view | 1.032 s | 0.207 s | 0.986 s | 1.032 s | 1.190 s |
| Median test repeat view | 0.835 s | 0.183 s | 0.748 s | 0.835 s | 0.929 s |
It is important to note that we didn't use Pingdom in any of our tests because they use Chrome 39, which doesn't support the new HTTP/2 protocol; HTTP/2 support in Chrome didn't arrive until Chrome 40. You can tell this by looking at the User-Agent in the request headers of your test results. In our tests, WebPageTest used Chrome 47, which does support HTTP/2.
Comparing HTTP vs HTTPS performance tests
In our tests the HTTPS version was actually first to complete DOC load, and the fully loaded times came out about the same. Below are the final timing comparisons between the two; as you can see, they are neck and neck. The HTTPS version had a quicker first view load time, but overall the results are very similar. Bottom line: making a lot of short requests over HTTPS will probably be slightly slower than over HTTP, but if you transfer a lot of data in a single request, the difference will be insignificant.
| | Load time | First byte | Start render | DOC complete | Fully loaded |
|---|---|---|---|---|---|
| HTTP first view | 1.128 s | 0.136 s | 1.092 s | 1.128 s | 1.192 s |
| HTTPS first view | 1.032 s | 0.207 s | 0.986 s | 1.032 s | 1.190 s |
| HTTP repeat view | 0.797 s | 0.109 s | 0.790 s | 0.797 s | 0.869 s |
| HTTPS repeat view | 0.835 s | 0.183 s | 0.748 s | 0.835 s | 0.929 s |
Improving HTTPS performance
There are quite a few things you can do to cancel out those slight delays and improve your HTTPS performance, such as implementing caching and utilizing HTTP/2. Many of these also apply if you are still running SPDY.
1. HTTP Strict Transport Security (HSTS)
HTTP Strict Transport Security (HSTS) is a security enhancement that instructs web browsers to access a web server only over HTTPS. It helps performance by eliminating unnecessary HTTP-to-HTTPS redirects and shifting that responsibility to the client, which will automatically rewrite all links to HTTPS. All major up-to-date browsers currently support HSTS, with the exception of Opera Mini.
See our full guide on how to enable HTTP Strict Transport Security on your server.
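If you terminate TLS on Nginx, as in our case study setup, a minimal sketch of adding the header might look like the following. The one-year max-age and the includeSubDomains directive are assumptions; start with a short max-age while testing:

```nginx
# Hypothetical example: send the HSTS header on every HTTPS response.
# Place this inside the server block that terminates TLS. Only raise max-age
# to a full year (31536000 seconds) once every subdomain works over HTTPS.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```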
2. Cache-Control
Cache-Control is an HTTP cache header comprised of a set of directives that let you define when and how a response should be cached, and for how long. HTTP caching occurs when the browser stores copies of resources for faster access, and it works just the same over HTTPS.
See our full guide on how to use the Cache-Control header directives.
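As a rough sketch, here is how you might set Cache-Control for static assets in an Nginx server block similar to ours. The file extensions and the one-week lifetime are assumptions, not a recommendation for every site:

```nginx
# Hypothetical example: cache static assets for one week (604800 seconds).
location ~* \.(css|js|png|jpg|jpeg|webp|svg|woff2)$ {
    add_header Cache-Control "public, max-age=604800";
}
```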
3. Early termination
Early termination is very important in decreasing the latency caused by the TLS handshake. By serving your content from a content delivery network (CDN) you reduce the latency cost of each roundtrip between the client and the server, because the physical distance is shorter. A CDN allows you to terminate the TLS connection closer to the user.
4. OCSP stapling
OCSP stapling is an alternative to the original Online Certificate Status Protocol (OCSP) for determining whether an SSL certificate is valid. With stapling, the web server queries the OCSP responder itself and caches the response, presenting it to clients along with the certificate. This eliminates the need for the client to contact the certificate authority, saving another request.
OCSP stapling is automatically enabled when you serve content with KeyCDN over HTTPS. See our guide on how to enable OCSP stapling on your server.
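For an origin running Nginx like our test server, a minimal sketch of enabling OCSP stapling might look like this. The chain path and resolver addresses are placeholders you would adjust for your own setup:

```nginx
# Hypothetical example: enable OCSP stapling in the TLS server block.
ssl_stapling on;
ssl_stapling_verify on;
# Intermediate/root chain used to verify the stapled OCSP response.
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;
# DNS resolver Nginx uses to reach the OCSP responder.
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
```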
5. HTTP/2
And of course we have HTTP/2, the second major version of the HTTP protocol and the first major update since HTTP/1.1. Benefits and features of HTTP/2 include:
- Binary: As opposed to HTTP/1.1, which is textual.
- Multiplexing: Allowing multiple requests and responses to be sent at the same time.
- Header compression: Headers are compressed using a new algorithm (HPACK), which reduces the amount of data transferred.
- One Connection: Allows a client to use just one connection per origin.
- Server Push: Avoids delays by pushing responses the server thinks the client will need into its cache.
- RTT: Reduces additional roundtrip times, making your website load faster without any further optimization.
- ALPN extension: Allows faster encrypted connections since the application protocol is negotiated during the initial TLS handshake.
- Head-of-line blocking: Addresses the head-of-line blocking problem in HTTP/1.1.
Are you running over HTTP/2 yet? Use our free HTTP/2 Test tool to check your web host and CDN provider.
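On a stack like our case study's (Nginx 1.9.5 or newer built with TLS support), enabling HTTP/2 is typically a small config change. A minimal sketch, with placeholder certificate paths:

```nginx
# Hypothetical example: enable HTTP/2 on the HTTPS listener (Nginx 1.9.5+).
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```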
6. HPACK compression
Along with HTTP/2 comes the ability to use HPACK header compression if your server supports it. This can decrease header size by over 30% on average. KeyCDN has enabled HPACK compression on all of its edge servers.
7. Brotli
Another performance advantage of using HTTPS is the ability to also use Brotli compression. Brotli is a new open source compression algorithm developed by Google as an alternative to Gzip, Zopfli, and Deflate. Google's case study on Brotli has shown compression ratios up to 26% better than current methods, with less CPU usage.
Both the server and the client (browser) must support Brotli and be running over an HTTPS connection to take advantage of the smaller file sizes. Brotli compression is currently supported by the following browsers:
- Google Chrome: Chrome 50+
- Mozilla Firefox: Firefox 44+ (released January 26, 2016)
- Opera: Opera 38+
- Safari: Safari 11+
- Edge: Edge 15+
You can test whether your server supports Brotli with our free Brotli Test tool.
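If your origin runs Nginx, Brotli is typically added through the third-party ngx_brotli module rather than stock Nginx. A minimal sketch, assuming the module is compiled in or loaded dynamically (the compression level and MIME types here are assumptions):

```nginx
# Hypothetical example: enable Brotli via the third-party ngx_brotli module.
brotli on;
brotli_comp_level 6;   # 0-11; higher means smaller files but more CPU
brotli_types text/plain text/css application/javascript application/json image/svg+xml;
```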
Summary
We didn't see much of a delay from HTTPS, and in fact some of the tests were faster! So the SSL performance impact is not as significant as it used to be. The web is definitely moving in a new direction, and TLS handshakes and certificates are no longer slowing us down. As mentioned above, there are lots of ways to further improve your HTTPS performance and reduce your overhead. Of course, we always recommend running your own tests, as different setups and environments can vary.
HTTPS is here and it's here to stay. Scott Helme saw a 42% growth in HTTPS usage by the top 1 million sites in the last 6 months.
What has been your experience? We would love to hear about it.