
Zoompf's Web Performance Blog

Note: Archived Content

This is the archived version of the Zoompf blog. Since our acquisition by Rigor, all our new research and posts on web performance are being published on The Rigor Blog

Lose the Wait: HTTP Compression

 Billy Hoffman on February 10, 2012. Category: Lose The Wait

One of the ways you can improve website performance is to reduce the amount of data that must be delivered to the client. An easy way to do this is to compress content before transferring it, using HTTP compression. Despite being a surprisingly simple feature of HTTP, there are numerous challenges which must be addressed to use HTTP compression properly. These challenges are:

  1. Ensuring you are only compressing compressible content.
  2. Ensuring you are not wasting resources trying to compress uncompressible content.
  3. Selecting the correct compression scheme for your visitors.
  4. Configuring the web server properly so compressed content is sent to capable clients.

In this post, part of our Lose the Wait performance series, I will discuss each of these issues and demonstrate how to configure your web server to implement HTTP compression properly.

Compressing Compressible Things

Let’s start out easy. What should HTTP compression get applied to? The answer is simple: Any content which is not already natively compressed.

Notice I didn’t say "text resources." Text resources, like HTML, CSS, and JavaScript, certainly should be compressed because they are not natively compressed file formats. Unfortunately, most people seem to focus on only these three types of files. In fact, a quick web search shows that most of the top results for ".htaccess compress" include instructions only on compressing HTML, CSS, and JavaScript files. This just reinforces what I’ve said before: you have to be careful where your advice comes from.

Here is a list of common text resource types on the web which should be served with HTTP compression (a sample server configuration follows the list):

  • XML. XML is structured text used in standalone files (like Flash’s crossdomain.xml or Google’s sitemap.xml) or as a data format wrapper for API calls.
  • JSON. JSON is a subset of JavaScript used as a data format wrapper for API calls.
  • News feeds. Both RSS and Atom feeds are XML documents.
  • HTML Components (HTC). HTC files are a proprietary Internet Explorer feature which packages markup, style, and code information used for CSS behaviors. HTC files are often used by polyfills such as Pie or iepngfix.htc to fix various problems with IE or to backport modern functionality.
  • Plain Text. Plain text files can come in many forms, from README and LICENSE files, to Markdown files. All should be compressed.
  • Robots.txt. Robots.txt is a specific text file used to tell search engines what parts of the website to crawl. Robots.txt is often forgotten since it is not usually accessed by humans and does not appear in JavaScript-based web analytics logs. Since robots.txt is repeatedly accessed by search engine crawlers and can be quite large, it can consume large amounts of bandwidth without your knowledge.
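As a concrete example, here is a minimal Apache mod_deflate sketch covering the resource types above. It is a starting point only, and it assumes your server actually labels these resources with these MIME types (text/x-component is the usual type for HTC files):

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/plain text/css
  AddOutputFilterByType DEFLATE text/javascript application/javascript
  AddOutputFilterByType DEFLATE text/xml application/xml application/rss+xml application/atom+xml
  AddOutputFilterByType DEFLATE application/json text/x-component
</IfModule>

Despite its name, mod_deflate sends GZIP compressed responses; more on that naming confusion later in this post.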

ICO

As I said, HTTP compression isn’t just for text resources and should be applied to all non-natively compressed file formats. What do I mean by this?

As an example, let’s look at ICO files. ICO is an image format originally used for icons on Windows. The format, as it is in use today, was created over 20 years ago for Windows 3.0. Today, ICO files are used on the web as favicons for a website, usually displayed in the address bar or browser tab. While modern browsers allow other file formats besides ICO, support is not universal. Many sites continue to use ICO files as favicons for compatibility reasons.

Despite being an image format, ICO files are not natively compressed. ICO images are actually a primitive version of a BMP image, and neither the ICO nor the BMP format is natively compressed. While you can (and should) avoid using BMP images on your website, you can’t do this with ICO files. Be sure to configure your web server to serve ICO images with HTTP compression.
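Assuming Apache’s mod_deflate, this might look like the sketch below. The MIME types are an assumption: servers variously label ICO files as image/x-icon or image/vnd.microsoft.icon, so check which one yours uses:

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE image/x-icon image/vnd.microsoft.icon
</IfModule>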

SVG

SVG images are another example of an image format which is not natively compressed. SVG images are just XML documents, but they have a different MIME type and file extension. This means that while someone might remember to compress XML documents, they may forget to compress SVG documents.

You might be using SVG images on your website and not even know it. This is because of a feature of SVG, SVG fonts, which allows SVG files to contain font glyphs used to render text. These SVG images-that-are-really-fonts can be referenced in CSS using the @font-face syntax, much like an OTF or WOFF font file. Divya Manian has written a comprehensive post about the pros and cons of SVG fonts. For the purposes of this discussion, the main take-away from her post is that, until iOS 5, SVG fonts were the only type of custom font supported by iPhone, iPad, and iPod Touch.

Font support is, to put it nicely, a giant mess. Font libraries abstract this away from the web developer and serve the correct format, including SVG fonts, to the correct browser. This means your website can be using SVG without you even knowing it. Remember to serve your SVG files using HTTP compression.
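Again assuming mod_deflate, a sketch for SVG (the image/svg+xml MIME type covers standalone SVG images as well as SVG fonts):

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE image/svg+xml
</IfModule>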

Compressing Already Compressed Content

Another mistake developers make with HTTP compression is using it on content that is already natively compressed. Applying compression to something that is already compressed doesn’t help improve performance. In fact, it can hurt performance in two ways.

First, HTTP compression has a cost. The web server has to take the content, compress it, and then send it to the client. If the content cannot be compressed further, you are just wasting CPU doing a meaningless task.

Secondly, applying HTTP compression to something that’s already compressed doesn’t make it smaller. In fact, the overhead of adding headers, compression dictionaries, and checksums to the response body actually makes it bigger.
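You can see this for yourself with a few lines of Python. This is just an illustrative sketch, assuming a hypothetical logo.png in the current directory; because PNG data is already DEFLATE-compressed internally, gzipping it again only adds wrapper overhead:

import gzip

# PNG image data is already DEFLATE-compressed internally, so
# compressing the file again cannot shrink it further.
with open("logo.png", "rb") as f:  # hypothetical file name
    original = f.read()

recompressed = gzip.compress(original)

# The recompressed copy is typically slightly larger, since the GZIP
# wrapper adds its own header and checksum bytes.
print(len(original), len(recompressed))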

Do websites actually do this? Yes, and it’s more common than you would think. I used Zoompf WPO to examine Fox News, the 40th most visited website in the United States, and found that Fox News was mistakenly applying HTTP compression to PNG images.

This not only wastes CPU, but also increases the size of the PNG images delivered to Fox News visitors by a few dozen bytes.

Zoompf actually has two different checks for this issue. The first check, "Compressed Content served with HTTP compression", alerts you that you are wasting CPU time compressing something that is already compressed. The second check, "Bigger with HTTP Compression", identifies content that is actually larger when served using HTTP compression.

Both of these problems are usually the result of a configuration problem with the web server or an inline network device. Something in your environment is applying HTTP compression to all outbound content instead of only the content that should be compressed.

GZIP Vs. DEFLATE

So far, we have talked about HTTP compression as if it is an opaque or atomic feature. But that is not the case. HTTP simply defines a mechanism for a web client and web server to agree on a compression scheme that can be used to transmit content. This is accomplished using the Accept-Encoding and Content-Encoding headers. There are two commonly used HTTP compression schemes on the web today: DEFLATE and GZIP.
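The negotiation looks something like this (an illustrative exchange, with the host name made up):

GET /style.css HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip

The client lists the schemes it understands in Accept-Encoding, and the server labels whichever one it actually used in Content-Encoding.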

DEFLATE is a patent-free compression algorithm for lossless data compression. There are numerous open source implementations of the algorithm. The standard implementation library most people use is zlib. The zlib library provides functions for compressing and decompressing data using DEFLATE/INFLATE. The zlib library also provides a data format, confusingly named zlib, which wraps DEFLATE compressed data with a header and a checksum.

GZIP is another compression library which compresses data using DEFLATE. In fact, most implementations of GZIP actually use the zlib library internally to conduct DEFLATE/INFLATE compression operations. GZIP produces its own data format, confusingly named GZIP, which wraps DEFLATE compressed data with a header and a checksum.

Unfortunately, the HTTP/1.1 RFC does a poor job of describing the allowable compression schemes for the Accept-Encoding and Content-Encoding headers. It defines Content-Encoding: gzip to mean that the response body is composed of the GZIP data format (GZIP headers, deflated data, and a checksum). It also defines Content-Encoding: deflate but, despite its name, this does not mean the response body is a raw block of DEFLATE compressed data. According to RFC 2616, Content-Encoding: deflate means the response body is:

[the] "zlib" format defined in RFC 1950 [31] in combination with the "deflate" compression mechanism described in RFC 1951 [29].

So, despite its name, Content-Encoding: deflate actually means the response body is composed of the zlib format (a zlib header, deflated data, and a checksum).
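The three byte streams are easy to tell apart with Python’s zlib and gzip modules. This is just an illustrative sketch of the three formats in play, not anything a server would run:

import gzip
import zlib

data = b"Hello, compression!" * 100

# Content-Encoding: gzip -- the GZIP format: header, deflated data, CRC-32
gz = gzip.compress(data)
print(gz[:2])  # b'\x1f\x8b', the GZIP magic bytes

# Content-Encoding: deflate, per the RFC -- the zlib format: a 2-byte
# header, deflated data, and an Adler-32 checksum (NOT raw DEFLATE)
zl = zlib.compress(data)
print(zl[:1])  # b'x', the usual first byte of a zlib header

# What early IIS actually sent: raw DEFLATE data with no wrapper at all
co = zlib.compressobj(wbits=-zlib.MAX_WBITS)
raw = co.compress(data) + co.flush()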

This "deflate the identifier doesn’t mean raw DEFLATE compressed data" idea was rather confusing. Early versions of Microsoft’s IIS web server was programmed to return raw DEFLATE compressed data for Accept-Encoding: deflate requests instead of a zlib formatted response. And naturally versions of Internet Explorer at the time expected responses with a Content-Encoding: deflate header to have raw DEFLATE response bodies.

As Mark Adler, one of the authors of zlib, explains in this StackOverflow thread:

However early Microsoft servers would incorrectly deliver raw deflate for "Deflate" (i.e. just RFC 1951 data without the zlib RFC 1950 wrapper). This caused problems, browsers had to try it both ways, and in the end it was simply more reliable to only use GZIP.

As Mark says, browsers receiving Content-Encoding: deflate had to handle two possible situations: the response body is raw DEFLATE data, or the response body is zlib wrapped DEFLATE. So, how well do modern browsers handle raw DEFLATE or zlib wrapped DEFLATE responses? Verve Studios put together a test suite and tested a huge number of browsers. The results are not good.

All those fractional results in the table mean the browser handled raw-DEFLATE or zlib-wrapped-DEFLATE inconsistently, which is really another way of saying "It’s broken and doesn’t work reliably." This seems to be a tricky bug that browser creators keep re-introducing into their products. Safari 5.0.2? No problem. Safari 5.0.3? Complete failure. Safari 5.0.4? No problem. Safari 5.0.5? Inconsistent and broken.

Sending raw DEFLATE data is just not a good idea. As Mark says "[it’s] simply more reliable to only use GZIP."

It should also be noted that all browsers that support DEFLATE also support GZIP, but not all browsers that support GZIP support DEFLATE. Some browsers, such as the Android browser, don’t include deflate in their Accept-Encoding request header. Since you are going to have to configure your web server to use GZIP anyway, you might as well avoid the whole mess with Content-Encoding: deflate.

Luckily, avoiding DEFLATE isn’t all that difficult.

The Apache module which handles all HTTP compression is mod_deflate. Despite its name, mod_deflate does not support DEFLATE at all. It’s impossible to get a stock version of Apache 2 to send either raw DEFLATE or zlib wrapped DEFLATE. Nginx, like Apache, does not support DEFLATE at all; it will only send GZIP compressed responses. Sending an Accept-Encoding: deflate request header will result in an uncompressed response.
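In Nginx, a minimal GZIP setup might look like the sketch below. The gzip_types list is an assumption to adapt to your own content; Nginx compresses text/html by default:

gzip on;
gzip_types text/plain text/css application/javascript application/json
           application/xml application/rss+xml image/svg+xml image/x-icon;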

Microsoft’s IIS web server can send both GZIP and DEFLATE responses, and you can enable or disable each scheme individually. For IIS6, you can edit the metabase to disable DEFLATE support. For IIS7, you can disable DEFLATE support by editing the DEFLATE compression scheme section of the <httpCompression> element in the various IIS7 configuration files.

Both Zoompf’s free and commercial products have a check built-in, “Obsolete Compression Format”, which will detect if your web server is sending content compressed with DEFLATE.

Netscape 4 and Internet Explorer 6 Are Screwing You. Again.

So by now you should have your web server configured to:

  1. Properly compress what needs to be compressed.
  2. Avoid compressing already compressed content.
  3. Use only GZIP.

Now you need to ensure that your configuration is not actually excluding perfectly capable browsers.

While HTTP compression is a mature feature today, there were some problems early on. Netscape 4 only supported HTTP compression for HTML documents, even though it sent an Accept-Encoding: deflate, gzip header with all requests. Serving it HTTP compressed CSS or JavaScript documents would make it crash. For reasons that aren’t quite clear, the developers of Apache decided to address this client-side bug with a server-side fix. They added the following seemingly harmless line into the Apache configuration file:

BrowserMatch ^Mozilla/4 gzip-only-text/html

Any browser calling itself Mozilla/4 would only receive HTTP compressed HTML files. Since Apache was and is the most popular web server on the Internet, this caused enormous problems which still affect us today.

First of all, this was the middle of the browser wars and Internet Explorer 4, Internet Explorer 5 and even Internet Explorer 6 all identified themselves as Mozilla/4 in their User-Agent strings. But these browsers could accept HTTP compression for non-HTML responses. Trying to patch around one buggy browser caused another to be slow! Since IE6 would ultimately achieve over 95% market share, it was a problem that IE6 would download webpages more slowly from Apache than from other web servers. To resolve this, the Apache developers were forced to add another configuration directive:

BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html

This line means: if the User-Agent has MSIE in it, then turn off the no-gzip and gzip-only-text/html options, thereby instructing Apache to use HTTP compression for all responses if IE asked for it. And all was good, until it wasn’t.

You see, IE6 on Windows XP also had multiple problems with HTTP compression. Most of these issues dealt with compressed CSS or JavaScript files being cached in compressed form and then read back from the cache as if they were not HTTP compressed. So again another Mozilla/4 browser had problems with compression, and so again the Apache developers had to "fix" the issue with another configuration directive:

BrowserMatch \bMSIE\s6 gzip-only-text/html

This directive instructed the web server to only send compressed content for HTML responses if the browser was IE6. While this dealt with the majority of the issues, some of these bugs caused such extreme edge-case problems that, for reliability reasons, larger sites would disable HTTP compression for IE6 entirely:

BrowserMatch \bMSIE\s6 no-gzip

Eventually Microsoft fixed these issues with hot fixes and, comprehensively, with Windows XP Service Pack 2. But this created a fragmentation problem, where some IE6 browsers could handle HTTP compression for all content, and some could not. Another rule was added in an attempt to serve compressed content to IE6 browsers that had SP2 installed. This was done by looking for the poorly named SV1 identifier in IE6’s User-Agent string:

BrowserMatch "^Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1" !no-GZIP !GZIP-only-text/html

This chain of "deny this, but not this, unless it’s this, but not if it is also this" directives made configuring a web server to properly serve compressed documents to the appropriate browsers difficult and prone to error. Since these bug/solution cycles happened numerous times over several years, these configuration directives mutated. Blog posts from 2004 would tell you to do one thing and blog posts from 2006 would say another. Much like a child’s game of telephone, shortcomings, errors, missing edge cases, and missing corner cases were magnified as people reused old configuration files and shared the "correct" advice. Even today, many of the top Google search results for configuring HTTP compression for Apache using mod_deflate contain different and incorrect directives.

As I wrote in Advice on Trusting Advice, it all comes down to where you get your advice from. Follow the advice on this top search result and IE9+ gets no compression at all. Follow the advice on this top search result and IE6 gets no compression at all. Follow the advice from this search result and no version of IE will get anything using HTTP compression, except for IE7. Follow advice from IBM, and no version of IE will ever get a non-HTML file using HTTP compression.

Depending on which directives were used, and how the match criteria were configured, you could end up with several possible scenarios:

  • HTTP compression is completely disabled for all Mozilla/4 browsers.
  • HTTP compression is completely disabled for IE6.
  • HTTP compression is completely disabled for IE6, except SV1.
  • HTTP compression is completely disabled for all versions of IE.
  • HTTP compression is completely disabled for all versions of IE, except IE6 (so no compression for IE > 6).
  • HTTP compression for non-HTML files is disabled for all Mozilla/4 browsers.
  • HTTP compression for non-HTML files is disabled for IE6.
  • HTTP compression for non-HTML files is disabled for IE6, except SV1.
  • HTTP compression for non-HTML files is disabled for all versions of IE.
  • HTTP compression for non-HTML files is disabled for all versions of IE, except IE6 (so no compression for IE > 6).

Apache makes it quite easy to mess this up. Nginx is much easier. It completely ignores the old Netscape 4 browsers and does not attempt to work around them. It also has a very simple mechanism to avoid sending compressed content to bad versions of IE6. You don’t need to manually define "this is good" and "this is bad" regexes, which helps you avoid making a mistake.
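That mechanism is the gzip_disable directive, which matches a regex against the User-Agent and supports a special "msie6" mask for the buggy pre-SP2 IE6 builds. A one-line sketch:

gzip_disable "msie6";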

In practice, you should not even try to work around these problematic browsers. The problem browsers have all been updated or patched. Even the most recent of the affected browsers, IE6, was fixed nearly a decade ago. Even on platforms that are no longer supported, this issue has been fixed. You should review your configuration file and remove any browser filtering code used for HTTP compression.

Hopefully this section has also taught you that fixing a client-side bug with a server-side fix is rarely a good or sustainable idea. As I discussed in The Big Performance Improvement in IE9 No One is Talking About, this approach of using the User-Agent as a factor in content generation forced the widespread use of the Vary: User-Agent header. The Vary header used in this manner effectively nullifies shared caching, which reduces the overall performance of the web.

Extension Vs. MIME Type

It is important to review how your web server is configured to compress content. Most web servers allow you to specify a list of file extensions to compress, a list of MIME types to compress, or both. Be careful to review this list.

Let’s say you have configured your application to serve text/javascript responses using compression. Are you sure that’s the only MIME type your application uses when serving JavaScript files? What about text/x-javascript or application/x-javascript or application/javascript? What MIME type does your API serve for JSON responses? text/json? application/json? Something else? How about HTML? Are all of your HTML files using text/html? Do you have some sections from the XHTML days which use other MIME types like application/xhtml+xml or text/xhtml or application/xhtml? Is all of the markup generated by your application served using a single and consistent MIME type? And let’s not forget about the code you didn’t write. What MIME type does that opaque charting library use to send data to the client? Or that auto-completing textbox widget you got from GitHub?
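To be safe, your configuration needs to enumerate every variant your application might emit. A hedged Apache sketch covering the JavaScript and JSON spellings mentioned above:

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/javascript application/javascript \
                                application/x-javascript text/x-javascript \
                                text/json application/json
</IfModule>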

If you are configuring the web server to compress content by file extension, did you get all of them? .htm or .html, or is it something else? What about your 404 handler? Say a request comes in for the non-existent file /foo/bar.jpg. Since the .jpg extension is not explicitly defined as something that should be compressed (or, being an image, is explicitly defined as something that should not be compressed), the 404 response isn’t sent with compression.

Care must be taken when configuring your web server to ensure that uncompressed content is not slipping through due to a missing file extension or MIME type declaration.

Properly Configuring HTTP Compression

So, given all these challenges, how should you go about configuring HTTP compression properly?

To see where you might have made a mistake configuring your server, you need something to compare it to. I am a big fan of the .htaccess file from the HTML5 Boilerplate Project. This is an Apache configuration file specifically crafted for web performance optimizations. It provides a great starting point for implementing HTTP compression properly. It also serves as a nice guide to compare against an existing web server configuration to verify you are following best practices. At the very least, the HTML5 Boilerplate .htaccess file provides a comprehensive list of common web content which should or should not get served using HTTP compression.

Getting a good starting point is only half the battle. The configuration for HTTP compression on a web server only works when it matches the application running on that server. Even the HTML5 Boilerplate configuration file can fail you if there is a discrepancy between the file extensions and MIME types in the configuration file and those used by your application. It’s easy to forget or overlook a MIME type or a file extension that your application uses. To ensure your application matches your configuration, the best thing to do is carefully review:

  1. How is your web server configured to map MIME types to content or file extensions?
  2. How is your web server configured to compress content relative to those MIME types or extensions?
  3. How are your application’s filenames and extensions structured?
  4. How does your application change or override a response’s MIME type?
  5. What third party libraries use MIME types?

Once you think you have properly configured the web server, you need to validate it. Web Sniffer is a great, free, web-based tool that lets you make individual HTTP requests and see the responses. Web Sniffer gives you some control over the User-Agent and Accept-Encoding headers to ensure that compressed content is delivered properly. Hurl is another web-based HTTP tool you can use. It allows for more control than Web Sniffer, but requires you to manually enter more information to get the same results.

Hurl and Web Sniffer only test a single page at a time. Zoompf’s free scan and Zoompf WPO can scan multiple pages to verify no uncompressed content is slipping through.
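If you’d rather script the check yourself, here is a minimal Python sketch along the same lines. The URL is a placeholder; point it at your own pages:

import urllib.request

# Ask for gzip explicitly, then inspect what the server actually did.
req = urllib.request.Request(
    "https://www.example.com/",  # hypothetical URL
    headers={"Accept-Encoding": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    encoding = resp.headers.get("Content-Encoding")
    print(encoding or "no compression applied")  # expect "gzip"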

Conclusions

As this post shows, there are many challenges which must be overcome to properly configure HTTP compression. Make sure all non-natively compressed content is served using HTTP compression. Don’t waste load time, CPU cycles, and bandwidth compressing content that is already compressed. Only use GZIP compression, to ensure compatibility. Don’t try to work around old browsers, since it is easy to make a mistake and end up not delivering compressed content to a capable browser. Review your application code and server configuration to make sure the application’s content and structure matches your HTTP compression settings. Don’t forget about compressing 404s. Finally, don’t just assume your configuration works. Use a tool to validate that it works.

Want to see what performance problems your website has? Content Served Without Compression, Compressed Content Served with Compression, Bigger With Compression, and Obsolete Compression Format are just 4 of the nearly 400 performance issues Zoompf detects when testing your web applications. You can get a free performance scan of your website now and take a look at our Zoompf WPO product at Zoompf.com today!

Comments

Have some thoughts, a comment, or some feedback? Talk to us on Twitter @zoompf or use our contact us form.