Zoompf's Web Performance Blog

Note: Archived Content

This is the archived version of the Zoompf blog. Since our acquisition by Rigor, all our new research and posts on web performance are being published on The Rigor Blog

Lose the Wait: Page Weight and Transfer Weight

 Billy Hoffman on January 9, 2012. Category: random

This is the first post in Zoompf’s Lose the Wait series, where we provide tips, tricks, guidance, and encouragement to help you lose the wait and optimize your websites for maximum speed. The wait/weight homonym isn’t just us being clever. According to the excellent HTTP Archive, the total amount of data transmitted for a webpage correlates with its page load time more strongly than any other factor.

This makes sense. The more data the browser has to download, the longer a webpage will take to load. So by losing kilobytes of page weight, you can truly lose the wait.

Measuring Page Weight

When I say “page weight” I mean the size, in bytes, of all the content which has to be downloaded to view a webpage. This means the base HTML file, any CSS, JavaScript, Images, web fonts, Flash, and anything else that needs to download to render the page. Reducing the size of this content will help us reduce page load time.
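To make the definition concrete, here is a minimal Python sketch of the first step any page weight tool must take: enumerating a page’s dependencies from its HTML. A real measurement would then fetch each URL and sum the response sizes. The markup and URLs below are invented for illustration.

```python
from html.parser import HTMLParser

# Walk an HTML document and collect the URLs of its dependencies,
# grouped by how they are used. A real tool would then fetch each
# URL and sum the response sizes to get the total page weight.
class DependencyCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.deps = {"css": [], "js": [], "images": []}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet":
            self.deps["css"].append(attrs.get("href"))
        elif tag == "script" and attrs.get("src"):
            self.deps["js"].append(attrs["src"])
        elif tag == "img" and attrs.get("src"):
            self.deps["images"].append(attrs["src"])

html = """<html><head>
<link rel="stylesheet" href="/css/site.css">
<script src="/js/app.js"></script>
</head><body><img src="/img/logo.png"></body></html>"""

collector = DependencyCollector()
collector.feed(html)
for usage, urls in collector.deps.items():
    print(usage, urls)
```

Note this sketch ignores CSS background images, web fonts, and resources added by JavaScript, which is exactly why browser-integrated tools (discussed below) give a more complete picture.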

Of course, you cannot improve what you can’t measure. Having a quick, clear, and reliable way to calculate and visualize page weight is an important first step on our optimization path. Surprisingly, most performance tools don’t do this very well. WebPageTest and Zoompf WPO will tell you basic information like total page weight, but they do this as a byproduct of their performance analysis. This means you have to wait 30 seconds or more while they scan your website just to get page weight information. YSlow‘s “Statistics” tab is better, since it provides content size grouped by resource type in addition to total page weight. However, you still have to run a YSlow analysis, which can be slow (don’t use YSlow’s autorun feature; it pretty much ruins Firefox’s performance).

We need something fast that lets us dive even deeper into page weight data. For that we can use the super awesome View Dependencies add-on for Firefox.

View Dependencies adds a “Dependencies” tab to Firefox’s native Page Info dialog. It shows the size of each piece of web content and groups resources by usage, not by content type as YSlow does. This is very helpful: images used as CSS backgrounds are separated from regular images, which helps you track down resources that aren’t referenced properly or aren’t served from a static, cookieless domain.

Groups of responses are then organized by the hostname that served the response, allowing you to quickly identify servers which are bloated and need further investigation. View Dependencies also rolls up the number of requests for a specific resource type or requests to a specific host.

Best of all, View Dependencies is fast. Since it hooks into Firefox’s existing Page Info dialog, it doesn’t need to scan or re-request any resources. The only thing separating you from this wealth of data is how quickly you can get that dialog box open! Sadly, modern Firefox no longer has a native keyboard shortcut for Page Info the way it has CTRL+U for View Source. The easiest way to get to the Page Info dialog is to right-click on the current webpage and select “View Page Info” from the context menu. There is also an add-on, Page Info Button, which places a button on Firefox’s toolbar to open the Page Info dialog and also provides a CTRL+I keyboard shortcut.

The Often Forgotten Transfer Weight

Page weight isn’t the only thing we have to worry about. The bytes of the response body aren’t the only bytes sent to a visitor’s browser: HTTP itself adds weight in the form of request and response headers. HTTP response headers appear on every response, even empty responses, and they are verbose plain text which cannot be compressed the way response bodies can. HTTP header weight can really add up. Artur Bergman, of Wikia and Fastly fame, recently tweeted that they send nearly a terabyte of HTTP headers every day.
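To get a feel for the numbers, here is a back-of-the-envelope Python sketch. The header set below is hypothetical but typical of a dynamically generated page; the requests-per-day figure is likewise invented for illustration.

```python
# Rough estimate of the transfer weight that HTTP response headers add.
# This header set is hypothetical but typical of a dynamic page.
headers = {
    "Date": "Mon, 09 Jan 2012 12:00:00 GMT",
    "Server": "Apache/2.2.21",
    "Cache-Control": "max-age=3600",
    "Content-Type": "text/html; charset=UTF-8",
    "Content-Length": "5120",
    "Set-Cookie": "session=abc123; path=/; HttpOnly",
}

# On the wire, each header is "Name: value\r\n", plus the status line
# and the blank line that ends the header block.
status_line = "HTTP/1.1 200 OK\r\n"
header_bytes = (
    len(status_line)
    + sum(len(f"{name}: {value}\r\n") for name, value in headers.items())
    + len("\r\n")
)

print(f"{header_bytes} bytes of headers per response")
# Scale up: at a hypothetical 100 million responses/day, this overhead
# amounts to many gigabytes that carry no page content at all.
print(f"~{header_bytes * 100_000_000 / 1e9:.1f} GB of headers per day")
```

A couple hundred bytes per response sounds trivial until you multiply it by every request a busy site serves.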

Clearly our Lose the Wait fitness plan will need to examine how we can lose some of this transfer weight.

Page Weight/Transfer Weight Fitness Plan

Over the next several posts, Zoompf plans to explore in great technical detail ways to reduce both response weight and transfer weight. Think of it as your fitness plan to reduce your wait by losing some of the weight.

To some people that might sound very boring or redundant or even too elementary. You might be thinking: “OK. This is basic stuff. I already know how to reduce the size of content. A little HTTP compression, some minification, a bit of Closure compiler, some Smush.it, and I’m done”.

While this might be true, we often find that there are aspects of even the most basic optimizations that people overlook or only think they already know. For example, most people know that HTTP compression should be applied to text resources. But people often forget that “text resources” means more than just HTML, CSS, and JavaScript. XML and JSON API responses, robots.txt files, sitemaps, and HTML5 manifest files are all examples of text content that should be served with HTTP compression but usually isn’t.

In fact, saying HTTP compression should apply to “text resources” isn’t the entire picture either. HTTP compression should apply to any resource that isn’t natively compressed. SVG and ICO files are good examples of this. They are images, so people assume they don’t need to be compressed. But SVG and ICO are not natively compressed formats: SVG is just XML, and ICO is a primitive form of the uncompressed BMP file format. Both should be served with HTTP compression. Cursor files are another example, and various other binary formats are not natively compressed either.
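A quick sketch shows why this matters: since SVG is plain XML, gzip shrinks it dramatically. The tiny SVG below is invented for illustration; real SVG files with long runs of path data often compress even better.

```python
import gzip

# SVG is just XML text, so it compresses well with gzip even though
# it is an "image" format. (Hypothetical SVG, for illustration only.)
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    + '<rect x="0" y="0" width="10" height="10" fill="#336699"/>' * 20
    + "</svg>"
)

raw = svg.encode("utf-8")
compressed = gzip.compress(raw)

print(f"uncompressed: {len(raw)} bytes")
print(f"gzip:         {len(compressed)} bytes")
print(f"savings:      {100 - 100 * len(compressed) // len(raw)}%")
```

The same experiment works for ICO, cursor files, or any other format that stores its bytes uncompressed: if gzip shrinks it substantially, it is a candidate for HTTP compression.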

To ensure that you are fully optimizing your site, Zoompf is publishing a series of posts exploring different ways to reduce page weight and transfer weight as part of a Lose the Wait fitness plan. Below is a current snapshot of the topics list we plan to write about over the next few weeks:

  • HTTP compression (and doing it right)
  • Optimizing HTTP headers
  • HTML optimization (removing and refactoring structure)
  • HTML minification
  • CSS minification (and why all the tools suck)
  • JavaScript compilation and minification
  • Traditional lossless image optimization
  • Intelligent lossy image optimization

Make sure to follow Zoompf on Twitter and subscribe to the Lickity Split Blog RSS feed so you don’t miss a post in our Lose the Wait series.


Have some thoughts, a comment, or some feedback? Talk to us on Twitter @zoompf or use our contact us form.