Zoompf's Web Performance Blog

Note: Archived Content

This is the archived version of the Zoompf blog. Since our acquisition by Rigor, all our new research and posts on web performance are being published on The Rigor Blog

Too Chunky: Performance and HTTP Chunked Encoding

 Billy Hoffman on May 15, 2012. Category: Optimization

While debugging a customer issue this weekend, I uncovered a problem with chunked encoding in general, and ASP.NET in particular, that can reduce your website’s performance.

Let’s start with some background.

Digicure is a web security and performance services company in Denmark. They are also a Zoompf customer. At the end of last week, they contacted Zoompf support to tell us that some of our Zoompf WPO pages were timing out. Zoompf WPO is our web performance scanner delivered as a SaaS. Users log in to the web interface to conduct performance scans, review scan results, and generate reports. Zoompf WPO’s web interface is written in ASP.NET, largely because our performance scanner is written in C#.

By default, ASP.NET does not use chunked encoding. When an HTML page is being dynamically generated, ASP.NET buffers all of the output, and sends all of the content at once. This response includes a Content-Length header because the entire response is created before being delivered to the client, so the web server knows how long it is. This is called the Store-and-Forward approach.

Store-and-Forward Vs. Chunked

Store-and-Forward is not necessarily bad. In fact, it’s how HTTP/1.0 transmits dynamic responses when Connection: Keep-Alive is used. But Store-and-Forward does not create the ideal user experience, because no content is sent until the application tier has finished generating the markup. The user sees nothing, and more importantly, the web browser doesn’t have any HTML yet, so it cannot start downloading other resources like CSS or JavaScript files while the HTML loads.

A better approach is chunked encoding. Chunked encoding was added in HTTP/1.1 and allows the web server to stream content to the client without having to know how large the content is ahead of time or having to close the connection when it’s done. We can see how the chunked encoding approach compares to Store-and-Forward in the figure below:

Chunked encoding is great, because the user starts getting content almost immediately. The application is faster because the browser can start to download other resources while the HTML is still being generated and streamed to the client.
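The chunked framing itself is simple. Here is a short sketch in Python (illustrative only, not the actual server code) of how a chunked body is laid out on the wire: each chunk is its data length in hexadecimal, a CRLF, the data, and another CRLF, with a final zero-length chunk marking the end of the body:

```python
def encode_chunked(pieces):
    """Frame an iterable of byte strings as an HTTP/1.1 chunked body."""
    body = b""
    for piece in pieces:
        # hex length, CRLF, data, CRLF -- one chunk per piece
        body += b"%x\r\n%s\r\n" % (len(piece), piece)
    body += b"0\r\n\r\n"  # zero-length chunk terminates the body
    return body

print(encode_chunked([b"<li>", b"</li>"]))
# b'4\r\n<li>\r\n5\r\n</li>\r\n0\r\n\r\n'
```

Note that the lengths are hexadecimal, which is why the sizes in the Wireshark capture later in this post read “12”, “17”, “1a”, and so on.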

Since chunked encoding can improve performance, I looked for appropriate places in Zoompf WPO’s web interface to use it. There are a few areas of WPO where generating the HTML can take a long time. One is when WPO is generating the list of affected URLs for a specific performance issue. This list can be thousands of items long. Streaming this HTML to the client using chunked encoding provides a better user experience than completely generating the HTML and then delivering the page. So I disabled output buffering for areas like that, letting ASP.NET use chunked encoding to send pieces of HTML to a visitor as they are generated. I thought everything was fine.

Too Chunky

That is, until I heard from Digicure. They experienced pages which loaded slowly and would sometimes time out. Specifically, lists of affected URLs were appearing very slowly. I did not see this behavior when I tested the application, and no other customers had this issue. A quick check showed the web server and database were not under excessive load. Network checks showed there was plenty of available bandwidth to transmit the data quickly.

I decided to look at the traffic the web server was actually sending. However, I did not use an HTTP proxy, because I wanted to make sure that measuring what was happening did not change what was happening. Instead I used Wireshark to capture the HTTP traffic between the browser and web server while fetching the slow pages. Here is what I saw:

<a name="affected"><h2>Affected URLs</h2></a>

12 <div class="nb">
17 <ul class="url_list">
4 <li>
9 <a href="
34 ShowResponse.aspx?scan=7519&amp;got=96&amp;check=300
2 ">
1a http://XXX.XXXXX.XXX/de/de
6 </a>
7 </li>
4 <li>
9 <a href="
36 ShowResponse.aspx?scan=7519&amp;got=1659&amp;check=300
2 ">

That shows some of the response body, encoded into chunks. The problem is that each chunk is really small. As in, just a few bytes small. And there are so many chunks. Way too many. Then I noticed, with a sinking feeling, that the way the content was divided into chunks looked familiar. Oh crap, I know what the problem is. I went and looked at the source code generating this list:

if (!alreadyShown)
{
    fout.WriteLine("<div class=\"nb\">");
    fout.WriteLine("<ul class=\"url_list\">");
    foreach (IBasicItemInfo info in infos)
        HtmlUtils.RenderInternalLink(fout,
            "ShowResponse.aspx?scan=" + scanID + "&got=" + info.ID + "&check=" + issueID,
            StringUtils.Truncate(info.Name, 256));
}

See the problem? With output buffering disabled, any time the application writes bytes to the response, those bytes are immediately sent to the client as a chunk. Even if you are just writing a simple <li> tag! This HTML response is around 300 kilobytes, and ASP.NET was streaming it just a few bytes at a time. A really big page sending all those chunks over a high-latency connection, like the one to Digicure, is going to be slow.

There is another performance problem with overly “chunky” responses. Chunked encoding adds overhead. For each chunk, there are a few bytes to represent the length of the chunk, and then 4 bytes for two CRLF sequences. For small chunks, like a single <li> tag, the overhead of the chunk is larger than the data in the chunk! For some pages, Zoompf WPO was sending 75 kilobytes of chunked encoding overhead to transmit 300 kilobytes of data!
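The arithmetic is easy to sketch, assuming minimal framing per chunk (the hexadecimal length digits plus two CRLFs):

```python
def chunk_overhead(data_len):
    """Framing bytes for one chunk: hex length digits + 2 CRLFs."""
    return len("%x" % data_len) + 4

# A 4-byte chunk like "<li>" carries 5 bytes of framing for 4 bytes of data.
print(chunk_overhead(4))  # 5

# Roughly 15,000 tiny chunks at ~5 framing bytes each is on the order of
# the 75 KB of overhead mentioned above; 8 KB chunks cost almost nothing.
tiny_total  = 15_000 * 5
large_total = (300 * 1024 // 8192 + 1) * chunk_overhead(8192)
print(tiny_total, large_total)  # ~75 KB vs. ~300 bytes
```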

Control over this is very limited in ASP.NET. In fact, it seems to be Boolean: either output buffering is enabled, and no content is sent to the client until everything is generated, or output buffering is disabled, and every function call to write results in a chunk. There is no middle ground. I’m not an expert in IIS or ASP.NET, and it’s possible I’m missing something, but Google queries returned no useful information about this.

In the end, I kept buffered output disabled and implemented my own output buffering. Now, calls to fout.Write() or fout.WriteLine() go to my class instead, which buffers data into 8 kilobyte chunks before sending it to the client. Ideally, I would adjust the size of the buffer dynamically, based on how large I expect the response to be. This is feasible in WPO, since I know how many links will be in the affected URLs list, but it isn’t always possible.
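The actual wrapper is a C# class, but the idea can be sketched in a few lines of Python (the name BufferedChunkWriter and the 8 KB default are illustrative, not the real WPO code): writes accumulate in memory and only reach the underlying stream once the buffer fills, so each flush becomes one large chunk instead of many tiny ones.

```python
import io

class BufferedChunkWriter:
    """Accumulate writes and flush to the underlying stream in large blocks."""

    def __init__(self, stream, buffer_size=8192):
        self.stream = stream
        self.buffer_size = buffer_size
        self.buffer = bytearray()

    def write(self, data):
        self.buffer += data
        if len(self.buffer) >= self.buffer_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # One write to the stream -> one chunk on the wire
            self.stream.write(bytes(self.buffer))
            self.buffer.clear()

out = io.BytesIO()
w = BufferedChunkWriter(out)
for _ in range(1000):
    w.write(b"<li>item</li>")  # 13 bytes each; buffered, not sent per call
w.flush()                       # send whatever remains at the end
print(len(out.getvalue()))      # 13000
```

A remaining subtlety is remembering that final flush(); without it, the tail of the response stays in the buffer and is never sent.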

Testing Your Website for “Chunky-ness”

This isn’t exclusively an ASP.NET problem. At the end of the day, the slow pages Digicure experienced were caused by the response being sliced into too many chunks, each of which was very small. ASP.NET just exacerbated the problem by chunking every call to Write or WriteLine. This problem could exist in other frameworks, and can appear in any application tier that flushes its output excessively, resulting in a large number of small chunks.

How do you detect if a website has this problem? That’s a little trickier. As I said, most web traffic tools like proxies or browser plug-ins normalize the traffic before showing it to you: they uncompress it, de-chunk it, or normalize HTTP headers. You can’t count the chunks if you can’t see them. I suggest using Wireshark, which captures all of the traffic exactly as it is transmitted and received by a network interface on your computer. (Note that on Windows, Wireshark cannot capture traffic for localhost.)

Here is how you can see if a webpage is “too chunky”:

  1. Load the page you want to test in your web browser.
  2. Select an interface in Wireshark and start a capture.
  3. Reload the page in the browser window.
  4. Stop the capture in Wireshark and enter “http” into the filter to exclude everything except web traffic.
  5. Find the HTTP request, right click, and select “Follow TCP Stream”, as shown below:

If you see a large number of chunks, or very small chunks, examine your application code to see if you are flushing the output stream excessively, such as inside a loop.


Chunked encoding is awesome. It allows us to transmit variable-length content and use persistent connections. However, each chunk has overhead, and you need to ensure your application isn’t too chunky.

I wish I could tell you that testing for “chunky-ness” is one of the 400 checks Zoompf can run against your website. Unfortunately, that’s not currently the case. The HTTP library we wrote doesn’t currently expose chunk information, so we cannot detect this. Hopefully that is something we can change in the near future. Until then, use Wireshark to test your website, and Zoompf’s free or paid offerings to find other types of performance problems.


Have some thoughts, a comment, or some feedback? Talk to us on Twitter @zoompf or use our contact us form.