Pro tip: Improve pageload times with pngcrush

With the emphasis websites place on speed, and with many US mobile networks and residential ISPs introducing bandwidth caps and speed throttling, squeezing more performance out of every packet is paramount.

Most internet users have left the dark ages of dial-up, but web developers face new challenges with the end of net neutrality.

With major ISPs in the US moving from "unlimited access" to bandwidth caps (a practice previously exclusive to mobile phone network operators), the need to optimize remains. Nonconventional methods of providing internet access that necessitate conserving available throughput, such as satellite connections, are also becoming more widespread. While most optimization efforts rightly focus on reducing page generation time and other server-side operations, compressing static images is a simple and effective task that often falls by the wayside.

Delivering static content such as the global page design is not a particularly costly endeavor, given the plummeting cost of cloud computing amid the highly competitive price wars that started earlier this year. Even so, in aggregate, the reduced bandwidth load can shave off enough money to buy at least one cup of coffee.

The pngcrush utility is a free, open-source program that performs lossless optimization of PNG image files; the recompressed, or "crushed," image has the same quality as the original file. For designs that use a high number of images, reducing their file sizes can dramatically cut how long the website takes to load.

A quick start guide to using pngcrush

Using pngcrush is as straightforward as any command-line tool can be, with the basic usage being the following:

pngcrush [options] [infile.png] [outfile.png]
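
For example, crushing a single file with the default settings is as simple as naming an input and an output (the filenames here are placeholders):

pngcrush logo.png logo-crushed.png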

A wide variety of options exist for pngcrush, though for the purposes of this article, only the most performance-focused options will be covered. A more in-depth guide can be found in the pngcrush documentation.

Specifying input and output

For batch operations, putting all of the files you wish to transform into one folder is easier than working through each folder separately. The -d option sends the output to a designated folder (wherein F:\ is the preferred target):

pngcrush [options] -d F:\img_out F:\inputdir\*.png
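
If you would rather keep the crushed copies alongside the originals instead of using a separate output folder, pngcrush also offers an -e option, which writes each output file next to its input with the given extension substituted for .png (option support can vary between releases, so check your version's usage output):

pngcrush -e "-crushed.png" F:\inputdir\*.png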

Basic compression for every image

With the input and output locations settled, a sensible baseline is to run every image through pngcrush's lossless reduction and exhaustive compression search:

pngcrush -reduce -brute -d F:\img_out F:\inputdir\*.png

In this example, the reduce option performs lossless color-type or bit-depth reduction. Additionally, brute is the brute-force option, which performs an exhaustive search for the most efficient compression method, occasionally resulting in a smaller file size than the default search. Sufficiently fast processors will not require much additional time to process the added methods; for this article, a 3.0 GHz Phenom II N660 (in a two-year-old budget laptop) made quick work of the compression.
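
If image metadata is expendable, a few more bytes per file can often be reclaimed by stripping ancillary chunks such as text comments and timestamps. The -rem option handles this; "alla" removes all ancillary chunks except transparency:

pngcrush -reduce -brute -rem alla -d F:\img_out F:\inputdir\*.png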

Scenario 1: Using a modern image format

The homepage of Aoyama Gakuin, a private Christian educational institution in Tokyo, Japan, uses a surprisingly high total of 44 GIF images on the index of its website. GIF, a somewhat outdated image format limited to LZW compression, is not as efficient as the newer PNG standard, which was created in response to a patent claim against CompuServe over the use of the LZW algorithm. The 44 images total 79,258 bytes, a small sum compared to the scrolling JPEG-compressed photos used on the same page, but one that can add up rather quickly with a high number of page views. Of note, digital photographs are poor candidates for PNG, as lossless compression fares poorly on photographic images, which contain little redundant data.

Because pngcrush cannot convert between formats itself, IrfanView was used to quickly convert the files to PNG for processing. The conversion from GIF to PNG removed 29,172 bytes from this set of images. The first pass, with only the reduce and brute options enabled, brought the total down to 49,519 bytes, a small further difference, as the conversion feature in IrfanView is very efficient at file compression. However, all of the images were converted as full-color images.
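
As an aside, for those who prefer to stay on the command line, ImageMagick's mogrify tool can handle the same GIF-to-PNG batch conversion (assuming ImageMagick is installed and the GIF files sit in the current directory):

mogrify -format png *.gif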

20 of the 44 images in use here are grayscale, non-transparent images, typically text rendered as an image (common on Japanese websites) or gradients. The first round of compression did not encode these images as grayscale; for these, we can manually force a grayscale PNG and recompress using the following command, where -c 0 sets PNG color type 0 (grayscale):

pngcrush -c 0 -brute -d F:\ay_grey2 F:\ay_grey\*.png

With these 20 files (totaling 18,561 bytes) moved to their own folder, a batch recompression with this option cuts their size roughly in half, to 9,161 bytes.
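
Incidentally, to check which color type an image currently uses before forcing one, pngcrush can inspect a file without writing any output; in the versions I have used, combining -n (no save) with -v (verbose) prints the image header details, including the color type:

pngcrush -n -v image.png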

The verdict

Recompressing the files took under five minutes, including the time to arrange the files as needed in Windows Explorer. The original set was 79,258 bytes and was compressed to a total of 40,119 bytes, a 49.38% reduction in size. For one page, a 38-kilobyte reduction is beneficial, but the sheer number of images involved here works against the site: reducing the number of images needed for the same design by using CSS, rather than relying on images to place text, would result in a more efficient design than a drop-in replacement of compressed images.

Scenario 2: Compressing sprite tables

Using CSS and JavaScript to create controllable sprite tables — in effect reducing the number of distinct images loaded and thereby reducing HTTP GET requests — is a very efficient way of handling images on websites, something which was used to great effect in the recent redesign of Weather Underground. However, there is still room for optimization with these four sprite tables used in the new design.

These four files constitute 64,102 bytes — not a huge amount, but for a website which is, at the time of writing, the 666th most viewed website in the world, these images are loaded millions of times daily.

For the first pass with just reduce and brute enabled, the files were reduced to 48,409 bytes, a 24.48% decrease. However, moon-sprite is grayscale with an alpha channel; we can compress the already-crushed file further by forcing PNG color type 4 (grayscale with alpha):

pngcrush -c 4 -brute moon-sprite.png moon-sprite2.png

This shrinks the moon-sprite file to 3,946 bytes — 66.8% smaller than the original file size of 11,900 bytes.

The verdict

The extra attention spent on the moon-sprite file takes the group of four images down to 46,416 bytes, a 27.59% shrink in file size. As a quick (and slightly hasty) example, if these four images are downloaded 1 million times per day, the 17,686 bytes saved per load add up to roughly 16.5 GB of transferred data daily. Working from the base rate of $0.12 USD per gigabyte for data transfer on Amazon S3, the savings come to roughly $59.00 per month. Granted, this estimation does not consider caching, but for a website used as frequently as Weather Underground, it raises an important point.

Final thoughts

While paying attention to image compression is not as vital a consideration as limiting HTTP GET requests or minimizing queries to shorten page generation time, the relatively low time investment required is well worth the results gained in the strongly compressed files.

Let us know how you approach optimizing your websites in the comments section.


James Sanders is a Java programmer specializing in software as a service, thin-client design, and virtualizing legacy programs for modern hardware. James is currently a student at Wichita State University in Kansas.
