Using efficient markup helps keep bandwidth usage down and makes pages load faster. If that isn’t enough and you need to reduce document size even further, server-side compression is worth taking a look at.
As the visitor count here has increased, my monthly bandwidth usage has gone up a lot. It’s not yet at the point where my host will start complaining, but to delay reaching the bandwidth limit I started looking at ways to enable on-the-fly compression of all documents except images.
Doing so turned out to be pretty easy, since this site is hosted on a server running Apache 2.0, which includes the mod_deflate module. To enable the
DEFLATE output filter provided by mod_deflate, I just used the following example from the Apache website (after kindly asking my host to recompile Apache with mod_deflate enabled):
# Insert filter
SetOutputFilter DEFLATE
# Netscape 4.x has some problems...
BrowserMatch ^Mozilla/4 gzip-only-text/html
# Netscape 4.06-4.08 have some more problems
BrowserMatch ^Mozilla/4\.0[678] no-gzip
# MSIE masquerades as Netscape, but it is fine
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
# Don't compress images
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png)$ no-gzip dont-vary
# Make sure proxies don't deliver the wrong content
Header append Vary User-Agent env=!dont-vary
If I make other file types available that won’t benefit from compression, such as additional image formats, movies, or compressed archives, I’ll just add their extensions to the list of exclusions.
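As a sketch of what that might look like, the exclusion pattern above could be extended along these lines (the exact set of extensions here is just an assumption about file types that are already compressed):

```apache
# Don't compress already-compressed formats: images, video, archives
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png|mpe?g|avi|mov|zip|t?gz|bz2)$ no-gzip dont-vary
```

Compressing these again wastes CPU time for little or no size reduction, which is why they are excluded rather than run through the filter.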
Since I enabled the
DEFLATE filter, overall bandwidth usage has gone down quite a bit. I’ll have to wait a while to see the exact numbers, but it looks like most documents are now 40-50% of their uncompressed size.
I’m not sure how much CPU time the DEFLATE filter uses, but I haven’t noticed any performance problems, and my host hasn’t complained either, so it seems to be efficient enough at the current (default) setting. The compression level can be increased with the DeflateCompressionLevel directive, but that costs more CPU time, so I’ll leave it as it is for now.
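If I ever do want to trade CPU time for smaller responses, the change would be a one-line directive like the following (9 is the maximum level; the default is zlib’s own default):

```apache
# Higher value = smaller output but more CPU per request (valid range 1-9)
DeflateCompressionLevel 9
```

Whether the extra compression is worth it would depend on measuring both the response sizes and the server load before and after.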