One of the more overlooked aspects of the webmaster’s job is reviewing server logs. It’s no surprise that many people either ignore these logs, do not know about them, or do not know how to extract the value stored in them. After all, the format isn’t user friendly, and it is definitely a case of “out of sight, out of mind.” However, there are some very good reasons to look through your server’s logs. Here are five things that you can (and should) look for in them on a regular basis.

Failing applications

Some applications do a poor job of reporting their failures to the help desk or development team. In other cases, applications crash so hard that they never get the opportunity to report the failure at all. Reviewing your logs can reveal these kinds of issues. For example, if you see the server process itself restarting on a regular basis, that is a very bad sign: something critical enough happened to bring the server down. Also look for high rates of status 500 responses in the page logs. On an IIS server, the Event Log will record unhandled exceptions, the kind that produce the “yellow screen” on the local machine and a vague error screen for other users. Compare these incident rates with what the developers already know about, to see whether there are problems they have not yet heard of.
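
To make that review concrete, here is a minimal sketch of scanning an access log for status 500 responses and tallying them per URL. The sample lines, paths, and IP addresses are invented for illustration, and the regular expression assumes the Common Log Format; adjust it to match your server’s actual log layout.

```python
import re
from collections import Counter

# Invented sample lines in Common Log Format; real logs would be read from a file.
LOG_LINES = [
    '10.0.0.1 - - [12/Mar/2010:10:00:01 -0500] "GET /checkout HTTP/1.1" 500 1234',
    '10.0.0.2 - - [12/Mar/2010:10:00:05 -0500] "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.3 - - [12/Mar/2010:10:01:09 -0500] "POST /checkout HTTP/1.1" 500 1234',
]

# Pulls the method, path, and status code out of a Common Log Format line.
CLF = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def count_errors(lines, status="500"):
    """Tally how often each path returned the given status code."""
    errors = Counter()
    for line in lines:
        m = CLF.search(line)
        if m and m.group("status") == status:
            errors[m.group("path")] += 1
    return errors

print(count_errors(LOG_LINES))  # /checkout shows up twice, worth a closer look
```

A page that shows up repeatedly in this tally, but never in a bug report, is exactly the kind of silent failure worth raising with the developers.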

Security concerns

Hackers love to pound away at Web servers to probe for problems. It’s easy and automated, and few folks take the time to review their logs to catch it. I have seen all sorts of security issues revealed in log files. A sure sign of an attack is a flurry of 404 or 500 codes from the same IP. These attacks range from the mundane (probing WebDAV to see if it is accessible and writable) to the esoteric (using a referrer string that is actually a chunk of JavaScript, so that cross-site scripting executes when you view referrers with a Web-based log analysis tool). Trying to ban hackers on a per-IP basis is futile; that said, if the suspicious activity is coming from within the firewall, you may have a problem! But seeing the nature of the attacks on your systems can help you learn how to harden them.
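
As a sketch of what spotting that “flurry” might look like, the snippet below counts 404 and 500 responses per IP address and flags any address that crosses a threshold. The addresses, counts, and threshold here are all made up for illustration; in practice the pairs would come from your parsed access log.

```python
from collections import Counter

# (ip, status) pairs as they might come out of a parsed access log.
# The addresses are from documentation ranges, purely illustrative.
REQUESTS = [
    ("203.0.113.9", 404), ("203.0.113.9", 404), ("203.0.113.9", 500),
    ("203.0.113.9", 404), ("198.51.100.4", 200), ("198.51.100.4", 404),
]

def suspicious_ips(requests, threshold=3):
    """Return IPs whose combined 404/500 count meets the threshold."""
    hits = Counter(ip for ip, status in requests if status in (404, 500))
    return {ip for ip, count in hits.items() if count >= threshold}

print(suspicious_ips(REQUESTS))  # {'203.0.113.9'}
```

The point of a report like this is not to feed a ban list, but to show you what the probes are aiming at so you know what to harden.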

Search engine optimization

Did you know that your server logs can help you with your search engine optimization (SEO) strategy? That’s right! If you’ve used a tool like Google Analytics, you know that it can identify which search engines and keywords are driving traffic to your site. That data comes from information that is also available in your logs. If your traffic analysis tool works on your logs (instead of via JavaScript, like Google Analytics and others), make sure you read the documentation on enabling referrer tracking so the tool can discover the search engines and keywords that lead people to your site. Once you have this information, you will be able to see what searchers are looking for when they find your site, which keywords attract traffic that sticks around and becomes a revenue stream, and which keywords do not make you money. From there, you can optimize your SEO and paid search campaigns to focus on the more profitable keywords first.
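
Here is a minimal sketch of that referrer parsing, assuming a small hand-built table mapping engine hostnames to the query parameter that carries the search terms. Real logs would need a much longer table, and note that many engines today strip keywords from referrers entirely, so this works best on older or self-hosted search traffic.

```python
from urllib.parse import urlparse, parse_qs

# Hand-built, illustrative table: engine hostname -> query parameter with the terms.
ENGINES = {"www.google.com": "q", "www.bing.com": "q", "search.yahoo.com": "p"}

def search_terms(referrer):
    """Return (engine, keywords) if the referrer is a known search engine, else None."""
    url = urlparse(referrer)
    param = ENGINES.get(url.netloc)
    if not param:
        return None
    terms = parse_qs(url.query).get(param, [""])[0]
    return (url.netloc, terms)

print(search_terms("http://www.google.com/search?q=server+log+analysis"))
# ('www.google.com', 'server log analysis')
```

Run over a day’s worth of referrers and tallied up, output like this tells you which queries are actually bringing people in the door.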

Revenue opportunities

Speaking of turning traffic into revenue streams, your logs are the best source for understanding how your visitors translate into money, and they can help you figure out how best to monetize your traffic. What you want to look for are trends in your clickstream data. Clickstream data shows the “paths” people take through the site, combining a unique identifier (an IP address or, preferably, a cookie or username) with the referring URLs of page views to build a tree of each user’s session. Start at the “goal,” whether that is the point at which visitors sign up for a mailing list, make a purchase, request more information, or whatever else you consider a “conversion” to be. Then work backwards. How do users become conversions? Do they follow the same path? How many clicks does it take to convert them? And so on. From there, you can figure out how to make it as easy as possible for potential customers to convert. For example, if a particular page sends lots of people down the conversion path, but that path is long, let them convert directly from that page.
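
The grouping described above can be sketched like this: bucket page hits by visitor, then keep only the sessions that end at the goal page so you can study the paths that converted. The visitor IDs and pages are invented, and a real implementation would also split sessions by time gaps rather than treating each visitor as one session.

```python
from collections import defaultdict

# Each hit is (visitor_id, path), already in time order. In practice the
# visitor ID would be a cookie or username, not these made-up labels.
HITS = [
    ("u1", "/landing"), ("u1", "/pricing"), ("u1", "/signup"),
    ("u2", "/landing"), ("u2", "/blog"),
    ("u3", "/pricing"), ("u3", "/signup"),
]

def paths_to_goal(hits, goal="/signup"):
    """Group hits into per-visitor page sequences; keep those ending at the goal."""
    sessions = defaultdict(list)
    for visitor, page in hits:
        sessions[visitor].append(page)
    return [pages for pages in sessions.values() if pages[-1] == goal]

for path in paths_to_goal(HITS):
    print(" -> ".join(path))
```

Working backwards from the goal, as the article suggests, is then just a matter of reading these sequences right to left and counting how long they are.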

Usability problems

Another great piece of information you can get from clickstream data is whether you have usability problems. What you are looking for are things like abandoned processes. For example, if checking out of your online store is a multi-step process, and you see a lot of people reaching step three but few completing it (with no errors in the logs), that is a sign that something on that page is causing problems for your visitors. Maybe it is onerous terms of service, or a request for information that users do not want to give up. It could be that the page is simply difficult to use. But until you detect and fix these problems, you could be leaving a lot of money on the table.
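
A quick way to spot that kind of abandonment is to compute the drop-off between consecutive funnel steps. The step names and visitor counts below are invented for illustration; the counts would come from tallying your clickstream data per checkout page.

```python
# Visitors seen at each checkout step, in order, as counted from the logs.
# These names and numbers are made up for illustration.
FUNNEL = [("cart", 1000), ("shipping", 620), ("payment", 580), ("confirm", 210)]

def drop_off(funnel):
    """Percentage of visitors lost at each step relative to the previous one."""
    losses = []
    for (_, prev_count), (step, count) in zip(funnel, funnel[1:]):
        losses.append((step, round(100 * (1 - count / prev_count), 1)))
    return losses

print(drop_off(FUNNEL))
# [('shipping', 38.0), ('payment', 6.5), ('confirm', 63.8)]
```

In this made-up example the large loss at the final step, with no accompanying errors, is exactly the signal the article describes: something on that page is driving visitors away.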