
6. Squid Log Files

The logs are a valuable source of information about Squid workloads and performance. The logs record not only access information, but also system configuration errors and resource consumption (e.g., memory, disk space). There are several log files maintained by Squid. Some have to be explicitly activated at compile time, others can safely be deactivated at run-time.

There are a few basic points common to all log files. Unless stated otherwise, the time stamps logged into the log files are UTC seconds. The initial time stamp usually contains a millisecond extension.

The frequent time lookups on busy caches may have a performance impact on some systems. The compile time configuration option --enable-time-hack makes Squid look up a new time only in one-second intervals. The implementation uses Unix's alarm() functionality. Note that the resolution of logged times becomes much coarser as a result, and may not suffice for some log file analysis programs. Usually there is no need to fiddle with this timestamp hack.

6.1 squid.out

If you run your Squid from the RunCache script, the file squid.out contains the Squid startup times and also all fatal errors, e.g. those produced by an assert() failure. If you are not using RunCache, you will not see such a file.

6.2 cache.log

The cache.log file contains the debug and error messages that Squid generates. If you start your Squid using the default RunCache script, or start it with the -s command line option, a copy of certain messages will go into your syslog facilities. It is a matter of personal preference whether to use a separate file for the Squid log data.

For automated log file analysis, the cache.log file does not have much to offer. You will usually look into this file when tracking down error reports, programming Squid, testing new features, or searching for the reason behind a perceived misbehaviour.

6.3 useragent.log

The user agent log file is only maintained if

  1. you configured the compile time --enable-useragent-log option, and
  2. you pointed the useragent_log configuration option to a file.

The user agent log file lets you find out about the distribution of browsers among your clients. Using this option in conjunction with a loaded production Squid might not be the best of all ideas.

6.4 store.log

The store.log file covers the objects currently kept on disk as well as those that have been removed. As a kind of transaction log it is usually used for debugging purposes. A definitive statement about whether an object resides on your disks is only possible after analysing the complete log file: the release (deletion) of an object may be logged later than its swap out (save to disk).

The store.log file may be of interest for log file analysis that looks into the objects on your disks and the time they spend there, or how many times a hot object was accessed. The latter may be covered by another log file, too. With knowledge of the cache_dir configuration option, this log file allows for a URL-to-filename mapping without recursing your cache disks. However, the Squid developers recommend treating store.log primarily as a debug file, and so should you, unless you know what you are doing.

The print format for a store.log entry (one line) consists of eleven space-separated columns; compare with the storeLog() function in the file src/store_log.c:

    "%9d.%03d %-7s %08X %4d %9d %9d %9d %s %d/%d %s %s\n"

time

The timestamp when the line was logged, in UTC seconds with a millisecond fraction.

action

The action the object was submitted to; compare with src/store_log.c.

file number

The file number for the object storage file. Please note that the path to this file is calculated according to your cache_dir configuration.

A file number of FFFFFFFF denotes "memory-only" objects. Any action code for such a file number refers to an object which existed only in memory, not on disk. For instance, if a RELEASE code was logged with file number FFFFFFFF, the object existed only in memory and was released from memory.

status

The HTTP reply status code.

datehdr

The value of the HTTP "Date: " reply header.

lastmod

The value of the HTTP "Last-Modified: " reply header.

expires

The value of the HTTP "Expires: " reply header.

type

The HTTP "Content-Type" major value, or "unknown" if it cannot be determined.

sizes

This column consists of two slash separated fields:

  1. The advertised content length from the HTTP "Content-Length: " reply header.
  2. The size actually read.

If the advertised (or expected) length is missing, it will be set to zero. If the advertised length is not zero, but not equal to the real length, the object will be released from the cache.

method

The request method for the object, e.g. GET.

key

The key to the object, usually the URL.

The timestamps in the datehdr, lastmod and expires columns are all expressed in UTC seconds. The actual values are parsed from the HTTP reply headers. An unparsable header is represented by a value of -1, and a missing header is represented by a value of -2.

The key column usually contains just the URL of the object. Some objects, though, will never become public: for those, the key includes a unique integer number and the request method in addition to the URL.
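
As an illustration only (this script is not part of Squid, and the field names are made up for readability), a short Perl sketch that splits a store.log line into the eleven columns described above might look like this:

        #!/usr/bin/perl -w
        # Hypothetical sketch: split store.log lines into the eleven columns.
        use strict;
        while (my $line = <>) {
            chomp $line;
            # Limit the split so a key containing spaces stays in one field.
            my ($time, $action, $fileno, $status, $datehdr, $lastmod,
                $expires, $type, $sizes, $method, $key) = split ' ', $line, 11;
            my ($expected, $real) = split m{/}, $sizes;
            print "$action $key expected=$expected real=$real\n";
        }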

6.5 hierarchy.log

This logfile exists for Squid-1.0 only. The format is

        [date] URL peerstatus peerhost

6.6 access.log

Most log file analysis programs are based on the entries in access.log. Currently, two file formats are possible for this log file, depending on your setting of the emulate_httpd_log option. By default, Squid logs in its native log file format. If that option is enabled, Squid logs in the common log file format as defined by the CERN web daemon.

The common log file format contains different, and less, information than the native log file format. The native format contains more information for the admin interested in cache evaluation.

The common log file format

The Common Logfile Format is used by numerous HTTP servers. This format consists of the following seven fields:

        remotehost rfc931 authuser [date] "method URL" status bytes

It is parsable by a variety of tools. The common format contains different information than the native log file format: for example, the HTTP version is logged in the common format but not in the native format.

The native log file format

The native format is different for different major versions of Squid. For Squid-1.0 it is:

        time elapsed remotehost code/status/peerstatus bytes method URL

For Squid-1.1, the information from the hierarchy.log was moved into access.log. The format is:

        time elapsed remotehost code/status bytes method URL rfc931 peerstatus/peerhost type

For Squid-2 the columns stay the same, though the content within may change a little.

The native log file format logs more and different information than the common log file format: the request duration, some timeout information, the next upstream server address, and the content type.

There are tools which convert one file format into the other. Please mind that even though the log formats share most information, each format contains information which is not part of the other, and that part is lost when converting. In particular, converting back and forth is not possible without loss.

squid2common.pl is a conversion utility which converts any of the Squid log file formats into the old CERN proxy style output. There are tools to analyse, evaluate and graph results from that format.

access.log native format in detail

It is recommended, though, to use Squid's native log format, because it makes a greater amount of information available for later analysis. The print format line for native access.log entries looks like this:

    "%9d.%03d %6d %s %s/%03d %d %s %s %s %s%s/%s %s"

Therefore, an access.log entry usually consists of (at least) 10 columns separated by one or more spaces:

time

A Unix timestamp as UTC seconds with millisecond resolution. You can convert Unix timestamps into something more human-readable using this short perl script:

        #! /usr/bin/perl -p
        s/^\d+\.\d+/localtime $&/e;
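
For example, assuming the script was saved as convert-time.pl (the name is arbitrary), it can be used as a filter:

        % perl convert-time.pl < access.log | less

The same substitution also works as a one-liner: perl -pe 's/^\d+\.\d+/localtime $&/e' access.log.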

duration

The elapsed time indicates how many milliseconds the transaction busied the cache. Its interpretation differs between TCP and UDP requests.

Please note that the entries are logged after the reply finished being sent, not during the lifetime of the transaction.

client address

The IP address of the requesting instance, i.e. the client IP address. The client_netmask configuration option can mask the logged client addresses for data protection reasons, but it makes analysis more difficult. Often it is better to use one of the log file anonymizers.

Also, the log_fqdn configuration option can make Squid log the fully qualified domain name of the client instead of the dotted quad. The use of that option is discouraged due to its performance impact.
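
A minimal squid.conf sketch showing both options (the netmask value is only an example) might look like this:

        # keep only the network part of logged client addresses
        client_netmask 255.255.255.0
        # log host names instead of addresses (discouraged, see above)
        log_fqdn on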

result codes

This column is made up of two entries separated by a slash. This column encodes the transaction result:

  1. The cache result of the request contains information on the kind of request, how it was satisfied, or in what way it failed. Please refer to section Squid result codes for valid symbolic result codes.

    Several codes from older versions are no longer available, were renamed, or were split up. In particular, the ERR_ codes do not seem to appear in the log file any more. Also refer to section Squid result codes for details on the codes no longer available in Squid-2.

    The NOVM versions and Squid-2 also rely on the Unix buffer cache, so you will see fewer TCP_MEM_HITs than with a Squid-1. Basically, the NOVM feature relies on read() to obtain an object, but due to the kernel buffer cache, no disk activity is needed. Only small objects (below 8 KByte) are kept in Squid's part of main memory.

  2. The status part contains the HTTP result codes with some Squid specific extensions. Squid uses a subset of the RFC defined error codes for HTTP. Refer to section status codes for details of the status codes recognized by a Squid-2.

bytes

The size is the amount of data delivered to the client. Mind that this does not constitute the net object size, as headers are also counted. Also, failed requests may deliver an error page, the size of which is also logged here.

request method

The request method used to obtain an object. Please refer to section request-methods for the available methods. If you turned off log_icp_queries in your configuration, you will not see (and thus will be unable to analyse) ICP exchanges. The PURGE method is only available if you have an ACL for ``method purge'' enabled in your configuration file.
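
For example, a common way to allow PURGE only from the local host looks like this in squid.conf (the ACL names are arbitrary, and localhost is the usual 127.0.0.1 ACL):

        acl Purge method PURGE
        acl localhost src 127.0.0.1/255.255.255.255
        http_access allow Purge localhost
        http_access deny Purge

Mind that http_access rules are checked in order.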

URL

This column contains the URL requested. Please note that the log file may contain whitespace in the URI. The default configuration for uri_whitespace denies whitespace, though.

rfc931

The eighth column may contain the ident lookup for the requesting client. Since ident lookups have a performance impact, the default configuration turns ident lookups off. If turned off, or if no ident information is available, a ``-'' will be logged.

hierarchy code

The hierarchy information consists of three items:

  1. Any hierarchy tag may be prefixed with TIMEOUT_, if a timeout occurred while waiting for all ICP replies to return from the neighbours. The timeout is either dynamic, if icp_query_timeout was not set, or the statically configured time has run out.
  2. A code that explains how the request was handled, e.g. by forwarding it to a peer, or going straight to the source. Refer to section hier-codes for details on hierarchy codes and removed hierarchy codes.
  3. The name of the host the object was requested from. This host may be the origin site, a parent or any other peer. Also note that the hostname may be numerical.

type

The content type of the object as seen in the HTTP reply header. Please note that ICP exchanges usually don't have any content type, and thus are logged ``-''. Also, some weird replies have content types ``:'' or even empty ones.

There may be two more columns in the access.log if the (debug) option log_mime_headers is enabled. In this case, the HTTP request headers are logged between a ``['' and a ``]'', and the HTTP reply headers are also logged between ``['' and ``]''. All control characters like CR and LF are URL-escaped, but spaces are not escaped! Parsers should watch out for this.
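
The following Perl sketch (not part of Squid; field names invented for readability) splits a native access.log line into the ten standard columns and further splits the result and hierarchy columns; the optional log_mime_headers columns are not handled:

        #!/usr/bin/perl -w
        # Hypothetical sketch: split native access.log lines into the ten columns.
        use strict;
        while (my $line = <>) {
            chomp $line;
            my ($time, $duration, $client, $result, $bytes, $method,
                $url, $ident, $hierarchy, $type) = split ' ', $line, 10;
            my ($code, $status)   = split m{/}, $result;
            my ($peercode, $peer) = split m{/}, $hierarchy;
            print "$client $code/$status $method $url from $peer\n";
        }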

6.7 Squid result codes

The TCP_ codes refer to requests on the HTTP port (usually 3128). The UDP_ codes refer to requests on the ICP port (usually 3130). If ICP logging was disabled using the log_icp_queries option, no ICP replies will be logged.

The following result codes were taken from a Squid-2, compare with the log_tags struct in src/access_log.c:

TCP_HIT

A valid copy of the requested object was in the cache.

TCP_MISS

The requested object was not in the cache.

TCP_REFRESH_HIT

The requested object was cached but STALE. The IMS query for the object resulted in "304 not modified".

TCP_REF_FAIL_HIT

The requested object was cached but STALE. The IMS query failed and the stale object was delivered.

TCP_REFRESH_MISS

The requested object was cached but STALE. The IMS query returned the new content.

TCP_CLIENT_REFRESH_MISS

The client issued a "no-cache" pragma, or some analogous cache control command along with the request. Thus, the cache has to refetch the object.

TCP_IMS_HIT

The client issued an IMS request for an object which was in the cache and fresh.

TCP_SWAPFAIL_MISS

The object was believed to be in the cache, but could not be accessed.

TCP_NEGATIVE_HIT

Request for a negatively cached object, e.g. "404 not found", which the cache believes to be inaccessible. Also refer to the explanations for negative_ttl in your squid.conf file.

TCP_MEM_HIT

A valid copy of the requested object was in the cache and it was in memory, thus avoiding disk accesses.

TCP_DENIED

Access was denied for this request.

TCP_OFFLINE_HIT

The requested object was retrieved from the cache during offline mode. The offline mode never validates any object; see offline_mode in your squid.conf file.

UDP_HIT

A valid copy of the requested object was in the cache.

UDP_MISS

The requested object is not in this cache.

UDP_DENIED

Access was denied for this request.

UDP_INVALID

An invalid request was received.

UDP_MISS_NOFETCH

During "-Y" startup, or during frequent failures, a cache in hit only mode will return either UDP_HIT or this code. Neighbours will thus only fetch hits.

NONE

Seen with errors and cachemgr requests.

The following codes are no longer available in Squid-2:

ERR_*

Errors are now contained in the status code.

TCP_CLIENT_REFRESH

See: TCP_CLIENT_REFRESH_MISS.

TCP_SWAPFAIL

See: TCP_SWAPFAIL_MISS.

TCP_IMS_MISS

Deleted, TCP_IMS_HIT used instead.

UDP_HIT_OBJ

Hit objects are no longer available.

UDP_RELOADING

See: UDP_MISS_NOFETCH.

6.8 HTTP status codes

These are taken from RFC 2616 and verified for Squid. Squid-2 uses almost all codes except 307 (Temporary Redirect), 416 (Request Range Not Satisfiable), and 417 (Expectation Failed). Extra codes include 0 for a result code being unavailable, and 600 to signal an invalid header, a proxy error. Also, some definitions were added as per RFC 2518 (WebDAV). Yes, there are really two entries for status code 424; compare with http_status in src/enums.h:

 000 Used mostly with UDP traffic.

 100 Continue
 101 Switching Protocols
*102 Processing

 200 OK
 201 Created
 202 Accepted
 203 Non-Authoritative Information
 204 No Content
 205 Reset Content
 206 Partial Content
*207 Multi Status

 300 Multiple Choices
 301 Moved Permanently
 302 Moved Temporarily
 303 See Other
 304 Not Modified
 305 Use Proxy
[307 Temporary Redirect]

 400 Bad Request
 401 Unauthorized
 402 Payment Required
 403 Forbidden
 404 Not Found
 405 Method Not Allowed
 406 Not Acceptable
 407 Proxy Authentication Required
 408 Request Timeout
 409 Conflict
 410 Gone
 411 Length Required
 412 Precondition Failed
 413 Request Entity Too Large
 414 Request URI Too Large
 415 Unsupported Media Type
[416 Request Range Not Satisfiable]
[417 Expectation Failed]
*424 Locked
*424 Failed Dependency
*433 Unprocessable Entity

 500 Internal Server Error
 501 Not Implemented
 502 Bad Gateway
 503 Service Unavailable
 504 Gateway Timeout
 505 HTTP Version Not Supported
*507 Insufficient Storage

 600 Squid header parsing error

6.9 Request methods

Squid recognizes several request methods as defined in RFC 2616. Newer versions of Squid (2.2.STABLE5 and above) also recognize RFC 2518 ``HTTP Extensions for Distributed Authoring -- WEBDAV'' extensions.

 method    defined    cachabil.  meaning
 --------- ---------- ---------- -------------------------------------------
 GET       HTTP/0.9   possibly   object retrieval and simple searches.
 HEAD      HTTP/1.0   possibly   metadata retrieval.
 POST      HTTP/1.0   CC or Exp. submit data (to a program).
 PUT       HTTP/1.1   never      upload data (e.g. to a file).
 DELETE    HTTP/1.1   never      remove resource (e.g. file).
 TRACE     HTTP/1.1   never      appl. layer trace of request route.
 OPTIONS   HTTP/1.1   never      request available comm. options.
 CONNECT   HTTP/1.1r3 never      tunnel SSL connection.

 ICP_QUERY Squid      never      used for ICP based exchanges.
 PURGE     Squid      never      remove object from cache.

 PROPFIND  rfc2518    ?          retrieve properties of an object.
 PROPPATCH rfc2518    ?          change properties of an object.
 MKCOL     rfc2518    never      create a new collection.
 MOVE      rfc2518    never      atomically move src to dst.
 COPY      rfc2518    never      create a duplicate of src in dst.
 LOCK      rfc2518    never      lock an object against modifications.
 UNLOCK    rfc2518    never      unlock an object.

6.10 Hierarchy Codes

The following hierarchy codes are used with Squid-2:

NONE

For TCP HIT, TCP failures, cachemgr requests and all UDP requests, there is no hierarchy information.

DIRECT

The object was fetched from the origin server.

SIBLING_HIT

The object was fetched from a sibling cache which replied with UDP_HIT.

PARENT_HIT

The object was requested from a parent cache which replied with UDP_HIT.

DEFAULT_PARENT

No ICP queries were sent. This parent was chosen because it was marked ``default'' in the config file.

SINGLE_PARENT

The object was requested from the only parent appropriate for the given URL.

FIRST_UP_PARENT

The object was fetched from the first parent in the list of parents.

NO_PARENT_DIRECT

The object was fetched from the origin server, because no parents existed for the given URL.

FIRST_PARENT_MISS

The object was fetched from the parent with the fastest (possibly weighted) round trip time.

CLOSEST_PARENT_MISS

This parent was chosen because it included the lowest RTT measurement to the origin server. See also the closest-only peer configuration option.

CLOSEST_PARENT

The parent selection was based on our own RTT measurements.

CLOSEST_DIRECT

Our own RTT measurements returned a shorter time than any parent.

NO_DIRECT_FAIL

The object could not be requested because of a firewall configuration (see also never_direct and related material), and no parents were available.

SOURCE_FASTEST

The origin site was chosen because the source ping arrived fastest.

ROUNDROBIN_PARENT

No ICP replies were received from any parent. The parent was chosen because it was marked for round robin in the config file and had the lowest usage count.

CACHE_DIGEST_HIT

The peer was chosen because the cache digest predicted a hit. This code was later split in order to distinguish between parents and siblings.

CD_PARENT_HIT

The parent was chosen because the cache digest predicted a hit.

CD_SIBLING_HIT

The sibling was chosen because the cache digest predicted a hit.

NO_CACHE_DIGEST_DIRECT

This output seems to be unused?

CARP

The peer was selected by CARP.

ANY_PARENT

part of src/peer_select.c:hier_strings[].

INVALID CODE

part of src/peer_select.c:hier_strings[].

Almost any of these may be preceded by 'TIMEOUT_' if the two-second (default) timeout occurs waiting for all ICP replies to arrive from neighbors, see also the icp_query_timeout configuration option.

The following hierarchy codes were removed from Squid-2:

code                  meaning
--------------------  -------------------------------------------------
PARENT_UDP_HIT_OBJ    hit objects are no longer available.
SIBLING_UDP_HIT_OBJ   hit objects are no longer available.
SSL_PARENT_MISS       SSL can now be handled by Squid.
FIREWALL_IP_DIRECT    No special logging for hosts inside the firewall.
LOCAL_IP_DIRECT       No special logging for local networks.

6.11 cache/log (Squid-1.x)

This file has a rather unfortunate name. It is also often called the swap log. It is a record of every cache object written to disk. It is read when Squid starts up to ``reload'' the cache. If you remove this file when Squid is NOT running, you will effectively wipe out your cache contents. If you remove this file while Squid IS running, you can easily recreate it. The safest way is to simply shut down the running process:

        % squid -k shutdown
This will disrupt service, but at least you will have your swap log back. Alternatively, you can tell squid to rotate its log files. This also causes a clean swap log to be written.
        % squid -k rotate

For Squid-1.1, there are six fields:

  1. fileno: The swap file number holding the object data. This is mapped to a pathname on your filesystem.
  2. timestamp: This is the time when the object was last verified to be current. The time is a hexadecimal representation of Unix time (see the sketch after this list).
  3. expires: This is the value of the Expires header in the HTTP reply. If an Expires header was not present, this will be -2 or fffffffe. If the Expires header was present, but invalid (unparsable), this will be -1 or ffffffff.
  4. lastmod: Value of the HTTP reply Last-Modified header. If missing it will be -2, if invalid it will be -1.
  5. size: Size of the object, including headers.
  6. url: The URL naming this object.
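
For illustration only (not part of Squid), a Perl sketch that prints Squid-1.1 swap log entries with readable dates could look like this:

        #!/usr/bin/perl -w
        # Hypothetical sketch: decode the hexadecimal timestamps of a Squid-1.1 cache/log.
        use strict;
        while (my $line = <>) {
            my ($fileno, $timestamp, $expires, $lastmod, $size, $url) = split ' ', $line;
            # The date fields are hexadecimal Unix times; ffffffff and fffffffe
            # stand for invalid (-1) and missing (-2) headers.
            my $verified = localtime hex $timestamp;
            print "$fileno $verified $size $url\n";
        }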

6.12 swap.state (Squid-2.x)

In Squid-2, the swap log file is now called swap.state. This is a binary file that includes MD5 checksums and StoreEntry fields. Please see the Programmers Guide for information on the contents and format of that file.

If you remove swap.state while Squid is running, simply send Squid the signal to rotate its log files:

        % squid -k rotate
Alternatively, you can tell Squid to shut down, and it will rewrite this file before it exits.

If you remove the swap.state while Squid is not running, you will not lose your entire cache. In this case, Squid will scan all of the cache directories and read each swap file to rebuild the cache. This can take a very long time, so you'll have to be patient.

By default the swap.state file is stored in the top-level of each cache_dir. You can move the logs to a different location with the cache_swap_log option.

6.13 Which log files can I delete safely?

You should never delete access.log, store.log, cache.log, or swap.state while Squid is running. With Unix, you can delete a file when a process has the file opened. However, the filesystem space is not reclaimed until the process closes the file.

If you accidentally delete swap.state while Squid is running, you can recover it by following the instructions in the previous questions. If you delete the others while Squid is running, you cannot recover them.

The correct way to maintain your log files is with Squid's ``rotate'' feature. You should rotate your log files at least once per day. The current log files are closed and then renamed with numeric extensions (.0, .1, etc). If you want to, you can write your own scripts to archive or remove the old log files. If not, Squid will only keep up to logfile_rotate versions of each log file. The logfile rotation procedure also writes a clean swap.state file, but it does not leave numbered versions of the old files.

To rotate Squid's logs, simply use this command:

        squid -k rotate
For example, use this cron entry to rotate the logs at midnight:
        0 0 * * * /usr/local/squid/bin/squid -k rotate

6.14 How can I disable Squid's log files?

To disable access.log:

        cache_access_log /dev/null

To disable store.log:

        cache_store_log none

It is a bad idea to disable the cache.log because this file contains many important status and debugging messages. However, if you really want to, you can:

        cache_log /dev/null

6.15 My log files get very big!

You need to rotate your log files with a cron job. For example:

        0 0 * * * /usr/local/squid/bin/squid -k rotate

6.16 Managing log files

The preferred log file for analysis is the access.log file in native format. For long-term evaluations, the log file should be obtained at regular intervals. Squid offers an easy-to-use API for rotating log files, so that they may be moved (or removed) without disturbing the cache operations in progress. The procedures were described above.

Depending on the disk space allocated for log file storage, it is recommended to set up a cron job which rotates the log files every 24, 12, or 8 hours. You will need to set your logfile_rotate option to a sufficiently large number. During a time of some idleness, you can safely transfer the log files to your analysis host in one burst.

Before transport, the log files can be compressed during off-peak time. On the analysis host, the log files are concatenated into one file, so that one file for 24 hours is the yield. Also note that with log_icp_queries enabled, you might have around 1 GB of uncompressed log information per day on a busy cache. Look into your cache manager info page to make an educated guess on the size of your log files.

The EU project DESIRE developed some basic rules to obey when handling and processing log files:

6.17 Why do I get ERR_NO_CLIENTS_BIG_OBJ messages so often?

This message means that the requested object was in ``Delete Behind'' mode and the user aborted the transfer. An object will go into ``Delete Behind'' mode if

6.18 What does ERR_LIFETIME_EXP mean?

This means that a timeout occurred while the object was being transferred. Most likely the retrieval of this object was very slow (or it stalled before finishing) and the user aborted the request. However, depending on your settings for quick_abort, Squid may have continued to try retrieving the object. Squid imposes a maximum amount of time on all open sockets, so after some amount of time the stalled request was aborted and logged with an ERR_LIFETIME_EXP message.

6.19 Retrieving ``lost'' files from the cache

I've been asked to retrieve an object which was accidentally destroyed at the source, for recovery. So, how do I figure out where the things are so I can copy them out and strip off the headers?

The following method applies only to the Squid-1.1 versions:

Use grep to find the named object (URL) in the cache/log file. The first field in this file is an integer file number.

Then, find the file fileno-to-pathname.pl from the ``scripts'' directory of the Squid source distribution. The usage is

        perl fileno-to-pathname.pl [-c squid.conf]
File numbers are read on stdin, and pathnames are printed on stdout.
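
For example, if the lost object's URL were http://www.example.com/lost.gif (a made-up URL), the matching file number(s) could be extracted and converted in one pipeline:

        % grep 'http://www.example.com/lost.gif' cache/log | \
                perl -lane 'print $F[0]' | \
                perl fileno-to-pathname.pl

The resulting pathnames can then be copied out and the stored HTTP headers stripped off by hand.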

