Future directions

There has recently been a move towards a standard for the automatic configuration of proxy caches. New versions of Netscape and Internet Explorer are expected to use this emerging standard to change their proxy settings automatically. This allows you to change your cache server setup without inconveniencing clients.
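
Today's browsers already understand proxy auto-config (PAC) files: a small piece of JavaScript, fetched from a URL you control, which the browser runs to decide how to reach each requested URL. The emerging standard is mostly about letting the browser find that file without being told where it is. The sketch below shows the general shape of a PAC file; the cache hostname and port are made up for the example.

    // A minimal proxy auto-config (PAC) sketch. The browser calls
    // FindProxyForURL() for every request and uses the proxy list it
    // returns. "cache.example.net" and port 3128 are assumed values.
    function FindProxyForURL(url, host) {
        // Requests for unqualified local hostnames go direct.
        if (isPlainHostName(host))
            return "DIRECT";
        // Everything else goes through the cache, falling back to a
        // direct connection if the cache is unreachable.
        return "PROXY cache.example.net:3128; DIRECT";
    }

Since the file lives on a server you control, repointing every browser at a different cache only means editing this one file.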

Roaming

Roaming customers have to remove the cache settings from their browsers, since your access control lists should stop them from accessing your cache from another network.
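
As an illustration (the address ranges and ACL name below are invented), an access-control setup like the following is exactly what locks a customer out once they dial in from someone else's network:

    # Only requests from our own address space may use the cache; a
    # customer dialing in through another provider arrives from a
    # foreign address and is denied.
    acl our_networks src 196.4.160.0/24 196.4.161.0/24
    http_access allow our_networks
    http_access deny all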

Although both problems can be reduced with the CGI-generated configs discussed above, a firewall between the browser and your CGI server would still leave roaming users unable to access the Internet.

There are changes on the horizon that would help. As more and more protocols take roaming users into account, standards will evolve that make Internet usage plug-and-play. If you are in Tanzania today, plug in your modem and use the Internet. If you are in France in a week's time, plug in again and (without config changes) you will be ready to go.

Progress is being made on standards for the autoconfiguration of Internet applications; these will let administrators supply a config file that depends on where the user connects from, without resorting to something like the CGI kludge described above.
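
In the meantime, part of this can be approximated on the client side: a PAC file can look at the address the machine currently has and choose a cache accordingly, so one config file keeps working as the user moves between networks. The networks and cache hostnames below are invented for the sketch.

    // Sketch: pick a cache based on the network the machine is
    // currently connected to. Networks and hostnames are examples.
    function FindProxyForURL(url, host) {
        var me = myIpAddress();
        if (isInNet(me, "196.4.160.0", "255.255.255.0"))
            return "PROXY cache.home-isp.example:3128; DIRECT";
        if (isInNet(me, "10.0.0.0", "255.0.0.0"))
            return "PROXY cache.roaming-isp.example:3128; DIRECT";
        // On an unknown network, go direct rather than fail.
        return "DIRECT";
    }

The obvious limitation is that you have to know in advance which networks your users will roam to; the standards work described above would remove that guesswork.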

Browsers

Browser support for CARP is not yet at a stage where it is tremendously useful; once there is a proper standard for its setup, it is likely to be included in the major browsers.

At some stage, expect support for ICP and cache digests in browsers. The browser will then be able to make intelligent decisions about which cache to talk to. Since ICP queries are small and cheap, a browser could send one for each of the links on a page as soon as it has retrieved the HTML source.

Transparency

Currently there is a major trend towards transparent caching, not only in the "Outer Internet" (where bandwidth is very expensive) but also in the USA. (Transparency is covered in detail in chapter 12.)

Transparency has one major advantage: Users do not have to configure their browsers to access the cache.

For a backbone provider this means that it can cache all passing traffic. A local ISP would normally configure its clients to talk to the ISP's cache, and the backbone provider could then ask those ISP customers to use the backbone's caches as parents. Transparent caching, however, has another advantage.

A backbone provider also acts as transit for requests that originate on other backbone providers' networks. With transparent caching, a backbone provider reduces this traffic as well as the requests flowing from its own network to other backbone providers.

Assume you place a cache one hop before a major peering point. Here the cache intercepts both incoming requests (from other providers to web servers on your network) and outgoing requests (from your network to web servers on other providers' networks). This will reduce your peering-point usage (by caching outgoing requests for pages), and will also reduce what you spend serving other providers' customers, since less data has to flow out of your network. The latter saving may be small, but in times of network trouble serving those requests from the cache can reduce your latency noticeably.

As more and more backbone providers cache pages, more local ISPs will cache too ("since it's cached further along the path, we may as well implement caching here - it's not going to change anything"). Though this will probably cause a drop in the backbone providers' hit rates, their ever-increasing user base may make up for it. Backbone providers cache centrally, and with large numbers of edge caches (local ISP caches) in front of them they are likely to see fewer hits. Certain Inter-University networks have already noticed such a hit-rate decline: as more and more universities add local caches, the central cache's hit rate falls.

Since the universities are large, it's likely that their users will surf the same web page twice. Previously the Inter-University network's cache would have returned the hit for that page; now the university's local cache does. The requests the local cache absorbs are precisely the ones the central cache could have answered, so the central cache sees fewer queries and a lower hit rate.