What's a Web Cache? Why do people use them?

A Web cache sits between one or more Web servers (also known as origin servers) and a client or many clients, and watches requests come by, saving copies of the responses - like HTML pages, images and files (collectively known as representations) - for itself. Then, if there is another request for the same URL, it can use the response that it has, instead of asking the origin server for it again.
- What's a Web Cache? Why do people use them?
- Kinds of Web Caches
- Aren't Web Caches bad for me? Why should I help them?
- How Web Caches Work
- How (and how not) to Control Caches
- Tips for Building a Cache-Aware Site
- Writing Cache-Aware Scripts
- Frequently Asked Questions
- What are the most important things to make cacheable?
- How can I make my pages as fast as possible with caches?
- I understand that caching is good, but I need to keep statistics on how many people visit my page!
- How can I see a representation's HTTP headers?
- My pages are password-protected; how do proxy caches deal with them?
- Should I worry about security if people access my site through a cache?
- I'm looking for an integrated Web publishing solution. Which ones are cache-aware?
- My images expire a month from now, but I need to change them in the caches now!
- I run a Web Hosting service. How can I let my users publish cache-friendly pages?
- I've marked my pages as cacheable, but my browser keeps requesting them on every request. How do I force the cache to keep representations of them?
- Implementation Notes - Web Servers
- Implementation Notes - Server-Side Scripting
- References and Further Information

There are two main reasons that Web caches are used:

- To reduce latency - Because the request is satisfied from the cache (which is closer to the client) instead of the origin server, it takes less time for the client to get the representation and display it. This makes the Web seem more responsive.
- To reduce network traffic - Because representations are reused, caching reduces the amount of bandwidth used by a client. This saves money if the client is paying for traffic, and keeps bandwidth requirements lower and more manageable.
If you examine the preferences dialog of any modern Web browser (like Internet Explorer, Safari or Mozilla), you'll probably notice a 'cache' setting. This lets you set aside a section of your computer's hard disk to store representations that you've seen, just for you. The browser cache works according to fairly simple rules. It will check to make sure that the representations are fresh, usually once a session (that is, once in the current invocation of the browser).
This cache is especially useful when users hit the 'back' button or click a link to see a page they've just looked at. Also, if you use the same navigation images throughout your site, they'll be served from browsers' caches almost instantaneously.
Web proxy caches work on the same principle, but on a much larger scale. Proxies serve hundreds or thousands of users in the same way; large corporations and ISPs often set them up on their firewalls, or as standalone devices (also known as intermediaries).
Because proxy caches aren't part of the client or the origin server, but instead are out on the network, requests have to be routed to them somehow. One way to do this is to use your browser's proxy setting to manually tell it what proxy to use; another is using interception. Interception proxies have Web requests redirected to them by the underlying network itself, so that clients don't need to be configured for them, or even know about them.
Proxy caches are a type of shared cache; rather than just having one person using them, they usually have a large number of users, and because of this they are very good at reducing latency and network traffic. That's because popular representations are reused a number of times.
Also known as 'reverse proxy caches' or 'surrogate caches,' gateway caches are also intermediaries, but instead of being deployed by network administrators to save bandwidth, they're typically deployed by Webmasters themselves, to make their sites more scalable, reliable and better performing.
Requests can be routed to gateway caches by a number of methods, but typically some form of load balancer is used to make one or more of them look like the origin server to clients.
This tutorial focuses mostly on browser and proxy caches, although some of the information is suitable for those interested in gateway caches as well.
Web caching is one of the most misunderstood technologies on the Internet. Webmasters in particular fear losing control of their site, because a proxy cache can 'hide' their users from them, making it difficult to see who's using the site.
Unfortunately for them, even if Web caches didn't exist, there are too many variables on the Internet to assure that they'll be able to get an accurate picture of how users see their site. If this is a big concern for you, this tutorial will teach you how to get the statistics you need without making your site cache-unfriendly.
Another concern is that caches can serve content that is out of date, or stale. However, this tutorial can show you how to configure your server to control how your content is cached.
CDNs are an interesting development, because unlike many proxy caches, their gateway caches are aligned with the interests of the Web site being cached, so that these problems aren't seen. However, even when you use a CDN, you still have to consider that there will be proxy and browser caches downstream.
On the other hand, if you plan your site well, caches can help your Web site load faster, and save load on your server and Internet link. The difference can be dramatic; a site that is difficult to cache may take several seconds to load, while one that takes advantage of caching can seem instantaneous in comparison. Users will appreciate a fast-loading site, and will visit more often.
Think of it this way; many large Internet companies are spending millions of dollars setting up farms of servers around the world to replicate their content, in order to make it as fast to access as possible for their users. Caches do the same for you, and they're even closer to the end user. Best of all, you don't have to pay for them.
The fact is that proxy and browser caches will be used whether you like it or not. If you don't configure your site to be cached correctly, it will be cached using whatever defaults the cache's administrator decides upon.
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it's available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).
Generally speaking, these are the most common rules that are followed (don't worry if you don't understand the details, it will be explained below):
- If the response's headers tell the cache not to keep it, it won't.
- If no validator (an ETag or Last-Modified header) is present on a response, it will be considered uncacheable.
- If the request is authenticated or secure, it won't be cached.
- A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if:
- It has an expiry time or other age-controlling header set, and is still within the fresh period.
- If a browser cache has already seen the representation, and has been set to check once a session.
- If a proxy cache has seen the representation recently, and it was modified relatively long ago.
- Fresh representations are served directly from the cache, without checking with the origin server.
- If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.
Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn't changed.
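To make the freshness rule concrete, here's a minimal Python sketch; the function name and the header-dictionary shape are illustrative, not taken from any real cache implementation. It decides whether a stored response may be served without contacting the origin server:

```python
from email.utils import parsedate_to_datetime

def is_fresh(response_headers, age_seconds):
    """Return True if a cached response is still fresh, per the rules above:
    fresh while its age is within the max-age (or Expires) lifetime;
    with no freshness information at all, it must be validated."""
    cc = response_headers.get("Cache-Control", "")
    for directive in cc.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            # max-age is relative to the time of the request, in seconds
            return age_seconds < int(directive.split("=", 1)[1])
    if "Expires" in response_headers and "Date" in response_headers:
        # Expires is absolute; compare it against the response's Date
        expires = parsedate_to_datetime(response_headers["Expires"])
        date = parsedate_to_datetime(response_headers["Date"])
        return age_seconds < (expires - date).total_seconds()
    return False  # no freshness info: check back with the origin server
```

A real cache also accounts for age accumulated upstream (the Age header) and clock skew; this sketch only shows the core comparison.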
There are several tools that Web designers and Webmasters can use to fine-tune how caches will treat their sites. It may require getting your hands a little dirty with your server's configuration, but the results are worth it. For details on how to use these tools with your server, see the Implementation sections below.
HTML authors can put tags in a document's HEAD section that describe its attributes. These meta tags are often used in the belief that they can mark a document as uncacheable, or expire it at a certain time.
Meta tags are easy to use, but aren't very effective. That's because they're only honored by a few browser caches (which actually read the HTML), not proxy caches (which almost never read the HTML in the document). While it may be tempting to put a Pragma: no-cache meta tag into a Web page, it won't necessarily cause it to be kept fresh.
If your site is hosted at an ISP or hosting farm and they don't give you the ability to set arbitrary HTTP headers (like Cache-Control), complain loudly; these are tools necessary for doing your job.
On the other hand, true HTTP headers give you a lot of control over how both browser caches and proxies handle your representations. They can't be seen in the HTML, and are usually automatically generated by the Web server. However, you can control them to some degree, depending on the server you use. In the following sections, you'll see what HTTP headers are interesting, and how to apply them to your site.
HTTP headers are sent by the server before the HTML, and only seen by the browser and any intermediate caches. Typical HTTP 1.1 response headers might look like this:
HTTP/1.1 200 OK Date: Fri, 30 Oct 1998 13:19:41 GMT Server: Apache/1.3.3 (Unix) Cache-Control: max-age=3600, must-revalidate Expires: Fri, 30 Oct 1998 14:19:41 GMT Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT ETag: "3e86-410-3596fbbc" Content-Length: 1040 Content-Type: text/html
The HTML would follow these headers, separated by a blank line. See the Implementation sections for information about how to set HTTP headers.
Many people believe that assigning a
Pragma: no-cache HTTP header to a representation will make it uncacheable. This is not necessarily true; the HTTP specification does not set any guidelines for Pragma response headers; instead, Pragma request headers (the headers that a browser sends to a server) are discussed. Although a few caches may honor this header, the majority won't, and it won't have any effect. Use the headers below instead.
The Expires HTTP header is a basic means of controlling caches; it tells all caches how long the associated representation is fresh for. After that time, caches will always check back with the origin server to see if a document has changed.
Expires headers are supported by practically every cache.
Most Web servers allow you to set Expires response headers in a number of ways. Commonly, they will allow setting an absolute time to expire, a time based on the last time that the client saw the representation (last access time), or a time based on the last time the document changed on your server (last modification time).
Expires headers are especially good for making static images (like navigation bars and buttons) cacheable. Because they don't change much, you can set extremely long expiry time on them, making your site appear much more responsive to your users. They're also useful for controlling caching of a page that is regularly changed. For instance, if you update a news page once a day at 6am, you can set the representation to expire at that time, so caches will know when to get a fresh copy, without users having to hit 'reload'.
The only value valid in an Expires header is an HTTP date; anything else will most likely be interpreted as 'in the past', so that the representation is uncacheable. Also, remember that the time in an HTTP date is Greenwich Mean Time (GMT), not local time.
Expires: Fri, 30 Oct 1998 14:19:41 GMT
It's important to make sure that your Web server's clock is accurate if you use the Expires header. One way to do this is using the Network Time Protocol (NTP); talk to your local system administrator to find out more.
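If you generate Expires values in a script, a Python sketch like the following (the function name is mine, purely illustrative) produces the required RFC 1123 GMT format:

```python
import time
from email.utils import formatdate

def expires_header(seconds_from_now):
    # formatdate(..., usegmt=True) emits an RFC 1123 date ending in "GMT",
    # the only form caches will reliably understand in an Expires header.
    return "Expires: " + formatdate(time.time() + seconds_from_now, usegmt=True)
```

Using the standard library's formatter avoids hand-rolling the date string, which is a common source of invalid (and therefore uncacheable) Expires values.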
Although the Expires header is useful, it has some limitations. First, because there's a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won't be achieved, and caches might wrongly consider stale content as fresh.
Another problem with Expires is that it's easy to forget that you've set some content to expire at a particular time. If you don't update an Expires time before it passes, each and every request will go back to your Web server, increasing load and latency.
HTTP 1.1 introduced a new class of headers, Cache-Control response headers, to give Web publishers more control over their content, and to address the limitations of Expires. Useful Cache-Control response headers include:
- max-age=[seconds] - specifies the maximum amount of time that a representation will be considered fresh. Unlike Expires, this directive is relative to the time of the request, rather than absolute. [seconds] is the number of seconds from the time of the request you wish the representation to be fresh for.
- s-maxage=[seconds] - similar to max-age, except that it only applies to shared (e.g., proxy) caches.
- public - marks authenticated responses as cacheable; normally, if HTTP authentication is required, responses are automatically uncacheable.
- no-cache - forces caches to submit the request to the origin server for validation before releasing a cached copy, every time. This is useful to assure that authentication is respected (in combination with public), or to maintain rigid freshness, without sacrificing all of the benefits of caching.
- no-store - instructs caches not to keep a copy of the representation under any conditions.
- must-revalidate - tells caches that they must obey any freshness information you give them about a representation. HTTP allows caches to serve stale representations under special conditions; by specifying this header, you're telling the cache that you want it to strictly follow your rules.
- proxy-revalidate - similar to must-revalidate, except that it only applies to proxy caches.
Cache-Control: max-age=3600, must-revalidate
If you plan to use the Cache-Control headers, you should have a look at the excellent documentation in HTTP 1.1; see References and Further Information.
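To see how the directives above combine in a single header value, here's a small Python sketch (my own helper, not a real library API) that splits a Cache-Control response header into its directives:

```python
def parse_cache_control(value):
    """Split a Cache-Control header value into a dict of directives.
    Simplistic on purpose: no quoted-string handling, just the common
    name and name=seconds forms shown in the list above."""
    directives = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, arg = part.partition("=")
        # numeric arguments (max-age, s-maxage) become ints;
        # bare directives (public, no-cache, ...) become True
        directives[name.lower()] = int(arg) if arg.isdigit() else (arg or True)
    return directives
```

Real-world values can include quoted strings and extension directives, which this deliberately ignores; it's just a way to inspect the directives while experimenting.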
In How Web Caches Work, we said that validation is used by servers and caches to communicate when a representation has changed. By using it, caches avoid having to download the entire representation when they already have a copy locally, but they're not sure if it's still fresh.
Validators are very important; if one isn't present, and there isn't any freshness information (Expires or Cache-Control) available, caches will not store a representation at all.
The most common validator is the time that the document last changed, as communicated in a Last-Modified header. When a cache has a representation stored that includes a Last-Modified header, it can use it to ask the server if the representation has changed since the last time it was seen, with an If-Modified-Since request.
HTTP 1.1 introduced a new kind of validator called the ETag. ETags are unique identifiers that are generated by the server and changed every time the representation does. Because the server controls how the ETag is generated, caches can be surer that if the ETag matches when they make an If-None-Match request, the representation really is the same.
Almost all caches use Last-Modified times in determining if a representation is fresh; ETag validation is also becoming prevalent.
Most modern Web servers will generate both ETag and Last-Modified headers to use as validators for static content (i.e., files) automatically; you won't have to do anything. However, they don't know enough about dynamic content (like CGI, ASP or database sites) to generate them; see Writing Cache-Aware Scripts.
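The server side of ETag validation can be sketched in a few lines of Python; the function and its tuple return value are illustrative, not any framework's real API:

```python
def validate(request_headers, current_etag, current_body):
    """If the client's If-None-Match matches the current ETag, answer
    304 Not Modified with an empty body (the cache's copy is still good);
    otherwise send the full representation with a 200."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, ""
    return 200, current_body
```

The saving is the body: a matching validator means only headers cross the wire, which is exactly why validation is cheaper than a full fetch but dearer than freshness.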
Besides using freshness information and validation, there are a number of other things you can do to make your site more cache-friendly.
- Use URLs consistently - this is the golden rule of caching. If you serve the same content on different pages, to different users, or from different sites, it should use the same URL. This is the easiest and most effective way to make your site cache-friendly. For example, if you use '/index.html' in your HTML as a reference once, always use it that way.
- Use a common library of images and other elements and refer back to them from different places.
- Make caches store images and pages that don't change often by using a Cache-Control: max-age header with a large value.
- Make caches recognize regularly updated pages by specifying an appropriate max-age or expiration time.
- If a resource (especially a downloadable file) changes, change its name. That way, you can make it expire far in the future, and still guarantee that the correct version is served; the page that links to it is the only one that will need a short expiry time.
- Don't change files unnecessarily. If you do, everything will have a falsely young Last-Modified date. For instance, when updating your site, don't copy over the entire site; just move the files that you've changed.
- Minimize use of SSL - because encrypted pages are not stored by shared caches, use them only when you have to, and use images on SSL pages sparingly.
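The 'change its name when it changes' tip is often automated by embedding a content hash in the filename; here's a hedged Python sketch (helper name and 8-character digest length are my own choices):

```python
import hashlib

def versioned_name(filename, content):
    """Embed a short content hash in a resource's filename, so the
    resource can carry a far-future expiry time while any update
    produces a brand-new URL that bypasses old cached copies."""
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return "%s.%s.%s" % (stem, digest, ext) if dot else "%s.%s" % (filename, digest)
```

Only the page that links to the resource then needs a short expiry time, as the tip above suggests.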
By default, most scripts won't return a validator (a Last-Modified or ETag response header) or freshness information (Expires or Cache-Control). While some scripts really are dynamic (meaning that they return a different response for every request), many (like search engines and database-driven sites) can benefit from being cache-friendly.
Generally speaking, if a script produces output that is reproducible with the same request at a later time (whether it be minutes or days later), it should be cacheable. If the content of the script changes only depending on what's in the URL, it is cacheable; if the output depends on a cookie, authentication information or other external criteria, it probably isn't.
- The best way to make a script cache-friendly (as well as perform better) is to dump its content to a plain file whenever it changes. The Web server can then treat it like any other Web page, generating and using validators, which makes your life easier. Remember to only write files that have changed, so the Last-Modified times are preserved.
- Another way to make a script cacheable in a limited fashion is to set an age-related header for as far in the future as practical. Although this can be done with Expires, it's probably easiest to do so with Cache-Control: max-age, which will make the request fresh for an amount of time after the request.
- If you can't do that, you'll need to make the script generate a validator, and then respond to If-None-Match requests. This can be done by parsing the HTTP headers, and then responding with 304 Not Modified when appropriate. Unfortunately, this is not a trivial task.
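The validation dance for a script can be sketched in Python; the function below is illustrative only, using the CGI convention of exposing the client's If-Modified-Since header as the HTTP_IF_MODIFIED_SINCE environment variable:

```python
import os
from email.utils import formatdate, parsedate_to_datetime

def cgi_response(last_modified_ts, body):
    """Answer 304 when the client's If-Modified-Since date shows its
    copy is current; otherwise send the full body with a Last-Modified
    validator so future requests can be conditional."""
    ims = os.environ.get("HTTP_IF_MODIFIED_SINCE")
    if ims:
        try:
            if parsedate_to_datetime(ims).timestamp() >= last_modified_ts:
                # Client's copy is current: headers only, no body
                return "Status: 304 Not Modified\r\n\r\n"
        except (TypeError, ValueError):
            pass  # unparseable date; fall through and send everything
    return ("Status: 200 OK\r\n"
            "Last-Modified: " + formatdate(last_modified_ts, usegmt=True) + "\r\n"
            "Content-Type: text/html\r\n"
            "\r\n" + body)
```

A production script would also handle If-None-Match/ETag and weak comparison; this shows only the Last-Modified path.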
Some other tips:
- Don't use POST unless it's appropriate. Responses to the POST method aren't kept by most caches; if you send information in the path or query (via GET), caches can store that information for the future.
- Don't embed user-specific information in the URL unless the content generated is completely unique to that user.
- Don't count on all requests from a user coming from the same host, because caches often work together.
- Generate Content-Length response headers. It's easy to do, and it will allow the response of your script to be used in a persistent connection. This allows clients to request multiple representations on one TCP/IP connection, instead of setting up a connection for every request. It makes your site seem much faster.
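The Content-Length tip amounts to counting bytes before you send them; a minimal Python sketch (the helper is mine, purely illustrative):

```python
def with_content_length(body):
    """Prepend a Content-Length header to a response body.
    Content-Length counts bytes, not characters, so encode first;
    with it present, the connection can stay open for the next request."""
    payload = body.encode("utf-8")
    return ("Content-Length: %d\r\n\r\n" % len(payload)).encode("ascii") + payload
```

The byte/character distinction matters: a multi-byte UTF-8 page with a character count in Content-Length will truncate or hang clients.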
See the Implementation Notes for more specific information.
A good strategy is to identify the most popular, largest representations (especially images) and work with them first.
The most cacheable representation is one with a long freshness time set. Validation does help reduce the time that it takes to see a representation, but the cache still has to contact the origin server to see if it's fresh. If the cache already knows it's fresh, it will be served directly.
If you must know every time a page is accessed, select ONE small item on a page (or the page itself), and make it uncacheable by giving it suitable headers. For example, you could refer to a 1x1 transparent uncacheable image from each page. The Referer header will contain information about what page called it.
Be aware that even this will not give truly accurate statistics about your users, and is unfriendly to the Internet and your users; it generates unnecessary traffic, and forces people to wait for that uncached item to be downloaded. For more information about this, see On Interpreting Access Statistics in the references.
Many Web browsers let you see the Expires and Last-Modified headers in a 'page info' or similar interface. If available, this will give you a menu of the page and any representations (like images) associated with it, along with their details.
To see the full headers of a representation, you can manually connect to the Web server using a Telnet client.
To do so, you may need to type the port (by default, 80) into a separate field, or you may need to connect to www.google.com 80 (note the space). Consult your Telnet client's documentation.
Once you've opened a connection to the site, type a request for the representation. For instance, if you want to see the headers for http://www.google.com/foo.html, connect to www.google.com, port 80, and type:
GET /foo.html HTTP/1.1 [return] Host: www.google.com [return][return]
Press the Return key every time you see [return]; make sure to press it twice at the end. This will print the headers, and then the full representation. To see the headers only, substitute HEAD for GET.
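If you'd rather script this than use Telnet, the same raw request can be built and sent with a few lines of Python (the hostname and path below are just placeholders):

```python
import socket

def build_head_request(host, path="/"):
    """The same raw request you'd type into Telnet; HEAD returns headers
    only. CRLF line endings and the final blank line are required."""
    return ("HEAD %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n" % (path, host))

def fetch_headers(host, path="/", port=80):
    """Send the request over a plain TCP socket and return the start of
    the response (requires network access; illustrative only)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_head_request(host, path).encode("ascii"))
        return sock.recv(65536).decode("iso-8859-1")
```

For example, fetch_headers("www.google.com", "/foo.html") would print the status line and headers for that URL, just as the Telnet session above does.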
By default, pages protected with HTTP authentication are considered private; they will not be kept by shared caches. However, you can make authenticated pages public with a Cache-Control: public header; HTTP 1.1-compliant caches will then allow them to be cached.
If you'd like such pages to be cacheable, but still authenticated for every user, combine the Cache-Control: public and no-cache headers. This tells the cache that it must submit the new client's authentication information to the origin server before releasing the representation from the cache. This would look like:
Cache-Control: public, no-cache
Whether or not this is done, it's best to minimize use of authentication; for example, if your images are not sensitive, put them in a separate directory and configure your server not to force authentication for it. That way, those images will be naturally cacheable.
SSL pages are not cached (or decrypted) by proxy caches, so you don't have to worry about that. However, because caches store non-SSL requests and URLs fetched through them, you should be conscious about unsecured sites; an unscrupulous administrator could conceivably gather information about their users, especially in the URL.
In fact, any administrator on the network between your server and your clients could gather this type of information. One particular problem is when CGI scripts put usernames and passwords in the URL itself; this makes it trivial for others to find and use their login.
If you're aware of the issues surrounding Web security in general, you shouldn't have any surprises from proxy caches.
It varies. Generally speaking, the more complex a solution is, the more difficult it is to cache. The worst are ones which dynamically generate all content and don't provide validators; they may not be cacheable at all. Speak with your vendor's technical staff for more information, and see the Implementation notes below.
The Expires header can't be circumvented; unless the cache (either browser or proxy) runs out of room and has to delete the representations, the cached copy will be used until then.
The most effective solution is to change any links to them; that way, completely new representations will be loaded fresh from the origin server. Remember that the page that refers to a representation will be cached as well. Because of this, it's best to make static images and similar representations very cacheable, while keeping the HTML pages that refer to them on a tight leash.
If you want to reload a representation from a specific cache, you can either force a reload while using that cache (in Firefox, holding down shift while pressing 'reload' will do this by issuing a Pragma: no-cache request header), or you can have the cache administrator delete the representation through their interface.
If you're using Apache, consider allowing them to use .htaccess files and providing appropriate documentation.
Otherwise, you can establish predetermined areas for various caching attributes in each virtual server. For instance, you could specify a directory /cache-1m that will be cached for one month after access, and a /no-cache area that will be served with headers instructing caches not to store representations from it.
Whatever you are able to do, it is best to work with your largest customers first on caching. Most of the savings (in bandwidth and in load on your servers) will be realized from high-volume sites.
I've marked my pages as cacheable, but my browser keeps requesting them on every request. How do I force the cache to keep representations of them?
Caches aren't required to keep a representation and reuse it; they're only required to not keep or use them under some conditions. All caches make decisions about which representations to keep based upon their size, type (e.g., image vs. html), or by how much space they have left to keep local copies. Yours may not be considered worth keeping around, compared to more popular or larger representations.
Some caches do allow their administrators to prioritize what kinds of representations are kept, and some allow representations to be 'pinned' in cache, so that they're always available.
Generally speaking, it's best to use the latest version of whatever Web server you've chosen to deploy. Not only will they likely contain more cache-friendly features, new versions also usually have important security and performance improvements.
Apache uses optional modules to include headers, including both Expires and Cache-Control. Both modules are available in the 1.2 or greater distribution.
The modules need to be built into Apache; although they are included in the distribution, they are not turned on by default. To find out if the modules are enabled in your server, find the httpd binary and run httpd -l; this should print a list of the available modules. The modules we're looking for are mod_expires and mod_headers.
- If they aren't available, and you have administrative access, you can recompile Apache to include them. This can be done either by uncommenting the appropriate lines in the Configuration file, or using the --enable-module=expires and --enable-module=headers arguments to configure (1.3 or greater). Consult the INSTALL file found with the Apache distribution.
Once you have an Apache with the appropriate modules, you can use mod_expires to specify when representations should expire, either in .htaccess files or in the server's access.conf file. You can specify expiry from either access or modification time, and apply it to a file type or as a default. See the module documentation for more information, and speak with your local Apache guru if you have trouble.
To specify Cache-Control headers, you'll need to use the mod_headers module, which allows you to specify arbitrary HTTP headers for a resource. See the mod_headers documentation.
Here's an example .htaccess file that demonstrates the use of some headers.
- .htaccess files allow web publishers to use commands normally only found in configuration files. They affect the content of the directory they're in and their subdirectories. Talk to your server administrator to find out if they're enabled.
### activate mod_expires
ExpiresActive On
### Expire .gif's 1 month from when they're accessed
ExpiresByType image/gif A2592000
### Expire everything else 1 day from when it's last modified
### (this uses the Alternative syntax)
ExpiresDefault "modification plus 1 day"
### Apply a Cache-Control header to index.html
<Files index.html>
Header append Cache-Control "public, must-revalidate"
</Files>
- Note that mod_expires automatically calculates and inserts a Cache-Control: max-age header as appropriate.
One thing to keep in mind is that it may be easier to set HTTP headers with your Web server rather than in the scripting language. Try both.
Because the emphasis in server-side scripting is on dynamic content, it doesn't make for very cacheable pages, even when the content could be cached. If your content changes often, but not on every page hit, consider setting a Cache-Control: max-age header; most users access pages again in a relatively short period of time. For instance, when users hit the 'back' button, if there isn't any validator or freshness information available, they'll have to wait until the page is re-downloaded from the server to see it.
CGI scripts are one of the most popular ways to generate content. You can easily append HTTP response headers by printing them before you send the body; most CGI implementations already require you to do this for the Content-Type header. For instance, in Perl:
#!/usr/bin/perl
print "Content-type: text/html\n";
print "Expires: Thu, 29 Oct 1998 17:04:19 GMT\n";
print "\n";
### the content body follows...
Since it's all text, you can easily generate Expires and other date-related headers with in-built functions. It's even easier if you use Cache-Control: max-age, since it's relative to the time of the request:

print "Cache-Control: max-age=600\n";
This will make the script cacheable for 10 minutes after the request, so that if the user hits the 'back' button, they won't be resubmitting the request.
The CGI specification also makes request headers that the client sends available in the environment of the script; each header has 'HTTP_' prepended to its name. So, if a client makes an If-Modified-Since request, it may show up like this:
HTTP_IF_MODIFIED_SINCE = Fri, 30 Oct 1998 14:19:41 GMT
See also the cgi_buffer library, which automatically handles ETag generation and validation, Content-Length generation and gzip content-coding for Perl and Python CGI scripts with a one-line include. The Python version can also be used to wrap arbitrary CGI scripts.
PHP is a server-side scripting language that, when built into the server, can be used to embed scripts inside a page's HTML, much like SSI, but with a far larger number of options. PHP can be used as a CGI script on any Web server (Unix or Windows), or as an Apache module.
By default, representations processed by PHP are not assigned validators, and are therefore uncacheable. However, developers can set HTTP headers by using the Header() function.
For example, this will create a Cache-Control header, as well as an Expires header three days in the future:
<?php
Header("Cache-Control: must-revalidate");
$offset = 60 * 60 * 24 * 3;
$ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
Header($ExpStr);
?>
Remember that the Header() function MUST come before any other output.
As you can see, you'll have to create the HTTP date for an Expires header by hand; PHP doesn't provide a function to do it for you (although recent versions have made it easier; see PHP's date documentation). Of course, it's easy to set a Cache-Control: max-age header, which is just as good for most situations.
For more information, see the manual entry for header.
Cold Fusion makes setting arbitrary HTTP headers relatively easy, with the CFHEADER tag. Unfortunately, their example for setting an
Expires header, as below, is a bit misleading.
<cfheader NAME="Expires" VALUE="#Now()#">
It doesn't work like you might think, because the time (in this case, when the request is made) doesn't get converted to an HTTP-valid date; instead, it just gets printed as a representation of Cold Fusion's Date/Time object. Most clients will either ignore such a value, or convert it to a default date, like January 1, 1970.
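The failure mode is easy to demonstrate outside Cold Fusion. Using Python's HTTP date parser purely for illustration, a valid HTTP date parses, while a raw object rendering in the style of Cold Fusion's Date/Time default does not, leaving the cache nothing usable:

```python
from email.utils import parsedate

# A proper RFC 1123 HTTP date parses cleanly...
assert parsedate("Thu, 29 Oct 1998 17:04:19 GMT") is not None

# ...but an object-style timestamp rendering does not, so a cache or
# client treats the Expires header as invalid (often as already expired).
assert parsedate("{ts '1998-10-29 17:04:19'}") is None
```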
However, Cold Fusion does provide a date formatting function that will do the job: GetHttpTimeString. In combination with DateAdd, it's easy to set Expires dates; here, we set a header to declare that representations of the page expire in one month:
<cfheader NAME="Expires" VALUE="#GetHttpTimeString(DateAdd('m', 1, Now()))#">
You can also use the
CFHEADER tag to set
Cache-Control: max-age and other headers.
Remember that Web server headers are passed through in some deployments of Cold Fusion (such as CGI); check yours to determine whether you can use this to your advantage, by setting headers on the server instead of in Cold Fusion.
When setting HTTP headers from ASPs, make sure you either place the Response method calls before any HTML generation, or use
Response.Buffer to buffer the output. Also, note that some versions of IIS set a
Cache-Control: private header on ASPs by default; it must be declared public to be cacheable by shared caches.
Active Server Pages, built into IIS and also available for other Web servers, also allows you to set HTTP headers. For instance, to set an expiry time, you can use the properties of the Response object:
<% Response.Expires=1440 %>
specifying the number of minutes from the request after which the representation expires. Likewise, an absolute expiry time can be set like this (make sure you format the HTTP date correctly):
<% Response.ExpiresAbsolute=#May 31,1996 13:30:15 GMT# %>
Cache-Control headers can be added like this:
<% Response.CacheControl="public" %>
In ASP.NET, Response.Expires is deprecated; the proper way to set cache-related headers is with Response.Cache:
Response.Cache.SetExpires(DateTime.Now.AddMinutes(60));
Response.Cache.SetCacheability(HttpCacheability.Public);
See the MSDN documentation for more information.
The HTTP 1.1 spec has many extensions for making pages cacheable, and is the authoritative guide to implementing the protocol. See sections 13, 14.9, 14.21, and 14.25.
cgi_buffer - a one-line include for Perl CGI, Python CGI and PHP scripts that automatically handles ETag generation and validation, Content-Length generation and gzip Content-Encoding. The Python version can also be used as a wrapper around arbitrary CGI scripts.
This document is Copyright © 1998-2006 Mark Nottingham firstname.lastname@example.org. This work is licensed under a Creative Commons License. If you do mirror this document, please send e-mail to the address above, so that you can be informed of updates. All trademarks within are property of their respective holders.
Although the author believes the contents to be accurate at the time of publication, no liability is assumed for them, their application or any consequences thereof. If any misrepresentations, errors or other need for clarification is found, please contact the author immediately.
The latest revision of this document can always be obtained from mnot.net
Abstract

This document defines the Edge Architecture, which extends the Web infrastructure through the use of HTTP surrogates - intermediaries that act on behalf of an origin server.

Status of this document

This document is part of a submission to the World Wide Web Consortium (see Submission Request, W3C Staff Comment) that outlines an approach to scaling the Web infrastructure. Comments to the authors are welcome, but you are also encouraged to share your views on the W3C publicly archived www-talk mailing list email@example.com. For a full list of all acknowledged Submissions, please see Acknowledged Submissions to W3C.

This document is a NOTE made available by the W3C for discussion only. Publication of this Note by W3C indicates no endorsement by W3C or the W3C Team, or any W3C Members. No W3C resources were or are allocated to the issues addressed by the NOTE. W3C has had no editorial control over the preparation of this NOTE. A list of current W3C technical documents can be found at the Technical Reports page.

1. Introduction

One approach to scaling the Web is the use of surrogates - intermediaries that act on behalf of, and with the authority of, an origin server (also known as "reverse proxies"). This document describes a framework for the distribution of HTTP content by surrogates, by specifying a means of controlling surrogates with HTTP headers, along with caching and response processing models for them.

Surrogates may be deployed close to the origin server, or throughout the network - a configuration often referred to as a "Content Delivery Network" (CDN). Because they act on behalf of the origin server (and therefore the content's owner), surrogates offer content owners greater control over their behavior than proxies. As a result, they offer greater potential for improving performance, offloading processing from the origin server, and adding unique functionality to the Web.

This document uses the Extended BNF syntax and rules from HTTP/1.1.

2. Controlling Surrogates with HTTP Headers

Unlike standard HTTP intermediaries, surrogates offer the ability for finer control by the origin server and content owner, because of their implied relationship. To enable this, we define HTTP headers to advertise the capabilities of a particular surrogate device, and control how it behaves.

2.1 Surrogate-Capability Header

The Surrogate-Capability request header allows surrogates to advertise their capabilities with capability tokens. Capability tokens indicate sets of operations (e.g., caching, processing) that a surrogate is willing to perform. They follow the form of product tokens in HTTP; capability tokens are case-sensitive.

As requests pass through surrogates, the Surrogate-Capability header is appended to. The name in each capability set identifies a device token, which uniquely identifies the surrogate that appended it. Device tokens must be unique within a request's Surrogate-Capability header. The value contains a space-separated list of capability tokens. For example,

Surrogate-Capability: abc="ESI/1.0", def="Surrogate/1.0 ESI/1.0"

Here, two surrogates are present in the request chain; the one identified as 'abc' is capable of applying "ESI/1.0", while 'def' is capable of handling both "Surrogate/1.0" and "ESI/1.0".

Surrogates must only append their information to any existing Surrogate-Capability headers, so that the header may be read from left to right (or, if the headers are separate, from top to bottom) to construct a list of surrogates that the request passed through (and thus a list that may be read from right to left to discover surrogates that the response will pass through).

Surrogates may modify the Surrogate-Control header to instruct downstream surrogates to process capabilities originally targeted at them. If no downstream surrogates have identified themselves, the header should be stripped from responses.
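The appending behaviour described above means the header can be read back into an ordered device-to-capabilities mapping. This Python fragment is an illustration of that parsing (the function and its simplifications are not from the Note):

```python
def parse_capabilities(header):
    """Parse a capability header value like
    'abc="ESI/1.0", def="Surrogate/1.0 ESI/1.0"'
    into an ordered dict of device token -> list of capability tokens."""
    devices = {}
    for entry in header.split(","):
        entry = entry.strip()
        if not entry:
            continue
        device, _, caps = entry.partition("=")
        # Capability tokens are quoted and space-separated.
        devices[device.strip()] = caps.strip().strip('"').split()
    return devices

caps = parse_capabilities('abc="ESI/1.0", def="Surrogate/1.0 ESI/1.0"')
```

Because surrogates only ever append, iterating the mapping in order recovers the request path through the surrogates.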
2.2.1 no-store

The no-store directive specifies that the response entity should not be stored in cache; it is only to be used for the original request, and may not be validated on the origin server.

2.2.2 no-store-remote

The no-store-remote directive has similar semantics to the no-store directive, except that it should only be honored by those surrogates that consider themselves "remote". Generally, this means those that are more than one or two hops from the origin server, such as surrogates in a CDN.

2.2.3 max-age

The max-age directive specifies how long the response entity can be considered fresh, in seconds. After this time, implementations must consider the cached entity stale. For example,

max-age=30

Optionally, a '+' and a freshness extension can be appended, specifying an additional period of time (in seconds) for which the stale entity may be served before it must be revalidated or refetched, as appropriate. For example,

max-age=30+60

If no freshness extension is specified, it should be considered to be '0' (i.e., the object should be revalidated or refetched immediately).

2.2.4 content

The content directive identifies what processing surrogates should perform on the response before forwarding it. The value of the content directive is a left-to-right ordered, space-separated list of capabilities for processing by surrogates. For example,

content="ESI/1.0 ESI-Inline/1.0"

This directive specifies that first the operations represented by the "ESI/1.0" capability token, and then those represented by the "ESI-Inline/1.0" capability token, should be applied to the response entity. See also "Response Processing Model".

Once processing takes place, the capability token that invoked it (as well as the 'content' directive, if appropriate) is consumed; that is, it is not passed forward to surrogates.

2.3 Surrogate-Control Targeting

Because surrogates can be deployed heterogeneously in a hierarchy, it is necessary to enable the targeting of directives at individual devices.
Surrogate-Control directives may have a parameter that identifies the surrogate that they are targeted at, as identified by the device token in the request's Surrogate-Capability header. Directives without targeting parameters are applied to all surrogates, unless a targeted directive overrides them. For example,

Surrogate-Control: max-age=60;abc, max-age=300

Here, the device that identified itself as 'abc' in the Surrogate-Capability request header will apply a max-age of 60; all other surrogates will apply a max-age of 300.

Surrogate-Control: content="ESI/1.0";abc, content="ESI-Inline/1.0";def

This header specifies that the device that identified itself as 'abc' should process the response entity for ESI, while the surrogate that identified itself as 'def' should process it for ESI-Inline.

Implementations are not required to support targeting.

3. Caching Model

Caching in surrogates operates in a manner similar to HTTP; the same freshness and validation mechanisms form its basis. However, there are additional mechanisms for controlling cacheability in surrogates that override such mechanisms in HTTP.

The Surrogate-Control response header contains several directives that influence entity cacheability; specifically, "no-store", "no-store-remote", and "max-age" (see "Surrogate-Control Header" for more information). Collectively, these directives and their behaviors are described by the capability token

Surrogate/1.0

This token should be included in all requests sent by compliant surrogates (see "Surrogate-Capability Header").

When any of these directives are present, they override any HTTP cacheability information present in the response. If more than one is targeted at a surrogate, the most specific applies. For example,

Surrogate-Control: max-age=60, no-store;abc

The surrogate that identified itself as 'abc' (see "Controlling Surrogates with HTTP Headers") would apply no-store; others would apply max-age=60.
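One consistent reading of these rules - a directive naming a device applies only to that device, and a targeted cacheability directive displaces untargeted ones - can be sketched as a resolver. This Python fragment is illustrative only; parsing and the cacheability set are simplified, and none of it is from the Note:

```python
# Directives that control cacheability; per the Note's examples, a targeted
# one overrides untargeted ones for the device it names.
CACHEABILITY = {"no-store", "no-store-remote", "max-age"}

def directives_for(header, device):
    """Resolve Surrogate-Control directives applicable to one device token."""
    applicable = []
    for part in header.split(","):
        directive, _, target = part.strip().partition(";")
        if target and target.strip() != device:
            continue  # targeted at a different surrogate
        applicable.append((directive.strip(), bool(target)))
    # If a targeted cacheability directive applies, drop the untargeted ones.
    if any(t and d.partition("=")[0] in CACHEABILITY for d, t in applicable):
        applicable = [(d, t) for d, t in applicable
                      if t or d.partition("=")[0] not in CACHEABILITY]
    return [d for d, _ in applicable]
```

For instance, given "max-age=60, no-store;abc", the device 'abc' resolves to no-store while every other surrogate resolves to max-age=60.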
Conversely,

Surrogate-Control: no-store, max-age=60;abc

In this example, 'abc' would apply max-age=60, while other surrogates would apply no-store.

Surrogates should ignore any HTTP Cache-Control request header directives.

4. Response Processing Model

Surrogates may also invoke processors on response entities as they pass through. Examples of processing include image transcoding, the application of XSLT stylesheets, or the interpretation of an in-markup language. By default, processing takes place after caching; that is, cached entities have any applicable processing applied before being served. Processors and extension directives may modify this behavior.

Processing is invoked with the 'content' Surrogate-Control directive. The actual cooperative processing model depends on the nature of the processing, the capabilities of the surrogates, and their permissions to delegate processing to other surrogates. This document does not define any specific processing model.

Caching
- HTTP 1.1 Specification
- Network Appliance Caching Tutorial for Web Authors and Webmasters
- Brian D. Davison's Web Caching and Content Delivery Resources
- Caching Tutorial for Web Authors and Webmasters
- HTTP Header Analysis
- Cacheability Query