Considerations for caching resources in localStorage

Many years ago I was bitten by a radioactive tester, and since then I have been cursed with the inability to ignore edge cases. As superpowers go, it’s a bit rubbish.

Steve Souders recently wrote about a technique he had observed on Google and Bing’s mobile search sites. It involves using the HTML5 localStorage API to cache JavaScript and CSS resources (and also images, as data URIs) instead of the standard browser cache or offline storage API (AppCache). Nicholas Zakas covered it in his presentation “Mobile Web Speed Bumps” at Web Directions Unplugged, and there was an excited buzz when Steve discussed it again in his presentation at Mobilism last week.

The technique

When a new visitor comes to your web site (no cookies), your server sends them the resources inline with the HTML:

<script id="js.resource1">
var tester = {'type':'Radioactive'};
...
</script>

<style id="css.resource2">
.tester {color:#f00;}
...
</style>

This can give you a pretty large file, but that’s okay, because this only happens on the first visit. (Well…maybe.) You also need to transmit some kind of storage/recovery (dehydration/rehydration) library; I think this probably needs to be sent as inline script with every visit, so that it is guaranteed to always be available immediately.

When the client receives the HTML, the storage code examines candidate inline resources (perhaps flagged with a data-* attribute, e.g. <script data-cacheable="1">), and inserts them into localStorage, using keys derived from the resources themselves, such as their IDs. The code sets a cookie to match the resources that have been cached. In pseudocode:

foreach resource {
    localStorage.setItem(resource.id, resource.contents);
    cachedResources.push(resource.id);
}
cookie['cachedResources'] = cachedResources.join('|');
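A runnable sketch of that storage step, assuming cacheable blocks are flagged with a data-cacheable attribute as suggested above. The storage object is passed in as a parameter (and `stashResources` is my name, not anything from the Google/Bing code) so the DOM-free part can be exercised on its own:

```javascript
// Stash each inline resource and return the cookie string recording what
// was cached. `resources` is an array of {id, contents} objects; `storage`
// is localStorage, passed in so this part has no hard DOM dependency.
function stashResources(resources, storage) {
    var cachedResources = [];
    for (var i = 0; i < resources.length; i++) {
        storage.setItem(resources[i].id, resources[i].contents);
        cachedResources.push(resources[i].id);
    }
    return 'cachedResources=' + cachedResources.join('|');
}

// In the page, gather the flagged blocks and set the cookie:
// var blocks = document.querySelectorAll('[data-cacheable]'), list = [];
// for (var i = 0; i < blocks.length; i++) {
//     list.push({ id: blocks[i].id, contents: blocks[i].textContent });
// }
// document.cookie = stashResources(list, localStorage);
```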

On subsequent visits to the server, the client passes it the cachedResources cookie, which contains a list of resources that are already available locally. When the server interprets the cookie, it can send down a completely different inline script block:

<script>
rehydrate(['js.resource1', 'css.resource2']);
</script>

The rehydration script then dives into the localStorage, retrieves the resources by key, and creates new <script> and <style> blocks to inject the resources into the page (with the necessary attention to script execution order). Job done; no external HTTP calls needed for JS and CSS.
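A minimal sketch of that injection step, ignoring for now the case where a key is missing from localStorage. The “js.”/“css.” id prefixes are taken from the examples above; everything else here is illustrative:

```javascript
// Map a resource id to the element that should carry it; pure, so it is
// easy to test in isolation.
function tagNameFor(id) {
    return id.indexOf('js.') === 0 ? 'script' : 'style';
}

function rehydrate(ids) {
    for (var i = 0; i < ids.length; i++) {
        var el = document.createElement(tagNameFor(ids[i]));
        el.id = ids[i];
        // An inline script injected this way executes as soon as it is
        // appended, so iterating in order preserves execution order.
        el.appendChild(document.createTextNode(localStorage.getItem(ids[i])));
        document.head.appendChild(el);
    }
}
```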

Except…for the edge cases.

Edge cases, part 1

The biggest ones to consider are:

  • What if the browser doesn’t support the localStorage API?
  • What if the browser’s cookies and localStorage have got out of sync?

No localStorage

If the browser doesn’t support the localStorage API, the first request from the visitor will be exactly the same: no cookies, so the server transmits the resources inline. The storage code detects that localStorage is not available, skips the caching steps, and records the browser’s absence of capabilities in a cookie, e.g. cookie['localStorageAvailable'] = false.
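The detection itself deserves a little care: some browsers expose the localStorage object but throw as soon as you write to it (Safari in private browsing mode, for example), so a common pattern is to attempt a test write rather than just checking for the property:

```javascript
// Feature-detect localStorage with a test write, since merely reading
// window.localStorage can succeed in browsers that will refuse writes.
function localStorageAvailable() {
    try {
        var key = '__storage_test__';
        window.localStorage.setItem(key, key);
        window.localStorage.removeItem(key);
        return true;
    } catch (e) {
        return false;
    }
}

// Record the result for the server:
// document.cookie = 'localStorageAvailable=' + localStorageAvailable();
```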

In the second visit, the server gets the cookie, and now knows that the browser can’t handle localStorage. Sending the resources inline regardless would be a waste, because the browser could probably cache them if they were referenced as external resources. So this is exactly what the server can do: head down a third code path, where it renders standard script or link tags for the resources:

<script id="js.resource1" 
    src="/js/resource1.js" ></script>
<link rel="stylesheet" id="css.resource2" 
    href="/css/resource2.css" />

When the browser receives these script tags in its second visit, it will have to make follow-up HTTP requests to retrieve the external resources. But on every subsequent visit, the browser should be able to pull them from its standard file cache.

This is somewhat wasteful, because the data is transmitted twice. That makes this an excellent use case for server-side browser sniffing to identify known old browsers that are incapable of using localStorage, and give them the external resource tags right from the very start.

Cookies and localStorage are out of sync

Browsers have no native mechanism for keeping cookies and localStorage in sync, and no obligation to do so. Therefore, the two will regularly be out of sync.

If the user deletes their cookies, but leaves their localStorage intact, the server will think that the browser needs to receive the resources inline again. The storage code will stash the inline resources in localStorage, overwriting the old values. This uses more bandwidth than strictly necessary, but otherwise there is no harm in it.

If the user deletes their localStorage, but leaves their cookies intact, that’s a bit more problematic. The server won’t transmit the resource inline, and the rehydration script will try to pull it out of localStorage, and find that the cupboard is bare. Oops.

So the rehydration script needs a secondary code path to request a missing resource from the server as an external resource. Then, once the external resource has loaded, it can stash it in localStorage and update the cookie, ready for the next round.

function rehydrate(ids) {
    for (var i = 0; i < ids.length; i++) {
        var serializedResource = localStorage.getItem(ids[i]);
        if (serializedResource) {
            // inject into the page, as described above
            injectResource(ids[i], serializedResource);
        } else {
            // Cupboard is bare
            loadAndStashRemoteScript(getPathFromId(ids[i]));
        }
    }
}
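One hypothetical shape for those two helpers (their names appear above, but their signatures and bodies here are my assumptions). getPathFromId() assumes the id convention from the earlier examples, where 'js.resource1' maps to '/js/resource1.js'; loadAndStashRemoteScript() uses XMLHttpRequest rather than a plain script tag so that the response text is available for stashing:

```javascript
// Hypothetical helper: derive a URL from a cache id, assuming ids follow
// the 'type.name' convention used above.
function getPathFromId(id) {
    var dot = id.indexOf('.');
    var type = id.slice(0, dot);      // 'js' or 'css'
    var name = id.slice(dot + 1);     // 'resource1'
    return '/' + type + '/' + name + '.' + type;
}

// Hypothetical helper (the two-argument signature is an assumption):
// fetch the resource, stash it, and leave the cookie update to the caller.
function loadAndStashRemoteScript(path, id) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', path, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            try {
                localStorage.setItem(id, xhr.responseText);
            } catch (e) {
                // cache full; see part 2 below
            }
            // then inject into the page exactly as the cached copy would be
        }
    };
    xhr.send();
}
```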

Edge cases, part 2

Inserting stuff into a cache: easy. Fun, too. Figuring out when to remove items from a cache: hard.

I am sure that Bing and Google have put a lot of thought into this already, but I haven’t had time to dissect their code yet. Here are some of the things that just spring immediately to mind for me, though. (Edge case superpowers, remember?) What happens when…

  • the localStorage cache is full?
  • you have updated a resource server-side, and want to cache this new revision?
  • you stop using a particular server-side resource altogether?
  • you’re running multiple applications on the same domain, all of which use localStorage?

If localStorage is full, you get a QUOTA_EXCEEDED_ERR when you try to insert a value. The localStorage API has no provision for increasing storage, either automatically, or with a user’s authorization. (You could tell them that their cache is full, so they can empty it themselves, but nobody will have a clue what you’re talking about.)

So what to do? You could call localStorage.clear(), which will empty the cache (for that domain), but that’s the nuclear option. You may be deleting items that you yourself inserted just a moment ago.

You could iterate over the key/value pairs in the cache, and evaluate each one as a candidate for deletion. But how do you decide which items stay, and which ones go?

One possibility would be to make a timestamp part of the key whenever you insert an item, e.g. 'js.resource1.2011-05-20T23:43:27+01:00'. Then you can sort all the keys by time, eject the oldest, and try inserting your original value into the cache again to see if it fits. If it doesn’t, keep ejecting keys until it does.
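That eject-until-it-fits loop might look something like this. It assumes every key ends in a fixed-width ISO 8601 timestamp with a consistent UTC offset, so that lexicographic comparison of the last 25 characters is chronological; the storage object is a parameter so the logic can be tested against a stub:

```javascript
// Keep ejecting the oldest entry until the new value fits, or the cache is
// empty and it still will not fit (e.g. the value itself is too big).
function setWithEviction(storage, key, value) {
    for (;;) {
        try {
            storage.setItem(key, value);
            return true;
        } catch (e) {
            var oldest = null;
            for (var i = 0; i < storage.length; i++) {
                var k = storage.key(i);
                // compare the trailing '2011-05-20T23:43:27+01:00' part
                if (oldest === null || k.slice(-25) < oldest.slice(-25)) {
                    oldest = k;
                }
            }
            if (oldest === null) {
                return false; // nothing left to eject
            }
            storage.removeItem(oldest);
        }
    }
}
```

Each ejection would also need to be mirrored in the cachedResources cookie, as the next paragraph describes.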

Of course, you should remember to update the matching cookie, so that the server knows that any ejected resources are no longer available locally. And you still run the risk of removing values from the cache that you just inserted as part of the current server round-trip. Given that most browsers give you 2-5 MB of localStorage to play with, you’d have to be inserting quite a lot of data for that to happen, but don’t rule it out. Unless you implement your own compression algorithm, the data you insert into the localStorage cache will not be gzipped like it is on the wire.

To ensure that you don’t delete items that were cached in localStorage by other applications running on the same domain, you might also want to consider namespacing your keys (e.g. 'monkeyfez.js.resource1.2011-05-20T23:43:27+01:00'), so you don’t tamper with values that aren’t yours. On the other hand, what if other applications have used up all of the available localStorage, leaving nothing for you to play with? Coordinate with your colleagues to ensure that you use a common algorithm for de-caching, and that you come to some agreement about how much cache each application should be allowed to use.

Timestamping the key allows you to track when the value was inserted into the cache, but it doesn’t say anything about the version of the resource on the server. Revving your resource file names is a good idea for busting caches, and it would probably help here, too.

Say that your code is in a file called "main.js". During your build process, you append a revision ID to it, so it becomes main.4e56afa.js. When you serve this to the client, generate the cache id based on the application name, resource type, file name and the revision number:

<script id="monkeyfez.js.main.4e56afa">
var tester = {'type':'Radioactive'};
...
</script>

When you come to insert the file into localStorage, you add the timestamp to the key, so it becomes monkeyfez.js.main.4e56afa.2011-05-20T23:43:27+01:00.

Now, when you come to retrieve the resource, you won’t know in advance when it was inserted into the cache, so you’ll have to iterate over all keys to find the ones that match the pattern monkeyfez.js.main.4e56afa.*. I wouldn’t rule out the possibility that you might have multiple copies of the same file (even with the same revision number) in the cache under different timestamps, so sort the keys based on their dates, and take the most recent one.
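That lookup might be sketched like this, again with the storage object as a parameter; the prefix and key layout follow the monkeyfez examples above, and the function name is my own:

```javascript
// Return the most recently stamped key matching a prefix such as
// 'monkeyfez.js.main.4e56afa.', or null if nothing matches.
function latestMatch(storage, prefix) {
    var best = null;
    for (var i = 0; i < storage.length; i++) {
        var key = storage.key(i);
        // matching keys share the prefix, so comparing whole keys compares
        // the trailing ISO timestamps lexicographically
        if (key.indexOf(prefix) === 0 && (best === null || key > best)) {
            best = key;
        }
    }
    return best;
}
```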

If you update the resource on the server, the file will get a different revision number, e.g. main.b99a341.js. The first time that a visitor comes back after this change, they will be unaware that this new version exists. According to their cookies, they have monkeyfez.js.main.4e56afa. The server recognizes that this is useless, so it sends the new version as an inline code block. When the new version hits the client, the storage code inserts the new version into the cache.

But the old version (monkeyfez.js.main.4e56afa) will still be there in localStorage, and a reference to that revision will still be there in the user’s cookie. Is it safe to delete all previous versions of monkeyfez.js.main when you receive a differently revved version of this resource? Probably. In the odd case where you have multiple branches of your application living on the same domain, the worst that will happen is that when you use the other branches, the application will transmit a different revision, which will then cause your revision to be ejected from the cache again. Annoying, but not catastrophic.
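Ejecting stale revisions can be sketched as a sweep over the keys, collecting before deleting (removing items while iterating shifts localStorage’s key indices). The names and the two-phase shape are illustrative:

```javascript
// Remove every cached revision of a resource other than the current one.
// `base` is e.g. 'monkeyfez.js.main.' and `currentRev` e.g. '4e56afa'.
// Returns the ejected keys so the caller can update the cookie to match.
function ejectOldRevisions(storage, base, currentRev) {
    var doomed = [];
    for (var i = 0; i < storage.length; i++) {
        var key = storage.key(i);
        if (key.indexOf(base) === 0 && key.indexOf(base + currentRev) !== 0) {
            doomed.push(key);
        }
    }
    // second pass: delete, now that iteration over the keys is finished
    for (var j = 0; j < doomed.length; j++) {
        storage.removeItem(doomed[j]);
    }
    return doomed;
}
```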

The same applies if you stop using the file main.js altogether, and decide to move to core.js instead. Assuming that you implement this kind of FIFO strategy, and synchronize the cookie whenever you update localStorage, then eventually the unused monkeyfez.js.main.4e56afa.2011-05-20T23:43:27+01:00 cache item will be kicked out of the cache when it is the oldest item available.

Summary

There’s a lot more to this technique than first meets the eye. Implementing it involves changes on both the client and the server, and getting all the caching/cookie synchronization right is a non-trivial task. But is it worth it?

Reducing HTTP requests is always a good thing, especially on mobile, where the latency really kills you. But read the comments on Steve Souders’s article to see some discussion about slow I/O for localStorage. If you have to run through a manual try/fail cycle of cache expiry as soon as it fills up, I imagine that could slow things down considerably.

Is it faster than using the browser’s normal cache? I don’t know; I haven’t got a working implementation with which to test it yet. And even though you’re storing the resources locally, your browser still has to parse them, which will take just as long as it ever did. But I find the fact that Google and Bing are both using it for their mobile sites highly suggestive, and it’s certainly something I want to investigate more.

One reply on “Considerations for caching resources in localStorage”

Browser-sniffing is a bad idea, and server-side browser sniffing is no better an idea than client-side browser sniffing. The problem is that the user agent string is provided by the user agent itself, and can be faked. Feature detection is the only reliable way to go, even if there are plenty of examples out there of people writing code that relies on the user agent string to provide different code paths.
