diff --git a/content/userguide/_index.md b/content/userguide/_index.md
index acb43c5..16f2f00 100644
--- a/content/userguide/_index.md
+++ b/content/userguide/_index.md
@@ -3,3 +3,252 @@ title = 'User Guide'
 date = 2024-09-02T11:18:33-07:00
 weight = 3
 +++
+
+This basic tutorial shows via pseudocode how you can get started with integrating memcached into your application. Memcached is not an automatic application accelerator: it must be integrated into an application via code.
+
+[The Caching Story Tutorial](/tutorialcachingstory/) is a good place to start
+if you are unfamiliar with memcached.
+
+## Basic Data Caching
+
+The "hello world" of memcached is to fetch "something" from somewhere, maybe process it a little, then put it into the cache, to expire in N seconds.
+
+### Initializing a Memcached Client
+
+Read the documentation carefully for your client.
+
+```perl
+my $memclient = Cache::Memcached->new({ servers => [ '10.0.0.10:11211', '10.0.0.11:11211' ]});
+```
+
+```
+ # pseudocode
+memcli = new Memcache
+memcli:add_server('10.0.0.10:11211')
+```
+
+Some rare clients will allow you to add the same servers over and over again, without harm. Most will require that you carefully construct your memcached client object *once* at the start of your request, and perhaps persist it between requests. Initializing multiple times may cause memory leaks in your application or stack up connections against memcached until you cause a failure.
+
+### Wrapping an SQL Query
+
+Memcached is primarily used for reducing load on SQL databases.
+
+```
+ # Don't load little bobby tables
+sql = "SELECT * FROM user WHERE user_id = ?"
+key = 'SQL:' . user_id . ':' . md5sum(sql)
+ # We check if the value is 'defined', since '0' or 'FALSE'
+ # can be legitimate values!
+if (defined result = memcli:get(key)) {
+  return result
+} else {
+  handler = run_sql(sql, user_id)
+  # Often what you get back when executing SQL is a special handler
+  # object. You can't directly cache this. Stick to strings, arrays,
+  # and hashes/dictionaries/tables.
+  rows_array = handler:turn_into_an_array
+  # Cache it for five minutes
+  memcli:set(key, rows_array, 5 * 60)
+  return rows_array
+}
+```
+
+Wow, zippy! When you cache these user rows, readers will see that same data for up to five minutes. Unless you actively invalidate the cache when a user makes a change, it can take up to five minutes for them to see a difference.
+
+Often this is enough to help. If you have some complex queries, such as a count of users or the number of posts in a thread, it might be acceptable to limit how often those queries can be issued by using a flat cache like this.
+
+### Wrapping Several Queries
+
+The more processing that you can turn into a single memcached request, the better. Often you can replace several SQL queries with a single memcached lookup.
+
+```
+sql1 = "SELECT * FROM user WHERE user_id = ?"
+sql2 = "SELECT * FROM user_preferences WHERE user_id = ?"
+key = 'SQL:' . user_id . ':' . md5sum(sql1 . sql2)
+if (defined result = memcli:get(key)) {
+  return result
+} else {
+  # Remember to add error handling, kids ;)
+  handler = run_sql(sql1, user_id)
+  t[info] = handler:turn_into_an_array
+  handler = run_sql(sql2, user_id)
+  t[pref] = handler:turn_into_an_array
+  # Client will magically take this hash/table/dict/etc
+  # and serialize it for us.
+  memcli:set(key, t, 5 * 60)
+  return t
+}
+```
+
+When you load a user, you fetch the user itself *and* their site preferences (whether they want to be seen by other users, what theme to show, etc). What was once two queries and possibly many rows of data is now a single cache item, cached for five minutes.
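+
+For the Perl client initialized above, the same pattern might look like the sketch below. It assumes `$memc` is the `Cache::Memcached` object and `$dbh` is a DBI handle; the row-fetching details are placeholders for whatever your database layer provides, and the client serializes the hashref for you.
+
+```perl
+use Digest::MD5 qw(md5_hex);
+
+# A sketch of the pattern above: one cache key covers the user row
+# and the preferences rows together.
+sub load_user_and_prefs {
+    my ($memc, $dbh, $user_id) = @_;
+    my $sql1 = "SELECT * FROM user WHERE user_id = ?";
+    my $sql2 = "SELECT * FROM user_preferences WHERE user_id = ?";
+    my $key  = 'SQL:' . $user_id . ':' . md5_hex($sql1 . $sql2);
+
+    # Check definedness: 0 or '' could be legitimate cached values.
+    my $cached = $memc->get($key);
+    return $cached if defined $cached;
+
+    my $t = {
+        info => $dbh->selectall_arrayref($sql1, { Slice => {} }, $user_id),
+        pref => $dbh->selectall_arrayref($sql2, { Slice => {} }, $user_id),
+    };
+    $memc->set($key, $t, 5 * 60);    # cache for five minutes
+    return $t;
+}
+```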
+
+### Wrapping Objects
+
+Some languages allow you to configure objects to be serialized. Exactly how to do this in your language is beyond the scope of this document, however some tips remain.
+
+ * Consider if you actually need to serialize a whole object. Odds are your constructor could pull from cache.
+ * Serialize it as efficiently and simply as possible. Spending a lot of time in object setup/teardown can drag down the CPU.
+
+Further, if you're deserializing a huge object for a request and then using only one small part of it, you might want to cache those parts separately.
+
+### Fragment Caching
+
+Once upon a time ESI (Edge Side Includes) was all the rage. Sadly, it requires special proxies/caching/etc. You can do the same thing within your app for dynamic, authenticated pages just fine.
+
+Memcached isn't just all about preventing database queries. You can cache
+rendered HTML as well.
+
+```
+ # Let's generate a bio page!
+user = fetch_user_info(user_id)
+bio_template = fetch_biotheme_for(user_id)
+page_template = fetch_page_theme
+pagedata = fetch_page_data
+
+bio_fragment = apply_template(bio_template, user)
+page = apply_template(page_template, bio_fragment, pagedata)
+print "Content-Type: text/html", page
+```
+
+In this oversimplified example, we're loading user data (which could be coming from a cache!) and loading the raw template for the "bio" part of a webpage (which could also be coming from a cache!). Then we load the main template, which includes the header and footer.
+
+Finally, it processes all that together into the main page and returns it. Applying templates can be costly. You can cache the assembled bio fragment, in case you're rendering a custom header for the viewing user. Or, if that doesn't matter, cache the whole 'page' output.
+
+```
+key = 'FRAG-BIO:' . user_id
+if (defined result = memcli:get(key)) {
+  return result
+} else {
+  user = fetch_user_info(user_id)
+  bio_template = fetch_biotheme_for(user_id)
+  bio_fragment = apply_template(bio_template, user)
+  memcli:set(key, bio_fragment, 5 * 15)
+  return bio_fragment
+}
+```
+
+See? Why do more work than you have to? The more work you can roll up into a single cache fetch, the faster your pages render, and the happier your users will be.
+
+## Extended Functions
+
+Beyond 'set', there are add, incr, decr, etc. They are simple commands but require a little finesse.
+
+### Proper Use of `add`
+
+`add` allows you to set a value only if it doesn't already exist. You use this when initializing counters, setting locks, or otherwise setting data you don't want overwritten as easily. There can be some odd little gotchas and race conditions in the handling of `add`, however.
+
+```
+ # There can be only one
+key = "the_highlander"
+real_highlander = memcli:get(key)
+if (! defined real_highlander) {
+  # Hmm, nobody there.
+  var = fetch_highlander
+  if (! memcli:add(key, var, 3600)) {
+    # Uh oh! Somebody beat us!
+    # We can either use the variable we fetched,
+    # or issue `get` again in case it might be newer.
+    real_highlander = memcli:get(key)
+  } else {
+    # We win!
+    gloat
+  }
+}
+return real_highlander
+```
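+
+Translated into Perl, the race handling might look like this sketch; `$memc` is the client from earlier, and `fetch_highlander()` is a hypothetical expensive call that builds the value:
+
+```perl
+my $key = 'the_highlander';
+my $real_highlander = $memc->get($key);
+if (!defined $real_highlander) {
+    my $var = fetch_highlander();
+    if ($memc->add($key, $var, 3600)) {
+        $real_highlander = $var;               # we won the race
+    } else {
+        # Somebody beat us; their copy may be newer than ours.
+        $real_highlander = $memc->get($key);
+    }
+}
+```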
+
+### Proper Use of `incr` or `decr`
+
+The `incr` and `decr` commands can be used to maintain counters, such as how many hits a page has received, or when you rate limit a user. These commands let you increment or decrement a counter by 1 or more; the offset must be a positive (unsigned) integer.
+
+They do not, however, initialize a missing value.
+
+```
+ # Got a hit!
+key = 'hits:' . user_id
+if (! memcli:incr(key, 1)) {
+  # Whoops, the key doesn't exist yet!
+  # There's a chance someone else just noticed this too,
+  # so we use `add` instead of `set`
+  if (! memcli:add(key, 1, 60 * 60 * 24)) {
+    # Failed! Someone else already put it back.
+    # So let's try one more time to incr.
+    memcli:incr(key, 1)
+  } else {
+    return success
+  }
+} else {
+  return success
+}
+```
+
+If you're not careful, you could miss counting that hit :) You can dress this up and retry a few times, or not at all, depending on how important you think it is. Just don't run a `set` when you mean to do an `add` in this case.
+
+## Cache Invalidation
+
+Leveling up in memcached requires that you learn about actively invalidating (or revalidating) your cache.
+
+When a user comes along and edits their user data, you should be attempting to keep the cache in sync somehow, so the user has no idea they're being fed cached data.
+
+### Expiration
+
+A good place to start is to tune your expiration times. Even if you're actively deleting or overwriting cached data, you'll still want to have the cache expire occasionally, in case your app has a bug, a crash, a network blip, or some other issue where the cache could become out of sync.
+
+There isn't a "rule of thumb" when picking an expiration time. Sit back and think about your users, and what your data is. How long can you go without making your users angry? Be honest with yourself, as "THEY _ALWAYS_ NEED FRESH DATA" isn't necessarily true.
+
+Expiration times are specified in unsigned integer seconds. They can be set from `0`, meaning "never expire", to 30 days `(60*60*24*30)`. Any time higher than 30 days is interpreted as a unix timestamp date. If you want to expire an object on January 1st of next year, pass the unix timestamp of that date as the expiration.
+
+For the binary protocol an expiration must be unsigned. If a negative expiration
+is given to the ASCII protocol, it is treated as "expire immediately".
+
+### `delete`
+
+The simplest method of invalidation is to simply delete the object, and have your website re-cache the data the next time it's fetched.
+
+So user Bob updates his bio. You want Bob to see his latest info when he so vainly reloads the page. So you:
+
+```
+memcli:delete('FRAG-BIO:' . user_id)
+```
+
+... and the next time he loads the page, it will fetch from the database and repopulate the cache.
+
+### `set`
+
+The most efficient idea is to actively update your cache as your data changes. When Bob updates his bio, take Bob's bio object and shove it into the cache via 'set'. You can pass the new data into the same routine that normally checks for data, or however you want to structure it.
+
+Play your cards right, and your database only ever handles writes, and data it hasn't seen in a long time.
+
+### Invalidating by Tag
+
+TODO: link to namespacing document + say how this isn't possible.
+
+## Key Usage
+
+Thinking about your keys can save you a lot of time and memory. Memcached is a hash, but it also remembers the full key internally. The longer your keys are, the more bytes memcached has to hash to look up your value, and the more memory it wastes storing a full copy of your key.
+
+On the other hand, it should be easy to figure out exactly where in your code a key came from. Otherwise many laborious hours of debugging await you.
+
+### Avoid User Input
+
+It's very easy to compromise memcached if you use arbitrary user input for keys. The ASCII protocol uses spaces and newlines as delimiters. Ensure that neither shows up in your keys, and live long and prosper. The binary protocol does not have this issue.
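+
+One simple defense, sketched here in Perl, is to hash any user-supplied portion of a key so whitespace and control characters can never reach the protocol (the `SEARCH` prefix is invented for the example):
+
+```perl
+use Digest::MD5 qw(md5_hex);
+
+# Hash user input so spaces and newlines can't end up on the wire.
+sub user_key {
+    my ($prefix, $user_input) = @_;
+    return $prefix . ':' . md5_hex($user_input);
+}
+
+my $key = user_key('SEARCH', "anything the\nuser typed");
+```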
+
+### Short Keys
+
+64-bit UIDs are a clever way to identify a user, but they suck when printed out: 18446744073709551615 is 20 characters! Using base64 encoding, or even just hexadecimal, you can cut that down by quite a bit.
+
+With the binary protocol, it's possible to store anything, so you can directly pack the ID's raw bytes (eight for a 64-bit UID) into the key. This makes it impossible to read back via the ASCII protocol, so you should have tools available to easily determine what a key is.
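+
+As a sketch, hexadecimal gets a 64-bit UID down to at most 16 characters, and URL-safe base64 of the packed bytes gets it to 11 (`pack 'Q>'` needs a 64-bit perl, and `encode_base64url` needs a reasonably modern MIME::Base64):
+
+```perl
+use MIME::Base64 qw(encode_base64url);
+
+my $uid = 18446744073709551615;    # worst case: 20 decimal digits
+
+my $hex_key = sprintf('u:%x', $uid);                       # 16 hex chars max
+my $b64_key = 'u:' . encode_base64url(pack('Q>', $uid));   # 11 chars
+```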
+
+### Informative Keys
+
+```
+key = 'SQL:' . md5sum("SELECT blah blah blah")
+```
+
+... might be clever, but if you're looking at this key via tcpdump, strace, etc., you won't have any clue where it's coming from.
+
+In this particular example, you could put your SQL queries into an outside file with their md5sums next to them. Or, more simply, append a unique query ID into the key.
+
+```
+key = 'SQL:' . query_id . ':' . md5sum("SELECT blah blah blah")
+```
diff --git a/content/userguide/faq.md b/content/userguide/faq.md
index 9113fbb..249c947 100644
--- a/content/userguide/faq.md
+++ b/content/userguide/faq.md
@@ -1,5 +1,120 @@
 +++
-title = 'Faq'
+title = 'FAQ'
 date = 2024-09-04T14:41:27-07:00
-draft = true
+weight = 2
 +++
+
+## Basics
+
+### How can you list all keys?
+
+You can list all keys using an interface that is deliberately limited.
+Applications _must not_ rely on reading all keys back from a memcached server.
+A server may have millions or billions of unrelated keys. An application that
+relies on looking at all keys to then render a page will eventually fail.
+
+You can use the `lru crawler` to examine all keys in an instance. This
+interface provides useful data for doing an analysis on data that is stored in
+a cache. See [the protocol
+documentation](http://github.com/memcached/memcached/blob/master/doc/protocol.txt)
+for full info.
+
+### Why only RAM?
+
+Everything memcached does is an attempt to guarantee latency and speed. That
+said, it can make sense for some larger values to be fetched from high speed
+flash drives. [A feature called extstore](/features/flashstorage/) allows
+splitting items between RAM and disk storage.
+
+### Why no complex operations?
+
+All operations should run in O(1) time. They must be atomic. This doesn't necessarily mean complex operations can never happen, but it means we have to think very carefully about them first. Many complex operations can be emulated on top of more basic functionality.
+
+### Why is memcached not recommended for sessions? Everyone does it!
+
+If a session disappears, often the user is logged out. If a portion of a cache disappears, either due to a hardware crash or a simple software upgrade, it should not cause your users noticeable pain. [This overly wordy post](http://dormando.livejournal.com/495593.html) explains alternatives. Memcached can often be used to reduce IO requirements to very, very little, which means you may continue to use your existing relational database for the things it's good at.
+
+Like keeping your users from being knocked off your site.
+
+### What about the MySQL query cache?
+
+The MySQL query cache can be a useful start for small sites. Unfortunately it uses global locks on the MySQL database, so enabling it can throttle you down. It also caches queries per table, and has to expire the entire cache related to a table whenever that table changes at all. If your site is fairly static this can work out fine, but when your tables start changing with any frequency it immediately falls over.
+
+Memory is also limited, as the query cache takes its chunk directly from your database server's memory.
+
+### Is memcached atomic?
+
+Aside from any bugs you may come across, all commands are internally atomic. Issuing multiple sets at the same time has no ill effect, aside from the last one in being the one that sticks.
+
+### How do I troubleshoot client timeouts?
+
+See [Timeouts](/troubleshooting/timeouts) for help.
+
+## Setup Questions
+
+### How do I authenticate?
+
+Limited password-based authentication is available in [the basic protocol](http://github.com/memcached/memcached/blob/master/doc/protocol.txt). You can also enable TLS and authenticate by certificate verification.
+
+### How do you handle failover?
+
+You usually don't. Some clients have a "failover" option that will try the next server in the case of a failure.
+
+- TODO: renovate this section.
+
+### How do you handle replication?
+
+Memcached doesn't. Adding replication to the system halves your effective cache size. If you can't handle even a few percent extra cache misses, you have serious problems. Even with replication, things can break. More moving parts. Software to crash.
+
+- TODO: renovate this section
+
+### Can you persist cache between restarts?
+
+Yes, in some situations. See [the documentation on warm restart](/features/restart/).
+
+### Do clients and servers all need to talk to each other?
+
+Nope. The less chatter, the more scalable the system.
+
+## Monitoring
+
+### Why isn't `curr_items` decreasing when items expire?
+
+Expiration in memcached is lazy. In general, an item cannot be known to be expired until something looks at it. This helps the server keep consistent performance.
+
+Since 1.5.0 a background thread analyzes the cache over time and
+asynchronously removes expired items from memory. [See this blog post for more detail](https://memcached.org/blog/modern-lru/).
+
+## Use Cases
+
+### When would you not want to use memcached?
+
+It doesn't always make sense to add memcached to your application.
+
+TODO: link to that whynot page here or just inline new stuff?
+
+### Why can't I use it as a database?
+
+Memcached is an ephemeral data store, meaning that if the server goes down (crash,
+reboot, "cloud burps") then your data is gone. The ephemeral nature of the
+software allows us to take extreme tradeoffs in design which allow us to be
+10x, 100x, or even 1000x faster than a traditional database. Combining caching
+with traditional datastores allows reducing cost and improving user
+experience.
+
+### Can using memcached make my application slower?
+
+Yes, absolutely. If your DB queries are all fast and your website is fast, adding memcached might not make it faster.
+
+Also, this:
+
+```
+my @post_ids = fetch_all_posts($thread_id);
+my @post_entries = ();
+# Anti-pattern: one network round trip per post!
+for my $post_id (@post_ids) {
+  push(@post_entries, $memc->get($post_id));
+}
+# Yay I have all my post entries!
+```
+
+Instead of this anti-pattern, use a pipelined (multi-key) get. Fetching a single item from memcached still requires a network roundtrip and a little processing. The more you can fetch at once the better.
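+
+With the Perl client used elsewhere in this guide, that might look like the sketch below; `Cache::Memcached`'s `get_multi` takes a list of keys and returns a hashref of whatever it found:
+
+```perl
+# One round trip for all posts instead of one per post.
+my @post_ids = fetch_all_posts($thread_id);
+my $entries  = $memc->get_multi(@post_ids);
+
+# get_multi only returns keys it found; anything missing still
+# has to be loaded from the database and re-cached.
+my @missing = grep { !exists $entries->{$_} } @post_ids;
+```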
diff --git a/content/userguide/usecases.md b/content/userguide/usecases.md
index becbf73..9998569 100644
--- a/content/userguide/usecases.md
+++ b/content/userguide/usecases.md
@@ -1,5 +1,208 @@
 +++
-title = 'Usecases'
+title = 'Use Cases'
 date = 2024-09-04T14:41:21-07:00
-draft = true
+weight = 1
 +++
+
+### Namespacing
+
+Memcached does not natively support namespaces or tags. It's difficult to support this natively as you cannot atomically expire the namespaces across all of your servers without adding quite a bit of complication.
+
+However, you can emulate them easily.
+
+#### Simulating Namespaces with Key Prefixes
+
+Using a coordinated key prefix, you can create a virtual namespace that spans your entire memcached cluster. The prefix can be stored in your configuration and changed manually, or stored in an external key.
+
+#### Deleting By Namespace
+
+Given a user and all of their related keys, you want a one-stop switch to invalidate all of their cache entries at the same time.
+
+Using namespacing, you would set up a separate key with a version number inside it. You end up doing an extra round trip to memcached to figure out the namespace.
+
+```
+user_prefix = memcli:get('user_namespace:' . user_id)
+bio_data = memcli:get(user_prefix . user_id . 'bio')
+```
+
+Invalidating the namespace simply requires editing that key. Your application will no longer request the old keys, and they will eventually fall off the end of the LRU and be reclaimed.
+
+Be careful in how you implement the prefix. You'll want to use `add` so you don't blow away an existing namespace. You'll also want to initialize it to something with a low probability of coming up again.
+
+An easy recommendation is a unix timestamp.
+
+```
+ # Namespace management, basic fetch.
+key = 'namespace:' . user_id
+namespace = memcli:get(key)
+if (!namespace) {
+  namespace = time()
+  if (! memcli:add(key, namespace)) {
+    # Lost the race: someone else just initialized it.
+    namespace = memcli:get(key)
+    # Could re-test it and start over on failure, hard fail, etc.
+  }
+}
+ # Send back the namespace.
+return namespace
+```
+
+And on invalidation:
+
+```
+key = 'namespace:' . user_id
+if (! memcli:incr(key, 1)) {
+  # Increment failed! The key must not exist.
+  memcli:add(key, time())
+}
+```
+
+This isn't a perfect algorithm either, but it is a simple one. The key is initialized via a timestamp, and then incremented by one each time the data is to be invalidated. This works well if the invalidations are infrequent: a key that goes missing and is re-initialized with the current timestamp will always be larger than the old, slowly incremented value.
+
+You can reduce the race condition further by using millisecond resolution instead of seconds, but that makes your key prefix longer. For bonus points, base64 encode the number before sticking it in front of the other keys.
+
+### Storing sets or lists
+
+Storing lists of data in memcached can mean either storing a single item with a serialized array, or trying to manipulate a huge "collection" of data by adding and removing items without operating on the whole set. Both are possible.
+
+One thing to keep in mind is memcached's 1 megabyte limit on item size, so storing the whole collection (ids, data) in memcached might not be the best idea.
+
+Steven Grimm explains a better approach on the mailing list: http://lists.danga.com/pipermail/memcached/2007-July/004578.html
+
+Chris Hondl and Paul Stacey detail alternative approaches to the same idea: http://lists.danga.com/pipermail/memcached/2007-July/004581.html
+
+A combination of both would make for very scalable lists. IDs are chunked by range into separate keys, and the data for each item is strewn about in individual keys.
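+
+A rough Perl sketch of that combination (the key names and the `load_post_ids_from_db` helper are invented): keep the id list in one small key, keep each item in its own key, and fetch the items in one round trip:
+
+```perl
+# The id list lives in one small key...
+my $ids = $memc->get("thread:$thread_id:post_ids");
+unless (defined $ids) {
+    $ids = load_post_ids_from_db($thread_id);        # arrayref of ids
+    $memc->set("thread:$thread_id:post_ids", $ids, 5 * 60);
+}
+
+# ...while item bodies live in per-item keys, multi-gotten at once.
+my $found   = $memc->get_multi(map { "post:$_" } @$ids);
+my @missing = grep { !exists $found->{"post:$_"} } @$ids;
+```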
+
+### Managing lists with `append`/`prepend`
+
+Assuming you pick the route of storing a list of ids (numbers) in a single memcached key and fetching the full data of a set's items separately, you can use `append`/`prepend` for atomic updates.
+
+Let's take an AJAX-y feature of managing a user's list of interests. You give the user a box to type into. They type "hammers" and your fancy AJAX script updates their list of interests with "hammers", and anyone viewing their profile can instantly see "hammers" added to their list of interests. Normally you either have to try a CAS update to fetch the old cache item, add the interest-id for hammers, then re-set the value, or simply delete the cache and pull the whole list from the database on the next view.
+
+Instead, you can use append. When adding an interest-id onto the user's interest cache entry, simply issue an append command with a binary packed string representing the id. Handle this similarly to incr/decr, as append will fail if the cache list doesn't already exist.
+
+So: `append`; if that fails, load from the database and run `add`; if that also fails, decide how much you care to ensure the item got in and do more work.
+
+Now let's say that user adds several hundred interests, then goes back to the beginning and decides he doesn't like "hammers" anymore, as he actually likes nails more. How do you handle this? You still have the old options of trying to pull the list, edit it, and CAS it back in, or deleting the whole thing. Or you can maintain a blacklist.
+
+When initializing the list, the first byte in the list can be a "zero marker", a whole four or eight byte (depending on how big your interest-ids are) value that contains nothing but zeros.
+
+When you're loading the list in from memcached, the first set of items you read will be "blacklist" items. Once you hit a value of "0", start reading the list as items that are supposed to exist. You can check each item against the "blacklist" and not enter it into the list for display.
+
+So for common cases where the list won't get too large, you can append added items to the end and prepend removed items to the front. If the cache gets blown and reloaded, it won't have the blacklisted items in it to begin with, so the cache entry is cleaned.
+
+This has obvious limitations based on cache size, but it is a clever way to avoid excessively expensive recaching operations with fickle users.
+
+### Zero byte values
+
+Don't do this:
+
+```
+if (data = memcli:get('helloworld')) {
+  # Yay stuff!
+}
+```
+
+... because you could have perfectly valid data that has a result of 0, or false, or empty; test for definedness instead. Zero-byte values are useful for advisory locks, caching status flags, and the like. If you can get all that you need from the existence of the key alone, you don't need to waste bytes with extra data.
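+
+In Perl, the distinction looks like this sketch (the `maintenance_mode` flag is invented for the example):
+
+```perl
+# A zero-byte flag: the key's existence is the whole signal.
+$memc->set('maintenance_mode', '', 300);
+
+# Wrong: a cached 0 or "" looks identical to a miss.
+#   if ($memc->get('maintenance_mode')) { ... }
+
+# Right: distinguish "key missing" from "key holds a false value".
+if (defined $memc->get('maintenance_mode')) {
+    # The flag is present.
+}
+```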
+
+### Reducing key size
+
+The smaller your keys, the less memory overhead you have. With smaller items in the lower slab classes this can matter even more, as shaving a few bytes could end up putting the item into a more efficient slab class. Also, keys are (effectively) limited to 250 characters, though this may be raised to 65k in the future.
+
+Compress keys when it's easy or makes sense. "super_long_function_names_abstract_key" might be descriptive but is a waste. Boil it down to a function id you can grep your code for: "slfnak", or whatever.
+
+Base64 encode long numbers. It's easy enough to use a command-line program to turn those back into numbers.
+
+The binary protocol allows setting arbitrary keys. Instead of base64 encoding, you can byte-pack numbers down to their native size. Also, instead of 'slfnak', you could pack a two-byte identifier and map the number back to your code.
+
+However, don't do any of this unless you're really hurting for extra memory. Some easy changes like this can sometimes save between 5 and 20% of your memory, but often buying a few gigs of RAM is cheaper than your time.
+
+### Accelerating counters safely
+
+TODO: This is a memcached/mysql hybrid for avoiding running `count(*) from table where user_id = ?` constantly. I'll be fleshing this out later since I see some bugs in what I have here (deadlocks?)
+
+### Rate limiting
+
+TODO: There were a couple decent posts on this. I've seen some slides that were buggy. Need to round them up.
+
+### Loose central locking
+
+While we don't recommend doing this for any serious locking situation, sometimes you would benefit from an advisory, sometimes-reliable "lock" obtainable via memcached.
+
+Given a cache item that is popular and difficult to recreate, you could end up with dozens (or hundreds) of processes slamming your database at the same time in an attempt to refill a cache. Discussed more below as the "stampeding herd" problem, here we describe a simple method of using `add` to create an advisory "loose lock":
+
+```
+key = "expensive_frontpage_item"
+item = memcli:get(key)
+if (! defined item) {
+  # Oh crap, we have to recache it!
+  # Take the lock, giving ourselves 60 seconds to recache the item.
+  if (memcli:add(key . "_lock", 1, 60)) {
+    item = fetch_expensive_thing_from_database
+    memcli:set(key, item, 86400)
+    memcli:delete(key . "_lock")
+  } else {
+    # Lost the race. We can do any number of things:
+    # - short sleep, then re-fetch.
+    # - try the above a few times, then slow-fetch and return the item
+    # - show the user a page without this expensive content
+    # - show some less expensive content
+    # - throw an error
+  }
+}
+return item
+```
+
+Worst case you can end up operating without the lock at all. Best case you can reduce the amount of parallel queries going on without adding more infrastructure.
+
+Use at your own risk! `add` can fail because the key already exists, or because the remote server was down. If your client doesn't give you a way to tell the difference, you have to make a decision on how hard to try before running the query anyway or throwing an error.
+
+### Avoiding stampeding herd
+
+It's a big problem when cache misses on hot (or expensive) items cause a mess of application processes to slam your database for answers. There is a large array of choices one has to avoid this problem, and we'll discuss a few below.
+
+#### Loose lock
+
+As shown above, in a pinch you can reduce the odds of needing to run the query by using memcached's `add` feature.
+
+#### Outside mutex
+
+A third party centralized mutex can also be used. MySQL has `SELECT GET_LOCK() ... RELEASE_LOCK()`, which is fast but requires bothering your database a little bit. Other services exist as well, but this author isn't confident enough in what he knows to recommend any ;)
+
+#### Scaling expiration
+
+A common trick is to use soft expiration values embedded in your cached object. If your object is due to expire in an hour, set it to actually expire in 1.5 hours or more. Inside your object, set a "soft timeout" for when you think the object is old.
+
+When you fetch an object and it has passed the soft timeout, you can pick any method that agrees with you to re-cache it:
+
+ * Do a "lock" as noted above. If you fail to acquire the lock, return the old cached item; the lock winner recaches.
+ * Also store a "hard" timeout, or just assume the hard timeout is the soft timeout + a value. Randomly decide whether to recache, increasing the odds the older the item is.
+ * Dispatch an asynchronous job to recache the object.
+ * etc.
+
+The cache object can still go away for many reasons (server restart, LRU eviction, etc). Use this as mitigation, but not as your only line of defense.
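+
+A Perl sketch of the soft-timeout wrapper (the field names and TTL values are invented): store the payload with its own soft deadline and give memcached a longer hard expiration:
+
+```perl
+use constant SOFT_TTL => 60 * 60;     # when we consider the data old
+use constant HARD_TTL => 90 * 60;     # when memcached actually drops it
+
+sub set_with_soft_ttl {
+    my ($memc, $key, $value) = @_;
+    $memc->set($key, { soft_expires => time() + SOFT_TTL,
+                       value        => $value }, HARD_TTL);
+}
+
+sub get_with_soft_ttl {
+    my ($memc, $key) = @_;
+    my $wrapped = $memc->get($key);
+    return (undef, 0) unless defined $wrapped;
+    my $stale = time() > $wrapped->{soft_expires};
+    # Serve the value immediately; if $stale is true, the caller can
+    # grab the recache lock or queue a background refresh.
+    return ($wrapped->{value}, $stale);
+}
+```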
+
+#### Gearmand or similar
+
+Using a job server can be an easy win. [Gearman](http://gearman.org/) is a common, fast, scalable job service. While funneling recache requests through a job server will certainly add overhead, you can selectively use the service or rely purely on background jobs. Perhaps you'll want to funnel high-traffic users through gearmand, but no one else.
+
+Gearmand has two magic tricks: asynchronous and synchronous job processing.
+
+In the case of a scaling expiration value, you can issue an asynchronous job to recache the object, then return the cached item to the user. Gearmand can collapse similar jobs down so you don't end up executing millions of them.
+
+In the case of a synchronous update, gearmand can coalesce incoming jobs with the same parameters. So the first process to issue the job request will get a worker to recache the data. Every other process after it will "subscribe" to the results of that first job, and not create more parallelism. When the first job finishes, gearmand broadcasts the response to all listeners and they all continue forward as though they had issued the request directly.
+
+Very handy.
+
+### Loose replication
+
+Some clients may natively support replication: such a client will pick two unique servers to store a value on, and either linearly or randomly retrieve the value again. Sometimes you don't have this feature!
+
+You can "try", but not guarantee, replication by modifying your key and storing a value twice. If you want to store a frequently retrieved value in three locations, you could add '1', '2', or '3' to the end of the key, and store them all. Then on fetch, randomly pick one.
+
+This has a lot of gotchas; a storage failure means you're more likely to get stale data back. It's a cute hack if you're in a pinch, though.
+
+### "Touching" keys with `add`
+
+Did you create an item that you want to expire in a week? Do you rarely fetch the item, but want it to remain near the top of the LRU for some reason? Calling `add` on a value that already exists will fail, but it still bumps the item to the front of memcached's LRU. If the `add` call succeeds, it means it was time to recache the value anyway.
+
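+A sketch of that trick in Perl, relying on the LRU-bumping behavior described above (`build_weekly_report` is an invented, ideally cheap, builder):
+
+```perl
+my $key = 'weekly_report';
+
+# A failed add bumps the existing item; a successful add means the
+# item was gone and we've just repopulated it. Only build the value
+# eagerly like this if it's cheap or you needed it anyway.
+unless ($memc->add($key, build_weekly_report(), 60 * 60 * 24 * 7)) {
+    # Still cached; the add attempt refreshed its LRU position.
+}
+```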