+++
title = 'Configuring'
date = 2024-09-04T12:50:50-07:00
prev = '/serverguide/'
next = '/serverguide/maintenance/'
weight = 1
+++

## Commandline Arguments

Memcached comes equipped with basic documentation about its commandline arguments. Run `memcached -h` or `man memcached` for up-to-date documentation. The daemon strives to have mostly sensible defaults.

When setting up memcached for the first time, you should pay attention to `-m`, `-d`, and `-v`.

`-m` tells memcached how much RAM to use for item storage, in megabytes. Note carefully that this is not a global memory limit: memcached will use somewhat more memory than you tell it to, so set this to a safe value. Setting it to less than 64 megabytes may still use up to 64 megabytes as a minimum.

`-d` tells memcached to daemonize. If you're running memcached from an init script, the script may handle this for you. If you're using memcached for the first time, it can be educational to start the service *without* `-d` and watch it in the foreground.

`-v` controls verbosity to STDOUT/STDERR. Repeating `-v` increases verbosity: a single one prints extra startup information, and multiple print increasingly verbose information about requests hitting memcached. If you're curious whether a test script is doing what you expect, running memcached in the foreground with a few verbose switches is a good idea.
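
For example, a first-time run in the foreground might look like this (the memory size is illustrative, not a recommendation):

```shell
# Stay in the foreground (no -d), use 64 megabytes for item storage,
# and print per-request detail with doubled verbosity.
memcached -m 64 -vv
```

Once you're satisfied with what you see, drop the `-v` switches and add `-d` (or let your init system manage the process).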

We attempt to have sensible defaults to minimize the number of options end users need to set.

## Init Scripts

If you installed memcached from your OS's package management system, odds are it already comes with an init script. These often provide an alternative way to configure memcached's startup options, such as an `/etc/sysconfig/memcached` file. Check for these before you run off editing init scripts or writing your own.

If you're building memcached yourself, the `scripts/` directory in the source tarball contains several example init scripts.

## Multiple Instances

Running multiple local instances of memcached is trivial. If you're maintaining a developer environment or a localhost test cluster, simply change the port each instance listens on, e.g. `memcached -p 11212`.
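
For instance, a three-node localhost test cluster might be started like this (ports and memory sizes are illustrative):

```shell
# One small daemonized instance per port.
memcached -d -m 64 -p 11211
memcached -d -m 64 -p 11212
memcached -d -m 64 -p 11213
```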

## Networking

Since 1.5.6, memcached defaults to listening only on TCP. `-l` allows you to bind to specific interfaces or IP addresses. Memcached spends little, if any, effort defending itself from random internet connections, so you *must not* expose memcached directly to the internet or to any untrusted users. SASL authentication helps here, but should not be totally trusted.

### TCP

`-p` changes the port where memcached listens for TCP connections. When changing the port via `-p`, the UDP port will follow suit.

### UDP

`-U` modifies the UDP port, which defaults to off since 1.5.6. UDP is useful for fetching or setting small items, but much less useful for manipulating large items. Setting this to 0 disables UDP explicitly, if you're worried.

### Unix Sockets

If you wish to restrict a daemon to be accessible by a single local user, or simply don't wish to expose it via networking, a unix domain socket may be used. `-s <file>` is the parameter you're after. When this is enabled, TCP/UDP are disabled.
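
Putting the networking flags together, two locked-down local setups might look like this (the socket path is illustrative):

```shell
# TCP bound to localhost only, UDP explicitly disabled.
memcached -d -l 127.0.0.1 -p 11211 -U 0

# No networking at all: a unix domain socket instead (disables TCP/UDP).
memcached -d -s /tmp/memcached.sock
```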

## Connection Limit

By default the maximum number of concurrent connections is 1024. Configuring this correctly is important: extra connections to memcached may hang while waiting for slots to free up. You can detect whether your instance has been running out of connections by issuing a `stats` command and looking at `listen_disabled_num`. That value should be zero, or close to zero.

Memcached scales to a large number of connections very simply. The memory overhead per connection is low (even lower if the connection is idle), so don't sweat setting the limit very high.

Let's say you have 5 webservers, each running apache, and each apache process has a MaxClients setting of 12. The maximum number of concurrent connections you can receive is 5 x 12 (60). Always leave a few extra slots open if you can: for administrative tasks, adding more webservers, crons/scripts, etc.
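
The arithmetic above can be sketched directly (the headroom figure is an assumption for illustration; `-c` is the flag that sets the connection limit):

```shell
# Worst-case concurrent connections: webservers x MaxClients.
webservers=5
max_clients=12
peak=$((webservers * max_clients))

# Assumed spare slots for admin tasks, crons, and future webservers.
headroom=40

echo "set -c to at least $((peak + headroom))"
```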

## Threading

Threading is used to scale memcached across CPUs. The model is "worker threads": each thread handles concurrent connections, and since libevent provides good scalability for concurrent connections, each thread is able to handle many clients.

This is different from some webservers, such as apache, which use one process or one thread per active client connection. Since memcached is highly efficient, low numbers of threads are fine. In webserver land, this means it's more like nginx than apache.

By default 4 threads are allocated. Unless you are running memcached extremely hard, you should not set this number any higher. Setting it to very large values (80+) will make memcached run considerably slower.

## Inspecting Running Configuration

```
$ echo "stats settings" | nc localhost 11211
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter NULL
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700
STAT growth_factor 1.25
STAT chunk_size 48
STAT num_threads 4
STAT stat_key_prefix :
STAT detail_enabled no
STAT reqs_per_event 20
STAT cas_enabled yes
STAT tcp_backlog 1024
STAT binding_protocol auto-negotiate
STAT auth_enabled_sasl no
STAT item_size_max 1048576
END
```

Cool, huh? Between `stats` and `stats settings`, you can double-check that what you're telling memcached to do is what it's actually trying to do.
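
Since every line of that output is just `STAT <name> <value>`, a single setting is easy to pull out with standard tools. A sketch; the `stats_get` helper and `settings.txt` filename are ours, and with a live daemon you would capture the dump via `nc` first:

```shell
# stats_get prints the value of one setting name from a stats dump file.
stats_get() {
  awk -v key="$2" '$1 == "STAT" && $2 == key {print $3}' "$1"
}

# With a live daemon you would capture the dump like so:
#   echo "stats settings" | nc localhost 11211 > settings.txt
printf 'STAT maxconns 1024\nSTAT num_threads 4\nEND\n' > settings.txt

stats_get settings.txt num_threads   # prints: 4
```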
+++
title = 'Hardware and Instances'
date = 2024-09-04T13:51:13-07:00
prev = '/serverguide/performance/'
weight = 4
+++

Memcached is supported on 32bit and 64bit x86 systems, as well as 32bit and 64bit ARM platforms. It runs on many operating systems (Linux, the BSDs, various Unixes). There is no official support for Windows.

## Hardware Requirements

Memcached has very basic hardware requirements. It is generally light on CPU usage, will take as much memory as you give it, and its network usage varies from mild to moderate depending on the average size of your items and how much traffic you expect.

### CPU Requirements

Memcached is typically light on CPU usage, owing to its goal of responding very quickly. It is multithreaded, defaulting to 4 worker threads, but that doesn't mean you need a pile of cores for memcached to meet your needs; if you're going to need to rely on memcached's multithreading, you'll know it. For the common case, spare CPU anywhere is usually sufficient, and most installations only need a single memcached thread.

### RAM Requirements

The major point of memcached is to sew together sections of memory from multiple hosts and make your app see them as one large pool of memory. The more memory the better. However, don't take memory away from other services that might benefit from it.

It is helpful for each memcached server to have roughly the same amount of memory available. Cluster uniformity means you can simply add and remove servers without having to care about any one server's particular "weight", or having one server hurt more than the others if it is lost.

#### Avoid Swapping

Assign memcached physical memory, leaving a few percent of headroom to spare. Do not over-allocate memory and expect swap to save you: performance will be very, very poor. Take extra care to monitor whether your server is using swap, and tune if necessary.

#### Is High Speed RAM Necessary?

Not so much, no. Getting that extra high speed memory will not likely net you measurable benefits.

#### NUMA Considerations

Memcached works okay under normal loads on a NUMA system, but there is a measurable performance drop under benchmarking conditions when memcached runs across multiple NUMA nodes. If you are extremely sensitive to performance and have NUMA systems, the best workaround is to run one memcached instance per NUMA node and bind the instances via `numactl` or similar.

## Hardware Layouts

### Running Memcached on Webservers

An easy layout is to use spare memory on webservers or compute nodes you may already have. If you buy a webserver with 4G of RAM, but your app and OS use at most 2G, you could assign 1.5G or more to memcached instances.

This has the good tradeoff of spreading memory out more thinly, so losing any one webserver will not cause as much pain.

The caveats are extra maintenance, and keeping an eye on your application's multi-get usage, as a single multi-get can end up touching every memcached in your list. You also run the risk of pushing a machine into swap, or of memcached being killed, if your app has a memory leak. It's often a good idea to run hosts with very little swap, or no swap at all: better to let an active service die than have it turn into a tarpit.

### Running Memcached on Databases

Not a great idea. If you have a database host, give as much RAM as possible to the database. When cache misses do happen, you'll get more benefit from ensuring your indexes and data are already in memory.

### Using Dedicated Hosts

Using dedicated hardware for memcached means you don't have to worry about other programs on the machine interfering with it. You can put a lot of memory (64G+) into a single host and need fewer machines to meet your memory requirements.

This has the added benefit of making it easier to expand a large memory pool: instead of adding new webservers that may sit idle, you can add specialized machines and throw gobs of RAM at the problem.

This does come with several caveats. The more you compress your memcached cluster down, the more pain you will feel when a host dies.

Let's say you have a cache hitrate of 90%. If you have 10 memcached servers and 1 dies, your hitrate may drop to 82% or so. Where 10% of requests were getting through to the backend before, that figure jumping to 18% or 20% means your backend is suddenly handling roughly *twice* as many requests as before. The actual impact will vary, since databases are still decent at handling repeat queries, and your typical cache miss is often for items the database would have to look up regardless. Still, *twice*!
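
The back-of-the-envelope math can be checked with a one-liner (a sketch assuming misses spread evenly and the dead server's share of the cache simply stops hitting):

```shell
# Losing 1 of 10 servers sends that tenth of requests straight to the
# backend, so the effective hitrate becomes 0.90 * 9/10 = 0.81.
awk 'BEGIN {
  hit = 0.90; servers = 10
  new_hit = hit * (servers - 1) / servers
  printf "new hitrate: %.0f%%, backend load: %.1fx\n", \
         new_hit * 100, (1 - new_hit) / (1 - hit)
}'
```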

Now let's say you buy a bunch of servers with 144G of RAM each, but you can only afford 4 of them. When you lose a single server, 25% of your cache goes away, and your hitrate can tank even harder.

### Capacity Planning

Given the above notes on hardware layouts, be sure to practice good capacity planning. Get an idea of how many servers can be lost before your application is overwhelmed, and make sure you always have more than that.

If you cannot afford to take down memcached instances, upgrades (hardware or software) and normal failures become excessively painful. Save yourself some anguish and plan ahead.

### Network

Network requirements will vary greatly with the average size of your memcached items. Your application should aim to keep items small, as it can mean the difference between being fine with gigabit inter-switch uplinks and being completely toast.

Most deployments will have low requirements (< 10mbps per instance), but a heavily hit service can be quite challenging to support. That said, if you're resorting to infiniband or 10 gigabit ethernet to hook up your memcached instances, you could probably benefit from spreading them out more.
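
A rough per-instance bandwidth estimate is also a one-liner; the request rate and item size below are assumptions for illustration, not typical values:

```shell
# Assumed: 2000 gets/sets per second against one instance,
# roughly 500 bytes per item including protocol overhead.
reqs=2000
item_bytes=500

mbps=$(( reqs * item_bytes * 8 / 1000000 ))
echo "~${mbps} mbps per instance"
```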