How to run multiple WordPress sites on a single low-end VPS with Nginx

After yesterday’s post there was some interest in pointers on how to wring the most performance out of a low-end VPS, so that you can run multiple CMS-based sites without constantly upgrading to higher-end packages.

First, a bit of background, as yesterday’s post conveniently threw out a lot of useful data confirming how reliable this setup actually is. The post sat on the front page of HN for at least 2 hours. I had to go out during its peak, so it may have risen higher than the 8th slot, but I didn’t see it (edit: a friend saw it reach 6th, fwiw). It dropped off the front page oddly quickly after I cross-posted it to Reddit, where it also disappeared, so I suspect I set off some kind of alarm in the moderator or voting algorithm’s head. In either case, the ‘onslaught’ stopped after 2-3 hours and left behind some interesting stats:

Front page of HN for 2 hours

  • 3000 unique visitors
  • Peaking at 150+ concurrent
  • 1-3 per second
  • 20% memory usage for mysqld
  • 6 php-fpm processes at around 5-10% memory each
  • 5 nginx processes at 1% memory each
  • Peak CPU 28.36%
  • Peak outbound traffic at 4.15 Mbps

It’s reassuring to know that this kind of traffic blip can be handled relatively easily by this server. As mentioned elsewhere, I had optimised the sites to run smoothly on the 512MB RAM server, before recently upping that to 1GB as the cost/value trade-off was a no-brainer for sites affecting 4 small businesses.

There are improvements to be made, no doubt, which I’ll be looking into, but as it stands I’ll share the setup details in brief here. If anyone can give tips on Chef/Puppet/Vagrant etc. deployment processes, it would be welcome, to save time etc.

I’ll cover the following areas:

  • Securing the server and setting up Nginx
  • Tune WP, swap memory and MySQL to stop WSOD (white screen of death) moments
  • Set up a decent monitoring and backup system as a contingency
  • Stress test
  • Maintain

1. Securing the server and setting up Nginx

I set up SSH keys for use on the Digital Ocean droplet (guide; they also recommend hardening things further with fail2ban) and use PuTTY from my Win7 machine to access it. Now would be a good time to look into rkhunter and Tripwire to keep an eye out for rootkits, system intrusions and so on. I hear Samhain can also be useful.
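For the Ubuntu crowd, the basics boil down to something like this minimal sketch (package names as of 12.04; fail2ban’s default jail covers SSH out of the box):

sudo apt-get install fail2ban rkhunter   # brute-force bans + rootkit scanning
sudo rkhunter --update && sudo rkhunter --check
# once key-based login works, consider disabling passwords in /etc/ssh/sshd_config:
#   PasswordAuthentication no
# then: sudo service ssh restart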

Then it’s time to install Nginx and WordPress (guide) and either use the Digital Ocean settings or the hardened and optimised Perusio config. You’ll need one mysite.conf file for each site (in sites-available), symlinked into the sites-enabled folder, restarting/reloading Nginx with each modification, as sketched below. It’s all explained very well in the links above.
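The per-site routine is the usual Debian-style dance; roughly this (mysite is a placeholder for your own conf file name):

sudo ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/mysite.conf
sudo nginx -t                  # sanity-check the config before applying it
sudo service nginx reload      # pick up changes without dropping connections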

As I sell a digital product through this site, I had to add an ‘internal access only’ rule to my config for a specific location, and set the ownership on that location to the user Nginx runs as. I wrote this up for WooCommerce, but I think it got integrated into their digital downloads support section. Off topic for this post, in any case.
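For what it’s worth, the rule amounts to an internal-only location along these lines (woocommerce_uploads is the plugin’s default download directory; yours may differ):

location /wp-content/uploads/woocommerce_uploads/ {
    internal;    # refuse direct requests; files get served via X-Accel-Redirect
}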

If you’re already running a server, try running:

netstat -tnlp | grep -v 127.0.0 | grep -v ::1:

to see whether any services you aren’t aware of are listening for connections. Security types say it’s a bit late to be adding rkhunter et al. at this point, as the system may already be compromised. Your call!

2. Tune WP, swap memory and MySQL to stop WSOD moments

Raising WP’s (or your CMS’s) memory limit in php.ini or the settings/config file is an obvious first port of call, as is installing a WP cache module that plays nicely with your existing plugins (which should only be those plugins your site actually requires).
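For WordPress specifically, raising the limit is a one-liner in wp-config.php (96M here is just an example value, not a recommendation):

define('WP_MEMORY_LIMIT', '96M');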

But there are further improvements to be made with the resource-hungry mysqld and system memory. Grab a copy of mysqltuner.pl and set up perl to run it (more info in the readme; fetching it is shown after the config below). There is more research to be done at this stage, as many of the variables it suggests modifying are of course not intuitive, and this is an area you don’t want to over-fudge. I try to keep the maximum possible memory usage below 100% (even though I hear this doesn’t actually matter as much as I thought) and apply the tuner’s suggestions to /etc/mysql/my.cnf. The current working config is:

key_buffer              = 8M
max_allowed_packet      = 4M
thread_stack            = 192K
thread_cache_size       = 128
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover          = BACKUP
max_connections         = 100
tmp_table_size          = 16M
table_cache             = 64
thread_concurrency      = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 4M
query_cache_size        = 32M

I share this only because it seemed the most stable config while keeping the tuner happy, but I really am not qualified to talk with any authority on tuning a database, so please consult a specialist if relying on this for anything important. Failing that, Stack Overflow has plenty of helpful answers to work from.
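If you want to run the tuner against your own box, fetching it is quick. At the time of writing the script was served straight from the mysqltuner.pl domain; check the project page if that has changed:

wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl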

After this tuning round had settled down I could open more concurrent pages (ones requiring db calls) at once without the site WSOD’ing and forcing me to reload mysql or reboot the server. But there were still some excessive wait times as the queues were processed. The server needed more temporary memory to survive these bursts of 10+ concurrent users. Swap memory to the rescue.

Digital Ocean runs its servers on SSD drives, which gives great data transfer speeds and therefore makes a decent, usable overflow for memory-intensive operations. It can’t match the speed of real RAM, but it stops the site going to sleep at its busiest, so it’s fit for purpose. I’m not sure how keen hosts are on swap memory, given that it rarely comes enabled by default, especially on SSD servers. I hear there are implications around swamping the shared drives for other users, but used in moderation I’m not sure how damaging it would really be. If anyone can chime in on that front, it’d be appreciated.

I followed this simple guide to setting up a swap file (good explanation here of pros and cons), and saw an instant improvement in site responsiveness when loading 10+ db-intensive pages at once. We’re more or less there now, just a few notes to follow on monitoring, backing up and testing.
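In case it saves you a click, the recipe from that guide boils down to roughly this (the 512MB size and /swapfile path are illustrative):

sudo dd if=/dev/zero of=/swapfile bs=1M count=512   # create a 512MB file
sudo chmod 600 /swapfile                            # keep it root-only
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # survive reboots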

3. Set up a decent monitoring and backup system as a contingency

It’s nice to get an email if and when the server falls over, exceeds pre-defined memory or CPU limits, or a backup fails. I use Monit and Obnam to cover monitoring and backing up, respectively. You’ll need postfix or sendmail etc. set up to send yourself email from these. Outside the scope here, but again, guides are out there.
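As a flavour of what Monit involves, here’s a minimal monitrc fragment of the kind I mean (hostname, address and thresholds are all placeholders, not my production values):

set mailserver localhost                 # relies on the postfix/sendmail above
set alert webmaster@site.com             # where the warnings go
check system myserver
    if memory usage > 85% then alert
    if cpu usage (user) > 80% for 5 cycles then alert
check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program = "/etc/init.d/nginx stop"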

I opted for an off-site backup on top of Digital Ocean’s periodic backups in case I lose access to the DO server/dashboard and need to migrate quickly. The Obnam backups are also much more regular than the DO backups, which seem to be set at 3-5 day intervals.

Obnam was a bit of a beast to install on Ubuntu 12.04, but adding the suggested repos and PPAs and working from the latest build (and ensuring your config file has no typos in it, avoiding time-wasting emails to the Obnam developer… sorry, Lars, and thanks for changing the error message!) will see you through. I chose Obnam over Duplicity and rsync+script for ease of use, speed, and clever deduplication and incremental updates, all encrypted and done over SFTP. It took around 40 minutes to transfer the first 300MB lump to the Raspberry Pi, with incremental updates taking a matter of seconds every day. These include the site databases and files, easily pointed to in the Obnam config file. The recurring run happens via a cron job. I had to research cron a little to get it playing nicely, deleting database dumps older than 7 days, but that was a very quick job. Here’s the resulting crontab, if it’s any use to you:

MAILTO="webmaster@site.com"
10 05 * * * /usr/bin/find /USER/backup-dir/ -name '*.sql.gz' -mtime +7 -delete
12 05 * * * /usr/bin/mysqldump -u USER -pPASSWORD -A | /bin/gzip > /USER/backup-dir/mysql-`date +\%y\%m\%d`.sql.gz
20 05 * * * /usr/bin/obnam backup

This deletes any dumps in the backup directory older than 7 days at 5:10am, creates a fresh mysqldump at 5:12am (note: no space after -p), then finally runs the Obnam backup 8 minutes later, taking the root directory and SFTP details it needs from the Obnam config file. The MAILTO variable sends you notifications if you want them. If not, put nothing between the quotes.
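And for completeness, the Obnam config referenced above follows this general shape (repository URL, paths and key ID are placeholders; see the Obnam manual for the full option list):

[config]
repository = sftp://USER@raspberry-pi/home/USER/backups
root = /var/www, /USER/backup-dir
encrypt-with = YOURGPGKEYID
log = /var/log/obnam.log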

4. Stress test

To make sure you’re not dreaming, and that your little VPS can withstand a rush of visitors, sign up at Blitz.io for ten free credits, equal to ten free tests (using this link would give me a free credit, so you know). Set the concurrency to around 100 per second (remember, the HN front page gave me 1-3 per second) and the timeout to 1500ms, so you can see how slow the site gets under heavy load without it timing out and returning an error immediately. Then either bask in the satisfaction of having freed yourself from the shared-host racket, with a reliable and secure server of your own and contingency plans in place, or frown as your site still trips over under load. In the latter case, check again for plugins with dodgy code, deprecated calls, debug logs within your CMS and so on.
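If you burn through your Blitz credits, ApacheBench run from a second machine gives a rough free approximation of the same idea (my suggestion, not part of the original setup; the URL is a placeholder):

ab -n 1000 -c 100 http://www.yoursite.com/   # 1000 requests, 100 concurrent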

5. Maintain

It’s good practice to keep software patched and up to date on a regular basis, especially the CMS and any internet-facing services. Test your backups, and tighten up your Nginx config when you get a chance, to ensure that sensitive files are internal access only. Also look into intrusion detection and firewalling your VPS, for peace of mind.
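On Ubuntu the routine can be as simple as the following (the ufw rules are a hypothetical minimal set; open only what you actually serve):

sudo apt-get update && sudo apt-get upgrade      # keep packages patched
sudo ufw allow 22/tcp                            # SSH
sudo ufw allow 80/tcp                            # HTTP
sudo ufw enable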

That about wraps things up. You should be able to set all of this up in a very long day, or over a few if not pressed for time.

Again, if you want to experiment, here’s a link to Digital Ocean with my affiliate code. And again, I’m only adding this because they offer it. It’s not the aim of the post, just a nice bonus for me if it was of any use to you.

Discuss here, or over at HN.

Thanks for reading. I do translation from French and Swedish to English, so if that's useful to you, feel free to connect and message me on LinkedIn or Twitter.

