Nowadays, WordPress is a key player on the Internet. Conceived as a blogging system, it has evolved into a much bigger platform used, for example, by newspapers and even online shops. Because it was originally built for something quite different, it can sometimes feel a little sluggish. But what if I told you it can be made much faster in less than 10 minutes? Let's see.
WordPress performance
To be sure that we are moving in the right direction, we need to start by measuring the current performance. To do this I installed a fresh copy of WordPress and added a couple of posts with additional custom fields. I used the ab (ApacheBench) tool to analyze performance. The exact values of the -c and -n parameters are not that important, but it is necessary to keep them identical before and after the changes. In the example below, ab sends 10 000 requests with 200 concurrent connections.
```
ab -c 200 -n 10000 http://192.168.56.103/
```
```
Benchmarking 192.168.56.103 (be patient)
Finished 10000 requests

Server Software:        Apache/2.2.22
Server Hostname:        192.168.56.103
Server Port:            80

Document Path:          /
Document Length:        7694 bytes

Concurrency Level:      200
Time taken for tests:   105.684 seconds
Complete requests:      10000
Failed requests:        7939
   (Connect: 0, Receive: 0, Length: 7939, Exceptions: 0)
Write errors:           0
Non-2xx responses:      7939
Total transferred:      22115247 bytes
HTML transferred:       17850023 bytes
Requests per second:    94.62 [#/sec] (mean)
Time per request:       2113.676 [ms] (mean)
Time per request:       10.568 [ms] (mean, across all concurrent requests)
Transfer rate:          204.35 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   62   93.2      3     455
Processing:     6 2015 3524.4    297   14233
Waiting:        0 2009 3525.5    293   14230
Total:          6 2077 3514.7    360   14233
```
The indicator that interests us most is the number of requests per second: 94.62 [#/sec]. It looks better than a raw installation of Magento, but we want more. Our blog needs to be prepared for much, much bigger traffic, doesn't it?
before: 94.62 [#/sec]
Let’s start the magic
First of all, we need to know where the bottlenecks are. As a rule, the most sensitive parts of any application are the database, the business logic and the views. Since a typical application contains little beyond these, let's assume that a performance problem can hide almost anywhere in the code.
What can be done about that? We can keep users away from the executable part of the server whenever it isn't necessary. We need some kind of cache that can handle requests before they ever hit Apache – and this is where Varnish comes in.
Varnish configuration with WordPress and Apache
We will put Varnish in front as our main HTTP server. All users will hit it first, and only when it finds no matching record in the cache will it ask Apache for the response.
To install Varnish on your Ubuntu or Debian server, use apt (or yum on RPM-based distributions):
```
sudo apt-get install varnish
```
Then we need to move Apache from port 80 to another one – I chose port 81.
```
#/etc/apache2/ports.conf
Listen 81
NameVirtualHost *:81
```
The virtual hosts have to listen on the new port as well:

```
#/etc/apache2/sites-available/*.conf
<VirtualHost *:81>
    #...
</VirtualHost>
```
And now we can set up Varnish as the main HTTP server on our machine:
```
#/etc/default/varnish
# -a: listen on port 80, -T: management interface,
# -f: VCL file to load, -S: secret for admin authentication,
# -s: keep up to 1 GB of cache in memory
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,1024m"
```
The last thing to do is to tell Varnish where to go when no matching record is found in the cache. This is the backend definition and it lives in a VCL file:
```
#/etc/varnish/default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "81";
}
```
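With both services configured, we can restart them and check that Varnish is the one answering on port 80. A quick sanity check (service names assume a Debian-style init; adjust for your system):

```shell
# restart Apache (now on :81) and Varnish (now on :80)
sudo service apache2 restart
sudo service varnish restart

# every response served through Varnish carries an X-Varnish header
curl -sI http://127.0.0.1/ | grep -i '^x-varnish'
```

If the header is missing, requests are still reaching Apache directly.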
Is it working many times faster now? Not yet. At this point we are almost done, but we still need to choose whether to use one of the WordPress plugins designed to work with Varnish, or to speed things up without modifying anything in WordPress. I chose the second option.
Configuring the VCL
I have prepared a VCL configuration for my WordPress instance that keeps every page in the cache for one hour and clears the whole cache whenever I change anything in the database via POST, PUT or DELETE. It is really simple, requires no elaborate purging or banning of content, and I don't need anything more at this point.
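The cache can also be emptied by hand. Assuming the request comes from one of the whitelisted IPs defined in the configuration, a single page can be dropped like this (the URL is illustrative):

```shell
# ask Varnish to evict one page from its cache;
# clients outside the "purge" ACL get a 405 instead
curl -X PURGE http://192.168.56.103/sample-post/
```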
So let’s go back to our VCL file.
```
#/etc/varnish/default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "81";
}

# only these addresses may send PURGE requests
acl purge {
    "10.0.1.100";
    "10.0.1.101";
    "10.0.1.102";
    "10.0.1.103";
    "10.0.1.104";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }

    # static files: ignore cookies and strip query strings
    if (req.url ~ "\.(gif|jpg|jpeg|swf|css|js|flv|mp3|mp4|pdf|ico|png)(\?.*|)$") {
        unset req.http.cookie;
        set req.url = regsub(req.url, "\?.*$", "");
    }

    # drop tracking parameters so they don't fragment the cache
    if (req.url ~ "\?(utm_(campaign|medium|source|term)|adParams|client|cx|eid|fbid|feed|ref(id|src)?|v(er|iew))=") {
        set req.url = regsub(req.url, "\?.*$", "");
    }

    # never cache the admin panel, login page, previews or XML-RPC
    if (req.url ~ "wp-(login|admin)" ||
        req.url ~ "preview=true" ||
        req.url ~ "xmlrpc.php") {
        return (pass);
    }

    if (req.http.cookie) {
        if (req.http.cookie ~ "(wordpress_|wp-settings-)") {
            return (pass);
        } else {
            unset req.http.cookie;
        }
    }
}

sub vcl_fetch {
    if ((!(req.url ~ "(wp-(login|admin)|login)")) || (req.request == "GET")) {
        unset beresp.http.set-cookie;
        set beresp.ttl = 1h;
    }

    # static files can stay cached for a very long time
    if (req.url ~ "\.(gif|jpg|jpeg|swf|css|js|flv|mp3|mp4|pdf|ico|png)(\?.*|)$") {
        set beresp.ttl = 365d;
    }
}

sub vcl_deliver {
    # multi-server webfarm? set a variable here so you can check
    # the headers to see which frontend served the request
    # set resp.http.X-Server = "server-01";
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "OK";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 404 "Not cached";
    }
}
```
The vcl_recv subroutine decides, for each incoming request, whether the response can be delivered from the cache or Apache has to be asked for fresh content.
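For instance, the query-string stripping that vcl_recv performs on static files can be reproduced with the same regular expression in sed (the sample URL is made up):

```shell
# regsub(req.url, "\?.*$", "") in the VCL is equivalent to this sed call
url='/wp-content/uploads/logo.png?ver=4.1'
echo "$url" | sed 's/?.*$//'
# prints /wp-content/uploads/logo.png
```

This is why two requests that differ only in a cache-busting parameter end up as a single cached object.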
Now we can check whether it helped, using the same ab command as before:
```
ab -c 200 -n 10000 http://192.168.56.103/

Benchmarking 192.168.56.103 (be patient)
Completed 10000 requests
Finished 10000 requests

Server Software:        Apache/2.2.22
Server Hostname:        192.168.56.103
Server Port:            80

Document Path:          /w
Document Length:        7694 bytes

Concurrency Level:      200
Time taken for tests:   1.420 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      80760000 bytes
HTML transferred:       76940000 bytes
Requests per second:    7041.10 [#/sec] (mean)
Time per request:       28.405 [ms] (mean)
Time per request:       0.142 [ms] (mean, across all concurrent requests)
Transfer rate:          55531.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6   2.2      6      14
Processing:     2   17  43.8     11     525
Waiting:        1   15  43.9      9     524
Total:          8   23  43.5     16     527
```
after: 7041.10 [#/sec]
As a result, our WordPress website can handle 7041.10 requests per second, roughly 75 times more than at the beginning.
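For the record, the exact improvement factor follows directly from the two ab runs:

```shell
# ratio of requests per second after vs. before Varnish
awk 'BEGIN { printf "%.1f\n", 7041.10 / 94.62 }'
# prints 74.4
```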
Conclusion
Thanks to Varnish's VCL configuration language, we can manage the cache without making any changes to the application's code. This example was prepared to show how simple it can be for uncomplicated WordPress websites. Of course, you will need to work on your configuration if your setup is more sophisticated than a raw WordPress installation, but I think it's a good place to start.