Testing the consistency of response time on a website

So, we had some customers today complaining about inconsistent page load times. I took a look at the hypervisor they were on and it looked quite busy, in the sense that all 122GB of available RAM was being used by server instances. Ironically, though, it wasn't actually under much load. I live-migrated the customer to a much quieter server anyway, but they saw no change whatsoever.

That already indicated it wasn't a network infrastructure or hardware issue, and that the increase in latency they'd seen over the last few days was being caused by something else. Most likely the growing size of their database, which their static amount of RAM and their MySQL settings (table cache size and so on) no longer matched.
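If that's the culprit, a quick way to sanity-check it is to compare the configured caches against how often MySQL is actually having to re-open tables. This is just a sketch, assuming MySQL 5.1 or later and shell access on the database server (older versions call the variable table_cache):

$ mysql -e "SHOW VARIABLES LIKE 'table_open_cache'; SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW GLOBAL STATUS LIKE 'Opened_tables';"

If Opened_tables keeps climbing while the site is running, the table cache is too small for the size the database has grown to.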

So my friend kindly put together this excellent one-liner, which hits a URL 50 times, one second apart, and prints the total time for each request. Check it out!

$ for i in $(seq 50); do curl -sL http://www.google.com/ -o /dev/null -w '%{time_total}\n'; sleep 1; done
0.698
0.493
0.365
0.293
0.326
0.525
0.342
0.527
0.445
0.263
0.493
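
Those numbers are easy enough to eyeball, but if you'd rather have a single min/avg/max summary out of the same loop, you can pipe it through awk. A rough sketch, any POSIX awk should do:

$ for i in $(seq 50); do curl -sL http://www.google.com/ -o /dev/null -w '%{time_total}\n'; sleep 1; done | awk '{ sum += $1; if (min == "" || $1 < min) min = $1; if ($1 > max) max = $1 } END { printf "min %.3f  avg %.3f  max %.3f\n", min, sum/NR, max }'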

Pretty neat, eh?
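
And if the totals do turn out to be all over the place, curl's -w can break the time down further, which helps separate network delays from server think-time. Again just a sketch using curl's standard write-out variables:

$ curl -sL http://www.google.com/ -o /dev/null -w 'dns %{time_namelookup}  connect %{time_connect}  ttfb %{time_starttransfer}  total %{time_total}\n'

If most of the time shows up in time_starttransfer rather than in the DNS or connect steps, the slowness is on the server side (MySQL and friends), not in the network.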