A customer of ours was having some serious disruptions to his webserver, with 15-minute outages happening here and there. He said he couldn't see an increase in traffic and therefore didn't understand why Apache was hitting MaxClients. Here is a quick way to prove whether traffic really increased or not: grep the access logs directly for the day and hour in question, count the matches with wc -l, and use a for loop to step through the minutes of the hour in between the events.
Proud of this simple one; it's much simpler than a lot of other scripts I've seen out there that do the same thing!
root@anonymousbox:/var/log/apache2# for i in `seq 01 60`; do printf "total visits: 13:$i\n\n"; grep "12/Jul/2016:13:$i" access.log | wc -l; done
total visits: 13:1
305
total visits: 13:2
474
total visits: 13:3
421
total visits: 13:4
411
total visits: 13:5
733
total visits: 13:6
0
total visits: 13:7
0
total visits: 13:8
0
total visits: 13:9
0
total visits: 13:10
30
total visits: 13:11
36
total visits: 13:12
30
total visits: 13:13
29
total visits: 13:14
28
total visits: 13:15
26
total visits: 13:16
26
total visits: 13:17
32
total visits: 13:18
37
total visits: 13:19
31
total visits: 13:20
42
total visits: 13:21
47
total visits: 13:22
65
total visits: 13:23
51
total visits: 13:24
57
total visits: 13:25
38
total visits: 13:26
40
total visits: 13:27
51
total visits: 13:28
51
total visits: 13:29
32
total visits: 13:30
56
total visits: 13:31
37
total visits: 13:32
36
total visits: 13:33
32
total visits: 13:34
36
total visits: 13:35
36
total visits: 13:36
39
total visits: 13:37
70
total visits: 13:38
52
total visits: 13:39
27
total visits: 13:40
38
total visits: 13:41
46
total visits: 13:42
46
total visits: 13:43
47
total visits: 13:44
39
total visits: 13:45
36
total visits: 13:46
39
total visits: 13:47
49
total visits: 13:48
41
total visits: 13:49
30
total visits: 13:50
57
total visits: 13:51
68
total visits: 13:52
99
total visits: 13:53
52
total visits: 13:54
92
total visits: 13:55
66
total visits: 13:56
75
total visits: 13:57
70
total visits: 13:58
87
total visits: 13:59
67
total visits: 13:60
root@anonymousbox:/var/log/apache2# for i in `seq 01 60`; do printf "total visits: 12:$i\n\n"; grep "12/Jul/2016:12:$i" access.log | wc -l; done
total visits: 12:1
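One caveat with the loop above: without zero-padding, the grep pattern matches loosely. "13:1" also matches 13:10 through 13:19 (hence the suspiciously high counts for 13:1 to 13:5), while "13:6" through "13:9" match minutes 60-99, which don't exist, hence the zeros. A zero-padded variant, assuming the same log format and path, is safer:
# Count requests per minute for hour 13, zero-padding the minute and anchoring it with the trailing colon
for i in $(seq -w 0 59); do printf "total visits 13:$i: "; grep -c "12/Jul/2016:13:$i:" access.log; done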
So this came up recently where a customer was asking if we could tune their Apache for higher traffic. The best way to do this is to benchmark the site at double the expected traffic; that should be a good measure of whether the site is going to hold up.
# Use ApacheBench (ab) to test requests against the local server
ab -n 1000000 -c 1000 http://localhost:80/index.html
Benchmarking localhost (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests
Server Software: Apache/2.2.15
Server Hostname: localhost
Server Port: 80
Document Path: /index.html
Document Length: 5758 bytes
Concurrency Level: 1000
Time taken for tests: 377.636 seconds
Complete requests: 1000000
Failed requests: 115
(Connect: 0, Receive: 0, Length: 115, Exceptions: 0)
Write errors: 0
Total transferred: 6028336810 bytes
HTML transferred: 5757366620 bytes
Requests per second: 2648.05 [#/sec] (mean)
Time per request: 377.636 [ms] (mean)
Time per request: 0.378 [ms] (mean, across all concurrent requests)
Transfer rate: 15589.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 52 243.0 22 15036
Processing: 0 282 1898.4 27 81404
Waiting: 0 270 1780.1 24 81400
Total: 6 334 1923.7 50 82432
Percentage of the requests served within a certain time (ms)
50% 50
66% 57
75% 63
80% 67
90% 84
95% 1036
98% 4773
99% 7991
100% 82432 (longest request)
# During the benchmark test you may wish to use sar to indicate general load and io
stdbuf -o0 paste <(sar -q 10 100) <(sar 10 100) | awk '{printf "%8s %2s %7s %7s %7s %8s %9s %8s %8s\n", $1,$2,$3,$4,$5,$11,$13,$14,$NF}'
# Make any relevant adjustments to httpd.conf threads
# diff /etc/httpd/conf/httpd.conf /home/backup/etc/httpd/conf/httpd.conf
103,108c103,108
< StartServers 2000
< MinSpareServers 500
< MaxSpareServers 900
< ServerLimit 2990
< MaxClients 2990
< MaxRequestsPerChild 20000
---
> StartServers 8
> MinSpareServers 5
> MaxSpareServers 20
> ServerLimit 256
> MaxClients 256
> MaxRequestsPerChild 4000
-----------------------------------
In this case we increased StartServers and MinSpareServers, and raised ServerLimit and MaxClients to match, as shown in the diff above. Thanks to Jacob for this.
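If you are wondering how high you can safely push MaxClients, a common rule of thumb is to divide the memory you are prepared to give Apache by the average resident size of one httpd process. A rough sketch of that calculation follows; the 80% figure and the 'httpd' process name are assumptions (Debian/Ubuntu calls it apache2), so adjust both for your own box:
#!/bin/bash
# Rough MaxClients estimate for prefork: memory allowed for Apache / average httpd RSS
TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ALLOW_KB=$((TOTAL_KB * 80 / 100))   # assume ~80% of RAM can go to Apache; tune this for your workload
AVG_KB=$(ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) print int(sum/n); else print 0}')
if [ "$AVG_KB" -gt 0 ]; then
    echo "Average httpd process: ${AVG_KB} KB"
    echo "Suggested MaxClients:  $((ALLOW_KB / AVG_KB))"
else
    echo "No httpd processes found - start Apache before running this" >&2
fi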
I noticed there were some changes to the way we use OpenStack quotas today, so I had to do it the manual way! Please note that this can only be done through the Admin API, so if you are a Rackspace customer you would need to reach out to us to do this, unless you run your own OpenStack or DevStack implementation in-house.
There are a lot of different commands available; use nova help to get more detail.
supernova lon help quota-update
[SUPERNOVA] Running nova against lon...
usage: nova quota-update [--user <user-id>] [--instances <instances>]
                         [--cores <cores>] [--ram <ram>]
                         [--floating-ips <floating-ips>]
                         [--fixed-ips <fixed-ips>]
                         [--metadata-items <metadata-items>]
                         [--injected-files <injected-files>]
                         [--injected-file-content-bytes <injected-file-content-bytes>]
                         [--injected-file-path-bytes <injected-file-path-bytes>]
                         [--key-pairs <key-pairs>]
                         [--security-groups <security-groups>]
                         [--security-group-rules <security-group-rules>]
                         [--server-groups <server-groups>]
                         [--server-group-members <server-group-members>]
                         [--force]
                         <tenant-id>

Update the quotas for a tenant/user.

Positional arguments:
  <tenant-id>           ID of tenant to set the quotas for.

Optional arguments:
  --user <user-id>      ID of user to set the quotas for.
  --instances <instances>
                        New value for the "instances" quota.
  --cores <cores>       New value for the "cores" quota.
  --ram <ram>           New value for the "ram" quota.
  --floating-ips <floating-ips>
                        New value for the "floating-ips" quota.
  --fixed-ips <fixed-ips>
                        New value for the "fixed-ips" quota.
  --metadata-items <metadata-items>
                        New value for the "metadata-items" quota.
  --injected-files <injected-files>
                        New value for the "injected-files" quota.
  --injected-file-content-bytes <injected-file-content-bytes>
                        New value for the "injected-file-content-bytes" quota.
  --injected-file-path-bytes <injected-file-path-bytes>
                        New value for the "injected-file-path-bytes" quota.
  --key-pairs <key-pairs>
                        New value for the "key-pairs" quota.
  --security-groups <security-groups>
                        New value for the "security-groups" quota.
  --security-group-rules <security-group-rules>
                        New value for the "security-group-rules" quota.
  --server-groups <server-groups>
                        New value for the "server-groups" quota.
  --server-group-members <server-group-members>
                        New value for the "server-group-members" quota.
  --force               Whether force update the quota even if the already
                        used and reserved exceeds the new quota.
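With that help output in hand, raising a tenant's limits is a one-liner. A hypothetical example (the tenant ID and the new values here are made up; substitute your own):
# Bump the instance, core and RAM quotas for a tenant (example tenant ID - replace with yours), then confirm
supernova lon quota-update --instances 100 --cores 400 --ram 1024000 5f1a2b3c4d5e6f
supernova lon quota-show --tenant 5f1a2b3c4d5e6f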
So, as you may already be aware, I am working on a lightweight backup script called 'obscene redundancy': redundant backup software capable of pushing 18 replicas of your data to the Rackspace Cloud Files API service. It's so redundant... it's obscene redundancy.
Today I was discussing with my colleague that it is all very well uploading your tar to Cloud Files, but wouldn't you really like to know that the file you uploaded is completely identical, the same bits in the same order? Enter the Cloud Files 'HEAD' request and its Etag header, our MD5 friend.
What I did to improve the obscene redundancy script was quite simple:
# We define a variable that takes the 'Etag' (MD5Sum) value for the cloud files archive
cfmd5sum=$(swiftly --conf swiftly-configs/swiftly-${SHORT_REGION,,}.conf head \
"${BACKUP_DEST}/${FILE}" | grep -i Etag | awk '{print $2}')
# We Define a variable that generates an 'MD5Sum' for the local file archive
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE")
echo "Checking Data integrity of Cloud Files upload to $REGION"
echo "Cloud Files Archive MD5: $cfmd5sum ....... Local File Archive MD5: $localmd5sum"
# If the checksums differ, flag the upload as bad; otherwise report it as OK
if [[ "$cfmd5sum" != "$localmd5sum" ]];
then
echo "VALUES NOT EQUAL"
echo "$REGION CRC missing, in error, or NOT OK..."
else
echo "VALUES EQUAL"
echo "$REGION CRC OK..."
fi
After all this I found that the script wasn't working properly, so I did some debugging; first of all I checked the length of each variable.
if [[ "$cfmd5sum" == "$localmd5sum" ]]; then
echo "VALUES EQUAL, (local md5sum length given first)"
echo "$localmd5sum"| wc -L
echo "$cfmd5sum"| wc -L
echo "$REGION CRC OK..."
else
echo "VALUES NOT EQUAL"
echo "$localmd5sum"|wc -L
echo "$cfmd5sum"|wc -L
echo "$REGION CRC missing, in error, or NOT OK..."
fi
The output showed me that the variable lengths were different. At this stage I've no idea why, but I will add updates here. I'm going to commit this to obsceneredundancy because the proof of concept is working and valid, as shown by the output of the script (i.e. the method is fine; it's just the way the strings are compared in the if statement, and I suspect it is to do with special characters or \n characters, as I have had before). So, when I made this addition to the multi-dc-backup.sh script, the output now looks like:
Creating Container in LON for obsceneredundancy
LON: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://LON/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
Checking Data integrity of Cloud Files upload to BACKUP_TO_LON
Cloud Files Archive MD5: 65147eb66f8bbeff03a229570b0a1be7 ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7 /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_LON CRC missing, in error, or NOT OK...
lon: COMPLETED OK 15504796/15504796
ORD: Not backing up ...
Creating Container in IAD for obsceneredundancy
IAD: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://IAD/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
Checking Data integrity of Cloud Files upload to BACKUP_TO_IAD
Cloud Files Archive MD5: 65147eb66f8bbeff03a229570b0a1be7 ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7 /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_IAD CRC missing, in error, or NOT OK...
iad: COMPLETED OK 15504796/15504796
DFW: Not backing up ...
As we can see, the 107 (local md5sum length) and the 32 (Cloud Files md5sum length) are different! I've no idea why, since when echoing the variables they look the same. I suspect gremlins and trolls. A fresh head tomorrow will probably solve this in a few minutes!
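Update: the likely culprit isn't gremlins, it's the output format of md5sum. Running md5sum on a file prints the 32-character hash followed by two spaces and the file path, which for the archive path above comes to exactly 107 characters, while the Etag is the bare 32-character hash. Stripping the local variable down to just the hash field should make the comparison behave:
# Take only the hash field from md5sum so it matches the bare Cloud Files Etag
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE" | awk '{print $1}')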
Hey folks, so I have noticed that the new ModSecurity CRS version 2 supports 'chained' rules. This means that whitelisting IPs has been altered slightly.
Previously, IP whitelisting in ModSecurity was simpler, something like:
SecRule REMOTE_ADDR "^11.22.33.44" phase:1,nolog,allow,ctl:ruleEngine=off
Now, in ModSecurity v2, the whitelist configuration must look something like this:
SecRule REMOTE_ADDR "^11\.22\.33\.44$" phase:1,log,allow,ctl:ruleEngine=Off,id:999945
Now it's kind of weird, but I hear that chains are much more secure, so in that regard maybe v2 has something awesome to offer. I was just head-scratching on this one for a good 20 minutes!
You might be wondering why you are receiving an error like 'configtest failed' when restarting Apache with ModSecurity enabled. Most likely it is because newer ModSecurity releases require every SecRule to carry a unique id action, as in the example above, so this is probably the fix you need.
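A quick way to see the actual ModSecurity error, rather than just a failed restart, is to run the config test by hand before restarting Apache:
# Debian/Ubuntu
apache2ctl configtest
# RHEL/CentOS
apachectl configtest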
Hey folks. So, recently I have been doing a bit of work on the Rackspace community, specifically trying to document and simplify the importing and exporting of cloud server VHDs between Rackspace regions. This might be really useful if you are designing an HA, multi-region and/or load-balancing solution that uses Autoscale and other kinds of redundancy, because moving your 'golden image' between regions can be quite difficult if you do the entire process manually, step by step, as I have documented in the two articles below:
Exporting Cloud server images from a Rackspace Region
https://community.rackspace.com/products/f/25/t/7089
Importing Cloud Server Images to a Rackspace Region
https://community.rackspace.com/products/f/25/t/7186
In this article I finish writing the 'automation demo' of how to move images without changing much at all, apart from one 'serverID' variable and the source and destination. The script isn't finished yet; the last time I posted this on my blog I was so excited that I actually forgot to include the import function (which is kind of important!), sorry about that.
echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles
echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"
echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done
# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')
## Download VHD Cloud from Cloud Files to this server
As you can probably see, my code is still rather rough, but it's just so darn exciting that this script works from start to finish that I just HAD to share it a bit early! The plan now is to add command-line handling so that you can specify ./moveregion {SOURCE_REGION} {DEST_REGION} {SERVER_ID} {TENANT_ID}. Then a customer or a Racker would only need these 4 variables to import and export images in an automated way, as sketched below.
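As a taste of what that command-line interface could look like, here is a minimal sketch of the argument handling; the variable names simply mirror the planned usage above and nothing below exists in the script yet:
#!/bin/bash
# Hypothetical argument handling for the planned ./moveregion interface
if [ "$#" -ne 4 ]; then
    echo "Usage: $0 SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID" >&2
    exit 1
fi
SOURCE_REGION="$1"   # e.g. lon
DEST_REGION="$2"     # e.g. iad
SERVER_ID="$3"       # UUID of the cloud server to image and move
TENANT_ID="$4"       # tenant/account number used in the API URLs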
I could rewrite the script in such a way that it would accept a .txt file of a couple of hundred cloud server UUIDs; it would take each server UUID, use it to create an image of that server, export the image to Cloud Files, import it into Cloud Files in the destination region, and then import it into the Glance image store there. That would naturally save hundreds of hours of human time doing this manually, which is... nice 😀
I would really like to make a UI front end, using something like Django, and utilize some form of 'light' database that keeps track of all the API imports/exports and even provides an estimated time to completion, but my UI skills are really limited to XHTML, CSS, PHP and MySQL. I need a Python or Django guy to help out with some of this, so if anyone is interested, please reach out to me.
So I wrote a (basic) piece of software using Bash which exports Rackspace Cloud Servers between regions. It's pure API calls using curl, and I'm particularly proud of this piece, since it only took a day (though I then spent the whole of the next day figuring out an issue with the JSON and bash parameter expansion when exporting the cloud server image to Cloud Files).
This is a super rough example of an automation-in-progress for moving cloud servers between regions. Once you've set the script up, you simply change the server ID and the script does the rest; you can migrate server by server, or perform batch migrations with this.
I'm going to refactor and rewrite it when I have time, but for now, here you are! Enjoy 😀
I hope that this will be useful to people, particularly our customers, once I release a finely tuned version with command-line argument support.
#!/bin/bash
echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles
echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"
echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/10101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/100101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done
# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')
## Download VHD Cloud from Cloud Files to this server
So, you may have noticed over the past weeks and months that I have been a little bit quieter about the articles I have been writing. That is mainly because I've been working on a new GitHub project which, although simple and lightweight, is actually rather outrageously powerful.
https://github.com/aziouk/obsceneredundancy
Imagine being able to take 15+ redundant replica copies of your files, across 5 or 6 different datacentres. Rackspace Cloud Files API powered, but also with a lot of the flexibility of Bourne Again Shell (BASH).
This was actually quite a neat achievement and I am pleased with the results. There are still some limitations to this redundant replica application, and there are a few bugs, but it is a great proof of concept which shows what you can do with the API both quickly and cheaply (ish). Using filesystems as a service will be the future, given some further innovation in worldwide network infrastructure, and it would only take a small breakthrough to rapidly alter the way that operating systems and machines boot and back up.
If you want to see the project and read the source code before I lay out and explain the entire process of writing this software, as well as how to deploy it with cron on Linux, then you need not wait any longer. Revision 1 alpha is now tested, ready and working in 5 different datacentres.
You can actually toggle which datacentres you wish to utilize, so it is somewhat flexible. The only important consideration is to understand that there are some limitations, such as a lack of de-duplication, and that it uses tars and swiftly instead of querying the API directly. Since uploading a tar file directly through the API is relatively simple, I will probably implement it that way, as I have done before, and get rid of swiftly in future iterations. A project like this is really ideal for learning more about Bash, cron, APIs and the programmatic automation of sequential filesystems, utilizing functional programming and a division of labour between workers.
https://github.com/aziouk/obsceneredundancy
Test it (please note it will be a little bit buggy on different environments and there are no instructions yet).
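If you want to run it on a schedule, a single cron entry is all the deployment it really needs. A hypothetical example, assuming the repository is cloned to /root/obsceneredundancy and the entry point is the multi-dc-backup.sh script mentioned earlier:
# Hypothetical cron entry: run the multi-datacentre backup every night at 02:00 and keep a log
0 2 * * * /root/obsceneredundancy/multi-dc-backup.sh >> /var/log/obsceneredundancy.log 2>&1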
So, you want to configure NFS? This isn't too difficult to do. In the simplest setup you will need two servers: one acting as the NFS server, which hosts the content and attached disks, and a second server acting as the client, which mounts the filesystem of the NFS server over the network at a local mount point. In RHEL 7 this is remarkably easy to do.
Install and Configure NFS on the Server
Install dependencies
yum -y install nfs-utils rpcbind
Create a directory on the server
This is the directory we will share
mkdir -p /opt/nfs
Configure access for the client server on ip 10.0.0.2
vi /etc/exports
# alternatively you can write the line directly with a redirect, but I don't recommend it (note that > overwrites the whole file)
echo "/opt/nfs 10.0.0.2(no_root_squash,rw,sync)" > /etc/exports
service rpcbind start; service nfs start
service nfs status
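If you change /etc/exports later, while the services are already running, you do not need a full restart; re-exporting is enough:
# Re-read /etc/exports and apply any changes without restarting NFS
exportfs -ra
# Show what is currently being exported and with which options
exportfs -v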
Install and configure NFS on the Client
Install dependencies & start rpcbind
yum install nfs-utils rpcbind
service rpcbind start
Create directory to mount NFS
# Directory we will mount our Network filesystem on the client
mkdir -p /mnt/nfs
# The server ip address is 10.0.0.1, with the path /opt/nfs, we want to mount it to the client on /mnt/nfs this could be anything like
# /mnt/randomdata-1234 etc as long as the folder exists;
mount 10.0.0.1:/opt/nfs /mnt/nfs/
Check that the NFS works
echo "meh testing.." > /mnt/nfs/testing.txt
cat /mnt/nfs/testing.txt
ls -al /mnt/nfs
You should see that the filesystem now has testing.txt on it, confirming you set up NFS correctly.
Make the NFS mount permanent by enabling the service at boot and adding the mount to the client's fstab
This will cause the filesystem to be mounted automatically at boot time
systemctl enable nfs-server
vi /etc/fstab
10.0.0.1:/opt/nfs /mnt/nfs nfs defaults 0 0
# OR you could simply append the line with a redirect (be careful doing this
# unless you are absolutely sure what you are doing)
echo "10.0.0.1:/opt/nfs /mnt/nfs nfs defaults 0 0" >> /etc/fstab
If you reboot the client now, you should see that the NFS mount comes back.
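To double-check things from the client side, before and after a reboot, a couple of quick commands confirm that the export is visible and mounted:
# Ask the NFS server what it is exporting to us
showmount -e 10.0.0.1
# Confirm the mount is active and check the available space
mount | grep /mnt/nfs
df -h /mnt/nfs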