Using Meta-data to track Rackspace Cloud Servers

Hey, so from time to time we have customers who ask us how they can tag their servers, whether for automation or simply to keep them organised. Whilst it’s not possible to tag servers through the API in a way that shows up as the ‘tag’ you can add in the mycloud control panel UI, you can instead use the cloud server metadata set call, and it’s easy enough. Here is how I achieved it.

set-meta-data.sh

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
# Tenant ID (account number is the number shown in the URL address when logged into Rackspace control panel)
ACCOUNT_NUMBER=1001010
API_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER"
SERVER_ID='e9036384-c9be-4c8c-8551-c2f269c424bc'

# This just grabs from a large JSON output the AUTH TOKEN for the API. We auth with the apikey, and we get the auth token and set it in this variable 'TOKEN'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# Then we re-use the $TOKEN we retrieved for the call to the API, supply the $ACCOUNT_NUMBER and importantly, the $API_ENDPOINT.
# Also we send a file, metadata.json, that contains the meta-data we want to add to the server.
curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X PUT -d @metadata.json -H "content-type: application/json" "$API_ENDPOINT/servers/$SERVER_ID/metadata" | python -mjson.tool

metadata.json

{
    "metadata": {
        "Label" : "MyServer",
        "Version" : "v1.0.1-2"
    }
}
chmod +x set-meta-data.sh
./set-meta-data.sh
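A malformed metadata.json will typically get you a 400 back from the API, so it can be worth validating it first. A minimal sketch, using python's json.tool just like the scripts here (python3, since that's what modern distros ship; the inline JSON mirrors the file above):

```shell
#!/bin/bash

# Validate the metadata JSON before PUTting it; json.tool exits
# non-zero on invalid input.
METADATA='{"metadata":{"Label":"MyServer","Version":"v1.0.1-2"}}'
if printf '%s' "$METADATA" | python3 -m json.tool >/dev/null 2>&1; then
    RESULT=valid
else
    RESULT=invalid
fi
echo "$RESULT"
```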

OK, so now you’ve set the data.

What about retrieving it, you ask? That’s not too difficult. Just replace the PUT with a GET and take away the -d @metadata.json bit, and we’re off, like so:

get-meta-data.sh


#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNT_NUMBER=1001010
API_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER"
SERVER_ID='c2036384-c9be-4c8c-8551-c2f269c4249r'


TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`



curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-X GET "$API_ENDPOINT/servers/$SERVER_ID/metadata" | python -mjson.tool

Simples! And as the Fonz would say: ‘Hey, grades are not cool, learning is cool.’

chmod +x get-meta-data.sh
./get-meta-data.sh 
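For completeness: the Compute API also lets you delete a single metadata key with a DELETE to /servers/{id}/metadata/{key}. A sketch of building that URL, using the same hypothetical values as the scripts above:

```shell
#!/bin/bash

# Hypothetical values, matching the example scripts above.
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/1001010'
SERVER_ID='e9036384-c9be-4c8c-8551-c2f269c424bc'
KEY='Label'

# You would then call:
#   curl -X DELETE -H "X-Auth-Token: $TOKEN" "$URL"
URL="$API_ENDPOINT/servers/$SERVER_ID/metadata/$KEY"
echo "$URL"
```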

Using Cloud Files Versioning, Setting up from Scratch

Sooooo.. you want to use Cloud Files, but you also want versioning? No problem! Here’s how you do it from the ground up.

Authorise yourself through the identity API

Basically… set the TOKEN by querying the identity API with your username and API key:

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

If you were to add to this script:

echo $TOKEN

You’d see this when running it:

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   3991     91  0:00:01  0:00:01 --:--:--  3996
8934534DFGJdfSdsdFDS232342DFFsDDFIKJDFijTx8WMIDO8CYzbhyViGGyekRYvtw3skCYMaqIWhw8adskfjds894FGKJDFKj34i2jgidgjdf@DFsSDsd

To understand how the curl authorises itself with the identity API, and specifically how the TOKEN is extracted from the returned output and set in the script, here is the -v verbose output:


# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to identity.api.rackspacecloud.com port 443 (#0)
*   Trying 72.3.138.129...
* Connected to identity.api.rackspacecloud.com (72.3.138.129) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
* 	subject: CN=identity.api.rackspacecloud.com,OU=Domain Validated,OU=Thawte SSL123 certificate,OU=Go to https://www.thawte.com/repository/index.html,O=identity.api.rackspacecloud.com
* 	start date: Nov 14 00:00:00 2011 GMT
* 	expire date: Nov 12 23:59:59 2016 GMT
* 	common name: identity.api.rackspacecloud.com
* 	issuer: CN=Thawte DV SSL CA,OU=Domain Validated SSL,O="Thawte, Inc.",C=US
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0> POST /v2.0/tokens HTTP/1.1
> User-Agent: curl/7.29.0
> Host: identity.api.rackspacecloud.com
> Accept: */*
> Content-type: application/json
> Content-Length: 115
>
} [data not shown]
* upload completely sent off: 115 out of 115 bytes
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 02 Feb 2016 18:19:06 GMT
< Content-Type: application/json
< Content-Length: 5028
< Connection: keep-alive
< X-NewRelic-App-Data: Censored
< vary: Accept, Accept-Encoding, X-Auth-Token
< Front-End-Https: on
<
{ [data not shown]
100  5143  100  5028  100   115   3825     87  0:00:01  0:00:01 --:--:--  3826
* Connection #0 to host identity.api.rackspacecloud.com left intact
{
    "access": {
        "serviceCatalog": [
            {
                "endpoints": [
                    {
                        "internalURL": "https://snet-storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567",
                        "publicURL": "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567",
                        "region": "LON",
                        "tenantId": "MossoCloudFS_10010101"
                    }
                ],
                "name": "cloudFiles",
                "type": "object-store"
            },
            {
   "token": {
            "RAX-AUTH:authenticatedBy": [
                "APIKEY"
            ],
            "expires": "2016-02-03T18:31:18.838Z",
            "id": "#$dfgkldfkl34klDFGDFGLK#$OFDOKGDFODJ#$OFDOGIDFOGI34ldfldfgkdo34lfdFGDKDFGDODFKDFGDFLK",
            "tenant": {
                "id": "10010101",
                "name": "10010101"
            }
        },

This is truncated (the full output is larger), but basically the "token" section is stripped down at the id: part so that only the string is left; that string is then placed into the TOKEN variable.
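Incidentally, the grep/cut pipeline works, but it depends on the ordering of the JSON output. A more robust sketch is to walk the JSON itself (python3 here; the response string is a trimmed, hypothetical stand-in for what identity actually returns):

```shell
#!/bin/bash

# Trimmed, hypothetical identity response; the real one comes back from
# the curl to identity.api.rackspacecloud.com.
RESPONSE='{"access":{"token":{"id":"example-token-id"}}}'

# Parse the JSON properly instead of grepping for "id", so extra id
# fields elsewhere in the service catalog can't break the extraction.
TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$TOKEN"
```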

So now you understand auth.

Create the Version container

This contains all of the version changes of any file

i.e. if you overwrite a file 10 times, all 10 versions will be saved

# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

Note we use $TOKEN, which is basically just the password, with the X-Auth-Token header. -H means 'send this header'. X-Auth-Token is the header name, and $TOKEN is the token we populated in the variable in the first auth section above.

Create a Current Container

This only contains the 'current', i.e. the latest, version of the file

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current

I'm being a bit naughty here; I could make MossoCloudFS_10010101 a variable, like $CONTAINERSTORE or $CONTAINERPARENT, or better, $TENANTCONTAINER. But meh. You get the idea. And learnt something.
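For what it's worth, that parameterisation would look something like this (the tenant value is the hypothetical one used throughout):

```shell
#!/bin/bash

# Parameterise the tenant container instead of repeating it in every URL.
TENANTCONTAINER='MossoCloudFS_10010101'
STORAGE_URL="https://storage101.lon3.clouddrive.com/v1/$TENANTCONTAINER"

VERSIONS_URL="$STORAGE_URL/versions"
CURRENT_URL="$STORAGE_URL/current"
echo "$VERSIONS_URL"
echo "$CURRENT_URL"
```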

Note, importantly, the X-Versions-Location header set when creating the 'current' Cloud Files container. It asks Swift to store versions of any changes made in current over in the versions container. Nice.

Create an object

Create the first version of an object, because it's awesome

# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Yay! My first object. I just put the number 1 in it. Not very imaginative, but you get the idea. Now let's revise the object.

Create a new version of the object

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Create a list of the older versions of the object

# Create a list of the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=00cmyobject.obj"
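About that prefix: Swift stores each old copy in the versions container as <length><object name>/<timestamp>, where <length> is the three-character zero-padded hex length of the object name (so for the 12-character myobject.obj it works out to 00c). A sketch of computing it:

```shell
#!/bin/bash

# Compute the versions-container prefix for an object: 3-char zero-padded
# hex length of the name, followed by the name itself.
OBJECT='myobject.obj'
PREFIX=$(printf '%03x%s' "${#OBJECT}" "$OBJECT")
echo "$PREFIX"
```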

Delete the current version of an object

# Delete the current version of the object
curl -i -XDELETE -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj"

Pretty cool. Altogether now.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`



# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current


# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# Create a list of the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=00cmyobject.obj"

# Delete the current version of the object

curl -i -XDELETE -H "X-Auth-Token: $TOKEN" "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj"

What the output of the full script looks like:

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4291     98  0:00:01  0:00:01 --:--:--  4290
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx514bac5247924b5db247d-0056b0ecb7lon3
Date: Tue, 02 Feb 2016 17:51:51 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7b7f42fc19b1428b97cfa-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:52 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:53 GMT
Content-Length: 0
Etag: c4ca4238a0b923820dcc509a6f75849b
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx2495824253374261bf52a-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:53 GMT

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:54 GMT
Content-Length: 0
Etag: c81e728d9d4c2f636f067f89cc14862c
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx785e4a5b784243a1b8034-0056b0ecb9lon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
X-Container-Bytes-Used: 2
X-Timestamp: 1454435183.80523
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4782072371924905bc513-0056b0ecbalon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

Rackspace Customer takes the time to improve my script :D

Wow, this was an awesome customer, obviously capable with the API but struggling a little. So I threw them my portable python -mjson parsing script for the identity token and the Glance image export to Cloud Files. The customer wrote back, commenting that I’d made a mistake; specifically, I had added ‘export’ instead of ‘exports’.

#!/bin/bash

# Task ID - supply with command
TASK=$1
# Username used to login to control panel
USERNAME='myusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='myapikeyhere'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Requests progress of specified task
curl -X GET -H "X-Auth-Token: $TOKEN" "https://lon.images.api.rackspacecloud.com/v2/10010101/tasks/$TASK"

I just realised that the customer didn’t adapt the script to be able to pass in the image ID on the initial export to Cloud Files.

Theoretically you could not only do the above, but something like this:

I just realised the script you sent checks the TASK. I’ve amended my initial script a bit further with your suggestion, so it accepts myclouduser, mycloudapikey and mycloudimageid as arguments:

#!/bin/bash

# Username used to login to control panel
USERNAME=$1
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY=$2

# Find the image ID you'd like to make available on cloud files
IMAGEID=$3

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl https://lon.images.api.rackspacecloud.com/v2/10031542/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

# I thought that could theoretically process the output of the above and extract $TASK_ID to check the TASK too.

Note my script isn’t perfect, but the customer did well!

This way you could simply provide the script the cloud username, API key and image ID. Then, when the Glance export starts, the task ID could be extracted in the same way as the TOKEN is from identity auth.
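A sketch of that task-ID extraction, in the same style as the TOKEN extraction (the response string is a trimmed, hypothetical stand-in for what the Glance tasks POST returns):

```shell
#!/bin/bash

# Trimmed, hypothetical Glance task response; the real one is returned
# by the tasks POST in the script above.
RESPONSE='{"id": "b8f34f1c-1111-2222-3333-444455556666", "type": "export", "status": "pending"}'

# Pull out the task ID so a follow-up GET can poll its status.
TASK=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
echo "$TASK"
```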

That way you could simply run something like

./myexportvhd.sh mycloudusername mycloudapikey mycloudimageid 

Not only would it start the image export to a set exports folder, it’d also provide you an update on the task status.

You could go further: you could watch the task status with a bash while loop until each task shows a complete or failed output, recording which ones succeeded and which failed. You could then create a batch script off that which downloads, and rsyncs to somewhere, the ones that succeeded.

Or..something like that.

I love it when one of our customers makes me think really hard. Gotta love that!

Deleting Glance Images one liner

I’ve been working on some glance automation and I wanted to quickly delete all the glance images so I can test if my ansible playbook is downloading all the reference cloud qcow2 images and populating glance with them correctly.

bash-4.2# glance image-list | awk '{print $2}' | grep -v ID | xargs -i echo glance image-delete {}
glance image-delete 8d73249e-c616-4481-8256-f634877eb5a2
glance image-delete 2ea3faef-530c-4679-9faf-b11c7e7889eb
glance image-delete 697efb18-72fe-4305-8e1d-18e0f1481bd6
glance image-delete 555811e2-f941-4cb5-bba2-6ed8751bf188
glance image-delete 7182dca4-f0f4-4176-a706-d8ca0598ef9f
glance image-delete 0f5f2bc5-94a4-4361-a17e-3fed96f07c4e
glance image-delete a01580c2-f264-4058-a366-30d726c2c496
glance image-delete 92a39f49-b6e5-4d32-9856-37bbdac6c285
glance image-delete c01a6464-8e2c-4edb-829e-6d123bc3c8f4
-bash-4.2# glance image-delete 8d73249e-c616-4481-8256-f634877eb5a2
-bash-4.2# glance image-delete 2ea3faef-530c-4679-9faf-b11c7e7889eb
-bash-4.2# glance image-delete 697efb18-72fe-4305-8e1d-18e0f1481bd6
-bash-4.2# glance image-delete 555811e2-f941-4cb5-bba2-6ed8751bf188
-bash-4.2# glance image-delete 7182dca4-f0f4-4176-a706-d8ca0598ef9f
-bash-4.2# glance image-delete 0f5f2bc5-94a4-4361-a17e-3fed96f07c4e
-bash-4.2# glance image-delete a01580c2-f264-4058-a366-30d726c2c496
-bash-4.2# glance image-delete 92a39f49-b6e5-4d32-9856-37bbdac6c285
-bash-4.2# glance image-delete c01a6464-8e2c-4edb-829e-6d123bc3c8f4
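The trick here is the echo: xargs prints each glance image-delete command as a dry run, and you only execute them once you've eyeballed the list (drop the echo, or pipe the output to sh). The same pattern on harmless data, using -I{} (the modern spelling of the -i in the one-liner above):

```shell
#!/bin/bash

# Dry-run pattern: echo prints each generated command instead of running it.
CMDS=$(printf '%s\n' aaa bbb | xargs -I{} echo glance image-delete {})
echo "$CMDS"
```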

Downloading exported Cloud Server Image from Cloud Files using BASH/curl/API

So, after successfully exporting the image in the previous article, I wanted to download the VHD so I could use it on VirtualBox at home.

#!/bin/bash

# Username used to login to control panel
USERNAME='adambull'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Find the image ID you'd like to make available on cloud files

# Replace everything after the underscore with just your account number (the number shown in the URL when logged into the mycloud control panel), so it looks like e.g. MossoCloudFS_101110
TENANTID='MossoCloudFS_mytenantidgoeshereie1001111etc'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Download the cloud files image

VHD_FILENAME=5fb64bf2-afae-4277-b8fa-0b69bc98185a.vhd
curl -o "$VHD_FILENAME" -X GET "https://storage101.lon3.clouddrive.com/v1/$TENANTID/exports/$VHD_FILENAME" \
-H "X-Auth-Token: $TOKEN"

Really really easy

Output looks like;

 ./download-image-id.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4470    102  0:00:01  0:00:01 --:--:--  4473
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  1 3757M    1 38.1M    0     0  7231k      0  0:08:52  0:00:05  0:08:47 7875k

Exporting Rackspace Cloud Server Image to Cloud Files (so you can download it)

So today, a customer wanted to know if there was a way to export a Rackspace Cloud Server image out of Rackspace in order to download it. Yes, this is possible, and it can be done using the Images API and Cloud Files. Here is a summary of the basic process;

Step 1: Make a container called ‘exports’ in Cloud Files. You can do this through the mycloud control panel by navigating to your Cloud Files and simply clicking create container; call it ‘exports’, matching the receiving_swift_container in the script.


Step 2: Create a bash script to query the API with the correct user, API key and image ID;

vim mybashscript.sh
#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusernamehere'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikeyhere'
# Find the image ID you'd like to make available on cloud files
# set the image id below of the image you want to copy to cloud files, see in control panel
IMAGEID="5fb24bf2-afae-4277-b8fa-0b69bc98185a"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl https://lon.images.api.rackspacecloud.com/v2/10045567/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

It’s so simple I had to check myself that it was really this simple.

It is. yay! Next guide shows you how to download the image you made.

Testing your server’s available bandwidth & DDOS resiliency with iperf

So, if you buy a server with, say, a 1.6Gbps connection as in this customer’s case, you might want to test that you have the bandwidth you need, for instance to be resilient against small DOS and DDOS attacks in the 500mbit-1000mbit range.

Here is how I did it (quick summary)


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 53898 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50460 connected with somedestipiwanttospeedtest-censored port 80


[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85471 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85471 datagrams
[SUM]  0.0-10.0 sec   240 MBytes   201 Mbits/sec
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[  4] WARNING: did not receive ack of last datagram after 10 tries.


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 10 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 12] local someipsrc port 50725 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 40410 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 51075 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 58020 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50056 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 57017 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 49473 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 50491 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 40974 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 38348 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 12]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 12] Sent 81355 datagrams
[  5]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  5] Sent 81448 datagrams
[  6]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  6] Sent 81482 datagrams
[  4]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  4] Sent 81349 datagrams
[  3]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  3] Sent 81398 datagrams
[  7]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  7] Sent 81443 datagrams
[  8]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  8] Sent 81408 datagrams
[  9]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  9] Sent 81421 datagrams
[ 10]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 10] Sent 81404 datagrams
[ 11]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[ 11] Sent 81427 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec


It looks like you are getting the bandwidth you desire; when repeating the test with 20 connections I can see the bandwidth hit a total of 2.01Gbits/sec.

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 20 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 22] local someipsrc port 44231 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 55259 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 49519 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 45301 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 48654 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 33666 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 33963 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 39593 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 36229 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 36331 connected with somedestipiwanttospeedtest-censored port 80
[ 14] local someipsrc port 54622 connected with somedestipiwanttospeedtest-censored port 80
[ 13] local someipsrc port 36159 connected with somedestipiwanttospeedtest-censored port 80
[ 12] local someipsrc port 53881 connected with somedestipiwanttospeedtest-censored port 80
[ 15] local someipsrc port 43221 connected with somedestipiwanttospeedtest-censored port 80
[ 16] local someipsrc port 60284 connected with somedestipiwanttospeedtest-censored port 80
[ 17] local someipsrc port 49735 connected with somedestipiwanttospeedtest-censored port 80
[ 18] local someipsrc port 43866 connected with somedestipiwanttospeedtest-censored port 80
[ 19] local someipsrc port 44631 connected with somedestipiwanttospeedtest-censored port 80
[ 20] local someipsrc port 56852 connected with somedestipiwanttospeedtest-censored port 80
[ 21] local someipsrc port 59338 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 22]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 22] Sent 85471 datagrams
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85449 datagrams
[  7]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  7] Sent 85448 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85448 datagrams
[  6]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  6] Sent 85449 datagrams
[  5]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  5] Sent 85448 datagrams
[  8]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  8] Sent 85453 datagrams
[  9]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  9] Sent 85453 datagrams
[ 10]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 10] Sent 85454 datagrams
[ 11]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 11] Sent 85456 datagrams
[ 14]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 14] Sent 85457 datagrams
[ 13]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 13] Sent 85457 datagrams
[ 12]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 12] Sent 85457 datagrams
[ 15]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 15] Sent 85460 datagrams
[ 16]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 16] Sent 85461 datagrams
[ 17]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 17] Sent 85462 datagrams
[ 18]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 18] Sent 85464 datagrams
[ 19]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 19] Sent 85467 datagrams
[ 20]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 20] Sent 85467 datagrams
[ 21]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 21] Sent 85467 datagrams
[SUM]  0.0-10.0 sec  2.34 GBytes  2.01 Gbits/sec

The last test I did used only 2 connections at 500mbit each;

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 500m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 60841 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 51495 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  4] Sent 406935 datagrams
[  3]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  3] Sent 406933 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec

Disable TCP Offloading on Linux NIC

# read -p "Interface: " iface; ethtool -k $iface | awk -F: '/offload: on$/{print$1}' | sed 's/^\(.\).*-\(.\).*-\(.\).*/\1\2\3/' | xargs --no-run-if-empty -n1 -I{} ethtool -K $iface {} off


Disable offloading for all interfaces:

# for iface in $(cd /sys/class/net; echo *); do ethtool -k $iface | awk -F: '/offload: on$/{print$1}' | sed 's/^\(.\).*-\(.\).*-\(.\).*/\1\2\3/' | xargs --no-run-if-empty -n1 -I{} ethtool -K $iface {} off; done
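The sed in the middle is doing the heavy lifting: it collapses ethtool's long feature names (as listed by ethtool -k) into the short flags ethtool -K accepts, by keeping the first letter of each hyphen-separated word. In isolation:

```shell
#!/bin/bash

# "generic-receive-offload" -> "gro", "tcp-segmentation-offload" -> "tso", etc.
SHORT=$(echo 'generic-receive-offload' | sed 's/^\(.\).*-\(.\).*-\(.\).*/\1\2\3/')
echo "$SHORT"
```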

A big thank you to Daniel C. for this!

How to speed test a Rackspace CDN?

So, today, a customer was asking if we could show speed tests to the CDN.

So I used my French server to test external connections from outside of Rackspace. For a CDN, it’s fairly speedy!

#!/bin/bash
CSTATS=`curl -w '%{speed_download}\t%{time_namelookup}\t%{time_total}\n' -o /dev/null -s http://6281487ef0c74fc1485b-69e4500000000000dfasdcd1b6b.r12.cf1.rackcdn.com/bigfile-rackspace-testing`
SPEED=`echo $CSTATS | awk '{print $1}' | sed 's/\..*//'`
DNSTIME=`echo $CSTATS | awk '{print $2}'`
TOTALTIME=`echo $CSTATS | awk '{print $3}'`
echo "Transfered $SPEED bytes/sec in $TOTALTIME seconds."
echo "DNS Resolve Time was $DNSTIME seconds."
# ./speedtest.sh
Transfered 3991299 bytes/sec in 26.272 seconds.
DNS Resolve Time was 0.061 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 7046221 bytes/sec in 14.881 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 29586916 bytes/sec in 3.544 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 14539272 bytes/sec in 7.212 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 9060846 bytes/sec in 11.573 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 25551753 bytes/sec in 4.104 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 28225927 bytes/sec in 3.715 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 9036412 bytes/sec in 11.604 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transfered 32328623 bytes/sec in 3.243 seconds.
DNS Resolve Time was 0.004 seconds.
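curl's %{speed_download} is in bytes per second, so multiply by 8 to compare against line rates in Mbit/s. For instance, taking the 3991299 B/s figure from the first run:

```shell
#!/bin/bash

# Convert curl's speed_download (bytes/sec) to Mbit/s.
SPEED=3991299
MBITS=$(awk -v s="$SPEED" 'BEGIN {printf "%.1f", s*8/1000000}')
echo "$MBITS Mbit/s"
```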

Checking a crashing / unstable Server

So, what to do if a customer has a server which is frequently crashing? Well, important things to check are the open network ports and listening services, each user’s cron jobs (substituting in every username from /etc/passwd), and the files held open by the Apache process ID.

This will help rule out a lot of common issues being seen on servers, and may even be of use for checking whether the server has been hacked.

netstat -ntulp
for i in $(awk -F: '{print $1}' /etc/passwd); do crontab -l -u $i ;done
lsof -p $(cat /var/run/apache2/apache2.pid) | grep log
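One refinement to the crontab loop: /etc/passwd includes dozens of system accounts, so you can filter to users with a real login shell first. A self-contained sketch (sample passwd lines inlined; in practice you'd read /etc/passwd itself):

```shell
#!/bin/bash

# Keep only accounts whose shell field ends in "sh" and isn't nologin/false.
USERS=$(printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'adam:x:1000:1000::/home/adam:/bin/bash' |
  awk -F: '$7 ~ /sh$/ && $7 !~ /nologin|false/ {print $1}')
echo "$USERS"
```

You'd then run crontab -l -u for each of those, as in the loop above.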

This is a nice one-liner; thanks to my colleague Aaron for providing this. Well, actually, it was so awesome I stole it 😛