Creating basic Log Trees for Rackspace CDN

Well, this was a little bit of a hack, but I’m sharing it because it’s quite cool.


# Simple script that is designed to process Rackspace .CDN_ACCESS_LOGS recursively
# once they are downloaded with swiftly

# to download the CDN logs use
# swiftly --verbose --eventlet --concurrency=100 get .CDN_ACCESS_LOGS --all-objects -o ./
alldirs=$(ls -al -1);

echo "$alldirs" | awk '{print $9}' > alldirs.txt

for item in `cat alldirs.txt`
do
        echo " --- CONTAINER $item START ---"
        tree -L 2 $item;
        echo " --- CONTAINER $item END --- "
        printf "\n\n"
done
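Parsing `ls` output is fragile when directory names contain spaces; a variant of the same loop that globs the container directories directly might look like this (run from the directory swiftly downloaded into; it falls back to a `find` listing if `tree` isn’t installed):

```shell
# Same report without the alldirs.txt intermediate: glob the
# container directories directly so odd names can't break the loop.
for item in */; do
        echo " --- CONTAINER ${item%/} START ---"
        # fall back to find's two-level listing if tree isn't available
        tree -L 2 "$item" 2>/dev/null || find "$item" -maxdepth 2
        echo " --- CONTAINER ${item%/} END --- "
        printf "\n\n"
done
```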

DISASTER RECOVERY! Exporting a Broken Cloud-server image VHD from Rackspace and attempting to recover data

Thanks to my colleague Marcin for the guestmount tools protip.

I wrote a previous guide which explains how to download/export a Cloud Server image VHD from Rackspace Cloud when it is failing to build. It might allow you to perform data recovery even if the image can’t be booted. I’m guessing someone is going to run into this sooner or later, and they will be pleased to see this article: it will at least give you a best shot at reading the VHD and recovering the data, since, as you might know already, just because the boot loader or kernel is broken doesn’t mean that the data isn’t there!

# A better article to use if you want to download via commandline

# My article doing this thru a web-browser which might be useful too for some customers

Once the image gets downloaded to your new cloud instance you can use the ‘libguestfs-tools’ package (same name on Ubuntu and CentOS), which contains the tools necessary for mounting .vhd image files.

The command would be (read-only mode):

guestmount -a {image-name}.vhd -i --ro {mount-point}

Taking strace output from stderr and piping to other utilities

Well, this is a strange thing to do, but say you want to know how fast an application is processing data. How do you tell? Enter strace, and… a bit of wc -l, with the assistance of tee and proper 2>&1 redirection.

strace -p 9653 2>&1 | tee >(wc -l)

where 9653 is the process id (pid) and wc -l is the command you want to pipe to!

read(4, " - - [26/05/2015:15:15"..., 8192) = 8192
read(4, "o) Version/7.1.6 Safari/537.85.1"..., 8192) = 8192
read(4, "ident/6.0)\"\n91.233.154.8 - - [26"..., 8192) = 8192

1290 lines in the output… perfect, that’s what I wanted to know: roughly how quickly this log parser is going through my logs 😀
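The 2>&1 is doing the heavy lifting here, because strace writes everything to stderr; without the redirection the pipe would see nothing at all. The mechanics are easy to check with any command that writes only to stderr, such as ls on a path that doesn’t exist:

```shell
# stderr is invisible to a pipe unless merged into stdout first
ls /no/such/path 2>/dev/null | wc -l   # prints 0: the error bypassed the pipe
ls /no/such/path 2>&1 | wc -l          # prints 1: wc now sees the error line
```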

Concatenating all Rackspace CDN Logs into a single File

In my previous article Retrieve CDN log files via swiftly I showed how to download all of the CDN logs.

After downloading all of the CDN logs, you will likely want to parse them. However, Rackspace presently write a separate CDN log file for each hour, of each day, of each month and year. This is convenient if you require the logs separate, but if you want to parse them with something like awstats, piwik, or another log parser such as goaccess, it helps if they are all part of the same file.

Here’s how I achieved it (where /home/adam/cdn is the path of the CDN logs). Don’t worry, this will pick up ALL the log files inside there, at least the ones that are gz files:

find /home/adam/cdn -type f | xargs -i zcat {} > alldomains.cdn.log

I could probably have filtered to just the gz files (there’s no `-type gz` in find, but `-name '*.gz'` does the job). It works nicely though. It’s a quickie, but a goodie.
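A sketch of that filtered variant, assuming the same /home/adam/cdn path: `-name '*.gz'` restricts the match to gzipped logs, and `-exec zcat {} +` batches the files without the xargs round-trip.

```shell
# Concatenate only the gzipped logs under the CDN download directory
find /home/adam/cdn -type f -name '*.gz' -exec zcat {} + > alldomains.cdn.log
```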

Adding nodes and Updating nodes behind a Cloud Load Balancer

I have succeeded in putting together a basic script documenting exactly how the API works for adding node(s), listing the nodes behind the LB, and updating the condition of nodes (such as DRAINING, DISABLED, ENABLED).

Use update node to set one of your nodes to gracefully drain (stop accepting new connections and wait for present connections to complete). Naturally, you will want to put the secondary server in behind the load balancer first, with the add node call below.

Once the new node is added as ENABLED, set the old node to ‘DRAINING’. This will gracefully switch traffic over to the new server.

# List Load Balancers



# Placeholder account details -- substitute your own
USERNAME='mycloudusername'
APIKEY='mycloudapikey'
CUSTOMER_ID='10000000'
LB_ID='200000'

TOKEN=`curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# lon is the London region; substitute your load balancer's region (dfw, ord, etc.)
curl -v -H "X-Auth-Token: $TOKEN" -H "content-type: application/json" -X GET "https://lon.loadbalancers.api.rackspacecloud.com/v1.0/$CUSTOMER_ID/loadbalancers/$LB_ID"


# Add Node(s)



TOKEN=`curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Add Node
curl -v -H "X-Auth-Token: $TOKEN" -d @addnode.json -H "content-type: application/json" -X POST "https://lon.loadbalancers.api.rackspacecloud.com/v1.0/$CUSTOMER_ID/loadbalancers/$LB_ID/nodes"


For the add node call you require a file called addnode.json;
that file must contain the snet IPs you wish to add:

# addnode.json

{"nodes": [
        {
            "address": "",
            "port": 80,
            "condition": "ENABLED"
        }
    ]
}






TOKEN=`curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Update Node

curl -v -H "X-Auth-Token: $TOKEN" -d @updatenode.json -H "content-type: application/json" -X PUT "https://lon.loadbalancers.api.rackspacecloud.com/v1.0/$CUSTOMER_ID/loadbalancers/$LB_ID/nodes/$NODE_ID"



# updatenode.json

{"node": {
        "condition": "DISABLED"
    }
}

Naturally, you can change the condition to ENABLED, DISABLED, or DRAINING.

I recommend using DRAINING, since it will gracefully remove the cloud-server: any existing connections will be waited on before the server is removed from the LB.
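For that graceful switchover, the updatenode.json body would simply carry DRAINING instead:

```json
{"node": {
        "condition": "DRAINING"
    }
}
```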

Checking Rackspace DNS API Rate Limits

So, you want to check your Rackspace DNS API rate limits?

Here is how you do it.


# Placeholder values -- substitute your own

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# Customer ID of the account (the numbers that are in the URL when you login to mycloud control panel)
CUSTOMER_ID='10000000'

# This section simply retrieves the TOKEN
TOKEN=`curl -X POST https://identity.api.rackspacecloud.com/v2.0/tokens -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Retrieve Rate Limit detail for the account
curl "https://dns.api.rackspacecloud.com/v1.0/$CUSTOMER_ID/limits" -H "X-Auth-Token: $TOKEN" | python -m json.tool
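As an aside, the grep/cut chain used to extract TOKEN can be sanity-checked locally on a canned identity response (abc123 and the trimmed-down JSON shape here are illustrative; the real reply is much larger):

```shell
# json.tool pretty-prints one key per line, so grabbing the id that sits
# under "token" is a matter of windowing with grep and splitting on quotes
echo '{"access": {"token": {"id": "abc123", "expires": "2015-01-01"}}}' \
    | python3 -m json.tool | grep -A5 '"token"' | grep '"id"' | cut -d '"' -f4
# prints abc123
```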

Cheers &
Best wishes,

HTTPS to HTTP redirect for Rackspace Cloud Load Balancer


So a customer reached out to us today asking about how to configure HTTPS to HTTP redirects. This is actually really easy to do.

Think of it like this;

When you enable both HTTP and HTTPS (allow insecure and secure traffic), the protocol hitting the server is always HTTP.

When you ‘only allow secure traffic’, the traffic reaching the load balancer is always HTTPS; but if the load balancer holds the certificates and terminates SSL there, the requests it passes on to the server are still plain HTTP.

This is why the X-Forwarded-Proto header becomes important: it is what lets your server determine which traffic arriving from the load balancer originated as HTTPS and which originated as HTTP, and that is what allows you to do the redirection effectively on the cloud-server side.

I hope this helps!

So the rewrite rule on the server uses X-Forwarded-Proto to detect the protocol: the usual ‘https’ check needs to be replaced with a rule that inspects the header, rather than the regular incoming protocol, to determine the redirect.

    RewriteEngine On
    RewriteCond %{HTTP:X-Forwarded-Proto} !=http
    RewriteCond %{REQUEST_URI} !^/some/path.*
    RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Quite simple, but it took me the better part of half an hour to get my head properly around this 😀

You might be interested in achieving this on Windows systems with IIS or similar, so check this link out, which explains how to do that step by step in Windows;

Installing Rackspace Cloud Backup on a Windows Box

You may be a managed infrastructure customer at Rackspace, which is the infrastructure-only support. Maybe you want to install the Cloud Backup agent but aren’t sure how. For Windows, it really couldn’t be easier:

Required Steps
1) Download the Cloud Backup installer
2) Install Cloud Backup, providing, when prompted, your:
i) cloud username
ii) API key
Once the Backup Agent has been installed, it’s possible to start backing up by configuring Backups in the mycloud control panel. Rackspace can help you schedule backups if you feel uncomfortable doing that part, but as the customer you would want to install the agent yourself. Rackspace doesn’t hold credentials or log in to managed infrastructure customers’ machines, so provided you can take care of this part, we can help you do the rest.

# Installing Backup Agent on the Windows Cloud Server

1. Download this link with your browser (right click ‘save target as’ or ‘save link as’) to download the Windows Cloud Backup installer.

2. When installing, the software requests you to provide your cloud username and API key. You can find these two pieces of information under your ‘account settings’ page at the below URL:

(You will need to be logged in, and can access account settings from the menu in the top right hand corner of the page)

3. When prompted by the installer, copy paste in your username and API key.

The whole process is described step by step at the below URL:

Once you have performed these steps you will be able to set up backups in the control panel, and Rackspace can guide you on how to configure them once the agent is installed on your Windows server.