Using Cloud Files Versioning, Setting up from Scratch

Sooooo.. you want to use Cloud Files, but you want versioning too? No problem! Here's how you do it from the ground up.

Authorise yourself thru the identity API

Basically… set the token by querying the identity API with your username and API key..

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

If you were to add this to the file:

echo $TOKEN

You’d see this when running it:

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   3991     91  0:00:01  0:00:01 --:--:--  3996
8934534DFGJdfSdsdFDS232342DFFsDDFIKJDFijTx8WMIDO8CYzbhyViGGyekRYvtw3skCYMaqIWhw8adskfjds894FGKJDFKj34i2jgidgjdf@DFsSDsd

To understand how the curl call authorises itself with the identity API, and specifically how the TOKEN is extracted from the returned output and set in the script, here is the -v verbose output:


# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to identity.api.rackspacecloud.com port 443 (#0)
*   Trying 72.3.138.129...
* Connected to identity.api.rackspacecloud.com (72.3.138.129) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_128_CBC_SHA
* Server certificate:
* 	subject: CN=identity.api.rackspacecloud.com,OU=Domain Validated,OU=Thawte SSL123 certificate,OU=Go to https://www.thawte.com/repository/index.html,O=identity.api.rackspacecloud.com
* 	start date: Nov 14 00:00:00 2011 GMT
* 	expire date: Nov 12 23:59:59 2016 GMT
* 	common name: identity.api.rackspacecloud.com
* 	issuer: CN=Thawte DV SSL CA,OU=Domain Validated SSL,O="Thawte, Inc.",C=US
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0> POST /v2.0/tokens HTTP/1.1
> User-Agent: curl/7.29.0
> Host: identity.api.rackspacecloud.com
> Accept: */*
> Content-type: application/json
> Content-Length: 115
>
} [data not shown]
* upload completely sent off: 115 out of 115 bytes
< HTTP/1.1 200 OK
< Server: nginx
< Date: Tue, 02 Feb 2016 18:19:06 GMT
< Content-Type: application/json
< Content-Length: 5028
< Connection: keep-alive
< X-NewRelic-App-Data: Censored
< vary: Accept, Accept-Encoding, X-Auth-Token
< Front-End-Https: on
<
{ [data not shown]
100  5143  100  5028  100   115   3825     87  0:00:01  0:00:01 --:--:--  3826
* Connection #0 to host identity.api.rackspacecloud.com left intact
{
    "access": {
        "serviceCatalog": [
            {
                "endpoints": [
                    {
                        "internalURL": "https://snet-storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567",
                        "publicURL": "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567",
                        "region": "LON",
                        "tenantId": "MossoCloudFS_10010101"
                    }
                ],
                "name": "cloudFiles",
                "type": "object-store"
            },
            {
   "token": {
            "RAX-AUTH:authenticatedBy": [
                "APIKEY"
            ],
            "expires": "2016-02-03T18:31:18.838Z",
            "id": "#$dfgkldfkl34klDFGDFGLK#$OFDOKGDFODJ#$OFDOGIDFOGI34ldfldfgkdo34lfdFGDKDFGDODFKDFGDFLK",
            "tenant": {
                "id": "10010101",
                "name": "10010101"
            }
        },

This is truncated; the full output is larger. Basically, the "token" section is stripped down to the id field so that only the token string is left, and that string is what ends up in the TOKEN variable.
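If you prefer not to rely on grep ordering, a slightly more robust way to pull out the same field is to parse the JSON properly. This is just a sketch, assuming the same response layout shown above, with the token at access -> token -> id:

# Same auth call, but extracting access.token.id with a one-line python JSON parse
TOKEN=$(curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X POST \
  -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' \
  -H "Content-type: application/json" \
  | python -c 'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')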

So now you understand auth.

Create the Version container

This contains all of the version changes of any file

i.e. if you overwrite a file 10 times, all 10 versions will be saved

# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

Note we use $TOKEN, which is basically just the auth password, sent with the X-Auth-Token header. -H means 'send this header'. X-Auth-Token is the header name, and $TOKEN is the token we populated in the variable in the first auth section above.

Create a Current Container

This only contains the 'current', i.e. latest, version of the file

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current

I'm being a bit naughty here; I could make MossoCloudFS_10010101 a variable, like $CONTAINERSTORE or $CONTAINERPARENT, or better, $TENANTCONTAINER. But meh. You get the idea. And learned something.

Note, importantly, the X-Versions-Location header set when creating the 'current' cloud files container. It tells Cloud Files to store versions of any changes made in current over in the versions container. Nice.
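As a small aside, here's what those two container-creation calls look like with the tenant container pulled out into a variable, as mentioned above. Just a sketch; the variable names are mine, not from the original script:

# Tenant container string from the mycloud URL / serviceCatalog publicURL
TENANTCONTAINER='MossoCloudFS_10010101'
STORAGE_URL="https://storage101.lon3.clouddrive.com/v1/$TENANTCONTAINER"

# Create the versions container, then the current container pointing at it
curl -i -XPUT -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/versions"
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H "X-Versions-Location: versions" "$STORAGE_URL/current"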

Create an object

Create the first version of an object, because it's awesome.

# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Yay! My first object. I just put the number 1 in it. Not very imaginative, but you get the idea. Now let's revise the object.

Create a new version of the object

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Create a list of the older versions of the object

# Create a list of the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=008myobject.obj

Delete the current version of an object

# Delete the current version of the object (the previous version in 'versions' is restored)
curl -i -XDELETE -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

Pretty cool. Altogether now.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`



# Create Versioning Container (Backup versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions

# Create current container (latest versions)
curl -i -XPUT -H "X-Auth-Token: $TOKEN" -H  "X-Versions-Location: versions"  https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current


# Create an object
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# Create a new version of the object (second version)
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

# Create a list of the older versions of the object
curl -i -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/versions?prefix=008myobject.obj

# Delete the current version of the object (restores the previous version from 'versions')
curl -i -XDELETE -H "X-Auth-Token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10010101/current/myobject.obj

What the output of the full script looks like:

# ./versioning.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4291     98  0:00:01  0:00:01 --:--:--  4290
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx514bac5247924b5db247d-0056b0ecb7lon3
Date: Tue, 02 Feb 2016 17:51:51 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7b7f42fc19b1428b97cfa-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:52 GMT

Accepted

The request is accepted for processing.

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:53 GMT
Content-Length: 0
Etag: c4ca4238a0b923820dcc509a6f75849b
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx2495824253374261bf52a-0056b0ecb8lon3
Date: Tue, 02 Feb 2016 17:51:53 GMT

HTTP/1.1 201 Created
Last-Modified: Tue, 02 Feb 2016 17:51:54 GMT
Content-Length: 0
Etag: c81e728d9d4c2f636f067f89cc14862c
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx785e4a5b784243a1b8034-0056b0ecb9lon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
X-Container-Bytes-Used: 2
X-Timestamp: 1454435183.80523
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4782072371924905bc513-0056b0ecbalon3
Date: Tue, 02 Feb 2016 17:51:54 GMT

Rackspace Customer takes the time to improve my script :D

Wow, this was an awesome customer, who was obviously capable of using the API but was struggling. So I threw them my portable python -mjson parsing script for the identity token and glance image export to cloud files. The customer wrote back, commenting that I’d made a mistake; specifically, I had added ‘export’ instead of ‘exports’.

#!/bin/bash

# Task ID - supply with command
TASK=$1
# Username used to login to control panel
USERNAME='myusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='myapikeyhere'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Requests progress of specified task
curl -X GET -H "X-Auth-Token: $TOKEN" "https://lon.images.api.rackspacecloud.com/v2/10010101/tasks/$TASK"

I just realised that the customer didn’t adapt the script to be able to pass in the image ID on the initial export to cloud files.

Theoretically you could not only do the above, but something like this as well:

I just realised that the script you sent checks the TASK. I’ve amended my initial script a bit further with your suggestion, so it accepts myclouduser, mycloudapikey and mycloudimageid:

#!/bin/bash

# Username used to login to control panel
USERNAME=$1
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY=$2

# Find the image ID you'd like to make available on cloud files
IMAGEID=$3

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called exports
curl https://lon.images.api.rackspacecloud.com/v2/10031542/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

# I thought you could theoretically process the output of the above and extract the TASK ID to check the TASK too.

Note my script isn’t perfect but the customer did well!

This way you could simply provide the script with the cloud username, API key and image id. Then, when the glance export starts, the task id could be extracted in the same way the TOKEN is extracted from the identity auth response, as sketched below.
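A rough sketch of that idea, assuming the export call's JSON response carries the task id in a top-level "id" field, and reusing the variables and account number from the script above (untested):

# Capture the export response and pull the task id out of it,
# the same way TOKEN is extracted from the identity response
RESPONSE=$(curl -s https://lon.images.api.rackspacecloud.com/v2/10031542/tasks -X POST \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}')

TASK=$(echo "$RESPONSE" | python -mjson.tool | grep '"id"' | head -1 | cut -d '"' -f4)
echo "Export task id: $TASK"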

That way you could simply run something like

./myexportvhd.sh mycloudusername mycloudapikey mycloudimageid 

Not only would it start the image export to a set exports container, but it'd also provide you an update on the task status.

You could go further: you could then watch the task status with a bash while loop until each task shows a success or failure output, and record which ones succeeded and which ones failed. You could then create another script off the back of that which downloaded, and rsynced somewhere, the ones that succeeded.
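Something like this minimal while loop would cover the watching part, using the TASK extracted in the sketch above (again just a sketch; it assumes the task JSON has a "status" field that eventually reads success or failure):

# Poll the Images API task every 30 seconds until it finishes
while true; do
  STATUS=$(curl -s -H "X-Auth-Token: $TOKEN" \
    "https://lon.images.api.rackspacecloud.com/v2/10031542/tasks/$TASK" \
    | python -mjson.tool | grep '"status"' | cut -d '"' -f4)
  echo "Task $TASK status: $STATUS"
  if [ "$STATUS" = "success" ] || [ "$STATUS" = "failure" ]; then
    break
  fi
  sleep 30
done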

Or..something like that.

I love it when one of our customers makes me think really hard. Gotta love that!

Testing Cloud Files API calls

A customer was having issues with cloudfuse, the virtual 'cloud files' hard disk, so we needed to test whether their auth was working correctly:

#!/bin/bash
# Diagnostic script by Adam Bull
# Test Cloud Files Auth
# Tuesday, 02/02/2016

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# This section simply retrieves and sets the TOKEN

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Container to Upload to (container must exist)

CONTAINER=testing

LOCALFILENAME="/root/mytest.txt"
FILENAME="mytest.txt"

# PUT command; note MossoCloudFS_customeridhere needs to be populated with the correct value. This is the number in the mycloud URL when logging in.
curl -i -v -X PUT "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_101100/$CONTAINER/$FILENAME" -T "$LOCALFILENAME" -H "X-Auth-Token: $TOKEN"
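If the PUT comes back with a 201, a quick way to double-check the object really landed is a HEAD request against the same path (a sketch using the same variables as the script above):

# HEAD the object we just uploaded to confirm it exists and check its size/Etag
curl -I -H "X-Auth-Token: $TOKEN" \
"https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_101100/$CONTAINER/$FILENAME"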

Ansible roles/glance/tasks/main.yml playbook for Glance API Deployment

I am working on a project at work to deploy Keystone and Glance. I’ve currently been tasked with finishing off the glance role of the playbook: the basic setup tasks, retrieving the stock qcow2 images for the various distributions, and automatically populating the glance image-list via the API. Here is how I did it:

This uses an encrypted group_vars/all vars.yml, which contains sensitive password variables like GLANCE_DBPASS.

This file shows how the Glance SQL database is created, permissions are granted, the database is populated, and images are uploaded to Glance for use by OpenStack Compute.

glance-api

File: osan/roles/glance/tasks/main.yml

---

   - name: Create glance database
     mysql_db:
        name: glance

   - name: Configure database user privileges
     mysql_user:
       name: glance
       host: "{{ item }}"
       password: "{{ GLANCE_DBPASS }}"
       priv: glance.*:ALL
     with_items:
       - "%"
       - localhost

#   - name: Set credentials to admin
#   command: source admin-openrc.sh

   - name: Create the Glance user service credentials
     command: openstack user create --domain default --password {{ GLANCE_PASS }} glance
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Add the admin role to the glance user and service project
     command: openstack role add --project service --user glance admin
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Create the glance service entity
     command: openstack service create --name glance --description "OpenStack Image service" image
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Create the Image service public API endpoint for glance
     command: openstack endpoint create --region RegionOne image public http://controller:9292
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Create the Image service internal API endpoint for glance
     command: openstack endpoint create --region RegionOne image internal http://controller:9292
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Create the Image service admin API endpoint for glance
     command: openstack endpoint create --region RegionOne image admin 'http://controller:9292'
     environment: "{{ admin_env }}"
     ignore_errors: yes

   - name: Install Glance and Dependencies
     yum: pkg={{item}} state=installed
     with_items:
     - openstack-glance
     - python-glance
     - python-glanceclient

   - name: replace glance-api.conf file
     template: src=glance-api.conf.ansible dest=/etc/glance/glance-api.conf owner=root

   - name: replace glance-registry.conf file
     template: src=glance-registry.conf.ansible dest=/etc/glance/glance-registry.conf owner=root

   - name: Populate the Image service database
     command: su -s /bin/sh -c "glance-manage db_sync" glance

   - name: Start & Enable openstack-glance-registry.service
     service: name=openstack-glance-registry.service enabled=yes state=started

   - name: Start & Enable openstack-glance-api.service
     service: name=openstack-glance-api.service enabled=yes state=started


   - name: Retrieve CentOS 7 x86_64.qcow2
     get_url: url=http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2 dest=/root/CentOS-7-x86_64-GenericCloud-1503.qcow2 mode=0600

   - name: Populate Glance DB with CentOS 7 qcow2 Image
     command:  glance image-create --name "centos7-x86_x64" --file /root/CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Cirros qcow2 Image
     get_url: url=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img dest=/root/cirros-0.3.4-x86_64-disk.img mode=0600

   - name: Import Cirros qcow Image to Glance
     command:  glance image-create --name "cirros-0.3.4_x86_64" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Ubuntu 14.04 Trusty Tahr qcow2 Image
     get_url: url=http://cloud-images.ubuntu.com/releases/14.04/release-20140416.1/ubuntu-14.04-server-cloudimg-amd64-disk1.img dest=/root/ubuntu-14.04-server-cloudimg-amd64-disk1.img mode=0600

   - name: Import Ubuntu 14.04 Trusty Tahr to Glance
     command: glance image-create --name "ubuntu-14.04-lts-trusty-tahr-amd64" --file /root/ubuntu-14.04-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Fedora 23 qcow2 Image
     get_url: url=https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 dest=/root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 mode=0600

   - name: Import Fedora 23 qcow2 Image to Glance
     command: glance image-create --name "fedora-23-amd64" --file /root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Debian 8 amd64 qcow2 Image
     get_url: url=http://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 dest=/root/debian-8.2.0-openstack-amd64.qcow2 mode=0600

   - name: Import Debian 8 to Glance
     command: glance image-create --name "debian8-2-0-amd64" --file /root/debian-8.2.0-openstack-amd64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve OpenSuSE 13.2 Guest Qcow2 Image
     get_url: url=http://download.opensuse.org/repositories/Cloud:/Images:/openSUSE_13.2/images/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 dest=/root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 mode=0600

   - name: Import OpenSuSE 13.2 to Glance
     command: glance image-create --name "opensuse-13-2-amd64" --file /root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

The above is in YAML format, which is really tricky, so watch your syntax when using it. It is VERY sensitive to indentation.
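For reference, once this role is wired into a play, a run against the encrypted group_vars looks roughly like this (the playbook and inventory filenames are assumptions on my part, not from the repo):

# Run the play containing the glance role, prompting for the vault password
# that protects the encrypted group_vars/all/vars.yml
ansible-playbook -i hosts site.yml --ask-vault-pass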

After this runs we are left with a nice glance image-list output. Glance is ready for Compute to use the qcow2 images we uploaded using the OpenStack Glance API.

+--------------------------------------+------------------------------------+
| ID                                   | Name                               |
+--------------------------------------+------------------------------------+
| f58aaed4-fda7-41b3-a0c9-e99d6c956afd | centos7-x86_x64                    |
| b4c7224b-0e0d-475c-880c-f48e1c0608b2 | cirros-0.3.4_x86_64                |
| 975accd5-d9bc-4485-86df-88e97e7f3237 | debian8-2-0-amd64                  |
| 41e7949c-3e17-434f-8008-4551673da496 | fedora-23-amd64                    |
| 092338df-6e8e-471b-93ff-07b339510636 | opensuse-13-2-amd64                |
| ae707804-3dd5-474f-ab8d-3d6e855e420d | ubuntu-14.04-lts-trusty-tahr-amd64 |
+--------------------------------------+------------------------------------+

Downloading exported Cloud Server Image from Cloud Files using BASH/curl/API

So, after successfully exporting the image in the previous article, I wanted to download the VHD so I could use it on VirtualBox at home.

#!/bin/bash

# Username used to login to control panel
USERNAME='adambull'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'

# Simply replace mytenantidgoeshereie1001111etc with your account number, the number given in the URL in the mycloud control panel, i.e. replace everything after the _ so it looks like MossoCloudFS_101110
TENANTID='MossoCloudFS_mytenantidgoeshereie1001111etc'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Download the cloud files image

VHD_FILENAME=5fb64bf2-afae-4277-b8fa-0b69bc98185a.vhd
curl -o "$VHD_FILENAME" -X GET "https://storage101.lon3.clouddrive.com/v1/$TENANTID/exports/$VHD_FILENAME" \
-H "X-Auth-Token: $TOKEN"

Really really easy

Output looks like;

 ./download-image-id.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4470    102  0:00:01  0:00:01 --:--:--  4473
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  1 3757M    1 38.1M    0     0  7231k      0  0:08:52  0:00:05  0:08:47 7875k

Exporting Rackspace Cloud Server Image to Cloud Files (so you can download it)

So today, a customer wanted to know if there was a way to export a Rackspace Cloud Server image out of Rackspace to download it. Yes, this is possible and can be done using the Images API and Cloud Files. Here is a summary of the basic process below:

Step 1: Make a container called ‘exports’ in cloud files. You can do this thru the mycloud control panel by navigating to your cloud files and simply clicking create container; call it ‘exports’.

(Screenshot: creating the 'exports' container in the Cloud Files section of the mycloud control panel)

Step 2: Create bash script to query API with correct user, apikey and imageid;

vim mybashscript.sh
#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusernamehere'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikeyhere'
# Find the image ID you'd like to make available on cloud files
# set the image id below of the image you want to copy to cloud files, see in control panel
IMAGEID="5fb24bf2-afae-4277-b8fa-0b69bc98185a"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called exports
curl https://lon.images.api.rackspacecloud.com/v2/10045567/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

It’s so simple I had to check myself that it was really this simple.

It is. yay! Next guide shows you how to download the image you made.

Resizing a Rackspace Performance Server

It’s possible for the customer to do this thru the API, but it is without express warranty. It’s not possible to resize performance servers thru the mycloud control panel, so to do it you will need to use the API via curl, or what I like to use, nova or the supernova wrapper for nova. It’s quite simple really;

The below example is how to resize a performance server to 4 gigs (this was from 2 gigs)

supernova customer resize --poll uuidgoeshere performance1-4
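One caveat worth adding: if the resize behaves like a stock nova resize, the server will sit in VERIFY_RESIZE until the resize is confirmed. Confirming it would look something like the below; this extra step is an assumption on my part, so check the server status first.

# Confirm the resize once the server reaches VERIFY_RESIZE
supernova customer resize-confirm uuidgoeshere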

Perform a traceroute thru the Rackspace monitoring API

So, I was thinking about the Rackspace traceroute monitoring API and wondering what I could do with it, when I came across this gem:

/monitoring_zones/mzsyd/traceroute

Well, what is it, you ask? It's an API path for performing a traceroute from the 6 different region endpoints. This means you can use an API call to run traceroutes (for free!) thru the Rackspace cloud monitoring API. This would be pretty handy for testing connectivity around the world to your chosen destination from each datacentre. Handy Andy.

Then you ask, what does the mzsyd mean? That's a region ID. First of all, let's put together a script to list the region IDs we can run the traceroutes from:

File: list-monitoring-zones.sh

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNTNUMBER='10010110'
API_ENDPOINT="https://monitoring.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"


TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`




curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-X GET  \
"$API_ENDPOINT/monitoring_zones"

Let's take a look at the response when I run this monitoring zone list.


chmod +x list-monitoring-zones.sh
./list-monitoring-zones.sh

Response

< Content-Type: application/json; charset=UTF-8
< Via: 1.1 Repose (Repose/7.3.0.0)
< Vary: Accept-Encoding
< X-LB: api1.dfw1.prod.cm.k1k.me
< Transfer-Encoding: chunked
<
{
    "values": [
        {
            "id": "mzdfw",
            "label": "Dallas Fort Worth (DFW)",
            "country_code": "US",
            "source_ips": [
                "2001:4800:7902:0001::/64",
                "50.56.142.128/26"
            ]
        },
        {
            "id": "mzhkg",
            "label": "Hong Kong (HKG)",
            "country_code": "HK",
            "source_ips": [
                "2401:1800:7902:0001::/64",
                "180.150.149.64/26"
            ]
        },
        {
            "id": "mziad",
            "label": "Northern Virginia (IAD)",
            "country_code": "US",
            "source_ips": [
                "2001:4802:7902:0001::/64",
                "69.20.52.192/26"
            ]
        },
        {
            "id": "mzlon",
            "label": "London (LON)",
            "country_code": "GB",
            "source_ips": [
                "2a00:1a48:7902:0001::/64",
                "78.136.44.0/26"
            ]
        },
        {
            "id": "mzord",
            "label": "Chicago (ORD)",
            "country_code": "US",
            "source_ips": [
                "2001:4801:7902:0001::/64",
                "50.57.61.0/26"
            ]
        },
        {
            "id": "mzsyd",
            "label": "Sydney (SYD)",
            "country_code": "AU",
            "source_ips": [
                "2401:1801:7902:0001::/64",
                "119.9.5.0/26"
            ]
        }
    ],
    "metadata": {
        "count": 6,
        "limit": 100,
        "marker": null,
        "next_marker": null,
        "next_href": null
    }
* Connection #0 to host monitoring.api.rackspacecloud.com left intact

We can see many zones available to run our traceroutes from;

id 'mzsyd' for Sydney SYD.
id 'mzdfw' for Dallas Fort Worth DFW
id 'mzhkg' for Hong Kong HKG
id 'mziad' for Northern Virginia IAD
id 'mzord' for Chicago ORD
id 'mzlon' for London LON

So now I know what the zone ids are, as defined above. Now time to use them and run a traceroute to haxed.me.uk. Let's see;

File: perform-traceroute-from-monitoring-zone.sh

#!/bin/bash

USERNAME='mycloudusernamehere'
APIKEY='apikeyhere'
ACCOUNTNUMBER=10010110
API_ENDPOINT="https://monitoring.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"



TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`




curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-d @ip.json -H "content-type: application/json" -X POST  \
"$API_ENDPOINT/monitoring_zones/mzsyd/traceroute"

You also need the ip.json file. It's easy to make; put it in the same dir as the shell script.

File: ip.json

{
        "target":               "haxed.me.uk",
        "target_resolver":      "IPv4"
}

We're going to refer to the ip.json file, which contains our destination data. You can do this with IPv6 IPs too if you wanted! That is pretty cool!
It is possible to do this without including the file, and just pass the JSON directly with -d '{"target": "haxed.me.uk", "target_resolver": "IPv4"}', but let's do it properly 😀
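For completeness, the no-file variant would look something like this (same variables as the script above):

# Same traceroute request, but passing the JSON inline instead of via ip.json
curl -s \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-H "content-type: application/json" \
-d '{"target": "haxed.me.uk", "target_resolver": "IPv4"}' -X POST  \
"$API_ENDPOINT/monitoring_zones/mzsyd/traceroute"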


chmod +x perform-traceroute-from-monitoring-zone.sh
./perform-traceroute-from-monitoring-zone.sh

The response: a nice traceroute, of course, from SYD to my LON server.

Response

 Accept: application/json
> content-type: application/json
> Content-Length: 55
>
* upload completely sent off: 55 out of 55 bytes
< HTTP/1.1 200 OK
< Date: Wed, 13 Jan 2016 11:19:14 GMT
< Server: Jetty(9.2.z-SNAPSHOT)
< X-RateLimit-Type: traceroute
< X-RateLimit-Remaining: 296
< X-RateLimit-Window: 24 hours
< x-trans-id: eyJyZXF1ZXN0SWQiOiI5MTNhNTY1Mi05ODAyLTQ5MmQtOTAwYS05NDU1M2ZhNDJmNzUiLCJvcmlnaW4
< X-RateLimit-Limit: 300
< X-Response-Id: .rh-TI8E.h-api1.ord1.prod.cm.k1k.me.r-4RFTh9up.c-28452540.ts-1452683954386.v-91eaf0a
< Content-Type: application/json; charset=UTF-8
< Via: 1.1 Repose (Repose/7.3.0.0)
< Vary: Accept-Encoding
< X-LB: api0.ord1.prod.cm.k1k.me
< Transfer-Encoding: chunked
<
{
    "result": [
        {
            "ip": "119.9.5.2",
            "hostname": null,
            "number": 1,
            "rtts": [
                0.421,
                0.384,
                0.442,
                0.457,
                0.455
            ]
        },
        {
            "ip": "119.9.0.30",
            "hostname": null,
            "number": 2,
            "rtts": [
                1.015,
                0.872,
                0.817,
                1.014,
                0.926
            ]
        },
        {
            "ip": "119.9.0.109",
            "hostname": null,
            "number": 3,
            "rtts": [
                1.203,
                1.179,
                1.185,
                1.232,
                1.182
            ]
        },
        {
            "ip": "202.84.223.2",
            "hostname": null,
            "number": 4,
            "rtts": [
                3.53,
                5.301,
                3.975,
                5.772,
                3.804
            ]
        },
        {
            "ip": "202.84.223.1",
            "hostname": null,
            "number": 5,
            "rtts": [
                3.437,
                3.522,
                2.837,
                4.274,
                2.805
            ]
        },
        {
            "ip": "202.84.140.206",
            "hostname": null,
            "number": 6,
            "rtts": [
                141.198,
                140.746,
                143.871,
                140.987,
                141.545
            ]
        },
        {
            "ip": "202.40.149.238",
            "hostname": null,
            "number": 7,
            "rtts": [
                254.354,
                175.559,
                176.787,
                176.701,
                175.634
            ]
        },
        {
            "ip": "134.159.63.18",
            "hostname": null,
            "number": 8,
            "rtts": [
                175.302,
                175.299,
                175.183,
                175.146,
                175.149
            ]
        },
        {
            "ip": "64.125.26.6",
            "hostname": null,
            "number": 9,
            "rtts": [
                175.395,
                175.408,
                175.469,
                175.49,
                175.475
            ]
        },
        {
            "ip": "64.125.30.184",
            "hostname": null,
            "number": 10,
            "rtts": [
                285.818,
                285.872,
                285.801,
                285.835,
                285.887
            ]
        },
        {
            "ip": "64.125.29.52",
            "hostname": null,
            "number": 11,
            "rtts": [
                285.864,
                285.938,
                285.826,
                285.922,
                303.125
            ]
        },
        {
            "ip": "64.125.28.98",
            "hostname": null,
            "number": 12,
            "rtts": [
                284.711,
                284.865,
                284.73,
                284.697,
                284.713
            ]
        },
        {
            "ip": "64.125.29.48",
            "hostname": null,
            "number": 13,
            "rtts": [
                287.341,
                310.82,
                287.33,
                287.359,
                287.455
            ]
        },
        {
            "ip": "64.125.29.130",
            "hostname": null,
            "number": 14,
            "rtts": [
                286.168,
                286.012,
                286.108,
                286.105,
                286.168
            ]
        },
        {
            "ip": "64.125.30.235",
            "hostname": null,
            "number": 15,
            "rtts": [
                284.61,
                284.681,
                284.667,
                284.892,
                286.069
            ]
        },
        {
            "ip": "64.125.20.97",
            "hostname": null,
            "number": 16,
            "rtts": [
                287.516,
                287.435,
                287.557,
                287.581,
                287.438
            ]
        },
        {
            "ip": "94.31.42.254",
            "hostname": null,
            "number": 17,
            "rtts": [
                288.156,
                288.019,
                288.034,
                288.08
            ]
        },
        {
            "ip": null,
            "hostname": null,
            "number": 18,
            "rtts": []
        },
        {
            "ip": "134.213.131.251",
            "hostname": null,
            "number": 19,
            "rtts": [
                292.687,
                293.72,
                295.335,
                293.981
            ]
        },
        {
            "ip": "162.13.232.1",
            "hostname": null,
            "number": 20,
            "rtts": [
                293.295,
                293.738,
                295.46,
                294.301
            ]
        },
        {
            "ip": "162.13.232.103",
            "hostname": null,
            "number": 21,
            "rtts": [
                294.733,
                294.996,
                298.884,
                295.056
            ]
        },
        {
            "ip": "162.13.136.211",
            "hostname": null,
            "number": 22,
            "rtts": [
                294.919,
                294.77,
                298.956,
                296.481
            ]
        }
    ]
* Connection #0 to host monitoring.api.rackspacecloud.com left intact

This is pretty cool. If we want to run a traceroute from, let's say, Chicago, we just swap out 'mzsyd' in the URL for 'mzord'. Wow, that's simple 🙂
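And if you wanted to hit every region in one go, a small loop over the zone ids listed earlier would do it. A sketch, reusing ip.json and the variables from the script above:

# Run the same traceroute from every monitoring zone
for ZONE in mzdfw mzhkg mziad mzlon mzord mzsyd; do
echo "== Traceroute from $ZONE =="
curl -s \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-d @ip.json -H "content-type: application/json" -X POST  \
"$API_ENDPOINT/monitoring_zones/$ZONE/traceroute"
done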

Custom Error pages for Linux Apache2 and the Rackspace Load Balancer

This article describes how to configure custom error pages for Apache2 and Zeus Load Balancer thru Rackspace API.

I have noticed that every now and then this question comes up. Most people will define the error page within apache2, but if you're not able to do that, and want a more helpful custom error page, perhaps in the same xhtml layout as your website, then a custom error page on the Load Balancer may be useful to you.

In apache2 this is traditionally set using the ErrorDocument directive.

The most common error pages are:

400 — Bad Request
The server did not understand the request due to bad syntax.

401 — Unauthorized
The visitor must be authorized (e.g., have a password) to access the page.

403 — Forbidden
The server understood the request but was unable to execute it. This could be due to an incorrect username and/or password or the server requires different input.

404 — Not Found
The server cannot find a matching URL.

500 — Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.

In your apache2 httpd.conf you will need to add some directives to configure custom error pages. If you have servers behind a Load Balancer and the cloud server behind it encounters an error, the custom error page is served to the LB and then relayed to the customer, instead of being sent directly to the customer as with a traditional apache2 webserver. To form your error page directives, edit your httpd conf file (either in sites-enabled, httpd.conf or similar) and merely add the following:

ErrorDocument 404 /my404page.html
ErrorDocument 500 /myphppage.php

ErrorDocument 403 /my-custom-forbidden-error-page.html
ErrorDocument 400 /my-bad-request-error-page.html

Then ensure that each error page defined (i.e. my404page.html, my-custom-forbidden-error-page.html) is placed in the correct directory, i.e. the website's root /.

I.e. if your DocumentRoot is /var/www/html then your my404page.html should go in there.

For some people, for example those who are using Load Balancer ACLs, the blocked customer/client/visitor won’t be able to see/contact your server once they're added to the LB DENY list. Therefore you might want to set up another error page on the LB to help customers that are accidentally/wrongly blocked, telling them what process to carry out to remove the block, i.e. contact an admin email, etc.

To set a custom error page on the load balancer you can use the API like so. You will need two files to achieve this, which are detailed below:

errorpage.json (file)

{"errorpage":
{"content":"\n\n   Warning- Your IP has been blocked by the Load Balancer ACL, or there is an error contacting the servers behind the load balancer. Please contact [email protected] if you believe you have been blocked in error. \n\n"}
}

customerror.sh (file)

#!/bin/bash
# Make sure to set your customer account number, if you don't know what it is, it's the number that appears in the URL
# after logging in to mycloud.rackspace.com
# ALSO set your username and apikey, visible from the 'account settings' part of rackspace mycloud control panel

USERNAME='mycloudusernamehere'
APIKEY='mycloudapikeygoeshere'
ACCOUNTNUMBER='1001100'

# The load balancer ID, shown in the load balancer details page in the mycloud control panel
LOADBALANCERID='157089'


API_ENDPOINT="https://lon.loadbalancers.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"


# Store the API auth password response in the variable TOKEN

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Execute the API call to update the load balancer error page, submitting as a file the errorpage.json to be used

curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-d @errorpage.json -X PUT -H "content-type: application/json" \
"$API_ENDPOINT/loadbalancers/$LOADBALANCERID/errorpage"

That's it. This concludes the discussion/tutorial on how to set basic error pages, both in apache2 (without a load balancer) and on the load balancer itself as described above. Please note, it appears you can only set a single error page for the load balancer, as opposed to one error page for each HTTP code.

If anyone see’s any differences or errors though, please let me know!

Best wishes,
Adam

Using Python with nova-client to list servers

A customer came to me today complaining about his code not working. He’d forgotten to include the ‘account-number’, also referred to as the project_id in openstack. Without it, you’re going to get HTTP 405, i.e. MethodNotAllowed: Method Not Allowed (HTTP 405).

from novaclient import client

# Version "2", then username, API key (or password), the account number (project_id)
# and the identity endpoint
nova = client.Client("2", "username", "password", "account-number", "https://lon.identity.api.rackspacecloud.com/v2.0")

# Query the compute API for the list of servers on the account and print it
servers = nova.servers.list()

print(servers)

This does what it says on the tin: it queries the API using the nova python module to extract the server list.