Moving Rackspace Cloud Servers between Regions with automation II

Hey folks. Recently I have been doing a bit of work on the Rackspace community, specifically trying to document, and make as easy as possible, the importing and exporting of cloud server VHDs between Rackspace regions. This might be really useful if you are designing an HA, multi-region and/or load balancing solution that uses autoscale or other kinds of redundancy, because moving your ‘golden image’ between regions is quite awkward if you do the entire process manually, step by step, as I have documented in the two articles below:

Exporting Cloud server images from a Rackspace Region https://community.rackspace.com/products/f/25/t/7089

Importing Cloud Server Images to a Rackspace Region https://community.rackspace.com/products/f/25/t/7186

In this article I finish off the ‘automation demo’ of how to move images between regions, without changing much at all apart from one ‘serverID’ variable and the source and destination. The script isn’t finished yet; the last time I posted this on my blog I was so excited that I actually forgot to include the import function (which is kind of important!). Sorry about that.


#!/bin/bash

USERNAME='yourmycloudusernamehere'
APIKEY='yourapikeyhere'
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/1000000'
SERVER_ID='94157dc7-924a-424a-8825-c5ffbd341622'
TENANT='1000000'
CUSTOMER_ID='1000000'

#### DO NOT CHANGE BELOW THIS LINE

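# This section simply retrieves the TOKEN from the Identity API, using the username and API key set above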
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# START IMAGE CREATION
echo "Creating Image at Local Datacentre"

curl -v -D export-headers \
-H "X-Auth-Token: $TOKEN" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-d '{"createImage" : {"name" : "RA-'$SERVER_ID'", "metadata": { "ImageType": "Rackspace Automation Image Exported from '$TENANT'", "ImageVersion": "2.0"}}}' \
-X POST "$API_ENDPOINT/servers/$SERVER_ID/action" -o /tmp/export-file

echo "export headers"
cat export-headers

# Retrieve correct ImageID and use to check status of image
IMAGEID=$(cat export-headers | grep -i location | sed 's/\// /g' | awk '{print $7}')
sleep 5
echo "image id"
echo $IMAGEID

API_ENDPOINT='https://lon.images.api.rackspacecloud.com/v2/images/'
URL=$API_ENDPOINT$IMAGEID
URL=${URL%$'\r'}

curl -v \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

echo "imagestatus: $imagestatus"

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

## WAIT FOR IMAGE TO EXIT SAVE STATE

echo "Waiting for image to complete..."
sleep 5
while [ "$STATUS" != "active" ]; do
echo "image $IMAGEID is still saving..."
sleep 10
curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

## PREPARE/CREATE CLOUD FILES CONTAINER for EXPORT

echo "Preparing/Creating Cloud Files Container for Export"
API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/export"
sleep 5

## EXPORT VHD TO CLOUD FILES

echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles

echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"

cat export-cloudfiles

sleep 15

echo "export cloud-files looks like:"
cat export-cloudfiles

TASKID_EXPORT=$(cat export-cloudfiles | python -mjson.tool | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

echo "task ID export looks like"
echo "$TASKID_EXPORT"

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

sleep 15

echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status

EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')

## Download VHD Cloud from Cloud Files to this server

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

# GET FILE FROM SOURCE CLOUD FILES

URL="$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd"
URL=${URL%$'\r'}

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $TENANT" \
-H "Accept: application/json" \
-X GET "$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd" > $CLOUD_FILES_NAME.vhd

## NEW API USER/PASS REQUIRED FOR 2ND REGION

### DO NOT CHANGE ANYTHING ABOVE THIS POINT

USERNAME='yourmycloudusernamegoeshere'
APIKEY='yourapikeyfromsecondregiongoeshere'

### DO NOT CHANGE ANYTHING BELOW THIS POINT

## Now for uploading the VHD to Cloud Files to Destination REGION

API_ENDPOINT='https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_900000'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import"

## Upload VHD Image to Cloud Files destination for import
curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import/$CLOUD_FILES_NAME.vhd" -T "$CLOUD_FILES_NAME.vhd"

# Find the Customer_ID
IMPORT_IMAGE_ENDPOINT=https://ord.images.api.rackspacecloud.com/v2/$CUSTOMER_ID

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

VHD_NOTES="autoimport-$SERVER_ID"
IMPORT_CONTAINER=import
VHD_FILENAME="$CLOUD_FILES_NAME.vhd"

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/$VHD_FILENAME\"}}" |\
python -mjson.tool

As you can probably see, my code is still rather rough, but it’s just so darn exciting that this script works from start to finish that I just HAD to share it a bit early! The plan now is to add command-line arguments so that you can run ./moveregion {SOURCE_REGION} {DEST_REGION} {SERVER_ID} {TENANT_ID}. Then a customer or a Racker would only need these 4 variables to import and export images in an automated way.
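As a rough sketch of what that command-line handling could look like (just an idea, not part of the working script yet; the endpoint patterns below are assumptions based on the lon/ord URLs used above):

#!/bin/bash
# Hypothetical sketch of the planned ./moveregion interface

if [ "$#" -ne 4 ]; then
  echo "Usage: $0 SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID"
  exit 1
fi

SOURCE_REGION=$1   # e.g. lon
DEST_REGION=$2     # e.g. ord
SERVER_ID=$3
TENANT=$4

# Derive the per-region API endpoints from the region codes instead of hardcoding them
API_ENDPOINT="https://$SOURCE_REGION.servers.api.rackspacecloud.com/v2/$TENANT"
SOURCE_IMAGES_ENDPOINT="https://$SOURCE_REGION.images.api.rackspacecloud.com/v2"
DEST_IMAGES_ENDPOINT="https://$DEST_REGION.images.api.rackspacecloud.com/v2"

echo "Would migrate server $SERVER_ID on tenant $TENANT from $SOURCE_REGION to $DEST_REGION"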

I could also rewrite the script so that it accepts a .txt file containing a couple of hundred cloud server UUIDs; it would take each server UUID, use it to create an image of that server, export the image to Cloud Files, import it into Cloud Files in the destination region, and then import it into the Glance image store of that second region. Which, naturally, would save hundreds of hours of human time doing this manually.. which is … nice 😀
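A minimal sketch of what that batch wrapper could look like, assuming the single-server logic above ends up wrapped as ./moveregion and that servers.txt holds one server UUID per line (both assumptions on my part):

#!/bin/bash
# Hypothetical batch wrapper around the (not yet written) ./moveregion script
SOURCE_REGION=lon
DEST_REGION=ord
TENANT=1000000

while read SERVER_ID; do
  # Skip blank lines in the input file
  [ -z "$SERVER_ID" ] && continue
  echo "Migrating server $SERVER_ID from $SOURCE_REGION to $DEST_REGION"
  ./moveregion "$SOURCE_REGION" "$DEST_REGION" "$SERVER_ID" "$TENANT"
done < servers.txt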

I would really like to make a UI frontend using something like Django, and utilize some form of ‘light’ database that keeps track of all the API imports/exports and even provides an estimated time to completion, but my UI skills are really limited to XHTML, CSS, PHP and MySQL. I need a Python or Django person to help out with some of this, so if anyone is interested, please reach out to me.

This project will be available on GitHub soon.

Move Rackspace Cloud Servers between Regions (Automation)

Hey!

So I wrote a (basic) piece of software in Bash which exports Rackspace Cloud Servers between regions. It’s pure API calls using curl, and I’m particularly proud of this piece since it only took a day (although I then spent the whole of the next day figuring out an issue with the JSON and bash parameter expansion when exporting the cloud server image to Cloud Files).

This is a super rough example of a work-in-progress automation for moving cloud servers between regions. Once you’ve set the script up, you simply change the server ID and the script does the rest; you can migrate server by server, or perform batch migrations with it.

I’m going to refactor and rewrite it when I have time, but for now, here you are! Enjoy 😀

I hope that this is useful to people, particularly our customers, especially once I release a finely tuned version with command-line argument support.

#!/bin/bash

USERNAME=''
APIKEY=''
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/100101010'
SERVER_ID='cd2b545b-99d4-42c1-a881-4714f4bf4b92'
TENANT='100101010'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# START IMAGE CREATION
echo "Creating Image at Local Datacentre"

curl -v -D export-headers \
-H "X-Auth-Token: $TOKEN" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-d '{"createImage" : {"name" : "RA-'$SERVER_ID'", "metadata": { "ImageType": "Rackspace Automation Image Exported from '$TENANT'", "ImageVersion": "2.0"}}}' \
-X POST "$API_ENDPOINT/servers/$SERVER_ID/action" > export-headers

echo "export headers"
cat export-headers

# Retrieve correct ImageID and use to check status of image
IMAGEID=$(cat export-headers | grep -i location | sed 's/\// /g' | awk '{print $7}')
sleep 5
echo "image id"
echo $IMAGEID

API_ENDPOINT='https://lon.images.api.rackspacecloud.com/v2/images/'
URL=$API_ENDPOINT$IMAGEID
URL=${URL%$'\r'}

curl -v \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

echo "imagestatus: $imagestatus"

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

## WAIT FOR IMAGE TO EXIT SAVE STATE

echo "Waiting for image to complete..."
sleep 5
while [ "$STATUS" != "active" ]; do
echo "image $IMAGEID is still saving..."
sleep 10
curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

## PREPARE/CREATE CLOUD FILES CONTAINER for EXPORT

echo "Preparing/Creating Cloud Files Container for Export"
API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/export"
sleep 5

## EXPORT VHD TO CLOUD FILES

echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles

echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"

cat export-cloudfiles

sleep 15

echo "export cloud-files looks like:"
cat export-cloudfiles

TASKID_EXPORT=$(cat export-cloudfiles | python -mjson.tool | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

echo "task ID export looks like"
echo "$TASKID_EXPORT"

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

sleep 15

echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/10101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status

EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/100101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')

## Download VHD Cloud from Cloud Files to this server

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

# GET FILE FROM SOURCE CLOUD FILES

URL="$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd"
URL=${URL%$'\r'}

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $TENANT" \
-H "Accept: application/json" \
-X GET "$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd" > $CLOUD_FILES_NAME.vhd

## NEW API USER/PASS REQUIRED FOR 2ND REGION

USERNAME=''
APIKEY=''

## Now for uploading the VHD to Cloud Files to Destination REGION

API_ENDPOINT='https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_891671'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 891671" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import"

## Upload VHD Image to Cloud Files destination for import
curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 891671" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import/$CLOUD_FILES_NAME.vhd" -T "$CLOUD_FILES_NAME.vhd"

Creating a Next Generation Server image from a First Generation Server

So we had a customer today that wanted to create a next generation cloud server from a first generation server image. Since the first gen platform stores images in Cloud Files, it’s possible to do this manually: download the image parts from Cloud Files, concatenate them, and untar to access the filesystem.

Like so;

cat receiverTar1.tar receivedTar2.tar >> alltars.tar
tar -itvf alltars.tar

Although on my Mac I used:

tar -vxf alltars.tar 

This gives us the VHD files extracted into an ‘image’ folder;

$ ls -al image/
total 79851760
drwxr-xr-x   6 adam9261  RACKSPACE\Domain Users          204 Apr 19 12:17 .
drwxr-xr-x  11 adam9261  RACKSPACE\Domain Users          374 Apr 19 11:47 ..
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users  40884003328 Jan  4 07:05 image.vhd
-rwxr-xr-x   1 adam9261  RACKSPACE\Domain Users         1581 Apr 19 12:15 import-container.sh
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users            8 Jan  4 07:05 manifest.ovf
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users        84480 Jan  4 07:05 snap.vhd

We are interested in the image.vhd file. Now let’s upload it to Cloud Files so we can IMPORT it into Glance, which is what the next generation platform uses to create a new server. The problem, of course, is that the first gen image format isn’t directly compatible; next gen builds need to retrieve the VHD image from Glance.

Also, let’s make sure we send "Transfer-Encoding: chunked" as an -H header. This tells Cloud Files that the .vhd may exceed 5GB, and it will create a multi-part manifest for the main file, splitting it up for us into multiple objects spanned across 5GB segments!


# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Your customer (account/tenant) ID - the number shown in the URL of the MyCloud control panel
CUSTOMER_ID=10001010

IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b9844df"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Upload VHD
curl -X PUT -T image.vhd \
-H "X-Auth-Token: $TOKEN" \
-H "Transfer-Encoding: chunked" \
"$IMPORT_CF_ENDPOINT/import/image.vhd"

Update:

Something in the curl transmission sadly caused it to mess up, so I used swiftly instead.

$ swiftly put -i image.vhd import/image.vhd

The problem with swiftly was that it didn’t like my .swiftly file in my home directory, which should have worked without problems, but didn’t. With the help of my friend Jake, this is what I did to get round that, setting the credentials manually in the environment (as opposed to using the .swiftly file):

abull-mb:~ adam9261$ export SWIFTLY_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0
abull-mb:~ adam9261$ export SWIFTLY_AUTH_USER=cloudusernamehere
abull-mb:~ adam9261$ export SWIFTLY_AUTH_KEY=apikeyhere
abull-mb:~ adam9261$ swiftly auth

Next stage: import into Glance

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Your customer (account/tenant) ID - the number shown in the URL of the MyCloud control panel
CUSTOMER_ID=10001010
IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b6d76b237da"
IMPORT_IMAGE_ENDPOINT=https://LON.images.api.rackspacecloud.com/v2/$CUSTOMER_ID


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


VHD_NOTES=TESTING-RACKSPACE-IMAGE-IMPORT
IMPORT_CONTAINER=import

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/image.vhd\"}}" |\
      python -mjson.tool

Please note that image.vhd is hardcoded into the curl import. Also see the VHD_NOTES variable, which is passed to the task as the image name; this is just to identify the image more easily.

Response:

{
    "created_at": "2016-04-19T13:12:57Z",
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12",
    "input": {
        "image_properties": {
            "name": ""
        },
        "import_from": "import/image.vhd"
    },
    "message": "",
    "owner": "10001010",
    "result": null,
    "schema": "/v2/schemas/task",
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7",
    "status": "pending",
    "type": "import",
    "updated_at": "2016-04-19T13:12:57Z"
}

I then retrieved the task details (code not included yet). In this case I used pitchfork.cloudapi.co, a Rackspace service that allows you to make API calls using a web frontend, since I was in a rush to get this done for the customer as soon as possible.
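For reference, the call is the same task-status GET used elsewhere in these scripts; a rough stand-in for the missing snippet would be something like this (the credentials, tenant and task ID below are placeholders):

#!/bin/bash
# Placeholder values - substitute your own account details and the task ID returned by the import POST
USERNAME='mycloudusername'
APIKEY='mycloudapikey'
TENANT=10001010
TASK_ID='ff7d8c09-9dd7-43ed-824f-338201681b12'

# This section simply retrieves the TOKEN
TOKEN=`curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Ask the Glance tasks API for the current state of the import task
curl -s "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks/$TASK_ID" \
-H "X-Auth-Token: $TOKEN" | python -mjson.tool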

{
    "status": "processing", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T13:12:58Z", 
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12", 
    "result": null, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

We can now see that the status is processing. When it has completed, it will tell us whether it succeeded or failed.

After waiting 30 minutes or so:

{
    "status": "success", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T14:22:53Z", 
    "expires_at": "2016-04-21T14:22:53Z", 
    "id": "ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "result": {
        "image_id": "826bbb51-0f83-4278-b0ad-702aba088aae"
    }, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

It worked!

A new way of Deploying CBS for Large Clusters, using the TOR method 5600% to 12800% faster

So, I was thinking about the problem with cloning CBS volumes: what if you want to make 64 or more copies of a CBS disk in a short time? What happens by default is that they are built sequentially and queued, copied one at a time. So when a Windows customer approached us, a colleague reached out to me to see if there was any other way of doing this through snapshots or clones. In fact there was; Cinder is to be considered a fox, fast and cunning and unseen, but it is trapped inside a cage called Glance.

This is about overcoming those limitations, introducing TOR-CBS
Parallel CBS Building with Openstack Cinder

This is all about making the best of the infrastructure that is there. Cinder is massively distributed, so building 64 parallel copies is achievable at a much higher aggregate bandwidth, and for those reasons it is a ‘tor like’ system. A friend of mine compared it to cellular division. There is a kind of organic nature to the method, as all children are used as new parents for the next round of copies. This explains the efficiency and speed of the system, i.e. the more servers you want to build, the more time you save.

When this actually worked for the first time I had to take a step back. It really meant that building 64 CBS would take an hour, and building 128 of them would take 1 hour and 10 minutes. Damn, that’s fast!

The process works like this: clone 1 disk to create a second disk. Clone both the first and the second disk to make four disks. Clone the four to make 8 in total. Clone the 8 to make 16 in total. Then 32, 64, 128, 256, 512, 1024, 2048. Your cluster can double in size roughly every 10 minutes, provided the Cinder service has the infrastructure in place. This appears to be a new, potentially revolutionary way of building out in the cloud.

See the diagram below for a proper illustration and explanation.

[Diagram: rapiddeploy-tor-cbs]

As you can see, the one-for-one copy at the 9th or 10th step is tens of thousands of percent more efficient!! The reason is that a CBS clone is a one-to-one copy; even if you ask for 50 volumes to be built from a single source volume ID, they are built incrementally, one by one.

My system works the same way, except that it uses all of the disks already built in the previous n steps as sources, giving an exponential amplification of efficiency at each step; in other words, ‘something for nothing’. It also properly utilizes the distributed nature of CBS and very many network ports, instead of funnelling everything through the single port of the source volume, which is ultimately the bottleneck when spinning up large cloud solutions.
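To illustrate the arithmetic in isolation (a hypothetical sketch, not part of the scripts below): each cycle clones every existing volume in parallel, so the total doubles per cycle, while a purely sequential approach only ever adds one volume per clone operation.

# Hypothetical illustration of the doubling arithmetic only
total=1
for cycle in `seq 1 8`; do
  total=$((total * 2))
  echo "after cycle $cycle: $total volumes (a sequential clone loop would only have $((cycle + 1)))"
done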

I am absolutely delighted. IT WORKS!!

The Code

build-cbs.sh

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"
MASTER_CBS_VOL_ID="MY-MASTER-VOLUME-ID-HERE"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "Using MASTER_CBS_VOL_ID $MASTER_CBS_VOL_ID.."
sleep 2

# Populate CBS
# No longer using $1 and $2 as unnecessary now we have cbs-fork-step
for i in `seq 1 2`;
do

echo "Generating CBS Clone #$i"
curl -s -vvvv  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$MASTER_CBS_VOL_ID'", "size": 50, "display_name": "win-'$i'", "volume_type": "SSD"}}'  | jq .volume.id | tr -d '"' >> cbs.created.newstep
done

echo "Giving CBS 15 minute grace time for 50 CBS clone"

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 500 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

spin

echo "Listing all CBS Volume ID's created"
cat cbs.created.newstep
# Ensure all of the initial created cbs end up in the master file
cat cbs.created.newstep >> cbs.created.all

echo "Initial Copy completed"

So the first bit is simple: the above uses the OpenStack Cinder API endpoint to create two copies of the master. The initial process takes a bit longer, but if you’re building anywhere from 64 servers upwards, this is going to be the most efficient and fastest way to do it. The thing is, we want to recursively build CBS volumes in steps.

Enter cbs-fork-step.sh

cbs-fork-step.sh

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 400 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

count=$1

#count=65;
while read n; do
echo ""
# Populate CBS TOR STEPPING

echo "Generating TOR CBS Clone $count::$n"
date
curl -s  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$n'", "size": 50, "display_name": "win-'$count'", "volume_type": "SSD"}}' | jq .volume.id | tr -d '"' >> cbs.created.newstep


((count=count+1))

done < cbs.created.all

cat cbs.created.newstep > cbs.created.all
echo "Waiting 8 minutes for Clone cycle to complete.."
spin

As you can see from the above, the master volume ID disappears; we’re now using the 2 CBS volume IDs that were initially copied in build-cbs.sh. From now on, we iterate over each line n of the cbs.created.all file, appending the newly created volume IDs to cbs.created.newstep, which then replaces cbs.created.all ready for the next cycle. The problem is that this is a fixed iterative loop, so what about controlling how many times it runs?

Also, we obviously need to keep count of and track each CBS volume, so we call them win-'$count'; the quoting there terminates and escapes out of the surrounding JSON string so the variable expands. This allows each CBS volume to get the correct logical name based on the sequence, but in order for this to work properly we need to put it all together in a master.sh file: the master forker, which adds an extra loop around the whole design.

Putting it all together

master.sh

# Master Controller file

# Number of Copy Steps Minimum 2 Maximum 9
# Starting from the 2 initial clones, the total roughly doubles each step:
# steps=2 -> 8 volumes, 3 -> 16, 4 -> 32, 5 -> 64, 6 -> 128, 7 -> 256, 8 -> 512, 9 -> 1024
# The steps variable determines how many identical Tor-copies of the CBS you wish to make
steps=6

rm cbs.created.all
rm cbs.created.newstep

touch cbs.created.all
touch cbs.created.newstep

figlet TOR CBS
echo 'By Adam Bull, Rackspace UK'
sleep 2

echo "This software is alpha"
sleep 2

echo "Initiating initial Copy using $MASTER_CBS_VOLUME_ID"
# Builds first copy
./build-cbs.sh

count=4
for i in `seq 1 $steps`; do
let 'count--'
./cbs-fork-step.sh $count
let 'count = (count * 2)'
done

echo "Attaching CBS and Building Nova Compute.."
./build-nova.sh

This code is still alpha, but it works really nicely. The output of the script looks like;

# ./master.sh
 _____ ___  ____     ____ ____ ____
|_   _/ _ \|  _ \   / ___| __ ) ___|
  | || | | | |_) | | |   |  _ \___ \
  | || |_| |  _ <  | |___| |_) |__) |
  |_| \___/|_| \_\  \____|____/____/

By Adam Bull, Rackspace UK
This software is alpha
Initiating initial Copy using
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5013    114  0:00:01  0:00:01 --:--:--  5017

Generating TOR CBS Clone 3::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:25:26 UTC 2016

Generating TOR CBS Clone 4::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:25:27 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4942    113  0:00:01  0:00:01 --:--:--  4948

Generating TOR CBS Clone 5::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:32:10 UTC 2016

Generating TOR CBS Clone 6::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:32:11 UTC 2016

Generating TOR CBS Clone 7::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:32:12 UTC 2016

Generating TOR CBS Clone 8::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:32:12 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5186    118 --:--:-- --:--:-- --:--:--  5183

Generating TOR CBS Clone 9::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 10::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 11::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:38:57 UTC 2016

Generating TOR CBS Clone 12::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 13::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 14::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:38:59 UTC 2016

Generating TOR CBS Clone 15::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:39:00 UTC 2016

Generating TOR CBS Clone 16::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:39:00 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4953    113  0:00:01  0:00:01 --:--:--  4958

Generating TOR CBS Clone 17::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:45:44 UTC 2016

Generating TOR CBS Clone 18::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 19::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 20::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 21::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 22::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:45:47 UTC 2016

Generating TOR CBS Clone 23::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 24::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 25::9b10b078-c82d-48cd-953e-e99d5e90774a
Wed Mar  2 12:45:49 UTC 2016

Generating TOR CBS Clone 26::0692c7dd-6db0-43e6-837d-8cc82ce23c78
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 27::f2c4a89e-fc37-408a-b079-f405e150fa96
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 28::5077f4d8-e5e1-42b6-af58-26a0b55ff640
Wed Mar  2 12:45:51 UTC 2016

Generating TOR CBS Clone 29::f18ec1c3-1698-4985-bfb9-28604bbdf70b
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 30::fd96c293-46e5-49e4-85d5-5181d6984525
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 31::9ea40b0d-fb60-4822-a538-3b9d967794a2
Wed Mar  2 12:45:53 UTC 2016

Generating TOR CBS Clone 32::ea7e2c10-d8ce-4f22-b8b5-241b81dff08c
Wed Mar  2 12:45:54 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
/
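One piece not shown in this post is build-nova.sh, which master.sh calls at the end. Purely as a hypothetical sketch of the shape it might take (the servers.list pairing file and the attach-only approach are my assumptions, not the actual script), it could attach each cloned volume to a matching server using the standard os-volume_attachments call:

#!/bin/bash
# Hypothetical sketch only - the real build-nova.sh is not included in the post
USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
NOVA_ENDPOINT="https://lon.servers.api.rackspacecloud.com/v2/$ACCOUNT_NUMBER"

TOKEN=`curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Assumes servers.list holds one pre-built server UUID per line,
# in the same order as the cloned volume IDs in cbs.created.all
paste cbs.created.all servers.list | while read VOL_ID SERVER_ID; do
  echo "Attaching volume $VOL_ID to server $SERVER_ID"
  curl -s -X POST "$NOVA_ENDPOINT/servers/$SERVER_ID/os-volume_attachments" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"volumeAttachment": {"volumeId": "'$VOL_ID'"}}'
done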

Rackspace Customer takes the time to improve my script :D

Wow, this was an awesome customer, who was obviously capable of using the API but was struggling. So I threw them my portable python -mjson parsing script for the identity token and the Glance image export to Cloud Files. The customer wrote back, commenting that I’d made a mistake; specifically, I had written ‘export’ instead of ‘exports’ as the container name.

#!/bin/bash

# Task ID - supply with command
TASK=$1
# Username used to login to control panel
USERNAME='myusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='myapikeyhere'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Requests progress of specified task
curl -X GET -H "X-Auth-Token: $TOKEN" "https://lon.images.api.rackspacecloud.com/v2/10010101/tasks/$TASK"

I then realised that the customer hadn’t adapted the script to be able to pass in the image ID for the initial export to Cloud Files.

Theoretically you could not only do the above, but also something like the following.

I also realised that the script the customer sent checks the TASK, so I amended my initial script a bit further with their suggestion, so that it accepts myclouduser, mycloudapikey and mycloudimageid as arguments:

#!/bin/bash

# Username used to login to control panel
USERNAME=$1
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY=$2

# Find the image ID you'd like to make available on cloud files
IMAGEID=$3

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl https://lon.images.api.rackspacecloud.com/v2/10031542/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

# You could theoretically process the output of the above and extract the TASK ID, to check the task status too.

Note my script isn’t perfect but the customer did well!

This way you could simply provide the script with the cloud username, API key and image ID. Then, when the Glance export starts, the task ID could be extracted in the same way the TOKEN is extracted from the identity auth response.

That way you could simply run something like

./myexportvhd.sh mycloudusername mycloudapikey mycloudimageid 

Not only would it start the image export to a set export folder, but it'd also provide you with an update on the task status.

You could go further: you could watch the task statuses with a scripted while loop until each one shows a complete or failed status, and then record which ones succeeded and which ones failed. From that you could create another script which downloads the successful ones and rsyncs them somewhere.

Or..something like that.
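As a minimal sketch of that watch loop (assuming, hypothetically, the export task IDs have been collected into a tasks.txt file, one per line, with placeholder credentials and tenant):

#!/bin/bash
# Hypothetical polling loop over many export tasks - tasks.txt and the values below are placeholders
USERNAME='mycloudusername'
APIKEY='mycloudapikey'
TENANT=10031542

# This section simply retrieves the TOKEN
TOKEN=`curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

while read TASK; do
  STATUS=processing
  # Poll each task until it leaves the pending/processing states
  while [ "$STATUS" = "processing" ] || [ "$STATUS" = "pending" ]; do
    sleep 30
    STATUS=$(curl -s "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks/$TASK" -H "X-Auth-Token: $TOKEN" | python -mjson.tool | grep '"status"' | cut -d '"' -f4)
  done
  echo "$TASK finished with status: $STATUS"
done < tasks.txt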

I love it when one of our customers makes me think really hard. Gotta love that!

Ansible roles/glance/tasks/main.yml playbook for Glance API Deployment

I am working on a project at work to deploy Keystone and Glance. I’ve currently been tasked with finishing off the Glance role of the playbook: the basic setup tasks, retrieving the stock qcow2 images for the various distributions, and automatically uploading them so they appear in the Glance API image-list. Here is how I did it.

This uses an encrypted group_vars/all vars.yml which contains sensitive password variables like GLANCE_DBPASS.
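For completeness, that encrypted vars file is managed with ansible-vault; a minimal example of how it might be created and used (the file path, the playbook name site.yml and the extra variable names are assumptions on my part):

# Create (or later edit) the encrypted group_vars file holding the sensitive passwords
ansible-vault create group_vars/all/vars.yml
# Inside it, plain YAML key/value pairs, for example:
#   GLANCE_DBPASS: supersecretdbpassword
#   GLANCE_PASS: supersecretservicepassword
# Then supply the vault password at run time:
ansible-playbook site.yml --ask-vault-pass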

This file shows how the Glance SQL database is created, permissions are granted, and the qcow2 images are uploaded to Glance for use by OpenStack Compute.

glance-api

File: osan/roles/glance/tasks/main.yml

---

   - name: Create glance database
     mysql_db:
        name: glance

   - name: Configure database user privileges
     mysql_user:
       name: glance
       host: "{{ item }}"
       password: "{{ GLANCE_DBPASS }}"
       priv: glance.*:ALL
     with_items:
       - "%"
       - localhost

#   - name: Set credentials to admin
#   command: source admin-openrc.sh

   - name: Create the Glance user service credentials
     command: openstack user create --domain default --password {{ GLANCE_PASS }} glance
     environment: admin_env
     ignore_errors: yes

   - name: Add the admin role to the glance user and service project
     command: openstack role add --project service --user glance admin
     environment: admin_env
     ignore_errors: yes

   - name: Create the glance service entity
     command: openstack service create --name glance --description "OpenStack Image service" image
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service API endpoints for glance
     command: openstack endpoint create --region RegionOne image public http://controller:9292
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service API endpoints for glance
     command: openstack endpoint create --region RegionOne image internal http://controller:9292
     environment: admin_env
     ignore_errors: yes

   - name: Create the Image service API endpoints for glance
     command: openstack endpoint create --region RegionOne image admin 'http://controller:9292'
     environment: admin_env
     ignore_errors: yes

   - name: Install Glance and Dependencies
     yum: pkg={{item}} state=installed
     with_items:
     - openstack-glance
     - python-glance
     - python-glanceclient

   - name: replace glance-api.conf file
     template: src=glance-api.conf.ansible dest=/etc/glance/glance-api.conf owner=root

   - name: replace glance-registry.conf file
     template: src=glance-registry.conf.ansible dest=/etc/glance/glance-registry.conf owner=root

   - name: Populate the Image service database
     command: su -s /bin/sh -c "glance-manage db_sync" glance

   - name: Start & Enable openstack-glance-registry.service
     service: name=openstack-glance-registry.service enabled=yes state=started

   - name: Start & Enable openstack-glance-api.service
     service: name=openstack-glance-api.service enabled=yes state=started


   - name: Retrieve CentOS 7 x86_64.qcow2
     get_url: url=http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2 dest=/root/CentOS-7-x86_64-GenericCloud-1503.qcow2 mode=0600

   - name: Populate Glance DB with CentOS 7 qcow2 Image
     command:  glance image-create --name "centos7-x86_x64" --file /root/CentOS-7-x86_64-GenericCloud-1503.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Cirros qcow2 Image
     get_url: url=http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img dest=/root/cirros-0.3.4-x86_64-disk.img mode=0600

   - name: Import Cirros qcow Image to Glance
     command:  glance image-create --name "cirros-0.3.4_x86_64" --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Ubuntu 14.04 Trusty Tahr qcow2 Image
     get_url: url=http://cloud-images.ubuntu.com/releases/14.04/release-20140416.1/ubuntu-14.04-server-cloudimg-amd64-disk1.img dest=/root/ubuntu-14.04-server-cloudimg-amd64-disk1.img mode=0600

   - name: Import Ubuntu 14.04 Trusty Tahr to Glance
     command: glance image-create --name "ubuntu-14.04-lts-trusty-tahr-amd64" --file /root/ubuntu-14.04-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Fedora 23 qcow2 Image
     get_url: url=https://download.fedoraproject.org/pub/fedora/linux/releases/23/Cloud/x86_64/Images/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 dest=/root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 mode=0600

   - name: Import Fedora 23 qcow2 Image to Glance
     command: glance image-create --name "fedora-23-amd64" --file /root/Fedora-Cloud-Base-23-20151030.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve Debian 8 amd64 qcow2 Image
     get_url: url=http://cdimage.debian.org/cdimage/openstack/current/debian-8.2.0-openstack-amd64.qcow2 dest=/root/debian-8.2.0-openstack-amd64.qcow2 mode=0600

   - name: Import Debian 8 to Glance
     command: glance image-create --name "debian8-2-0-amd64" --file /root/debian-8.2.0-openstack-amd64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress


   - name: Retrieve OpenSuSE 13.2 Guest Qcow2 Image
     get_url: url=http://download.opensuse.org/repositories/Cloud:/Images:/openSUSE_13.2/images/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 dest=/root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 mode=0600

   - name: Import OpenSuSE 13.2 to Glance
     command: glance image-create --name "opensuse-13-2-amd64" --file /root/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress

The above is in YAML format, which is really tricky, so watch your syntax when using it; it is VERY sensitive.
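Since YAML indentation mistakes are so easy to make, one hedged tip: ansible-playbook has a built-in syntax check you can run before applying the role (the playbook name here is an assumption):

# Catch YAML/indentation mistakes before running the role for real
ansible-playbook --syntax-check site.yml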

After this runs we are left with a nice glance image-list output. Glance is now ready for Compute to use the qcow2 images we registered via the OpenStack Glance API.

+--------------------------------------+------------------------------------+
| ID                                   | Name                               |
+--------------------------------------+------------------------------------+
| f58aaed4-fda7-41b3-a0c9-e99d6c956afd | centos7-x86_x64                    |
| b4c7224b-0e0d-475c-880c-f48e1c0608b2 | cirros-0.3.4_x86_64                |
| 975accd5-d9bc-4485-86df-88e97e7f3237 | debian8-2-0-amd64                  |
| 41e7949c-3e17-434f-8008-4551673da496 | fedora-23-amd64                    |
| 092338df-6e8e-471b-93ff-07b339510636 | opensuse-13-2-amd64                |
| ae707804-3dd5-474f-ab8d-3d6e855e420d | ubuntu-14.04-lts-trusty-tahr-amd64 |
+--------------------------------------+------------------------------------+

Deleting Glance Images one liner

I’ve been working on some Glance automation and I wanted to quickly delete all the Glance images so I could test whether my Ansible playbook was downloading all the reference cloud qcow2 images and populating Glance with them correctly.

bash-4.2# glance image-list | awk '{print $2}' | grep -v ID | xargs -i echo glance image-delete {}
glance image-delete 8d73249e-c616-4481-8256-f634877eb5a2
glance image-delete 2ea3faef-530c-4679-9faf-b11c7e7889eb
glance image-delete 697efb18-72fe-4305-8e1d-18e0f1481bd6
glance image-delete 555811e2-f941-4cb5-bba2-6ed8751bf188
glance image-delete 7182dca4-f0f4-4176-a706-d8ca0598ef9f
glance image-delete 0f5f2bc5-94a4-4361-a17e-3fed96f07c4e
glance image-delete a01580c2-f264-4058-a366-30d726c2c496
glance image-delete 92a39f49-b6e5-4d32-9856-37bbdac6c285
glance image-delete c01a6464-8e2c-4edb-829e-6d123bc3c8f4
-bash-4.2# glance image-delete 8d73249e-c616-4481-8256-f634877eb5a2
-bash-4.2# glance image-delete 2ea3faef-530c-4679-9faf-b11c7e7889eb
-bash-4.2# glance image-delete 697efb18-72fe-4305-8e1d-18e0f1481bd6
-bash-4.2# glance image-delete 555811e2-f941-4cb5-bba2-6ed8751bf188
-bash-4.2# glance image-delete 7182dca4-f0f4-4176-a706-d8ca0598ef9f
-bash-4.2# glance image-delete 0f5f2bc5-94a4-4361-a17e-3fed96f07c4e
-bash-4.2# glance image-delete a01580c2-f264-4058-a366-30d726c2c496
-bash-4.2# glance image-delete 92a39f49-b6e5-4d32-9856-37bbdac6c285
-bash-4.2# glance image-delete c01a6464-8e2c-4edb-829e-6d123bc3c8f4
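To actually execute the deletions rather than just print them, the same pipeline can simply drop the echo (use with care; this deletes every image the account can see):

glance image-list | awk '{print $2}' | grep -v ID | xargs -i glance image-delete {}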

Downloading exported Cloud Server Image from Cloud Files using BASH/curl/API

So, after successfully exporting the image in the previous article, I wanted to download the VHD so I could use it on VirtualBox at home.

#!/bin/bash

# Username used to login to control panel
USERNAME='adambull'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Find the image ID you'd like to make available on cloud files

# Simply replace the value below with your tenant ID: keep the MossoCloudFS_ prefix and replace everything after the underscore with the account number shown in the URL of the MyCloud control panel, so it looks like e.g. MossoCloudFS_101110
TENANTID='MossoCloudFS_mytenantidgoeshereie1001111etc'

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Download the cloud files image

VHD_FILENAME=5fb64bf2-afae-4277-b8fa-0b69bc98185a.vhd
curl -o "$VHD_FILENAME" -X GET "https://storage101.lon3.clouddrive.com/v1/$TENANTID/exports/$VHD_FILENAME" \
-H "X-Auth-Token: $TOKEN"

Really really easy

Output looks like;

 ./download-image-id.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4470    102  0:00:01  0:00:01 --:--:--  4473
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  1 3757M    1 38.1M    0     0  7231k      0  0:08:52  0:00:05  0:08:47 7875k
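Since the goal was to run this under VirtualBox at home, a possible follow-up (my suggestion, not covered in the original post) is to convert the downloaded VHD to VDI with qemu-img, which understands the vpc/VHD format:

# Convert the downloaded VHD (qemu's "vpc" format) into a VirtualBox VDI disk
qemu-img convert -f vpc -O vdi 5fb64bf2-afae-4277-b8fa-0b69bc98185a.vhd exported-server.vdi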

Exporting Rackspace Cloud Server Image to Cloud Files (so you can download it)

So today, a customer wanted to know if there was a way to export a Rackspace Cloud Server image out of Rackspace to download it. Yes, this is possible and can be done using the Images API and Cloud Files. Here is a summary of the basic process below;

Step 1: Make a container called ‘exports’ in Cloud Files. You can do this through the MyCloud control panel by navigating to your Cloud Files and simply clicking create container; call it ‘exports’ (the name needs to match the receiving_swift_container used in the script below).


Step 2: Create bash script to query API with correct user, apikey and imageid;

vim mybashscript.sh
#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusernamehere'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikeyhere'
# Find the image ID you'd like to make available on cloud files
# set the image id below of the image you want to copy to cloud files, see in control panel
IMAGEID="5fb24bf2-afae-4277-b8fa-0b69bc98185a"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl https://lon.images.api.rackspacecloud.com/v2/10045567/tasks -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'"$IMAGEID"'", "receiving_swift_container": "exports"}}'

It’s so simple I had to check myself that it was really this simple.

It is. yay! Next guide shows you how to download the image you made.