Converting a QEMU qcow2 cloud server image to a native disk .img and putting it on a physical disk

I get this question at work a lot, and thought I'd finally get around to writing it down since it's come up for me too. I've got a virtual machine using virtio passthrough for my PCIe devices, and I found that disk access via the qcow2 is pretty naff.

sudo apt-get install qemu-kvm

qemu-img convert windows10cloudimage.qcow2 -O raw diskimage.img

dd if=/path/to/diskimage.img of=/dev/sdc2

Please note that in my case the physical partition I'd made was sdc2. I'd actually resized another 5TB disk in my system using gparted, just so I could attach a physical partition with libvirt. Evidently virt-manager doesn't allow this business from the GUI, so I had to edit the domain XML file, in my case /etc/libvirt/qemu/win10-uefi.xml .
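
For reference, attaching the physical partition in the domain XML looks something like the sketch below (the device path and target name are examples from this setup; adjust them for yours):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/sdc2'/>
  <target dev='vda' bus='virtio'/>
</disk>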

 

root@adam:/etc/libvirt/qemu# virsh  define /etc/libvirt/qemu/win10-uefi.xml 
Domain win10-uefi defined from /etc/libvirt/qemu/win10-uefi.xml
root@adam:/etc/libvirt/qemu# virt-manager
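
If you'd rather not open virt-manager at all, the newly defined domain can also be started straight from virsh:

virsh start win10-uefi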

yeah baby!


You could alternatively do it all in one step, like below, though you may want to keep a copy of the .img file as well as writing it to the disk.

qemu-img convert windows10.qcow2 -O raw /dev/sdc
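
Afterwards you can sanity-check what landed on the disk (assuming the image carries its own partition table and you wrote it to the whole device rather than a partition):

fdisk -l /dev/sdc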

Help! I can't log in to my cloud server even though I've reset my root password

The most common cause of this is that PermitRootLogin is set to no, although there can be other causes, such as a badly broken sshd_config, rather than just that one variable. The procedure for looking into it is pretty much the same regardless of what has broken.

Here's the full procedure:

1) Put the server into rescue mode.
2) Log in to the cloud server over SSH; note that rescue mode gives you a new temporary root password, allowing you to reset the password for SSH on the 'original disk'.
3) Once logged in, mount the /dev/xvdb device (this may be /dev/xvdb1 or /dev/xvdb2, but is usually /dev/xvdb1) and chroot into it (change root to the 'original disk'):

# Mount old disk
mount /dev/xvdb1 /mnt

# Change to the ‘old disk’
chroot /mnt

# Set the new password for root on the old disk:

passwd
# enter the new password when prompted

Then, specifically, ensure that /etc/ssh/sshd_config has this line:

PermitRootLogin no

changed to:

PermitRootLogin yes
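
If you'd rather script that edit, a quick sketch (run inside the chroot, and assuming the PermitRootLogin line isn't commented out) would be:

sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# confirm the change took
grep PermitRootLogin /etc/ssh/sshd_config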

Your developer or sysadmin won't be able to log in until you reset the root password here, and if you do not know a username you can su to root from, it is absolutely critical to perform this work, otherwise you won't be able to access the server.

Also, once you have allowed root login and changed the password to something you recognise, you will be able to exit rescue mode through the control panel and log in to the machine as normal.

For more detail about how to do this (although pretty much all the steps are here), please see:

https://support.rackspace.com/how-to/rackspace-cloud-essentials-rescue-mode-on-linux-cloud-servers/

I hope this helps you folks out some,

Preventing /etc/resolv.conf reset on startup/boot

Today a customer approached us after a Host Server Down incident, complaining that although the server was up again, their website and application were still down and not working, even though the server itself was online and functioning correctly.

The customer discovered that the source of the issue was that their /etc/resolv.conf was blank, which means the server is unable to resolve DNS A/PTR/CNAME record hostnames into IP addresses (hostname-to-IP resolution). So if /etc/resolv.conf is blank and the customer's application uses hostnames in its calls, connectivity breaks, because the hostnames can never be resolved to an IP to communicate over the TCP stack.

There is actually a very simple way to prevent the /etc/resolv.conf file from being changed. But first, it’s important to understand why /etc/resolv.conf is being reset.

On all Rackspace cloud servers there is a process called nova-agent, and when the server starts up, the /etc/resolv.conf file is reset along with the networking configuration. This happens each time your server is restarted and is used to set new networking details, specifically if you take an image and build a server on a new IP address, or if your server is live-migrated to a new host; it makes sure that on the next reboot it comes up with the correct networking details transparently. However, this can cause some issues, as in this case with the /etc/resolv.conf file. Fortunately there is a simple way of preventing your /etc/resolv.conf from being modified after you have added the nameservers you want to it.
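
For example, a minimal /etc/resolv.conf might look like the following (these are just example public resolvers; use whatever applies to your environment):

nameserver 8.8.8.8
nameserver 8.8.4.4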

You can use the chattr immutable file attribute to stop any process from modifying the file once you have made the changes you want to your resolv.conf:

Set Immutable File

chattr +i /etc/resolv.conf 

Un-set Immutable File

chattr -i /etc/resolv.conf 
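
To verify the flag is set, lsattr will show an "i" in the attribute column:

lsattr /etc/resolv.conf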

This /etc/resolv.conf issue is a common problem; using the immutable flag with chattr should prevent the file from being changed ever again.

Moving Rackspace Cloud Servers between Regions with automation II

Hey folks. So, recently I have been doing a bit of work on the Rackspace community, specifically trying to document, and make as easy as possible, the importing and exporting of cloud server VHDs between Rackspace regions. This might be really useful if you are designing an HA, multi-region and/or load-balancing solution that utilises autoscale and other kinds of redundancy, but moving your 'golden image' between regions can be quite difficult if you do the entire process manually, step by step, as I have documented in the two articles below:

Exporting Cloud server images from a Rackspace Region https://community.rackspace.com/products/f/25/t/7089

Importing Cloud Server Images to a Rackspace Region https://community.rackspace.com/products/f/25/t/7186

In this article I finish writing the 'automation demo' of how to move images, without changing much at all apart from one 'serverID' variable and the source and destination. The script isn't finished yet; the last time I posted this on my blog I was so excited that I actually forgot to include the import function (which is kind of important!), sorry about that.


#!/bin/bash

USERNAME='yourmycloudusernamehere'
APIKEY='yourapikeyhere'
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/1000000'
SERVER_ID='94157dc7-924a-424a-8825-c5ffbd341622'
TENANT='1000000'
CUSTOMER_ID='1000000'

#### DO NOT CHANGE BELOW THIS LINE

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# START IMAGE CREATION
echo "Creating Image at Local Datacentre"

curl -v -D export-headers \
-H "X-Auth-Token: $TOKEN" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-d '{"createImage" : {"name" : "RA-'$SERVER_ID'", "metadata": { "ImageType": "Rackspace Automation Image Exported from '$TENANT'", "ImageVersion": "2.0"}}}' \
-X POST "$API_ENDPOINT/servers/$SERVER_ID/action" -o /tmp/export-file

echo "export headers"
cat export-headers

# Retrieve correct ImageID and use to check status of image
IMAGEID=$(cat export-headers | grep -i location | sed 's/\// /g' | awk '{print $7}')
sleep 5
echo "image id"
echo $IMAGEID

API_ENDPOINT='https://lon.images.api.rackspacecloud.com/v2/images/'
URL=$API_ENDPOINT$IMAGEID
URL=${URL%$'\r'}

curl -v \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

echo "imagestatus: $(cat imagestatus)"

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

## WAIT FOR IMAGE TO EXIT SAVE STATE

echo "Waiting for image to complete..."
sleep 5
while [ "$STATUS" != "active" ]; do
echo "image $IMAGEID is still saving..."
sleep 10
curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

## PREPARE/CREATE CLOUD FILES CONTAINER for EXPORT

echo "Preparing/Creating Cloud Files Container for Export"
API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/export"
sleep 5

## EXPORT VHD TO CLOUD FILES

echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles

echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"

cat export-cloudfiles

sleep 15

echo "export cloud-files looks like:"
cat export-cloudfiles

TASKID_EXPORT=$(cat export-cloudfiles | python -mjson.tool | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

echo "task ID export looks like"
echo "$TASKID_EXPORT"

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

sleep 15

echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status

EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')

## Download VHD Cloud from Cloud Files to this server

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

# GET FILE FROM SOURCE CLOUD FILES

URL="$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd"
URL=${URL%$'\r'}

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $TENANT" \
-H "Accept: application/json" \
-X GET "$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd" > $CLOUD_FILES_NAME.vhd

## NEW API USER/PASS REQUIRED FOR 2ND REGION

### DO NOT CHANGE ANYTHING ABOVE THIS POINT

USERNAME='yourmycloudusernamegoeshere'
APIKEY='yourapikeyfromsecondregiongoeshere'

### DO NOT CHANGE ANYTHING BELOW THIS POINT

## Now for uploading the VHD to Cloud Files to Destination REGION

API_ENDPOINT='https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_900000'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import"

## Upload VHD Image to Cloud Files destination for import
curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import/$CLOUD_FILES_NAME.vhd" -T "$CLOUD_FILES_NAME.vhd"

# Find the Customer_ID
IMPORT_IMAGE_ENDPOINT=https://ord.images.api.rackspacecloud.com/v2/$CUSTOMER_ID

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

VHD_NOTES="autoimport-$SERVER_ID"
IMPORT_CONTAINER=import
VHD_FILENAME="$CLOUD_FILES_NAME.vhd"

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/$VHD_FILENAME\"}}" |\
python -mjson.tool

As you can probably see, my code is still rather rough, but it's just so darn exciting that this script works from start to finish nicely that I just HAD to share it a bit early! The plan now is to add command-line arguments so that you can run ./moveregion {SOURCE_REGION} {DEST_REGION} {SERVER_ID} {TENANT_ID}. Then a customer or a Racker would only need those 4 variables to import and export images in an automated way; a rough sketch of that is below.
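
Something like this is what the argument handling might look like (hypothetical; the published script above still uses hardcoded variables):

#!/bin/bash
# Hypothetical usage: ./moveregion SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID
if [ "$#" -ne 4 ]; then
    echo "Usage: $0 SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID" >&2
    exit 1
fi
SOURCE_REGION="$1"
DEST_REGION="$2"
SERVER_ID="$3"
TENANT="$4"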

I could also rewrite the script so that it accepts a .txt file of a couple of hundred cloud server UUIDs; it would take each server UUID, use it to create an image of that server, export the image to Cloud Files, import it to Cloud Files in the destination region, and then import it into the Glance image store there. Which, naturally, would save hundreds of hours of human time compared with doing all of this manually... which is... nice 😀
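
The batch version could then be as simple as a wrapper loop (again hypothetical, assuming the script above has been saved as moveregion and accepts those four arguments):

# server-uuids.txt contains one cloud server UUID per line
# example: export everything from LON to ORD under tenant 1000000
while read -r SERVER_ID; do
    ./moveregion LON ORD "$SERVER_ID" 1000000
done < server-uuids.txt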

I would really like to make a UI frontend using something like Django, with some form of 'light' database that keeps track of all the API imports/exports and even provides an estimated time to completion, but my UI skills are really limited to XHTML, CSS, PHP and MySQL. I need a Python or Django person to help out with some of this. If anyone is interested, please reach out to me.

This project will be available on GitHub soon.

Move Rackspace Cloud Servers between Regions (Automation)

Hey!

So I wrote a (basic) piece of software in Bash which exports Rackspace Cloud Servers between regions. It's pure API calls using curl, and I'm particularly proud of this piece since it only took a day (although I then spent the whole of the next day figuring out an issue with the JSON and bash parameter expansion when exporting the cloud server image to Cloud Files).

This is a super rough example of automation-in-progress for moving cloud servers between regions. Once you've set the script up, you simply change the server ID and the script does the rest; you can migrate server by server, or perform batch migrations with it.

I’m going to refactor and rewrite it when I have time, but for now, here you are! Enjoy 😀

I hope that this is useful to people, particularly our customers, especially once I release a finely tuned version with command-line argument support.

#!/bin/bash

USERNAME=''
APIKEY=''
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/100101010'
SERVER_ID='cd2b545b-99d4-42c1-a881-4714f4bf4b92'
TENANT='100101010'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# START IMAGE CREATION
echo "Creating Image at Local Datacentre"

curl -v -D export-headers \
-H "X-Auth-Token: $TOKEN" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-d '{"createImage" : {"name" : "RA-'$SERVER_ID'", "metadata": { "ImageType": "Rackspace Automation Image Exported from '$TENANT'", "ImageVersion": "2.0"}}}' \
-X POST "$API_ENDPOINT/servers/$SERVER_ID/action" -o /tmp/export-file

echo "export headers"
cat export-headers

# Retrieve correct ImageID and use to check status of image
IMAGEID=$(cat export-headers | grep -i location | sed 's/\// /g' | awk '{print $7}')
sleep 5
echo "image id"
echo $IMAGEID

API_ENDPOINT='https://lon.images.api.rackspacecloud.com/v2/images/'
URL=$API_ENDPOINT$IMAGEID
URL=${URL%$'\r'}

curl -v \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

echo "imagestatus: $(cat imagestatus)"

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

## WAIT FOR IMAGE TO EXIT SAVE STATE

echo "Waiting for image to complete..."
sleep 5
while [ "$STATUS" != "active" ]; do
echo "image $IMAGEID is still saving..."
sleep 10
curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

## PREPARE/CREATE CLOUD FILES CONTAINER for EXPORT

echo "Preparing/Creating Cloud Files Container for Export"
API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 100101010" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/export"
sleep 5

## EXPORT VHD TO CLOUD FILES

echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles

echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"

cat export-cloudfiles

sleep 15

echo "export cloud-files looks like:"
cat export-cloudfiles

TASKID_EXPORT=$(cat export-cloudfiles | python -mjson.tool | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

echo "task ID export looks like"
echo "$TASKID_EXPORT"

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

sleep 15

echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/100101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status

EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/100101010/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')

## Download VHD Cloud from Cloud Files to this server

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_100101010'

# GET FILE FROM SOURCE CLOUD FILES

URL="$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd"
URL=${URL%$'\r'}

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $TENANT" \
-H "Accept: application/json" \
-X GET "$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd" > $CLOUD_FILES_NAME.vhd

## NEW API USER/PASS REQUIRED FOR 2ND REGION

USERNAME=''
APIKEY=''

## Now for uploading the VHD to Cloud Files to Destination REGION

API_ENDPOINT='https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_891671'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 891671" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import"

## Upload VHD Image to Cloud Files destination for import
curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 891671" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import/$CLOUD_FILES_NAME.vhd" -T "$CLOUD_FILES_NAME.vhd"

Downloading / Backing up all Rackspace Cloud Files

Here's a quick and dirty way to download everything in your Rackspace Cloud Files containers. This comes up a lot at work.

INSTALLING SWIFTLY

# Debian / Ubuntu systems
apt-get install python-pip
# CentOS and Redhat Systems
yum install python-pip
pip install swiftly

Once you have installed swiftly, you will want to configure your swiftly client. This is also relatively easy.

CONFIGURING SWIFTLY

# Create a config file in your home directory. ~ is the root user's home
# directory if you are logged in as root on a Unix server.

touch ~/.swiftly.conf 

You will then want to edit the file above:

pico ~/.swiftly.conf 

The file needs to look exactly like the text below, with your own credentials filled in:

[swiftly]
auth_user = yourmycloudusername
auth_key = yourapikey
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = LON

To save in pico, press CTRL+O to write the file out, then CTRL+X to exit.

You have now installed and configured swiftly. You should then be able to simply run the command:

Running swiftly to download all containers/files on Rackspace Cloud Files

swiftly get --all-objects --output=mycloudfiles/

This comes up a lot, and I am sure some people out there will appreciate it!

Disabling SELinux

Today we had a customer that needed to perform a first generation to next generation server migration, but they couldn't have SELinux enabled during the process.

I explained to the customer how to disable it; it's pretty simple.

vi /etc/sysconfig/selinux

SELINUX=enforcing

Needs to be changed to

SELINUX=disabled

Job done. A simple one, but nonetheless important stuff. If you wanted to automate this, it would look something like this:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

This sed one-liner simply swaps SELINUX=enforcing for SELINUX=disabled, pretty simple stuff. It works on CentOS 6 and 7 anyway, and should work on CentOS 5 too, though I can't guarantee that.
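
Bear in mind the config file change only applies from the next reboot; if you need SELinux out of the way immediately, you can drop it into permissive mode for the current session (not a full disable, but usually enough for a migration window):

setenforce 0
# confirm the current mode
getenforce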

Creating a Next Generation Server image from a First Generation Server

So we had a customer today that wanted to create a next generation cloud server from a first generation server image. Since the first gen platform stores its images in Cloud Files, it's possible to do this manually: download the image parts from Cloud Files, concatenate them, and untar the result to access the filesystem.

Like so:

cat receiverTar1.tar receivedTar2.tar >> alltars.tar
tar -itvf alltars.tar

Although on my Mac I used:

tar -vxf alltars.tar 

This gives us the VHD files extracted into an 'image' folder:

$ ls -al image/
total 79851760
drwxr-xr-x   6 adam9261  RACKSPACE\Domain Users          204 Apr 19 12:17 .
drwxr-xr-x  11 adam9261  RACKSPACE\Domain Users          374 Apr 19 11:47 ..
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users  40884003328 Jan  4 07:05 image.vhd
-rwxr-xr-x   1 adam9261  RACKSPACE\Domain Users         1581 Apr 19 12:15 import-container.sh
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users            8 Jan  4 07:05 manifest.ovf
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users        84480 Jan  4 07:05 snap.vhd

We are interested in the image.vhd file. Now let's upload it to Cloud Files so we can IMPORT it into Glance, which is what the next generation platform uses to create a new server. The problem, of course, was that the first gen image format wasn't directly compatible; next gen builds need to retrieve the VHD image from Glance.

Also, let's ensure we use "Transfer-Encoding: chunked" as a curl -H header. This tells Cloud Files that the .vhd exceeds 5G, and it will create a multi-part manifest for the main file for us, splitting it up into multiple objects spanned across 5GB files!


# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Customer / account (tenant) ID
CUSTOMER_ID=10001010

IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b9844df"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Upload VHD
curl -X PUT -T image.vhd \
-H "X-Auth-Token: $TOKEN" \
-H "Transfer-Encoding: chunked" \
"$IMPORT_CF_ENDPOINT/import/image.vhd"

Update:

Something in the curl transmission sadly caused it to mess up, so I used swiftly instead.

$ swiftly put -i image.vhd import/image.vhd

The problem with swiftly was that it didn't like my .swiftly.conf file in my home directory, which should have worked 100% without problems, but didn't. With the help of my friend Jake, this is what I did to get around that: set the credentials manually in the environment (as opposed to using the .swiftly.conf file).

abull-mb:~ adam9261$ export SWIFTLY_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0
abull-mb:~ adam9261$ export SWIFTLY_AUTH_USER=cloudusernamehere
abull-mb:~ adam9261$ export SWIFTLY_AUTH_KEY=apikeyhere
abull-mb:~ adam9261$ swiftly auth
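
With the environment set and swiftly auth happy, the upload can be re-run. A quick way to confirm the object actually landed is a HEAD request against the container path (reusing the TOKEN and IMPORT_CF_ENDPOINT variables from the curl block above):

curl -I -H "X-Auth-Token: $TOKEN" "$IMPORT_CF_ENDPOINT/import/image.vhd"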

Next stage: import into Glance.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Customer / account (tenant) ID, used in the images API endpoint below
CUSTOMER_ID=10001010
IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b6d76b237da"
IMPORT_IMAGE_ENDPOINT=https://LON.images.api.rackspacecloud.com/v2/$CUSTOMER_ID


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


VHD_NOTES=TESTING-RACKSPACE-IMAGE-IMPORT
IMPORT_CONTAINER=import

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/image.vhd\"}}" |\
      python -mjson.tool

Please note that image.vhd is hardcoded into the curl import. Also see the VHD_NOTES variable, which is passed to the task; this is just to identify the image more easily.

Response:

{
    "created_at": "2016-04-19T13:12:57Z",
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12",
    "input": {
        "image_properties": {
            "name": ""
        },
        "import_from": "import/image.vhd"
    },
    "message": "",
    "owner": "10001010",
    "result": null,
    "schema": "/v2/schemas/task",
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7",
    "status": "pending",
    "type": "import",
    "updated_at": "2016-04-19T13:12:57Z"
}

I then retrieved the task details (code not included yet; there's a sketch of the call below). In this case I used pitchfork.cloudapi.co, a Rackspace service that allows you to make API calls using a web frontend.
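
The check itself is just a GET against the tasks endpoint, the same call the migration scripts above make (a sketch, reusing TOKEN and IMPORT_IMAGE_ENDPOINT from the import script, with the task id taken from the response):

# id field from the task response above
TASK_ID='ff7d8c09-9dd7-43ed-824f-338201681b12'

curl "$IMPORT_IMAGE_ENDPOINT/tasks/$TASK_ID" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" | python -mjson.tool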

I was in a rush to get this done for the customer as soon as possible.

{
    "status": "processing", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T13:12:58Z", 
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12", 
    "result": null, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

We can now see that the status is processing. When it has completed, it will tell us whether it succeeded or failed.

After waiting 30 minutes or so:

{
    "status": "success", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T14:22:53Z", 
    "expires_at": "2016-04-21T14:22:53Z", 
    "id": "ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "result": {
        "image_id": "826bbb51-0f83-4278-b0ad-702aba088aae"
    }, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

It worked!