Updating Xen Server PV Drivers

So you want to update the PV drivers on your machine. You might want to do this if, for instance, you are migrating a VM with older PV tools and want to run it on a newer hypervisor. Some Windows machines are particularly sensitive and can crash if the latest tools are not installed. In this case no tools are installed at all on a 5.6 host, so we are going to install the 5.6 PV drivers.

This may be of use to Rackspace customers of the first generation platform.

Install/Upgrade PV Drivers Linux

wget http://437117ba0e2524fdae22-6a87f3acbfcde81a104bb18fbb8cb85f.r47.cf2.rackcdn.com/xen_tools_installer.sh; 
chmod u+x xen_tools_installer.sh; bash xen_tools_installer.sh; rm -rf xen_tools_installer.sh

Install/Upgrade PV Drivers Windows

# Xenserver 5.6 PV Drivers
http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-5.6.zip
# Xenserver 6.0 PV Drivers
http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-6.0.zip
# Xenserver 6.1 PV Drivers
http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-6.1.zip
# Xenserver 6.2 PV Drivers
http://8d268c176171c62fbd4b-7084e0c7b53cce27e6cc2142114e456e.r30.cf1.rackcdn.com/xstools-6.2.zip

The Windows setup is a lot simpler. Just download the zip for your version of XenServer (make sure you know which version you are running; this is important), then run xensetup.exe. Simples!

From the Hypervisor side

It's possible to run xenstore-ls against the domain and check for the PV drivers under the attr key;

Tools not installed example, empty attr

# xenstore-ls /local/domain/218 | grep attr
attr = ""

Backing up a MySQL Database remotely

So, you might want to back up a MySQL database remotely, like one of our customers did today. This is relatively simple using the built-in mysqldump facility. This particular customer was running Varnish in front of his Apache2 webserver, so setting up phpMyAdmin wasn't entirely straightforward for this non-technical customer. It's easily achievable with something like;

Specific database

ssh -l user 1.1.1.1 "mysqldump [mysqldump options] databasenamegoeshere | gzip -3 -c" > /localpath/localfile.sql.gz

All databases

mysqldump -uroot -ppassword -h162.13.137.249 --all-databases > backup.sql

The formatting of the command should look like

mysqldump -u root -p[root_password] -h [hostname] [database_name] > dumpfilename.sql
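
Restoring is simply the reverse; a quick sketch using the same placeholder names as above:

# Restore a plain dump
mysql -u root -p[root_password] [database_name] < dumpfilename.sql

# Restore the gzipped dump taken over SSH earlier
gunzip < /localpath/localfile.sql.gz | mysql -u root -p databasenamegoeshere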

Configuring very very strict Linux Firewall using iptables

So, you want to configure a very, very strict Linux firewall using iptables? No problem. Here is how to do it;

#!/bin/sh
# My system IP/set ip address of server
SERVER_IP="2.2.2.2"

# Flushing all rules
iptables -F
iptables -X

# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP

# Allow ALL traffic on loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
 
# Allow INCOMING CONNECTIONS ON SSH PORT 22
iptables -A INPUT -p tcp -s 0/0 -d $SERVER_IP --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s $SERVER_IP -d 0/0 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT
# DROP ALL TRAFFIC COMING IN AND GOING OUT

iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP

The above configuration locks down everything apart from SSH.

But most customers want to allow access from their client's IP address only, i.e. they don't want SSH to accept connections from any IP address, only from the client at, for example, 1.1.1.1. Here's how to do that:

# Allow incoming ssh only from IP 1.1.1.1
iptables -A INPUT -p tcp -s 1.1.1.1 -d $SERVER_IP --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s $SERVER_IP -d 1.1.1.1 --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

Credit goes to; http://www.cyberciti.biz/tips/linux-iptables-4-block-all-incoming-traffic-but-allow-ssh.html

Install KVM and virt-manager on CentOS 7

So, you wanna install KVM on CentOS 7. First we want to check whether the CPU supports hardware virtualisation extensions (vmx for Intel, svm for AMD); this matters a lot for performance, so let's check whether it is present.

$ egrep -c '(vmx|svm)' /proc/cpuinfo
2

If the result comes back 0, you don’t have it!
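
Wrapped up as a quick check (the same grep as above, just with an if around it), that might look like:

# Warn if no hardware virtualisation extensions are present
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
  echo "No vmx/svm flags found - KVM hardware acceleration is unavailable"
fi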

Installing KVM

sudo yum install kvm virt-manager libvirt virt-install qemu-kvm xauth dejavu-lgc-sans-fonts
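
Once the packages are in, libvirtd needs to be running before virt-manager will connect. These are standard libvirt/systemd commands rather than anything from the original notes:

# Start libvirt now and on boot
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
# Confirm libvirt is answering
sudo virsh list --all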

Delete All Cloud Backup from Cloud Files

Please note that the commands below can be destructive.

TAKE CAUTION WHEN USING THIS COMMAND: IT CAN DELETE EVERYTHING IF YOU DO SOMETHING WRONG!

# swiftly --verbose --eventlet --concurrency=100 for "" --prefix z_DO_NOT_DELETE --output-names do delete "" --recursive --until-empty

This particular command *should* only remove the Cloud Files containers whose names start with z_DO_NOT_DELETE. I have tested it and it appears to work correctly.

Creating a Next Generation Server image from a First Generation Server

So we had a customer today who wanted to create a next generation cloud server from a first generation server image. Since the first gen platform stores its images in Cloud Files, it's possible to do this manually: download the parts from Cloud Files, concatenate them, and untar to get at the filesystem.

Like so;

cat receiverTar1.tar receivedTar2.tar >> alltars.tar
tar -itvf alltars.tar

Although on my Mac I just used:

tar -vxf alltars.tar 

This gives us the VHD files extracted into an ‘image’ folder;

$ ls -al image/
total 79851760
drwxr-xr-x   6 adam9261  RACKSPACE\Domain Users          204 Apr 19 12:17 .
drwxr-xr-x  11 adam9261  RACKSPACE\Domain Users          374 Apr 19 11:47 ..
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users  40884003328 Jan  4 07:05 image.vhd
-rwxr-xr-x   1 adam9261  RACKSPACE\Domain Users         1581 Apr 19 12:15 import-container.sh
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users            8 Jan  4 07:05 manifest.ovf
-rw-r--r--   1 adam9261  RACKSPACE\Domain Users        84480 Jan  4 07:05 snap.vhd

We are interested in the image.vhd file. Now let's upload it to Cloud Files so we can import it into Glance, which is what the next generation platform uses to create a new server. The problem, of course, was that the first gen image format wasn't directly usable; next gen builds need to retrieve the VHD image from Glance.

Also, let's ensure we send "Transfer-Encoding: chunked" as a -H header. This tells Cloud Files that the .vhd exceeds 5 GB, and it will create a multi-part manifest for the main file, splitting it up for us into multiple objects spanning 5 GB each!


# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Find the image ID you'd like to make available on cloud files
CUSTOMER_ID=10001010

IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b9844df"

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Upload VHD
curl -X PUT -T image.vhd \
-H "X-Auth-Token: $TOKEN" \
-H "Transfer-Encoding: chunked" \
"$IMPORT_CF_ENDPOINT/import/image.vhd"

Update:

Something in the curl transmission sadly caused it to mess up, so I used swiftly instead.

$ swiftly put -i image.vhd import/image.vhd

The problem with swiftly was that it didn't like the .swiftly config file in my home directory, which should have worked without problems, but didn't. With the help of my friend Jake, this is what I did to get round that: set the credentials manually in the environment (as opposed to using the .swiftly file).

abull-mb:~ adam9261$ export SWIFTLY_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0
abull-mb:~ adam9261$ export SWIFTLY_AUTH_USER=cloudusernamehere
abull-mb:~ adam9261$ export SWIFTLY_AUTH_KEY=apikeyhere
abull-mb:~ adam9261$ swiftly auth

Next stage: import into Glance

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusername'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY='mycloudapikey'
# Find the image ID you'd like to make available on cloud files
CUSTOMER_ID=10001010
IMPORT_CF_ENDPOINT="https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_50441c7a-dc22-4287-8e8c-b6d76b237da"
IMPORT_IMAGE_ENDPOINT=https://LON.images.api.rackspacecloud.com/v2/$CUSTOMER_ID


# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


VHD_NOTES=TESTING-RACKSPACE-IMAGE-IMPORT
IMPORT_CONTAINER=import

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
      -H "X-Auth-Token: $TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/image.vhd\"}}" |\
      python -mjson.tool

Please note that image.vhd is hardcoded into the curl import. Also see the VHD_NOTES variable, which is passed to the task; this is just to identify the image more easily.

Response:

{
    "created_at": "2016-04-19T13:12:57Z",
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12",
    "input": {
        "image_properties": {
            "name": ""
        },
        "import_from": "import/image.vhd"
    },
    "message": "",
    "owner": "10001010",
    "result": null,
    "schema": "/v2/schemas/task",
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7",
    "status": "pending",
    "type": "import",
    "updated_at": "2016-04-19T13:12:57Z"
}

I then retrieved the task details (code not included yet). In this case I used pitchfork.cloudapi.co, a Rackspace service that allows you to make API calls using a web frontend.

I was in a rush to get this done for the customer as soon as possible.
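
For reference, polling the task from the command line is just a GET against the same tasks endpoint used for the import. This is a minimal sketch reusing the TOKEN and IMPORT_IMAGE_ENDPOINT variables from the script above (the task id is the one returned in the response):

# Poll the import task until it reports success or failure
TASK_ID="ff7d8c09-9dd7-43ed-824f-338201681b12"
curl -s "$IMPORT_IMAGE_ENDPOINT/tasks/$TASK_ID" \
     -H "X-Auth-Token: $TOKEN" | python -mjson.tool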

{
    "status": "processing", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T13:12:58Z", 
    "id": "ff7d8c09-9dd7-43ed-824f-338201681b12", 
    "result": null, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

We can now see that the status is processing. When it has completed, it will tell us whether it succeeded or failed.

After waiting 30 minutes or so:

{
    "status": "success", 
    "created_at": "2016-04-19T13:12:57Z", 
    "updated_at": "2016-04-19T14:22:53Z", 
    "expires_at": "2016-04-21T14:22:53Z", 
    "id": "ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "result": {
        "image_id": "826bbb51-0f83-4278-b0ad-702aba088aae"
    }, 
    "owner": "10009158", 
    "input": {
        "image_properties": {
            "name": ""
        }, 
        "import_from": "import/image.vhd"
    }, 
    "message": "", 
    "type": "import", 
    "self": "/v2/tasks/ff7d8c09-9dd7-43ed-815f-338201681ba7", 
    "schema": "/v2/schemas/task"
}

It worked!

Installing Drupal 8 the hard way

Many people use phpMyAdmin, but we're going to do this properly and add the users, databases and privileges by hand. Here's how I did it.
Please note that this is a work in progress and is not finished yet.

Install httpd and mariadb-server

yum install httpd mariadb-server php php-mysql

What you might find is that Drupal 8 requires PHP 5.5.9 or higher, so let's install that;

[root@web-test-centos7 html]# yum install centos-release-scl
[root@web-test-centos7 html]# yum install php55-php-mysqlnd

Open Firewall port 80 http for CentOS

     sudo iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
     sudo iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT

Save firewall rules in CentOS

/etc/init.d/iptables save

Alternatively, save firewall rules Ubuntu

iptables-save > /etc/iptables.rules

Or write the rules file directly on CentOS/RHEL

iptables-save > /etc/sysconfig/iptables

Connect to MySQL to configure the database user 'drupal'

# mysql -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.47-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Create Drupal database

MariaDB [(none)]> create database drupal;

Grant ability to connect with drupal user

MariaDB [(none)]> grant usage on *.* to drupal@localhost identified by '@#@DS45Dfddfdgj334k34ldfk;DF';
Query OK, 0 rows affected (0.00 sec)

Grant all Privileges to the user drupal for database drupal

MariaDB [(none)]> grant all privileges on drupal.* to drupal@localhost;
Query OK, 0 rows affected (0.00 sec)
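
As a quick sanity check (not part of the original steps), you can confirm the new user can actually log in and see the database:

# Connect as the drupal user; you'll be prompted for the password set in the GRANT above
mysql -u drupal -p drupal -e 'show tables;'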

A new way of deploying CBS for large clusters using the TOR method: 5600% to 12800% faster

So, I was thinking about the problem with cloning CBS volumes: if you want to make 64 or more copies of a CBS disk quickly, what happens is that they are built sequentially and queued, copied one at a time. So when a Windows customer approached us, a colleague reached out to me to see if there was any other way of doing this through snapshots or clones. In fact there was; Cinder is to be considered a fox, fast, cunning and unseen, but it is trapped inside a cage called Glance.

This is about overcoming those limitations: introducing TOR-CBS.
Parallel CBS Building with OpenStack Cinder

This is all about making the best of the infrastructure that is there. Cinder is massively distributed, so building 64 parallel copies is achievable at much higher aggregate bandwidth, and for that reason it is a 'tor-like' system. A friend of mine compared it to cellular division. There is a kind of organic nature to the method, as all children are used as new parents for the next round of copies. This explains the efficiency and speed of the system, i.e. the more servers you want to build, the more time you save.

When this actually worked for the first time I had to take a step back. It really meant that building 64 CBS volumes would take an hour, and building 128 of them would take 1 hour and 10 minutes. Damn, that's fast!

The idea is simple: clone 1 disk to create a second disk. Clone both the first and the second disk to make four disks. Clone the four to make 8 in total, the 8 to make 16, then 32, 64, 128, 256, 512, 1024, 2048. Your cluster can double in size roughly every 10 minutes, provided the Cinder service has the infrastructure in place. This appears to be a new, potentially revolutionary way of building out in the cloud.
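
Put another way, the volume count grows as a power of two with each doubling step; a throwaway bash illustration:

# Total volumes after each doubling step, starting from a single master volume
total=1
for step in `seq 1 8`; do
  total=$((total * 2))
  echo "After step $step: $total volumes"
done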

See the diagram below for a proper illustration and explanation.

[Diagram: rapiddeploy-tor-cbs]

As you can see, by the 9th or 10th step this is tens of thousands of percent more efficient than one-for-one copying! The reason is that a CBS clone is a one-to-one copy, and even if you ask for 50 volumes from a single source volume id, it will build them incrementally, one by one.

My system works the same way, except it uses all of the disks already built in the previous steps as sources, doubling the amplification with every step; in other words, 'something for nothing'. It also properly utilises the distributed nature of CBS and many network ports, instead of a single port on the source volume, which is ultimately the bottleneck when spinning up large cloud solutions.

I am absolutely delighted. IT WORKS!!

The Code

build-cbs.sh

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"
MASTER_CBS_VOL_ID="MY-MASTER-VOLUME-ID-HERE"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "Using MASTER_CBS_VOL_ID $MASTER_CBS_VOL_ID.."
sleep 2

# Populate CBS
# No longer using $1 and $2 as unnecessary now we have cbs-fork-step
for i in `seq 1 2`;
do

echo "Generating CBS Clone #$i"
curl -s -vvvv  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$MASTER_CBS_VOL_ID'", "size": 50, "display_name": "win-'$i'", "volume_type": "SSD"}}'  | jq .volume.id | tr -d '"' >> cbs.created.newstep
done

echo "Giving CBS 15 minute grace time for 50 CBS clone"

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 500 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

spin

echo "Listing all CBS Volume ID's created"
cat cbs.created.newstep
# Ensure all of the initial created cbs end up in the master file
cat cbs.created.newstep >> cbs.created.all

echo "Initial Copy completed"

So the first bit is simple: the above uses the OpenStack Cinder API endpoint to create two copies of the master. The initial process takes a bit longer, but if you're building 64 or more servers this is going to be the most efficient and fastest way to do it. The thing is, we want to recursively build CBS volumes in steps.

Enter cbs-fork-step.sh

cbs-fork-step.sh

USERNAME='MYCLOUDUSERNAMEHERE'
APIKEY='MYAPIKEYHERE'
ACCOUNT_NUMBER=10010111
API_ENDPOINT="https://lon.blockstorage.api.rackspacecloud.com/v1/$ACCOUNT_NUMBER/volumes"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

z=0
spin() {
   local -a marks=( '/' '-' '\' '|' )
   while [[ $z -lt 400 ]]; do
     printf '%s\r' "${marks[i++ % ${#marks[@]}]}"
     sleep 1
     let 'z++'
   done
 }

count=$1

#count=65;
while read n; do
echo ""
# Populate CBS TOR STEPPING

echo "Generating TOR CBS Clone $count::$n"
date
curl -s  \
-X POST "$API_ENDPOINT" \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNT_NUMBER" \
-H "Accept: application/json"  \
-H "Content-Type: application/json" -d '{"volume": {"source_volid": "'$n'", "size": 50, "display_name": "win-'$count'", "volume_type": "SSD"}}' | jq .volume.id | tr -d '"' >> cbs.created.newstep


((count=count+1))

done < cbs.created.all

cat cbs.created.newstep > cbs.created.all
echo "Waiting 8 minutes for Clone cycle to complete.."
spin

As you can see from the above, the master volume ID disappears; we're now using the two CBS volume IDs that were created by build-cbs.sh. From now on, each step reads the volume IDs in cbs.created.all, appends the newly created IDs to cbs.created.newstep, and then copies that back over cbs.created.all ready for the next round. The problem is that this is a fixed iterative loop, so what about controlling how many times it runs?

Also, we obviously need to keep count of and track each CBS volume, so we call them win-'$count'; the quoting there breaks out of the single-quoted JSON string so the shell variable is expanded. This allows each CBS volume to get the correct logical name based on the sequence, but for this to work properly we need to put it all together in a master.sh file: the master forker, which adds an extra outer loop to the design.

Putting it all together

master.sh

# Master Controller file

# Number of Copy Steps Minimum 2 Maximum 9
# Steps 2=2 copies, 3=4 copies, 4=8, 5=16, 6=32, 7=64, 8=128, 9=256
# Steps 2=4 copies, 3=8 copies, 4=16, 5=32, 6=64, 7=128
# The steps variable determines how many identical Tor-copies of the CBS you wish to make
steps=6

rm cbs.created.all
rm cbs.created.newstep

touch cbs.created.all
touch cbs.created.newstep

figlet TOR CBS
echo 'By Adam Bull, Rackspace UK'
sleep 2

echo "This software is alpha"
sleep 2

echo "Initiating initial Copy using $MASTER_CBS_VOLUME_ID"
# Builds first copy
./build-cbs.sh

count=4
for i in `seq 1 $steps`; do
let 'count--'
./cbs-fork-step.sh $count
let 'count = (count * 2)'
done

echo "Attaching CBS and Building Nova Compute.."
./build-nova.sh

This code is still alpha, but it works really nicely. The output of the script looks like;

# ./master.sh
 _____ ___  ____     ____ ____ ____
|_   _/ _ \|  _ \   / ___| __ ) ___|
  | || | | | |_) | | |   |  _ \___ \
  | || |_| |  _ <  | |___| |_) |__) |
  |_| \___/|_| \_\  \____|____/____/

By Adam Bull, Rackspace UK
This software is alpha
Initiating initial Copy using
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5013    114  0:00:01  0:00:01 --:--:--  5017

Generating TOR CBS Clone 3::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:25:26 UTC 2016

Generating TOR CBS Clone 4::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:25:27 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4942    113  0:00:01  0:00:01 --:--:--  4948

Generating TOR CBS Clone 5::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:32:10 UTC 2016

Generating TOR CBS Clone 6::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:32:11 UTC 2016

Generating TOR CBS Clone 7::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:32:12 UTC 2016

Generating TOR CBS Clone 8::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:32:12 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   5186    118 --:--:-- --:--:-- --:--:--  5183

Generating TOR CBS Clone 9::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 10::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:38:56 UTC 2016

Generating TOR CBS Clone 11::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:38:57 UTC 2016

Generating TOR CBS Clone 12::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 13::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:38:58 UTC 2016

Generating TOR CBS Clone 14::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:38:59 UTC 2016

Generating TOR CBS Clone 15::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:39:00 UTC 2016

Generating TOR CBS Clone 16::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:39:00 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5143  100  5028  100   115   4953    113  0:00:01  0:00:01 --:--:--  4958

Generating TOR CBS Clone 17::defd5aa1-2927-444c-992d-fba6602f117c
Wed Mar  2 12:45:44 UTC 2016

Generating TOR CBS Clone 18::8283420f-b02a-4094-a857-aedf73dffcc3
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 19::822687a8-f364-4dd1-8a8a-3d52687454dd
Wed Mar  2 12:45:45 UTC 2016

Generating TOR CBS Clone 20::4a97d22d-03c1-4b14-a64c-bbf3fa5bab07
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 21::42145009-33a7-4fc4-9865-da7a82e943c1
Wed Mar  2 12:45:46 UTC 2016

Generating TOR CBS Clone 22::58db8ae2-2e0e-4629-aad6-5c228eb4b342
Wed Mar  2 12:45:47 UTC 2016

Generating TOR CBS Clone 23::d0bf36cb-6dd5-4ed3-8444-0e1d61dba865
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 24::459ba327-de60-4bc1-a6ad-200ab1a79475
Wed Mar  2 12:45:48 UTC 2016

Generating TOR CBS Clone 25::9b10b078-c82d-48cd-953e-e99d5e90774a
Wed Mar  2 12:45:49 UTC 2016

Generating TOR CBS Clone 26::0692c7dd-6db0-43e6-837d-8cc82ce23c78
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 27::f2c4a89e-fc37-408a-b079-f405e150fa96
Wed Mar  2 12:45:50 UTC 2016

Generating TOR CBS Clone 28::5077f4d8-e5e1-42b6-af58-26a0b55ff640
Wed Mar  2 12:45:51 UTC 2016

Generating TOR CBS Clone 29::f18ec1c3-1698-4985-bfb9-28604bbdf70b
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 30::fd96c293-46e5-49e4-85d5-5181d6984525
Wed Mar  2 12:45:52 UTC 2016

Generating TOR CBS Clone 31::9ea40b0d-fb60-4822-a538-3b9d967794a2
Wed Mar  2 12:45:53 UTC 2016

Generating TOR CBS Clone 32::ea7e2c10-d8ce-4f22-b8b5-241b81dff08c
Wed Mar  2 12:45:54 UTC 2016
Waiting 8 minutes for Clone cycle to complete..
/

Installing and using Swiftly with Rackspace Cloud Files

So you want a command-line tool for managing Cloud Files containers? Enter swiftly. You've seen examples before with curl, fog and pyrax, but here's a special one which is a proper command-line application.

# Upgrade pip just for hoots
pip install --upgrade pip

# Install swiftly
pip install swiftly

Now that swiftly is installed, let's configure it in your home directory.

# put a .swiftly.conf in your home directory, e.g. /root/.swiftly.conf if you're root
vi ~/.swiftly.conf

This is what the configuration should look like

[swiftly]                                                                       
auth_user = mycloudusername                                                        
auth_key = mycloudapikey                                  
auth_url = https://identity.api.rackspacecloud.com/v2.0                         
region = lon

With that in place, running swiftly get with no arguments lists your containers:

bash-4.2 Tue Feb 16 15:04:34 pirax-test ~# swiftly get

current
export
exports
gallery
images
lb_153727_TESTING_Nov_2015
lb_153727_TESTING_Oct_2015
magento-stack-single-magento_server-gxlxbm7b6be2
meh
meh2
rackspace_orchestration_templates_store
scripts
testing
testing999
versions

This gives us the output of all of the cloud containers as shown above. Pretty cool. But what about placing files in a container?

swiftly put -i ~/myfile.txt CONTAINER/path/to/file/somefilenamethatsdifferent.txt

So if I wanted to upload to meh2 I would do:

swiftly put -i ~/mylocalfile.txt meh2/some/container/path/somefileiuploaded.txt

The destination file can be called mylocalfile.txt if you want, but I want to illustrate that the target name can differ from the local source name.
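
Getting a file back out is much the same in reverse. This is a hedged sketch, assuming swiftly get on an object path writes the object contents to stdout in the same way the listing examples above suggest:

# Download the object we uploaded earlier and save it locally
swiftly get meh2/some/container/path/somefileiuploaded.txt > ~/downloaded-copy.txt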

Stopping a process from crapping out without PIDcrap

So, this one comes up a lot too. So you wanna run a process, and you don’t want it to crap out, you don’t want PIDCrap or any other lunatic solution that simply doesn’t work 100%. Well, welcome to until.

I’ve been executing a ruby script that does some stuff with fog.

ruby my-fog-cloud-files-container-deleter-thingy.rb

but, it keeps crapping out with lots of errors

I figured crapout no more and nabbed this handy snippet, credit to good ole stackoverflow

until ruby my-fog-cloud-files-container-deleter-thingy.rb; do
    echo "Server 'myserver' crashed with exit code $?.  Respawning.." >&2
    sleep 1
done

Now when it craps out, it continues where it left off.. nice, simple, elegant.

I don't know what kind of error handling swiftly and pyrax have built in, but this is a nice way to do it. Theoretically this one-liner might be of use for turbolift, as well as any other batch-like job which might end prematurely before it finishes. I wish cloud-init had something like this.
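
If you'd rather not respawn forever, a bounded variant of the same idea (just a sketch, using the same script name as above) gives up after a fixed number of attempts:

# Retry up to 10 times, then give up
for attempt in `seq 1 10`; do
    ruby my-fog-cloud-files-container-deleter-thingy.rb && break
    echo "Run $attempt failed with exit code $?. Respawning.." >&2
    sleep 1
done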