Testing Cloud Files API calls

A customer was having issues with cloudfuse, the virtual ‘Cloud Files’ hard disk, so we needed to test whether their auth was working correctly:

#!/bin/bash
# Diagnostic script by Adam Bull
# Test Cloud Files Auth
# Tuesday, 02/02/2016

# Username used to login to control panel
USERNAME='mycloudusername'

# Find the API key in the 'Account Settings' section of the control panel menu
APIKEY='mycloudapikey'

# This section simply retrieves and sets the TOKEN

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Container to Upload to (container must exist)

CONTAINER=testing

LOCALFILENAME=”/root/mytest.txt”
FILENAME=”mytest.txt”

# PUT command. Note that MossoCloudFS_customeridhere needs to be populated with your own value; it is the number in the mycloud URL when logging in.
curl -i -v -X PUT “https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_101100/$CONTAINER/$FILENAME” -T “$LOCALFILENAME” -H “X-Auth-Token: $TOKEN”
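If the PUT succeeds you should get an HTTP 201 Created back. As an extra sanity check (a minimal sketch reusing the same variables and container URL as above), a HEAD request against the object should then return a 200 along with its Etag and Content-Length:

# Optional: confirm the object landed by HEADing it with the same token
curl -I "https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_101100/$CONTAINER/$FILENAME" -H "X-Auth-Token: $TOKEN"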

How to speed test a Rackspace CDN?

So, today, a customer was asking if we could show some speed tests against the CDN.

I used my French server to test connections from outside of Rackspace’s network. For a CDN, it’s fairly speedy!

#!/bin/bash
CSTATS=`curl -w '%{speed_download}\t%{time_namelookup}\t%{time_total}\n' -o /dev/null -s http://6281487ef0c74fc1485b-69e4500000000000dfasdcd1b6b.r12.cf1.rackcdn.com/bigfile-rackspace-testing`
SPEED=`echo $CSTATS | awk '{print $1}' | sed 's/\..*//'`
DNSTIME=`echo $CSTATS | awk '{print $2}'`
TOTALTIME=`echo $CSTATS | awk '{print $3}'`
echo "Transfered $SPEED bytes/sec in $TOTALTIME seconds."
echo "DNS Resolve Time was $DNSTIME seconds."
root@ns310045:~# ./speedtest.sh
Transferred 3991299 bytes/sec in 26.272 seconds.
DNS Resolve Time was 0.061 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 7046221 bytes/sec in 14.881 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 29586916 bytes/sec in 3.544 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 14539272 bytes/sec in 7.212 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 9060846 bytes/sec in 11.573 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 25551753 bytes/sec in 4.104 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 28225927 bytes/sec in 3.715 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 9036412 bytes/sec in 11.604 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 32328623 bytes/sec in 3.243 seconds.
DNS Resolve Time was 0.004 seconds.
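To get a rough average rather than eyeballing individual runs, a little wrapper like this works (just a sketch; it assumes speedtest.sh is in the current directory and prints the same ‘bytes/sec’ line as above):

#!/bin/bash
# Average the download speed over several runs of speedtest.sh
RUNS=5
TOTAL=0
for i in $(seq 1 $RUNS)
do
    BYTES=`./speedtest.sh | awk '/bytes\/sec/ {print $2}'`
    TOTAL=$(( TOTAL + BYTES ))
done
echo "Average: $(( TOTAL / RUNS )) bytes/sec over $RUNS runs"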

4 way NORAID mirror using ZFS

So I thought about a cool way to back up my files without using anything too fancy, and I started to think about ZFS. I don’t know why I didn’t before, because it’s ultra, ultra resilient. Cheers, Oracle. This is on Debian 7 Wheezy.

Step 1. Install ZFS

# apt-get install lsb-release
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
# dpkg -i zfsonlinux_6_all.deb

# apt-get update
# apt-get install debian-zfs

Step 2. Create a mirrored disk config with zpool.
Here I’m using 4 x 75GB SATA Cloud Block Storage devices to keep 4 copies of the same data, taking advantage of ZFS’s great error-checking abilities.

zpool create -f noraidpool mirror xvdb xvdd xvde xvdf
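Once the pool is created it’s worth a quick sanity check that all four devices made it into the mirror and that the pool is mounted (zpool status output for a healthy pool is shown further down in step 5):

zpool status noraidpool
zpool list noraidpool
df -h /noraidpool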

Step 3. Write a little disk write utility

#!/bin/bash

# Run this from inside /noraidpool so that file.txt lands on the ZFS pool
x=0
while :
do
    echo "Testing." $x >> file.txt
    sleep 0.02
    x=$(( x + 1 ))
done

Step 4 (Optional). Start killing the disks with fire, kill the iSCSI connection, etc., and see if file.txt is still tailing. (A software-only way to simulate the same thing is shown just below.)

./write.sh & tail -f /noraidpool/file.txt
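If you’d rather not physically unlatch anything, a gentler way to simulate the same failures (a sketch using standard zpool commands) is to offline the mirror members one at a time while the write loop and the tail are running:

# Software-only failure simulation: take three of the four mirror members offline
zpool offline noraidpool xvdb
zpool offline noraidpool xvdd
zpool offline noraidpool xvde
zpool status noraidpool    # pool shows DEGRADED but file.txt keeps growing

# Bring them back and let ZFS resilver the mirror
zpool online noraidpool xvdb
zpool online noraidpool xvdd
zpool online noraidpool xvde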

Step 5. Observe that as long as one of the 4 disks still has its virtual block device connection, your data stays available. So it will be OK even if 3 of the disks fail simultaneously. Not baaaad.


root@zfs-noraid-testing:/noraidpool# /sbin/modprobe zfs
root@zfs-noraid-testing:/noraidpool# lsmod | grep zfs
zfs                  2375910  1
zunicode              324424  1 zfs
zavl                   13071  1 zfs
zcommon                35908  1 zfs
znvpair                46464  2 zcommon,zfs
spl                    62153  3 znvpair,zcommon,zfs
root@zfs-noraid-testing:/noraidpool# zpool status
  pool: noraidpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        noraidpool  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xvdb    ONLINE       0     0     0
            xvdd    ONLINE       0     0     0
            xvde    ONLINE       0     0     0
            xvdf    ONLINE       0     0     0

errors: No known data errors

Step 6. Some more benchmark tests

time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"

Step 7. Some concurrent fork tests

#!/bin/bash

x=0
while :
do
    time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
    echo "Testing." $x >> file.txt
    sleep 2
    x=$(( x + 1 ))
    zpool iostat
    clear
done

or better

#!/bin/bash

x=0
time sh -c "dd if=/dev/zero of=ddfile bs=128k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=24k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=16k count=250000 && sync" &
while :
do
    echo "Testing." $x >> file.txt
    sleep 2
    x=$(( x + 1 ))
    zpool iostat
    clear
done

bwm-ng ‘elegant’ style output of disk I/O using zpool iostat


#!/bin/bash

time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
while :
do
    clear
    zpool iostat
    sleep 2
done

To test the resiliency of ZFS I removed 3 of the disks, completely unlatching them


        NAME                      STATE     READ WRITE CKSUM
        noraidpool                DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            1329894881439961679   UNAVAIL      0     0     0  was /dev/xvdb1
            12684627022060038255  UNAVAIL      0     0     0  was /dev/xvdd1
            4058956205729958166   UNAVAIL      0     0     0  was /dev/xvde1
            xvdf                  ONLINE       0     0     0

And I noticed that with just one remaining Cloud Block Storage device I was still able to read the data on the pool as well as create new data:

cat file.txt  | tail
Testing. 135953
Testing. 135954
Testing. 135955
Testing. 135956
Testing. 135957
Testing. 135958
Testing. 135959
Testing. 135960
Testing. 135961
Testing. 135962

# mkdir test
root@zfs-noraid-testing:/noraidpool# ls -a
.  ..  ddfile  file.txt  forktest.sh  stat.sh  test  writetest.sh


That’s pretty flexible.

Checking Size of Database within MySQL

We had an issue where Rackspace Intelligence monitoring was reporting a different database size to the one shown in the control panel for a DBaaS instance. So I came up with a way of asking MySQL itself for the database size. There is nothing more reliable than running the query directly, I think.

SELECT table_schema AS "Database", 
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)" 
FROM information_schema.TABLES 
GROUP BY table_schema;
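If you’d rather not open an interactive MySQL shell, the same query can be run straight from bash (a sketch; the user and host here are placeholders for your own DBaaS credentials):

mysql -u myadminuser -p -h mydbinstance.example.com -e '
SELECT table_schema AS "Database",
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
FROM information_schema.TABLES
GROUP BY table_schema;'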

Deleting All the Files in a Cloud Container

Hey. So if only I had a cake for every customer that asked if we could delete all of their cloud files in a single container for them (I’d be really really really fat, so maybe that is a bad idea). A dollar though, now there’s a thought.

On that note, here is a dollar. Probably the best dollar you’ll see today. You could probably do this with php, bash or swiftly, but doing it *THIS* way is also awesome, and I learnt (although some might say learned) something. Here is how I did it. I should also importantly thank Matt Dorn for his contributions to this article. Without him this wouldn’t exist.

Step 1. Install Python, pip

# RHEL/CentOS (python-pip comes from EPEL)
yum install python python-pip

# Debian/Ubuntu
apt-get install python python-pip

Step 2. Install Pyrax (the Rackspace Python OpenStack library)

pip install pyrax

Step 3. Install Libevent

curl -L -O https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
tar xzf libevent-2.0.21-stable.tar.gz
cd libevent-2.0.21-stable
./configure --prefix="$VIRTUAL_ENV"   # assumes you are working inside a virtualenv; drop --prefix to install to the default /usr/local
make && make install
cd $VIRTUAL_ENV/..

Step 4. Install Greenlet and Gevent


pip install greenlet
pip install gevent

Step 5. Check gevent library loading in Python Shell

python
import gevent

If no error comes back, the gevent library is working OK.
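The same check can be done non-interactively from the shell (a one-liner sketch):

python -c "import gevent; print(gevent.__version__)"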

Step 6. Create the code to delete all the files

#!/usr/bin/python
# -*- coding: utf-8 -*-
from gevent import monkey
from gevent.pool import Pool
from gevent import Timeout
monkey.patch_all()
import pyrax


def delete_object(obj):
    # added timeout of 5 seconds just in case
    with Timeout(5, False):
        try:
            obj.delete()
        except Exception:
            pass


if __name__ == '__main__':
    pool = Pool(100)
    pyrax.set_setting('identity_type', 'rackspace')
    pyrax.set_setting('verify_ssl', False)
    # Rackspace credentials go here. Region: LON, username: mycloudusername, apikey: myrackspaceapikey.
    pyrax.set_setting('region', 'LON')
    pyrax.set_credentials('mycloudusername', 'myrackspaceapikey')

    cf = pyrax.cloudfiles
    # Remember to set the container correctly (which container to delete all files within?)
    container = cf.get_container('testing')
    objects = container.get_objects(full_listing=True)

    for obj in objects:
        pool.spawn(delete_object, obj)
    pool.join()

It’s well worth noting that this can also be used to list all of the objects, but that is something for later…

Step 7. Execute (not me, the script!)

The timeout can be adjusted, and the script can be run several times to ensure that any files that were missed or timed out get deleted on a later pass.
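For example, something like this works (a sketch; delete_objects.py is simply whatever filename you saved the Python script above as):

# Re-run the deletion a few times to mop up anything that timed out on earlier passes
for i in 1 2 3
do
    python delete_objects.py
done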

Creating a Bootable USB of XenServer 6.2 using Mac OS X

My colleague at work asked me to create a bootable USB installer for XenServer 6.2, so I thought I’d quickly throw together a tutorial on how to do this.

Step 1. Download the ISO from Xen website http://xenserver.org/open-source-virtualization-download.html

In my case I’m using the 6.2 release, but this process is good for writing any bootable ISO.

wget http://downloadns.citrix.com.edgesuite.net/8159/XenServer-6.2.0-install-cd.iso

Step 2. Convert the ISO to a raw disk image (note that hdiutil appends .dmg to the output filename)

hdiutil convert -format UDRW -o xenserver6.2.img XenServer-6.2.0-install-cd.iso

Step 3. Locate the USB device; in my case it was /dev/disk2. (It shows up as XenServer-6.5.0 because my colleague had previously written Xen 6.5 to it.)

diskutil list

$ diskutil list
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *14.4 MB    disk1
   1:                  Apple_HFS MenuMeters 1.8.1        14.4 MB    disk1s1
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                            XenServer-6.5.0        *62.0 GB    disk2

Step 4. Unmount the USB disk

diskutil unmountDisk /dev/disk2

Step 5. Write the image to the USB device


sudo dd if=xenserver6.2.img.dmg of=/dev/disk2 bs=1m

563+1 records in
563+1 records out
590448640 bytes transferred in 186.928304 secs (3158690 bytes/sec)

Step 6. Eject USB device safely

diskutil eject /dev/disk2

Job done! You should now have a working bootable USB installer for XenServer 6.2, ready to install from.

Cloud Files Container Bulk Deletion Script written in BASH

So, we had a lot of customers asking for ways to delete all of their cloud files in a single container, instead of having to do it manually. This is possible using the bulk delete function described in the Rackspace docs. Find below the steps required to do this.

Step 1: Make an auth.json file (for simplicity)

{
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": "mycloudusername",
            "apiKey": "mycloudapikey"
        }
    }
}

It’s quite simple and nothing intimidating.

For step 2 I’m using an application called jq. To install it:


wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
chmod +x jq-linux64
mv jq-linux64 /bin/jq
alias jq='/bin/jq'

Now you can use jq at the command line.
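A quick way to confirm jq works is to pull a value back out of the auth.json from step 1 (the key contains a colon so it needs quoting):

jq '.auth."RAX-KSKEY:apiKeyCredentials".username' auth.json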

Step 2: Set a variable called $TOKEN to store the API token from the auth response. The nice thing is that no token is hard-coded in the script, so it’s reasonably secure.

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d @auth.json -H "Content-type: application/json" | jq .access.token.id | sed 's/"//g'`
echo $TOKEN

Step 3: Set a variable for the container name

# Container to Upload to
CONTAINER=meh2

Step 4: Populate a list of all the files in the container named by $CONTAINER, in this case ‘meh2’.

# Populate File List
echo "Populating File List"
curl -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN" > filelist.txt

Step 5: Add the container name as a prefix to each line of the file listing, rewriting filelist.txt to deletelist.txt (illustrated below).

sed -e "s/^/\/$CONTAINER\//" <  filelist.txt > deletelist.txt

Step 6: Bulk Delete Files thru API

echo "Deleting Files.."
curl -i -v -XDELETE -H"x-auth-token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567\?bulk-delete -T ./deletelist.txt

Step 7: Confirm the deletion success

# Confirm Deleted
echo "Confirming Deleted in $CONTAINER.."
curl -i -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN"

The completed script looks like this:


#!/bin/bash
# Mass Delete Container

# Get Token

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d @auth.json -H "Content-type: application/json" | jq .access.token.id | sed 's/"//g'`
echo $TOKEN

# Container to Upload to
CONTAINER=meh2


# Populate File List
echo "Populating File List"
curl -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN" > filelist.txt


# Add Container Prefix
echo "Adding Container Prefix.."
sed -e "s/^/\/$CONTAINER\//" <  filelist.txt > deletelist.txt


# Delete Files
echo "Deleting Files.."
curl -i -v -XDELETE -H"x-auth-token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567\?bulk-delete -T ./deletelist.txt

# Confirm Deleted
echo "Confirming Deleted in $CONTAINER.."
curl -i -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN"

Pretty simple!

Running it..

* About to connect() to storage101.lon3.clouddrive.com port 443 (#0)
* Trying 2a00:1a48:7900::100...
* Connected to storage101.lon3.clouddrive.com (2a00:1a48:7900::100) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
* subject: CN=storage101.lon3.clouddrive.com
* start date: May 18 00:00:00 2015 GMT
* expire date: Nov 17 23:59:59 2016 GMT
* common name: storage101.lon3.clouddrive.com
* issuer: CN=thawte DV SSL CA - G2,OU=Domain Validated SSL,O="thawte, Inc.",C=US
> DELETE /v1/MossoCloudFS_10045567?bulk-delete HTTP/1.1
> User-Agent: curl/7.29.0
> Host: storage101.lon3.clouddrive.com
> Accept: */*
> x-auth-token: AAA7uz-F91SDsaMOCKTOKEN-gOLeB5bbffh8GBGwAPl9F313Pcy4Xg_zP8jtgZolMOudXhsZh-nh9xjBbOfVssaSx_shBMqkxIEEgW1zt8xESJbZLIsvBTNzfVBlTitbUS4RarUOiXEw
> Content-Length: 515
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: text/plain
< X-Trans-Id: tx010194ea9a104443b89bb-00161f7f1dlon3
< Date: Thu, 15 Oct 2015 10:25:35 GMT
< Transfer-Encoding: chunked
<
Number Deleted: 44
Number Not Found: 0
Response Body:
Response Status: 200 OK
Errors:
* Connection #0 to host storage101.lon3.clouddrive.com left intact

Mounting a Disk in Linux Operating Systems

So, apparently, some people aren’t sure how to properly add a new disk to a Linux server. Well, it’s pretty simple if you are using something like Cloud Block Storage, which you can attach on demand. But whatever the disk is, once it’s attached to your Linux machine, here is the process of partitioning it (fdisk), formatting it (mkfs), and mounting it, including adding it to fstab.

Step 1. List the disks on the box

 
$ fdisk -l

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As we can see, the new 75GiB drive I added doesn’t have a partition table or filesystem on it yet.

Step 2. Let’s start partitioning the disk

$ fdisk /dev/xvdb

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Step 3. Let’s create a new partition using the ‘n’ key, then state we want a primary partition using ‘p’. Next, set the first and last sectors of the partition (you can just press Enter and fdisk will normally choose something sane for you). It’s also possible to add multiple partitions to a single device, but for this tutorial we won’t be covering that.


Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)

Select (default p): p

Partition number (1-4, default 1): 1
First sector (2048-157286399, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-157286399, default 157286399): 

Created a new partition 1 of type 'Linux' and of size 75 GiB.

Step 4. Write the partition table using the ‘w’ key, and then after it’s finished confirm with fdisk that the new partition is present.


Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@tesladump:/home# fdisk -l

Disk /dev/xvda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0c708d98

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x07c2e0a6

Device     Boot Start       End   Sectors Size Id Type
/dev/xvdb1       2048 157286399 157284352  75G 83 Linux

Now the disk has been partitioned.

Step 5. So let’s create an ext3 filesystem on the new partition.


$ mkfs -t ext3 /dev/xvdb1

mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 19660544 4k blocks and 4915200 inodes
Filesystem UUID: c717ddbd-d5c9-4bb1-a8af-b521d38cbb14
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

So that is the file system done. Now all we need to do is mount it. We can do that thru the /etc/fstab file, and also using the mount -a command.

Step 6. Simply edit /etc/fstab to accommodate the new disk

 

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/xvda1      /               ext3    errors=remount-ro,noatime,barrier=0 0       1


/dev/xvdb1      /home/thetesladump ext3 defaults,noatime,nofail 0    0

Here we choose to mount partition 1 of /dev/xvdb (/dev/xvdb1) on the directory /home/thetesladump, a little FTP area I temporarily wanted to host.

So we set up the new 75GiB cloud block device to be mounted at the FTP user thetesladump’s home directory. All ready to go. Wooo.
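As the comments at the top of /etc/fstab suggest, you could also mount by UUID, which is a bit more robust if device names ever change (a sketch; the UUID here is just the one mkfs printed earlier, yours will differ):

# blkid shows the UUID that mkfs reported above
blkid /dev/xvdb1

# fstab entry using UUID= instead of the device name:
# UUID=c717ddbd-d5c9-4bb1-a8af-b521d38cbb14  /home/thetesladump  ext3  defaults,noatime,nofail  0  0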

Step 7. Complete the process by running mount -a

 

mount -a

Step 8. Confirm your disks are mounted where you wanted


root@tesladump:/home/theteslasociety# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  1.1G   18G   6% /
udev             10M     0   10M   0% /dev
tmpfs           199M   21M  179M  11% /run
tmpfs           498M     0  498M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           498M     0  498M   0% /sys/fs/cgroup
/dev/xvdb1       74G  178M   70G   1% /home/thetesladump