Deleting All the Files in a Cloud Container

Hey. So if only I had a cake for every customer that asked if we could delete all of their cloud files in a single container for them (I'd be really, really fat, so maybe that is a bad idea). A dollar though, now there's a thought.

On that note, here is a dollar. Probably the best dollar you'll see today. You could probably do this with PHP, Bash or swiftly, but doing it *THIS* way is also awesome, and I learnt (although some might say learned) something. Here is how I did it. I should also importantly thank Matt Dorn for his contributions to this article. Without him it wouldn't exist.

Step 1. Install Python and pip

# RHEL/CentOS
yum install python python-pip
# Debian/Ubuntu
apt-get install python python-pip

Step 2. Install pyrax (the Rackspace Python OpenStack library)

pip install pyrax

Step 3. Install Libevent

curl -L -O https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
tar xzf libevent-2.0.21-stable.tar.gz
cd libevent-2.0.21-stable
# assumes an activated virtualenv; drop the --prefix to install system-wide
./configure --prefix="$VIRTUAL_ENV"
make && make install
cd $VIRTUAL_ENV/..

Step 4. Install Greenlet and Gevent


pip install greenlet
pip install gevent

Step 5. Check gevent library loading in Python Shell

python
import gevent

If the import returns without an error, the gevent library is working OK.
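The same check can also be done as a one-liner straight from the shell; a tiny sketch (gevent exposes a version string):

python -c "import gevent; print(gevent.__version__)"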

Step 6. Create the code to delete all the files

#!/usr/bin/python
# -*- coding: utf-8 -*-
from gevent import monkey
from gevent.pool import Pool
from gevent import Timeout
monkey.patch_all()
import pyrax


def delete_object(obj):
    # timeout of 5 seconds per delete, just in case a request hangs
    with Timeout(5, False):
        try:
            obj.delete()
        except Exception:
            pass


if __name__ == '__main__':
    pool = Pool(100)
    pyrax.set_setting('identity_type', 'rackspace')
    pyrax.set_setting('verify_ssl', False)
    # Rackspace credentials go here. Region LON, username: mycloudusername, API key: myrackspaceapikey.
    pyrax.set_setting('region', 'LON')
    pyrax.set_credentials('mycloudusername', 'myrackspaceapikey')

    cf = pyrax.cloudfiles
    # Remember to set the container correctly (which container to delete all files within?)
    container = cf.get_container('testing')
    objects = container.get_objects(full_listing=True)

    for obj in objects:
        pool.spawn(delete_object, obj)
    pool.join()

It's well worth noting that the same approach can be used to list all of the objects, but that is something for later…

Step 7. Execute (not me, the script!)

The timeout can be adjusted, and the script can be run several times so that any files missed on earlier runs get retried.
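For example, assuming you saved the script above as deletefiles.py (the name is just an example):

python deletefiles.py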

Extracting IPv4 addresses from a list of machine IDs

for id in $(cat list.txt); do supernova lon show $id | awk '/accessIPv4/ {print $4}'; done >> iplist.txt

It's a pretty simple hack. Thanks to Jan for this.

Of course, you need to extract the machine IDs before you can run the above statement. Here is how I did that:

nova list --tenant 10010101 > list.txt

Pretty cool. And yeah, I know, I put step 1 after step 2, but you get the idea!
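One caveat: raw nova list output is a table rather than a bare list of IDs, so list.txt may need cleaning up first. A hedged sketch that keeps just the ID column (it assumes the default table layout, where the UUID is the second field):

nova list --tenant 10010101 | awk '$2 ~ /-/ {print $2}' > list.txt

The $2 ~ /-/ filter keeps only rows whose second column contains a dash, which matches UUIDs but skips the header and border rows.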

Resetting metadata on a RAX Rackspace Xen Server

At work we had some customers complaining of metadata not being removed on their servers.


nova --os-username username --os-password apigoeshere meta uuidgoeshere delete rax:reboot_window

It was pretty simple to do as a one-liner, right?

But imagine we have a list.txt full of 100 servers that need clearing for an individual customer; that would be a nightmare to do manually like above. So we can do it like:

for server in $(cat list.txt); do nova --os-username username --os-password apikeygoeshere meta $server delete rax:reboot_window; done

Now that is pretty cool. And saved me and my colleagues a lot of time.
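If you want to double-check that the key is really gone afterwards, something like this should work; a hedged sketch (nova show prints each server's remaining metadata as a table row):

for server in $(cat list.txt); do nova --os-username username --os-password apikeygoeshere show $server | grep -i metadata; done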

Testing the consistency of response time on a website

So, we had some customers today complaining about inconsistent page load times. I took a look at the hypervisor they were on and could see it was really quite busy, in the sense that all 122GB of RAM available was being used by server instances. Ironically, though, it wasn't actually that busy, but I live-migrated the customer anyway to a much quieter server, and the customer saw no change whatsoever.

In this case that already indicated it wasn't a network infrastructure or hardware issue, and that the increase in latency they had seen over the last few days was likely being caused by something else: most probably the growing size of their database, no longer matched by their static amount of RAM and the MySQL variables set for their table cache and so on.

So my friend kindly put together this excellent one-liner. Check it out!

$ for i in $(seq 50); do curl -sL http://www.google.com/ -o /dev/null -w %{time_total}\\n; sleep 1; done
0.698
0.493
0.365
0.293
0.326
0.525
0.342
0.527
0.445
0.263
0.493

Pretty neat, eh?
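If you want a single number instead of eyeballing the list, you can pipe the samples through awk; a small sketch along the same lines:

for i in $(seq 50); do curl -sL http://www.google.com/ -o /dev/null -w '%{time_total}\n'; sleep 1; done | awk '{sum += $1} END {printf "avg: %.3f s over %d samples\n", sum/NR, NR}'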

Using configdrive cloud-config to execute commands post server creation

A lot of customers might want to set up automation for installing common packages and making configurations on vanilla images. One way to provide that automation is to use configdrive, which allows you to execute commands post server creation, as well as to install certain packages that are required.

The good thing about using this is that you can get a server up and running with a single line of automation, plus your configuration file (which contains all the automation). Here are the steps you need to take, and it is actually rather simple!

Step 1. Create Automation File .cloud-config

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

This installs apache2, php5, php5-mysql and mysql-server, downloads WordPress to /tmp, extracts it into the main /var/www folder, and then creates the WordPress database and user.

Step 2: Create the server using cloud-config in supernova via the Rackspace API (not hard! easy!)

supernova customer boot --config-drive=true --flavor performance1-1 --image 09de0a66-3156-48b4-90a5-1cf25a905207 --user-data cloud-config testing-configdrive


+--------------------------------------+-------------------------------------------------------------------------------+
| Property                             | Value                                                                         |
+--------------------------------------+-------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                        |
| OS-EXT-STS:power_state               | 0                                                                             |
| OS-EXT-STS:task_state                | scheduling                                                                    |
| OS-EXT-STS:vm_state                  | building                                                                      |
| RAX-PUBLIC-IP-ZONE-ID:publicIPZoneId |                                                                               |
| accessIPv4                           |                                                                               |
| accessIPv6                           |                                                                               |
| adminPass                            | SECUREPASSWORDHERE                                                            |
| config_drive                         | True                                                                          |
| created                              | 2015-10-20T11:10:23Z                                                          |
| flavor                               | 1 GB Performance (performance1-1)                                             |
| hostId                               |                                                                               |
| id                                   | ef084d0f-70cc-4366-b348-daf987909899                                          |
| image                                | Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM) (09de0a66-3156-48b4-90a5-1cf25a905207) |
| key_name                             | -                                                                             |
| metadata                             | {}                                                                            |
| name                                 | testing-configdrive                                                           |
| progress                             | 0                                                                             |
| status                               | BUILD                                                                         |
| tenant_id                            | 10000000                                                                      |
| updated                              | 2015-10-20T11:10:24Z                                                          |
| user_id                              | 05b18e859cad42bb9a5a35ad0a6fba2f                                              |
+--------------------------------------+-------------------------------------------------------------------------------+

In my case my supernova was set up already; I have another article on this site about how to set up supernova, so just take a look there for how to install it. My supernova configuration looks like this (with the API key removed, of course!):

[customer]
OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
OS_AUTH_SYSTEM=rackspace
#OS_COMPUTE_API_VERSION=1.1
NOVA_RAX_AUTH=1
OS_REGION_NAME=LON
NOVA_SERVICE_NAME=cloudServersOpenStack
OS_PASSWORD=90bb3pd0a7MYMOCKAPIKEYc419572678abba136a2
OS_USERNAME=mycloudusername
OS_TENANT_NAME=100000

OS_TENANT_NAME is your customer number; take it from the URL in mycloud.rackspace.com after logging on. OS_PASSWORD is your API key; get it from the account settings URL in mycloud.rackspace.co.uk. And OS_USERNAME is the username you use to log in to the Rackspace mycloud control panel. Simples!

Step 3: Confirm your server built as expected

root@testing-configdrive:~# ls /tmp
latest.tar.gz

root@testing-configdrive:~# ls /var/www/wordpress
index.php    readme.html      wp-admin            wp-comments-post.php  wp-content   wp-includes        wp-load.php   wp-mail.php      wp-signup.php     xmlrpc.php
license.txt  wp-activate.php  wp-blog-header.php  wp-config-sample.php  wp-cron.php  wp-links-opml.php  wp-login.php  wp-settings.php  wp-trackback.php

In my case, I noticed that everything went smoothly and 'wordpress' installed to /var/www just fine. But what if I wanted the WordPress directory to be the default html docroot? That's pretty easy. It's just an extra:

mv /var/www/html /var/www/html_old
mv /var/www/wordpress /var/www/html

So let's add that to our automation script:

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/; mv /var/www/html /var/www/html_old; mv /var/www/wordpress /var/www/html
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

Job done. Just a case of re-running the command now:

supernova customer boot --config-drive=true --flavor performance1-1 --image 09de0a66-3156-48b4-90a5-1cf25a905207 --user-data cloud-config testing-configdrive

And then checking that our WordPress website loads correctly without any additional configuration or having to log in to the machine! Not bad automation, that.

I could have quite easily achieved something like this by using the API directly. No supernova and no filesystem. Just the raw command! Yeah that’d be better than not bad!

Creating Post-Build Automation thru the API via cURL

Here’s how to do it.

Step 1. Prepare your execution script by converting it to Base64 character encoding

Unencoded Script:

#cloud-config

packages:

 - apache2
 - php5
 - php5-mysql
 - mysql-server

runcmd:

 - wget http://wordpress.org/latest.tar.gz -P /tmp/
 - tar -zxf /tmp/latest.tar.gz -C /var/www/; mv /var/www/html /var/www/html_old; mv /var/www/wordpress /var/www/html
 - mysql -e "create database wordpress; create user 'wpuser'@'localhost' identified by 'changemetoo'; grant all privileges on wordpress . \* to 'wpuser'@'localhost'; flush privileges;"
 - mysql -e "drop database test; drop user 'test'@'localhost'; flush privileges;"
 - mysqladmin -u root password 'changeme'

Encoded Script:

I2Nsb3VkLWNvbmZpZw0KDQpwYWNrYWdlczoNCg0KIC0gYXBhY2hlMg0KIC0gcGhwNQ0KIC0gcGhwNS1teXNxbA0KIC0gbXlzcWwtc2VydmVyDQoNCnJ1bmNtZDoNCg0KIC0gd2dldCBodHRwOi8vd29yZHByZXNzLm9yZy9sYXRlc3QudGFyLmd6IC1QIC90bXAvDQogLSB0YXIgLXp4ZiAvdG1wL2xhdGVzdC50YXIuZ3ogLUMgL3Zhci93d3cvIDsgbXYgL3Zhci93d3cvaHRtbCAvdmFyL3d3dy9odG1sX29sZDsgbXYgL3Zhci93d3cvd29yZHByZXNzIC92YXIvd3d3L2h0bWwNCiAtIG15c3FsIC1lICJjcmVhdGUgZGF0YWJhc2Ugd29yZHByZXNzOyBjcmVhdGUgdXNlciAnd3B1c2VyJ0AnbG9jYWxob3N0JyBpZGVudGlmaWVkIGJ5ICdjaGFuZ2VtZXRvbyc7IGdyYW50IGFsbCBwcml2aWxlZ2VzIG9uIHdvcmRwcmVzcyAuIFwqIHRvICd3cHVzZXInQCdsb2NhbGhvc3QnOyBmbHVzaCBwcml2aWxlZ2VzOyINCiAtIG15c3FsIC1lICJkcm9wIGRhdGFiYXNlIHRlc3Q7IGRyb3AgdXNlciAndGVzdCdAJ2xvY2FsaG9zdCc7IGZsdXNoIHByaXZpbGVnZXM7Ig0KIC0gbXlzcWxhZG1pbiAtdSByb290IHBhc3N3b3JkICdjaGFuZ2VtZSc=
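By the way, if you change the script at all, remember to re-generate the Base64 blob to match. On most Linux systems (GNU coreutils assumed) something like this does the trick:

base64 -w 0 .cloud-config

And base64 -d reverses it, if you ever want to sanity-check what a blob actually contains.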

Step 2: Get Authorization token from identity API endpoint

Command:


$ curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X 'POST' \
       -d '{"auth":{"passwordCredentials":{"username":"adambull", "password":"superBRAIN%!7912105!"}}}' \
       -H "Content-Type: application/json"

Response:

{"access":{"token":{"id":"AAD4gu67KlOPQeRSTJVC_8MLrTomBCxN6HdmVhlI4y9SiOa-h-Ytnlls2dAJo7wa60E9nQ9Se0uHxgJuHayVPEssmIm--MOCKTOKEN_EXAMPLE-0Wv5n0ZY0A","expires":"2015-10-21T15:06:44.577Z"

It’s also possible to use your API Key to retrieve the TOKEN ID used by API:
(if you don’t like using your control panel password!)

curl -s https://identity.api.rackspacecloud.com/v2.0/tokens -X 'POST' \
       -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"yourUserName", "apiKey":"yourApiKey"}}}' \
       -H "Content-Type: application/json" | python -m json.tool

Step 3: Construct Script to Execute Command directly thru API


#!/bin/sh

# Your Rackspace ACCOUNT DDI, look for a number like below when you login to the Rackspace mycloud controlpanel
account='10000000'

# Using the token that was returned to us in step 2
token="AAD4gu6FH-KoLCKiPWpqHONkCqGJ0YiDuO6yvQG4J1jRSjcQoZSqRK94u0jaYv5BMOCKTOKENpMsI3NEkjNqApipi0Lr2MFLjw"

# London datacentre endpoint; could be SYD, IAD, ORD, DFW etc
curl -v https://lon.servers.api.rackspacecloud.com/v2/$account/servers \
       -X POST \
       -H "X-Auth-Project-Id: $account" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -H "X-Auth-Token: $token" \
       -d '{"server": {"name": "testing-cloud-init-api", "imageRef": "09de0a66-3156-48b4-90a5-1cf25a905207", "flavorRef": "general1-1", "config_drive": "true", "user_data": "I2Nsb3VkLWNvbmZpZw0KDQpwYWNrYWdlczoNCg0KIC0gYXBhY2hlMg0KIC0gcGhwNQ0KIC0gcGhwNS1teXNxbA0KIC0gbXlzcWwtc2VydmVyDQoNCnJ1bmNtZDoNCg0KIC0gd2dldCBodHRwOi8vd29yZHByZXNzLm9yZy9sYXRlc3QudGFyLmd6IC1QIC90bXAvDQogLSB0YXIgLXp4ZiAvdG1wL2xhdGVzdC50YXIuZ3ogLUMgL3Zhci93d3cvIDsgbXYgL3Zhci93d3cvaHRtbCAvdmFyL3d3dy9odG1sX29sZDsgbXYgL3Zhci93d3cvd29yZHByZXNzIC92YXIvd3d3L2h0bWwNCiAtIG15c3FsIC1lICJjcmVhdGUgZGF0YWJhc2Ugd29yZHByZXNzOyBjcmVhdGUgdXNlciAnd3B1c2VyJ0AnbG9jYWxob3N0JyBpZGVudGlmaWVkIGJ5ICdjaGFuZ2VtZXRvbyc7IGdyYW50IGFsbCBwcml2aWxlZ2VzIG9uIHdvcmRwcmVzcyAuIFwqIHRvICd3cHVzZXInQCdsb2NhbGhvc3QnOyBmbHVzaCBwcml2aWxlZ2VzOyINCiAtIG15c3FsIC1lICJkcm9wIGRhdGFiYXNlIHRlc3Q7IGRyb3AgdXNlciAndGVzdCdAJ2xvY2FsaG9zdCc7IGZsdXNoIHByaXZpbGVnZXM7Ig0KIC0gbXlzcWxhZG1pbiAtdSByb290IHBhc3N3b3JkICdjaGFuZ2VtZSc="}}' \
      | python -m json.tool

Zomg, what does this mean?

X-Auth-Token: is just the header that is sent to authorise your request. You got the token using your mycloud username and password, or mycloud username and API key, in step 2.
imageRef: this is just the ID assigned to the base image, in this case Ubuntu LTS 14.04. Take a look below at all the different images you can use (and the image ID of each):

$ supernova customer image-list

| ade87903-9d82-4584-9cc1-204870011de0 | Arch 2015.7 (PVHVM)                                          | ACTIVE |                                      |
| fdaf64c7-d9f3-446c-bd7c-70349305ae91 | CentOS 5 (PV)                                                | ACTIVE |                                      |
| 21612eaf-a350-4047-b06f-6bb8a8a7bd99 | CentOS 6 (PV)                                                | ACTIVE |                                      |
| fabe045f-43f8-4991-9e6c-5cabd617538c | CentOS 6 (PVHVM)                                             | ACTIVE |                                      |
| 6595f1b7-e825-4bd2-addc-c7b1c803a37f | CentOS 7 (PVHVM)                                             | ACTIVE |                                      |
| 2c12f6da-8540-40bc-b974-9a72040173e0 | CoreOS (Alpha)                                               | ACTIVE |                                      |
| 8dc7d5d8-4ad4-41b6-acf1-958dfeadcb17 | CoreOS (Beta)                                                | ACTIVE |                                      |
| 415ca2e6-df92-44e6-ba95-8ee36b436b24 | CoreOS (Stable)                                              | ACTIVE |                                      |
| eaaf94d8-55a6-4bfa-b0a8-473febb012dc | Debian 7 (Wheezy) (PVHVM)                                    | ACTIVE |                                      |
| c3aacaf9-8d1e-4d41-bb47-045fbc392a1c | Debian 8 (Jessie) (PVHVM)                                    | ACTIVE |                                      |
| 081a8b12-515c-41c9-8ce4-13139e1904f7 | Debian Testing (Stretch) (PVHVM)                             | ACTIVE |                                      |
| 498c59a0-3c26-4357-92c0-dd938baca3db | Debian Unstable (Sid) (PVHVM)                                | ACTIVE |                                      |
| 46975098-7799-4e72-8ae0-d6ef9d2d26a1 | Fedora 21 (PVHVM)                                            | ACTIVE |                                      |
| 0976b31e-f6d7-4d74-81e9-007fca25067e | Fedora 22 (PVHVM)                                            | ACTIVE |                                      |
| 7a1cf8de-7721-4d56-900b-1e65def2ada5 | FreeBSD 10 (PVHVM)                                           | ACTIVE |                                      |
| 7451d607-426d-416f-8d29-97e57f6f3ad5 | Gentoo 15.3 (PVHVM)                                          | ACTIVE |                                      |
| 79436148-753f-41b7-aee9-5acbde16582c | OpenSUSE 13.2 (PVHVM)                                        | ACTIVE |                                      |
| 05dd965d-84ce-451b-9ca1-83a134e523c3 | Red Hat Enterprise Linux 5 (PV)                              | ACTIVE |                                      |
| 783f71f4-d2d8-4d38-b2e1-8c916de79a38 | Red Hat Enterprise Linux 6 (PV)                              | ACTIVE |                                      |
| 5176fde9-e9d6-4611-9069-1eecd55df440 | Red Hat Enterprise Linux 6 (PVHVM)                           | ACTIVE |                                      |
| 92f8a8b8-6019-4c27-949b-cf9910b84ffb | Red Hat Enterprise Linux 7 (PVHVM)                           | ACTIVE |                                      |
| 36076d08-3e8b-4436-9253-7a8868e4f4d7 | Scientific Linux 6 (PVHVM)                                   | ACTIVE |                                      |
| 6118e449-3149-475f-bcbb-99d204cedd56 | Scientific Linux 7 (PVHVM)                                   | ACTIVE |                                      |
| 656e65f7-6441-46e8-978d-0d39beaaf559 | Ubuntu 12.04 LTS (Precise Pangolin) (PV)                     | ACTIVE |                                      |
| 973775ab-0653-4ef8-a571-7a2777787735 | Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)                  | ACTIVE |                                      |
| 5ed162cc-b4eb-4371-b24a-a0ae73376c73 | Ubuntu 14.04 LTS (Trusty Tahr) (PV)                          | ACTIVE |                                      |
| ***09de0a66-3156-48b4-90a5-1cf25a905207*** | Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)                       | ACTIVE |                                      |
| 658a7d3b-4c58-4e29-b339-2509cca0de10 | Ubuntu 15.04 (Vivid Vervet) (PVHVM)                          | ACTIVE |                                      |
| faad95b7-396d-483e-b4ae-77afec7e7097 | Vyatta Network OS 6.7R9                                      | ACTIVE |                                      |
| ee71e392-12b0-4050-b097-8f75b4071831 | Windows Server 2008 R2 SP1                                   | ACTIVE |                                      |
| 5707f82f-43f0-41e0-8e51-bfb597852825 | Windows Server 2008 R2 SP1 + SQL Server 2008 R2 SP2 Standard | ACTIVE |                                      |
| b684e5a0-11a8-433e-a4b8-046137783e1b | Windows Server 2008 R2 SP1 + SQL Server 2008 R2 SP2 Web      | ACTIVE |                                      |
| d16fd3df-3b24-49ee-ae6a-317f450006e7 | Windows Server 2012                                          | ACTIVE |                                      |
| f495b41d-07e1-44c5-a3e8-65c4412a7eb8 | Windows Server 2012 + SQL Server 2012 SP1 Standard           | ACTIVE |                                      |

flavorRef: simply refers to which server type to start up. It's pretty darn simple:

$ supernova lon flavor-list

+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID               | Name                    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 2                | 512MB Standard Instance | 512       | 20   | 0         |      | 1     |             | N/A       |
| 3                | 1GB Standard Instance   | 1024      | 40   | 0         |      | 1     |             | N/A       |
| 4                | 2GB Standard Instance   | 2048      | 80   | 0         |      | 2     |             | N/A       |
| 5                | 4GB Standard Instance   | 4096      | 160  | 0         |      | 2     |             | N/A       |
| 6                | 8GB Standard Instance   | 8192      | 320  | 0         |      | 4     |             | N/A       |
| 7                | 15GB Standard Instance  | 15360     | 620  | 0         |      | 6     |             | N/A       |
| 8                | 30GB Standard Instance  | 30720     | 1200 | 0         |      | 8     |             | N/A       |
| compute1-15      | 15 GB Compute v1        | 15360     | 0    | 0         |      | 8     |             | N/A       |
| compute1-30      | 30 GB Compute v1        | 30720     | 0    | 0         |      | 16    |             | N/A       |
| compute1-4       | 3.75 GB Compute v1      | 3840      | 0    | 0         |      | 2     |             | N/A       |
| compute1-60      | 60 GB Compute v1        | 61440     | 0    | 0         |      | 32    |             | N/A       |
| compute1-8       | 7.5 GB Compute v1       | 7680      | 0    | 0         |      | 4     |             | N/A       |
| general1-1       | 1 GB General Purpose v1 | 1024      | 20   | 0         |      | 1     |             | N/A       |
| general1-2       | 2 GB General Purpose v1 | 2048      | 40   | 0         |      | 2     |             | N/A       |
| general1-4       | 4 GB General Purpose v1 | 4096      | 80   | 0         |      | 4     |             | N/A       |
| general1-8       | 8 GB General Purpose v1 | 8192      | 160  | 0         |      | 8     |             | N/A       |
| io1-120          | 120 GB I/O v1           | 122880    | 40   | 1200      |      | 32    |             | N/A       |
| io1-15           | 15 GB I/O v1            | 15360     | 40   | 150       |      | 4     |             | N/A       |
| io1-30           | 30 GB I/O v1            | 30720     | 40   | 300       |      | 8     |             | N/A       |
| io1-60           | 60 GB I/O v1            | 61440     | 40   | 600       |      | 16    |             | N/A       |
| io1-90           | 90 GB I/O v1            | 92160     | 40   | 900       |      | 24    |             | N/A       |
| memory1-120      | 120 GB Memory v1        | 122880    | 0    | 0         |      | 16    |             | N/A       |
| memory1-15       | 15 GB Memory v1         | 15360     | 0    | 0         |      | 2     |             | N/A       |
| memory1-240      | 240 GB Memory v1        | 245760    | 0    | 0         |      | 32    |             | N/A       |
| memory1-30       | 30 GB Memory v1         | 30720     | 0    | 0         |      | 4     |             | N/A       |
| memory1-60       | 60 GB Memory v1         | 61440     | 0    | 0         |      | 8     |             | N/A       |
| performance1-1   | 1 GB Performance        | 1024      | 20   | 0         |      | 1     |             | N/A       |
| performance1-2   | 2 GB Performance        | 2048      | 40   | 20        |      | 2     |             | N/A       |
| performance1-4   | 4 GB Performance        | 4096      | 40   | 40        |      | 4     |             | N/A       |
| performance1-8   | 8 GB Performance        | 8192      | 40   | 80        |      | 8     |             | N/A       |
| performance2-120 | 120 GB Performance      | 122880    | 40   | 1200      |      | 32    |             | N/A       |
| performance2-15  | 15 GB Performance       | 15360     | 40   | 150       |      | 4     |             | N/A       |
| performance2-30  | 30 GB Performance       | 30720     | 40   | 300       |      | 8     |             | N/A       |
| performance2-60  | 60 GB Performance       | 61440     | 40   | 600       |      | 16    |             | N/A       |
| performance2-90  | 90 GB Performance       | 92160     | 40   | 900       |      | 24    |             | N/A       |
+------------------+-------------------------+-----------+------+-----------+------+-------+-------------+-----------+
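Once the create request returns, you can poll the same API for build progress. A hedged sketch, where $server_id is hypothetical and would come from the id field of the create response in step 3:

curl -s https://lon.servers.api.rackspacecloud.com/v2/$account/servers/$server_id \
       -H "X-Auth-Token: $token" \
       -H "Accept: application/json" | python -m json.tool | grep -E '"(status|progress)"'

When status flips from BUILD to ACTIVE, your cloud-config will already be running (or have run) inside the new server.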

Cloud Files Container Bulk Deletion Script written in BASH

So, we had a lot of customers asking for ways to delete all of their cloud files in a single container, instead of having to do it manually. This is possible using the bulk delete function defined in the Rackspace docs. Find below the steps required to do this.

Step 1: Make an auth.json file (for simplicity)

{
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": "mycloudusername",
            "apiKey": "mycloudapikey"
        }
    }
}

It’s quite simple and nothing intimidating.

For step 2 I'm using an application called jq. To install it, do:


wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
chmod +x jq-linux64
mv jq-linux64 /bin/jq
alias jq='/bin/jq'

Now you can use jq at the commandline.

Step 2: Set a variable called TOKEN to store the API token output. The nice thing is that no token is hard-coded in the script, so it's kind of secure.

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d @auth.json -H "Content-type: application/json" | jq .access.token.id | sed 's/"//g'`
echo $TOKEN

Step 3: Set a variable for the container name

# Container to delete from
CONTAINER=meh2

Step 4: Populate a list of all the files in the container named by $CONTAINER, in this case 'meh2'.

# Populate File List
echo "Populating File List"
curl -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN" > filelist.txt

Step 5: Add the container name to each line of the listing by rewriting filelist.txt to deletelist.txt (e.g. 1.txt becomes /meh2/1.txt, the path format the bulk-delete endpoint expects)

sed -e "s/^/\/$CONTAINER\//" <  filelist.txt > deletelist.txt

Step 6: Bulk Delete Files thru API

echo "Deleting Files.."
curl -i -v -XDELETE -H"x-auth-token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567\?bulk-delete -T ./deletelist.txt

Step 7: Confirm the deletion success

# Confirm Deleted
echo "Confirming Deleted in $CONTAINER.."
curl -i -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh2 -H "X-Auth-Token: $TOKEN"

The completed script looks like this:


#!/bin/bash
# Mass Delete Container

# Get Token

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d @auth.json -H "Content-type: application/json" | jq .access.token.id | sed 's/"//g'`
echo $TOKEN

# Container to delete from
CONTAINER=meh2


# Populate File List
echo "Populating File List"
curl -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN" > filelist.txt


# Add Container Prefix
echo "Adding Container Prefix.."
sed -e "s/^/\/$CONTAINER\//" <  filelist.txt > deletelist.txt


# Delete Files
echo "Deleting Files.."
curl -i -v -XDELETE -H"x-auth-token: $TOKEN" https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567\?bulk-delete -T ./deletelist.txt

# Confirm Deleted
echo "Confirming Deleted in $CONTAINER.."
curl -i -X GET https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/$CONTAINER -H "X-Auth-Token: $TOKEN"

Pretty simple!

Running it..

* About to connect() to storage101.lon3.clouddrive.com port 443 (#0)
* Trying 2a00:1a48:7900::100...
* Connected to storage101.lon3.clouddrive.com (2a00:1a48:7900::100) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
* subject: CN=storage101.lon3.clouddrive.com
* start date: May 18 00:00:00 2015 GMT
* expire date: Nov 17 23:59:59 2016 GMT
* common name: storage101.lon3.clouddrive.com
* issuer: CN=thawte DV SSL CA - G2,OU=Domain Validated SSL,O="thawte, Inc.",C=US
> DELETE /v1/MossoCloudFS_10045567?bulk-delete HTTP/1.1
> User-Agent: curl/7.29.0
> Host: storage101.lon3.clouddrive.com
> Accept: */*
> x-auth-token: AAA7uz-F91SDsaMOCKTOKEN-gOLeB5bbffh8GBGwAPl9F313Pcy4Xg_zP8jtgZolMOudXhsZh-nh9xjBbOfVssaSx_shBMqkxIEEgW1zt8xESJbZLIsvBTNzfVBlTitbUS4RarUOiXEw
> Content-Length: 515
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: text/plain
< X-Trans-Id: tx010194ea9a104443b89bb-00161f7f1dlon3
< Date: Thu, 15 Oct 2015 10:25:35 GMT
< Transfer-Encoding: chunked
<
Number Deleted: 44
Number Not Found: 0
Response Body:
Response Status: 200 OK
Errors:
* Connection #0 to host storage101.lon3.clouddrive.com left intact

Using Rackspace Cloud Files & the API

Hi. So, when I started working at Rackspace I didn't know very much about APIs. I knew what they are, what they do, and why they're used, but my experience was rather limited. So I was understandably a little bit concerned about using Cloud Files and the API, specifically using POST and GET thru cURL, and simple things such as authorisation and identification thru header tokens.

First of all, to make API requests we need an access token. The access token is totally different from the mycloud username, password and API key itself. The token is a bit like a session, whereas the API key is a bit like a form username and password. It's also worth noting that, just like an HTTP session, the $TOKEN will expire every now and then, requiring you to authorise yourself again to get a new token.

Authorisation & Endpoints

When you authorise yourself, you will get a token and a list of all the possible endpoints to GET and POST data, that is to say, to retrieve or store records, query against a particular search pattern, and so on.

Step 1: Get User Token Thru Identity API using JSON auth structure

File: auth.json

{
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": "mycloudusernamehere",
            "apiKey": "mycloudapikeyhere"
        }
    }
}
curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d @auth.json -H "Content-type: application/json"

It is possible to do this without the auth.json file and just use the string, like so:

 curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{"auth":{"RAX-KSKEY:apiKeyCredentials":{"username":"yourUserName","apiKey":"yourAPIPassword"}}}'  -H "Content-type: application/json" | python -m json.tool

It is also possible to get a token using just your mycloud USERNAME and PASSWORD:

curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{"auth":{"passwordCredentials":{"username":"yourUserName","password":"yourmycloudpassword"}}}' -H "Content-type: application/json"

After running one of these commands you will receive a large response, and something like the below will be inside it:

This includes your full 'token' ID. In this case 'BBBySDLKsxkj4CXdioidkj_a-vHqc4k6PYjxM2fu6D57Bf0dP0-Su6OO2beafdzoKDavyw32Sjd6SpiMhI-cUb654odmeiglz_2tsplnZ26T2Vj2h3LF-vwXNBEYS1IXvy7ZpARRMVranXw'.

This is the token we need to use to authenticate against the API.

       "token": {
            "RAX-AUTH:authenticatedBy": [
                "APIKEY"
            ],
            "expires": "2015-09-29T17:04:56.092Z",
            "id": "BBBySDLKsxkj4CXdioidkj_a-vHqc4k6PYjxM2fu6D57Bf0dP0-Su6OO2beafdzoKDavyw32Sjd6SpiMhI-cUb654odmeiglz_2tsplnZ26T2Vj2h3LF-vwXNBEYS1IXvy7ZpARRMVranXw",
            "tenant": {
                "id": "1000000",
                "name": "1000000"
            }
        },

Step 2. Uploading Files with cURL to a Cloud Files Container

# Set the TOKEN Variable in the BASH SHELL
TOKEN='BBBySDLKsxkj4CXdioidkj_a-vHqc4k6PYjxM2fu6D57Bf0dP0-Su6OO2beafdzoKDavyw32Sjd6SpiMhI-cUb654odmeiglz_2tsplnZ26T2Vj2h3LF-vwXNBEYS1IXvy7ZpARRMVranXw'

# CURL REQUEST TO API
curl -i -X PUT https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh/1.txt -T /Users/adam9261/1.txt -H "X-Auth-Token: $TOKEN"
curl -i -X PUT https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh/2.txt -T /Users/adam9261/2.txt -H "X-Auth-Token: $TOKEN"
curl -i -X PUT https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh/3.txt -T /Users/adam9261/3.txt -H "X-Auth-Token: $TOKEN"

Output


HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Last-Modified: Mon, 28 Sep 2015 17:34:45 GMT
Content-Length: 0
Etag: 8aec10927922e92a963f6a1155ccc773
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txca64c51affee434abaa79-0056097a34lon3
Date: Mon, 28 Sep 2015 17:34:45 GMT

HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Last-Modified: Mon, 28 Sep 2015 17:34:46 GMT
Content-Length: 0
Etag: 83736eb7b9eb0dce9d11abcf711ca062
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx6df90d2dede54565bccfe-0056097a35lon3
Date: Mon, 28 Sep 2015 17:34:46 GMT

HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Last-Modified: Mon, 28 Sep 2015 17:34:47 GMT
Content-Length: 0
Etag: 42f2826fff018420731e7bead0f124df
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx79612b28754c4bb3aedd6-0056097a36lon3
Date: Mon, 28 Sep 2015 17:34:46 GMT

In this case I had 3 txt files, each approximately 1MB, that I uploaded to Cloud Files using cURL at the commandline. The pertinent part is "X-Auth-Token: $TOKEN", the header which carries the authorisation token that was set at the commandline with the TOKEN= line above it.

Now, using a manifest, it's possible to present these 3 files as one single downloadable file. For instance, if these files were larger, say 5GB each and at the maximum limit for an individual file, such as the parts of a large dvd-iso, then to get past the 5GB limit we'd need to split the large file up into smaller files. But we still want Cloud Files to send the whole thing, all three 5GB parts, as if it were one file. This can be achieved with manifests. Here is how to do it!
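The splitting side itself can be done with plain GNU coreutils; a quick hedged sketch, chopping a big file into chunks under the 5GB limit and grabbing each chunk's md5 (the etag of an uploaded object is the md5 of its content, which the manifest below needs):

split -b 4G large-dvd.iso part_
md5sum part_*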

Creating a Manifest file, defining 3 file parts as one single file download in the data stream

Step 1: Create the Manifest File
In my case I created a manifest file first, which tells Cloud Files which individual files make up the large file:

[

        {
                "path": "/meh/1.txt",
                "etag": "8aec10927922e92a963f6a1155ccc773",
                "size_bytes": 1048585
        },

        {
                "path": "/meh/2.txt",
                "etag": "83736eb7b9eb0dce9d11abcf711ca062",
                "size_bytes": 1048585
        },

        {
                "path": "/meh/3.txt",
                "etag": "42f2826fff018420731e7bead0f124df",
                "size_bytes": 1048587
        }
]

Step 2: Assign Manifest


curl -i -X PUT https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh/mylargeappendedfile?multipart-manifest=put -d @manifest -H "X-Auth-Token: $TOKEN"

As you can see, it's really quite simple: by giving the path, etag and size_bytes of each part, it's possible to bind these 3 file references into a single file association. Now when downloading mylargeappendedfile from the storage container, you'll get /meh/1.txt, /meh/2.txt and /meh/3.txt all appended into a single stream. This is pretty handy when dealing with files over 5GB; the maximum size of an individual file is 5GB, but that doesn't stop you from spreading the data across multiple files, much like a multi-part rar archive.

Generated Output:


HTTP/1.1 201 Created
Last-Modified: Mon, 28 Sep 2015 17:32:18 GMT
Content-Length: 0
Etag: "8f49c8861f0aef2eb750099223050c27"
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx64093603d9444e088f242-00560979a0lon3
Date: Mon, 28 Sep 2015 17:32:32 GMT
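And that's the manifest assigned. Downloading the joined object then works like any other GET; a sketch reusing the same container and token as above:

curl -o mylargeappendedfile https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_10045567/meh/mylargeappendedfile -H "X-Auth-Token: $TOKEN"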

Compiling Grsecurity into a Linux Kernel

So, I am good friends with a really cool guy at work who is an excellent Linux technician and also an extremely gifted pentester and security consultant. He has been telling me about the goodness of grsecurity and what it can do for my Linux box. He says that even if my box is completely compromised, an attacker probably won't be able to do anything. Immediately I wanted to know what this rare moonshine was, and whether it was worth the trouble of kernel modifications and the whole shebang of configuration. After a day of on-and-off exploration at work, I have decided it is a most worthwhile endeavour, and probably the most extensive security you could install on a Linux server. That is, if you're able to install it; for your average user it might be a stretch. So here is a nice little how-to on patching and compiling a Linux kernel with the grsecurity module, with PaX and advanced filesystem and kernel structure security. In other words, very darn cool.

For Debian you might want to do something like:
Step 1a. Download the kernel source (Debian/possibly Ubuntu)

apt-get source linux-image-$(uname -r)

Step 1b. Or download the kernel source from http://www.kernel.org. In this case I'm compiling version 4.1.7

cd /tmp
wget https://www.kernel.org/pub/linux/kernel/v4.x/linux-4.1.7.tar.gz

Step 2. Download the latest grsecurity kernel patch, being sure to match your grsecurity patch with the kernel version you want to use on your box

wget https://grsecurity.net/test/grsecurity-3.1-4.1.7-201509201149.patch

Step 3. Untar the Linux kernel; in my case I'm using the Linux 4.1.7 kernel

tar zxvf linux-4.1.7.tar.gz

Step 4. Apply the grsecurity patch in the linux-4.1.7 directory we just untarred into /tmp/linux-4.1.7

cd /tmp/linux-4.1.7
patch -p1 < ../grsecurity-3.1-4.1.7-201509201149.patch

Step 5. Ensure that the correct dependencies are installed, both for compiling a kernel and for configuring it


# needed for configuring a kernel with make menuconfig
apt-get install ncurses-dev

# needed for building a kernel with kpkg
apt-get install fakeroot kernel-package

Step 6. Run make menuconfig within /tmp/linux-4.1.7

cd /tmp/linux-4.1.7
make menuconfig

Step 7. Refer to the grsecurity instructions on how to enable the grsecurity kernel patches: https://grsecurity.net/quickstart.pdf

Navigate the make menuconfig interface as follows:
Security Options -> GrSecurity --> *
Ensure that you are using Configuration Method (automatic); this is fine for most non-power users. See the image below.

[Screenshot: Security Options -> GrSecurity enabled in make menuconfig]

Step 8. Compile the kernel image; in Debian this is something like:

fakeroot make-kpkg --initrd --revision=1 kernel_image

For other distributions it will be more similar to:

make bzImage modules
make modules_install install
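On the Debian make-kpkg route, the build drops a .deb one directory up; a hedged sketch of the final install (the exact package filename depends on your kernel version and revision):

dpkg -i ../linux-image-*.deb
reboot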

If you weren't able to complete this tutorial, you may benefit from the documentation offered by the grsecurity wiki, and the quickstart guide PDF they offer:

https://en.wikibooks.org/wiki/Grsecurity/Configuring_and_Installing_grsecurity
https://grsecurity.net/quickstart.pdf

Some more (very helpful) information about compiling kernels in Debian:

https://www.debian.org/releases/stable/i386/ch08s06.html.en
https://debian-handbook.info/browse/stable/sect.kernel-compilation.html

Install and Configure the Rackspace Cloud Monitoring Agent manually

Installing the cloud monitoring agent manually is quite easy. First take a look at http://meta.packages.cloudmonitoring.rackspace.com/ for your operating system distribution. In my case it's Debian Wheezy (release 7).

From time to time the commands change, so always check the reference (link above) before running the commands below:

wget http://meta.packages.cloudmonitoring.rackspace.com/debian-wheezy-x86_64/rackspace-cloud-monitoring-meta-stable_1.0_all.deb
dpkg -i rackspace-cloud-monitoring-meta-stable_1.0_all.deb
apt-get update
apt-get install rackspace-monitoring-agent

That's it, you're done! The Rackspace cloud monitoring agent is now installed and running.

To be sure it installed successfully and starts up on boot, reboot and then verify in the Rackspace control panel that it is running:

reboot

[Screenshot: monitoring agent reporting in the Rackspace control panel]
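You can also check locally from the server itself; a hedged sketch, assuming the package ships the usual init script:

service rackspace-monitoring-agent status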

If you at any point want to remove the monitoring agent, you can run:

apt-get remove rackspace-monitoring-agent

again confirming in the Rackspace mycloud control panel that the service is no longer running:

[Screenshot: agent no longer listed in the Rackspace control panel]

Mounting a Disk in Linux Operating Systems

So, apparently, some people aren't sure how to properly add a new disk to a Linux server. Well, it's pretty simple if you are using something like Cloud Block Storage, where volumes are available on demand. Either way, once the disk is attached to your Linux machine, here is the process of adding a standard disk: partitioning with fdisk, formatting with mkfs, and the mounting itself, including the entry in fstab.

Step 1. List the disks on the box

$ fdisk -l

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As we can see, the new 75GiB drive I added doesn't have a partition table or filesystem on it yet.

Step 2. Let’s start partitioning the disk

$ fdisk /dev/xvdb

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Step 3. Create a new partition using the 'n' key, then state that we want a primary partition using 'p'.
Step 4. Set the first and last sectors of the partition (you can just press enter and the machine will normally choose something sane for you). It's also possible to add multiple partitions on a single device, but we won't be covering that in this tutorial.


Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)

Select (default p): p

Partition number (1-4, default 1): 1
First sector (2048-157286399, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-157286399, default 157286399): 

Created a new partition 1 of type 'Linux' and of size 75 GiB.

Step 5. Write the partition table using the 'w' key, and after it's finished, confirm with fdisk that the new partition is present.


Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@tesladump:/home# fdisk -l

Disk /dev/xvda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0c708d98

Device     Boot Start      End  Sectors Size Id Type
/dev/xvda1 *     2048 41940991 41938944  20G 83 Linux

Disk /dev/xvdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x07c2e0a6

Device     Boot Start       End   Sectors Size Id Type
/dev/xvdb1       2048 157286399 157284352  75G 83 Linux

Now the disk has been partitioned.

Step 6. So let's create an ext3 filesystem on the device.


$ mkfs -t ext3 /dev/xvdb1

mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 19660544 4k blocks and 4915200 inodes
Filesystem UUID: c717ddbd-d5c9-4bb1-a8af-b521d38cbb14
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624, 11239424

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

So that is the filesystem done. Now all we need to do is mount it, which we can do thru the /etc/fstab file together with the mount -a command.

Step 7. Simply edit /etc/fstab to accommodate the new disk:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/xvda1      /               ext3    errors=remount-ro,noatime,barrier=0 0       1


/dev/xvdb1      /home/thetesladump ext3 defaults,noatime,nofail 0    0

Here we choose to mount partition 1 of /dev/xvdb (/dev/xvdb1) on the mount point /home/thetesladump, a little FTP site I temporarily wanted to host.

So we set up the new 75GiB cloud block device to be mounted at the FTP user thetesladump's home directory. All ready to go. Wooo.
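Optionally, as the fstab header above suggests, you can mount by UUID instead of device name, which survives disks being reordered; a hedged sketch:

blkid /dev/xvdb1
# then use a line like:
# UUID=c717ddbd-d5c9-4bb1-a8af-b521d38cbb14  /home/thetesladump  ext3  defaults,noatime,nofail  0  0

(The UUID here is the one mkfs printed back in step 6.)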

Step 8. Complete the process by running mount -a:

mount -a

Step 9. Confirm your disks are mounted where you wanted:


root@tesladump:/home/theteslasociety# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  1.1G   18G   6% /
udev             10M     0   10M   0% /dev
tmpfs           199M   21M  179M  11% /run
tmpfs           498M     0  498M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           498M     0  498M   0% /sys/fs/cgroup
/dev/xvdb1       74G  178M   70G   1% /home/thetesladump