Enable Rackspace Cloud Database root user (Script/Wizard for API)

I have noticed that we get quite a few customers asking how to enable the root user in the Rackspace Cloud Database product. So much so that I thought I would go to the effort of compiling a wizard script which asks the customer five questions and then executes the request against the API, using the customer account number, the datacentre region, and the database ID.

To install and run the script, all you need to do is:

curl -s -o /tmp/1.sh http://adam.haxed.me.uk/db-root-enable.sh && bash /tmp/1.sh


However, I have included the script source code below for reference. This has been tested and works.

Script Code:

#!/bin/bash
# Enable root dbaas user access
# User Alterable variables
# Author: Adam Bull
# Date: Monday, November 30 2015
# Company: Rackspace UK Server Hosting

# ACCOUNTID forms part of your control panel login; https://mycloud.rackspace.co.uk/cloud/1001111/database#rax%3Adatabase%2CcloudDatabases%2CLON/321738d5-1b20-4b0f-ad43-ded24f4b3655

echo "Enter your Account (DDI); this is the number which forms part of your control panel login, e.g. https://mycloud.rackspace.co.uk/cloud/1001111/"
read ACCOUNTID

echo "Enter your Database ID; this is the number which forms part of your control panel login when browsing the database instance, e.g. https://mycloud.rackspace.co.uk/cloud/1001111/database#rax%3Adatabase%2CcloudDatabases%2CLON/242738d5-1b20-4b0f-ad43-ded24f4b3655"
read DATABASEID

echo "Enter what Region your database is in, e.g. lon, dfw, ord, iad, syd, etc"
read REGION

echo "Enter your customer username login (visible from the account settings page)"
read USERNAME

echo "Enter your customer apikey (visible from the account settings page)"
read APIKEY

echo "$USERNAME $APIKEY"

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "Enabling root access for instance $DATABASEID...see below for credentials"
# Enable the root user for instance id
curl -X POST -i \
-H "X-Auth-Token: $TOKEN" \
-H 'Content-Type: application/json' \
"https://$REGION.databases.api.rackspacecloud.com/v1.0/$ACCOUNTID/instances/$DATABASEID/root"

# Confirm root user added
curl -i \
-H "X-Auth-Token: $TOKEN" \
-H 'Content-Type: application/json' \
"https://$REGION.databases.api.rackspacecloud.com/v1.0/$ACCOUNTID/instances/$DATABASEID/root"
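The token extraction above pipes through python -mjson.tool, grep and cut, which is fragile if the response formatting changes. A hedged alternative sketch, assuming python3 is available, pulls the id field out with the json module directly; the RESPONSE value here is a placeholder, not a real identity response:

```shell
# Hedged sketch: parse the identity response with python3's json module
# instead of grep/cut. RESPONSE is placeholder data, not a real token.
RESPONSE='{"access": {"token": {"id": "AAD4dG7example", "expires": "2015-12-31T00:00:00Z"}}}'
TOKEN=$(echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$TOKEN"
```

In the real script you would replace RESPONSE with the output of the curl to the identity endpoint.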

Automating Rackspace SSL Load Balancer Certificate Mappings

This one doesn’t really come up that often at work, but it was some harmless fun I had this morning when I thought: is it possible to take some cert and key files, build JSON around them with echo >>, sed the privateKey and publicCertificate into their rightful places in an lb.json file, and then curl a request against the Rackspace Load Balancer API?

So what’s the point/joy of doing this? Well, it allows you to add certificate mappings with relative ease. Just drop your .cert and .key files in the certificates folder, and the script can do all the rest. Of course you need to provide your username and API key, but you always need to do that when making requests to the API. It’s also worth noting that the TOKEN is generated automatically.

Next I will write a script that generates self-signed certificates and then injects them, so literally no user action is required. Obviously that isn’t going to be all that useful on its own, but if I connected it to an API-like certificate-making service from an authorised SSL reseller, it would be a pretty tight product; I would go so far as to say awesome.

Here is how I achieved it:

#!/bin/bash

USERNAME='mycloudusernamehere'
APIKEY='apikeyhere'

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`


echo '
{
  "certificateMapping": {
     "hostName": "my.com",
     "certificate": "' > lb.json

cat certificates/private.key | sed ':a;N;$!ba;s/\n/\\n/g' > certificates/private.short
cat certificates/public.cert | sed ':a;N;$!ba;s/\n/\\n/g' >  certificates/public.short

cat certificates/public.short >> lb.json
echo '", "privateKey": "' >> lb.json
cat certificates/private.short >> lb.json
echo '" } }' >> lb.json


curl -v -H "X-Auth-Token: $TOKEN" -d @lb.json -X POST -H "content-type: application/json"  https://lon.loadbalancers.api.rackspacecloud.com/v1.0/10011111/loadbalancers/157089/ssltermination/certificatemappings

My colleague referred to this as a ‘sneaky way’ to parse JSON. He is indeed correct, I am quite sneaky, but if it’s simple and it works, then booyah. This is what the lb.json file looks like after it’s created by the above shell script.

{
  "certificateMapping": {
     "hostName": "my.com",
     "certificate": "
-----BEGIN CERTIFICATE-----\nMIIC/TCCAeWgAwIBAgIJAP5bHAHitdeoMA0GCSqGSIb3DQEBBQUAMBUxEzARBgNV\nBAMMCnd3dy5teS5jb20wHhcNMTUxMjAyMDkzNjEzWhcNMjUxMTI5MDkzNjEzWjAV\nMRMwEQYDVQQDDAp3d3cubXkuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB\nCgKCAQEAxcSqtsqQUrFEY327avnR7uxxO6svkvPzzv7ANUhZ142OYZ4727sgDJeA\nbKllpxrCqZfnVDfd+YcloLukcHoEKYC0/6R/nygZbaXwA0WGLhNX+L43MEsldtGx\ntk3eO0Gs3B1t9na9NEjTO0YMxXsgnXrTZFUB2bD/UL8TkdtoWdlVgPwtIPeVyGZF\nhj3dBzO6SPvfixTrZLz8EAZ95I1bOHR+0UnXHZ6z7Bh+fKD4NQbXTSEFH/0HoAXV\nfHm5BxwsheFrQm3/0fisraArPFhDVfOrkCcVta8MniJn6SMtk8Us66ACIdl7uydM\nHqLqs29TQOGyB9nIxTL1h4T7+tbHiwIDAQABo1AwTjAdBgNVHQ4EFgQUOpK+W3FR\nUcttjZtmCEYwlXUon3AwHwYDVR0jBBgwFoAUOpK+W3FRUcttjZtmCEYwlXUon3Aw\nDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAL8Oo1nrykXCr2hYBg6on\nXLi5Tehsp6495U8xZygUL0fK08TUovjnVjln3qEsarotREZaTtmAjVrNZwYJrrn7\nHoxoOiccHw0FL3UfPR4q2oS+Z94Q+ZG9kXptO84nPV6WAx96lOXfPCVast9CsaVs\nkZRyZBQtYO+Mh53zxhouqNG69/OvSdDz4tCGi6MTZWmZGhnGx7SaTMITfOeK7IU8\nN4sMZwmHHsubKVZvcB0xN8Q+1Zwv7SPUuOi+OSd7v7llxlJ4bu2UQ55cLWb697dZ\nNCAChW2xsi157XUfPGnayfO/DbEQFdRULkKStY8o2jiu7GaovWtPVHY0kxjQKfY4\nQg==\n-----END CERTIFICATE-----\n
", "privateKey": "
-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAxcSqtsqQUrFEY327avnR7uxxO6svkvPzzv7ANUhZ142OYZ47\n27sgDJeAbKllpxrCqZfnVDfd+YcloLukcHoEKYC0/6R/nygZbaXwA0WGLhNX+L43\nMEsldtGxtk3eO0Gs3B1t9na9NEjTO0YMxXsgnXrTZFUB2bD/UL8TkdtoWdlVgPwt\nIPeVyGZFhj3dBzO6SPvfixTrZLz8EAZ95I1bOHR+0UnXHZ6z7Bh+fKD4NQbXTSEF\nH/0HoAXVfHm5BxwsheFrQm3/0fisraArPFhDVfOrkCcVta8MniJn6SMtk8Us66AC\nIdl7uydMHqLqs29TQOGyB9nIxTL1h4T7+tbHiwIDAQABAoIBAQCj+HBWR9KrTSBX\noQqAIoslnlIv17oFDFDMAbnZM5iRuGMhmrEkeJyU9BPdhAGtL+nP9Qsub3eSiLPw\n9ULcor3Kr1TiVEAf9H5Iw/kgrUcX8p/Qs91MJDH2ttuyPBOSa9xnT9s5Kq+qpurD\nzUuPfIvJJeoY2MZE+JRnHVWbbB+zxZ9dCzXGFsx5u4Yq1dI85vxB+5pzvPDJtQwy\nsIGszREHm6m1qeCXB3Hh3gU5un8fLh4kMfKAGcJEgS9AHXsKDgPSHOsCO3LnHGTW\nVyMtDpMEqq3rs/C2p533IDJylq+eoelnMnl8s2ieyxNjRCZLClQjpZdFgdULyPEK\nhWPOZgXBAoGBAP35DDvmWunIjEZxIlKnLn+vtz6kX+99HWpNouM3XegGp7rF8/7t\nlbwmYr8G290CjZNEjtvKW5vIPTkE8ZK8hZsmdbWkf92GUo1/cbIrZcfqBkC38rck\n5bWqXtyzzguRVMFj2UhqfYto4w6/bsA/8phnI5G0i8Op/VqE9rN5wpthAoGBAMdY\nxim7Clb54d1lCkq+uz3FA3WQkCEiq9ou6okEV3RqkqxqVjJW7Bjh0q4GSW8u2Xvh\nVaGx4Jk8Q9LCTB3x70TRTfAbg3RZqetclDPRan0tg1WHVcjzEqeS5xVa7uCBnBut\naTiT37MBzZRAh8oZQLOuFX+Y/pC5UTgv/p+glZZrAoGAOz23m9VMyZGNHvVO00bJ\n8uDS9pqzAhMGJIC9iRCmJ/Q9dbStCH702XF+wR5hdLkeuwZX6G7YVYsstLsxek/d\nPmaHOHqJlOu7H+RlafDzieFN2hTOWegSaQC3pfWPD2W0BnQ6/8hPRpCNvifrNo70\nEJamVltt6pMhVNcFELJLMaECgYBphjC//mbmy7gofkgIcRalCBlgrnndUIEwKg21\nIjs5QQELi+69Dw5Dzaa8wE83L9GopguyYHrIIwK0Gm44m81Q3IspQyc+/Afas1Mw\nava39NPE/rMGgMWrNzRkNZKl/XYpoI5GiOCt3ZJ5m/9FmECL3Oc8eDypV7AK0j0z\nOsp0qQKBgEhaQnwVN8+el/GEW/+weESP1GHWdvtDedeE19DOXnTNpR+V/wOpcpC7\n4oOlWARVCj4gGE+ugBSeX4slQmzu1L6p0npQ8jEIfbxR1znn+RK4EWKQKsoyfb1u\nw4ewR/Bwubv6iL7ct0FLFSjJXeNMc1+VmVpBTICpV0PrKbCP9uTw\n-----END RSA PRIVATE KEY-----
" } }

4 way NORAID mirror using ZFS

So I thought about a cool way to back up my files without using anything too fancy, and I started to think about ZFS. I don’t know why I didn’t before, because it’s ultra, ultra resilient. Cheers, Oracle. This is on Debian 7 Wheezy.

Step 1. Install ZFS

# apt-get install lsb-release
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
# dpkg -i zfsonlinux_6_all.deb

# apt-get update
# apt-get install debian-zfs

Step 2. Create a mirrored disk config with zpool.
Here I’m using 4 x 75GB SATA Cloud Block Storage devices to keep 4 copies of the same data, backed by ZFS’s great error-checking abilities.

zpool create -f noraidpool mirror xvdb xvdd xvde xvdf

Step 3. Write a little disk write utility

#!/bin/bash
# Append an incrementing line every 20ms so we can watch
# writes survive disk failures by tailing file.txt

x=0
while :
do
        echo "Testing." $x >> file.txt
        sleep 0.02
        x=$(( $x + 1 ))
done

Step 4 (Optional). Start killing the disks with fire, kill the iSCSI connection, etc., and see if file.txt is still tailing.

./write.sh & tail -f /noraidpool/file.txt

Step 5. Observe that as long as one of the 4 disks still has its virtual block device connection, your data stays up. So it will be OK even with 3 or fewer simultaneous I/O errors. Not baaaad.


root@zfs-noraid-testing:/noraidpool# /sbin/modprobe zfs
root@zfs-noraid-testing:/noraidpool# lsmod | grep zfs
zfs                  2375910  1
zunicode              324424  1 zfs
zavl                   13071  1 zfs
zcommon                35908  1 zfs
znvpair                46464  2 zcommon,zfs
spl                    62153  3 znvpair,zcommon,zfs
root@zfs-noraid-testing:/noraidpool# zpool status
  pool: noraidpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        noraidpool  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xvdb    ONLINE       0     0     0
            xvdd    ONLINE       0     0     0
            xvde    ONLINE       0     0     0
            xvdf    ONLINE       0     0     0

errors: No known data errors

Step 6. Some more benchmark tests

time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"

Step 7. Some concurrent fork tests

#!/bin/bash
# Fire off a dd write in the background on each pass while logging
# to file.txt, printing pool I/O stats as it goes

x=0
while :
do
        time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
        echo "Testing." $x >> file.txt
        sleep 2
        x=$(( $x + 1 ))
        zpool iostat
        clear
done

or better

#!/bin/bash
# Three concurrent dd writers at different block sizes,
# with a logging loop printing pool I/O stats on top

time sh -c "dd if=/dev/zero of=ddfile bs=128k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=24k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=16k count=250000 && sync" &

x=0
while :
do
        echo "Testing." $x >> file.txt
        sleep 2
        x=$(( $x + 1 ))
        zpool iostat
        clear
done

bwm-ng ‘elegant’ style output of disk I/O using zpool iostat


#!/bin/bash
# Refresh zpool iostat every 2 seconds while a dd write runs in the background

time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
while :
do
        clear
        zpool iostat
        sleep 2
done

To test the resiliency of ZFS I removed 3 of the disks, completely unlatching them:


        NAME                      STATE     READ WRITE CKSUM
        noraidpool                DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            1329894881439961679   UNAVAIL      0     0     0  was /dev/xvdb1
            12684627022060038255  UNAVAIL      0     0     0  was /dev/xvdd1
            4058956205729958166   UNAVAIL      0     0     0  was /dev/xvde1
            xvdf                  ONLINE       0     0     0

And I noticed that with just one remaining Cloud Block Storage device I was still able to access the data on the disk, as well as create new data:

cat file.txt  | tail
Testing. 135953
Testing. 135954
Testing. 135955
Testing. 135956
Testing. 135957
Testing. 135958
Testing. 135959
Testing. 135960
Testing. 135961
Testing. 135962

# mkdir test
root@zfs-noraid-testing:/noraidpool# ls -a
.  ..  ddfile  file.txt  forktest.sh  stat.sh  test  writetest.sh


That’s pretty flexible.

Deploying Devstack successfully in CentOS 7

So, do you want to set up your own OpenStack infrastructure, with Cinder, Nova, the Nova API, Keystone and such? That’s easy enough. Here is how to do it.

Step 1. Deploy CentOS 7; any basic install should be fine. I deployed using the Rackspace Cloud Server 8GB standard instance type.

Step 2. Add stack user

adduser stack

Step 3. Give stack passwordless sudo, making sure sudo is installed

yum install -y sudo
echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Step 4. Modify /etc/passwd so that the home directory for stack is /opt/stack (DevStack needs this), then create it and chown it

vi /etc/passwd
# make sure the home directory for stack is /opt/stack (that's all!)
mkdir /opt/stack
chown -R stack:stack /opt/stack

Step 5. Clone Devstack from git

sudo yum install -y git
su stack
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

Step 6. Copy the base sample config file

cp samples/local.conf .
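The sample local.conf mainly pre-seeds service passwords so stack.sh doesn’t prompt for them interactively; it contains lines along these lines (the values shown here are illustrative, check the copied file itself):

```ini
[[local|localrc]]
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

Edit the passwords to taste before running stack.sh.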

Step 7. Deploy stack

./stack.sh

Configuring a Console Prompt for BASH Linux

In BASH it’s pretty simple to customise the console prompt. There are a few good reasons for doing this; for instance, if you are pulling data out on the command line or running automation tasks and want to know when each section was executed. Here is how I did it:

Edit .bash_profile

cd
vi .bash_profile

Insert this line into .bash_profile

PS1='bash-\v \d \t \H \w\$ '

Source it

$ source .bash_profile
bash-4.9 Tue Nov 24 11:32:09 pirax-test ~#

Nice.
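For reference, here is what each escape in that prompt expands to, per bash’s prompting documentation:

```shell
# Breakdown of the PS1 escapes used above:
#   \v  bash version             \d  date (e.g. "Tue Nov 24")
#   \t  current time, HH:MM:SS   \H  full hostname
#   \w  current working dir      \$  '#' if root, '$' otherwise
PS1='bash-\v \d \t \H \w\$ '
```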

Retrieving Xenstore Networking settings from within a Rackspace Server

When a customer of ours is having issues with their networking, such as the configured gateway or netmask, we are able to provide a one-liner they can run on the VM guest which takes the information directly from xenstore (xe-linux-distribution). Find the command below.

xenstore-read vm-data/networking/$(xenstore-ls vm-data/networking | awk '/private/{print$1}')
{"label": "private", "broadcast": "10.255.255.255", "ips": [{"ip": "10.177.194.237", "netmask": "255.255.255.0", "enabled": "1", "gateway": null}], "mac": "BC:76:4E:11:11:11", "dns": ["83.138.151.81", "83.138.151.80"], "routes": [{"route": "10.208.0.0", "netmask": "255.255.0.0", "gateway": "10.166.255.1"}, {"route": "10.116.0.0", "netmask": "255.240.0.0", "gateway": "10.1.1.1"}], "gateway": null}

Please note that the information has been modified for privacy. This just grabs ServiceNet. To gather all the vm-data, use:

xenstore-ls vm-data
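To make the awk in the earlier one-liner less mysterious: it simply prints the first field of whichever xenstore-ls line mentions "private", which is the MAC ID key for ServiceNet. A sketch against sample output (the MAC IDs below are placeholders, not real listing data):

```shell
# Hedged sketch: pick the ServiceNet MAC ID key out of xenstore-ls style
# output. SAMPLE imitates the listing; the MAC IDs are placeholders.
SAMPLE='BC764E182CB = "{\"label\": \"private\"}"
BC764E0192DB = "{\"label\": \"public\"}"'
MACID=$(echo "$SAMPLE" | awk '/private/{print $1}')
echo "$MACID"
```

That key is then fed straight into xenstore-read vm-data/networking/$MACID by the one-liner.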

Altogether now:

Step 1. Retrieve all vm-data


$ xenstore-ls vm-data
 user-metadata = ""
 rax_service_level_automation = ""Complete""
 build_config = """"
networking = ""
 BC764E182CB = "{"label": "private", "broadcast": "10.177.1.1", "ips": [{"ip": "10.177.1.1", "netmask": "255.255.255.0", "enabled": "1", "gateway": null}], "mac": "BC:76\..."
 BC764E0192DB = "{"ip6s": [{"ip": "2a00:1a48:7803:107:be76:4eff::", "netmask": 64, "enabled": "1", "gateway": "fg80::def"}], "label": "public", "broadcast": "37.188.117.2\..."
meta = "{"rxtx_cap": 80.0}"
auto-disk-config = "False"

Step 2. Retrieve data for Network MACID

xenstore-read vm-data/networking/BC764E182CB
"label": "private", "broadcast": "10.177.255.255", "ips": [{"ip": "10.177.1.1", "netmask": "255.255.255.0", "enabled": "1", "gateway": null}], "mac": "", "dns": ["83.138.151.81", "83.138.151.80"], "routes": [{"route": "10.1.0.0", "netmask": "255.255.255.0", "gateway": "10.177.1.1"}, {"route": "10.1.1.0", "netmask": "255.255.0.0", "gateway": "10.101.1.1"}], "gateway": null}

xenstore-read vm-data/networking/BC764E0192DB
{"ip6s": [{"ip": "2a00:1a48:7803:107:be76:4eff:fe08:9cc3", "netmask": 64, "enabled": "1", "gateway": "fe80::def"}], "label": "public", "broadcast": "37.1.117.255", "ips": [{"ip": "37.188.117.48", "netmask": "255.255.255.0", "enabled": "1", "gateway": "37.1.117.1"}], "mac": "", "gateway_v6": "ge77::def", "dns": ["83.138.151.81", "83.138.151.80"], "gateway": "37.1.117.1"}

Please note I sanitised the MAC ID and IP address information, altering it so as not to show my real IPs and subnets; it is just to give you an idea of the two virtual interfaces, PublicNet and ServiceNet.
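Since the value xenstore-read hands back is JSON, it can also be parsed properly rather than eyeballed. A hedged sketch, assuming python3 is on the guest; NETDATA here is sample data mirroring the sanitised output above:

```shell
# Parse the networking JSON and print ip/netmask pairs.
# NETDATA is placeholder sample data, not live xenstore output.
NETDATA='{"label": "private", "ips": [{"ip": "10.177.194.237", "netmask": "255.255.255.0", "enabled": "1", "gateway": null}], "gateway": null}'
echo "$NETDATA" | python3 -c '
import sys, json
d = json.load(sys.stdin)
for iface in d["ips"]:
    print(iface["ip"], iface["netmask"])'
```

On a real guest you would pipe the xenstore-read output in instead of echoing NETDATA.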

Booting an Image in a specific cell/region

This particular one-liner uses the Nova API to boot an image with id=9876fa2-99df-4be3-989f-eec1e8c08afd and the general1-4 flavor (General Purpose, 4GB RAM); the hints ensure that the server reaches the correct cell and hypervisor host.

supernova customer boot --image 9876fa2-99df-4be3-989f-eec1e8c08afd --flavor general1-4 --hint target_cell='lon!z0001' --hint 0z0ne_target_host=c-10-0-12-119 myservername

Checking Size of Database within MySQL

We had an issue where Rackspace Intelligence monitoring was giving a different value from the control panel for a DBaaS instance. So I came up with a way of testing MySQL for the database size directly. There is nothing more reliable than running the query yourself, I think.

SELECT table_schema AS "Database", 
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)" 
FROM information_schema.TABLES 
GROUP BY table_schema;