Using Nova and Supernova to manage Firewall IP access lists, automation & more

So, a customer reached out to us today asking whether Rackspace publishes the entire infrastructure IP address ranges in use on the cloud. The answer is no. However, that doesn't mean that managing your firewall rules or autoscale automation needs to be painful.

In fact, Rackspace Cloud is built on OpenStack, whose API can provide this detail in a few short steps. To do this you need the nova client installed; it's relatively easy to set up, and instructions for installing it can be found here:

https://support.rackspace.com/how-to/installing-python-novaclient-on-linux-and-mac-os/

Once you have installed nova, it's simply a case of making sure you set these lines correctly in your .bash_profile:

OS_USERNAME=mycloudusernamegoeshere
OS_TENANT_NAME=yourrackspaceaccountnumbergoeshereusuallysomethinglike1010101010
OS_AUTH_SYSTEM=rackspace
OS_PASSWORD=apikeygoeshere
OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
OS_REGION_NAME=LON
OS_NO_CACHE=1
export OS_USERNAME OS_TENANT_NAME OS_AUTH_SYSTEM OS_PASSWORD OS_AUTH_URL OS_REGION_NAME OS_NO_CACHE

OS_USERNAME is your mycloud login username (normally the primary user).
OS_TENANT_NAME is your customer ID; it's the number that appears in the URL when you log in to the control panel.


OS_PASSWORD is a bit misleading; this is actually where your API key goes. I believe it's possible to authenticate using your control panel password too, but don't do that, for security reasons.

OS_REGION_NAME is pretty self-explanatory: this is simply the region in which you would like to list cloud server IPs, or rather, the region in which you wish to perform nova API calls.
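With those exported (e.g. after re-sourcing your profile), a quick read-only call will confirm that authentication works; nova credentials simply echoes back your identity details:

# reload the profile and confirm the client can authenticate
source ~/.bash_profile
nova credentials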

Making the API call using the nova API wrapper

# supernova lon list --tenant 100010101 --fields accessIPv4,name
[SUPERNOVA] Running nova against lon...
+--------------------------------------+------------+----------+
| ID                                   | accessIPv4 | Name     |
+--------------------------------------+------------+----------+
| 7e5a7f99-60ae-4c28-b2b8              | 1.1.1.1    | xapp     |
| 94747603-812d-4594-850b              | 1.1.1.1    | rabbit2  |
| d5b318aa-0fa2-4269-ae00              | 1.1.1.1    | elastic5 |
| 6c1d8d33-ae5e-44be-b9f0              | 1.1.1.1    | elastic6 |
| 9f79a7dc-fd19-4f8f-9c26              | 1.1.1.1    | elastic3 |
| 05b1c52b-6ced-4db0-8af2              | 11.1.1.1   | elastic1 |
| c8302366-f2f9-4c36-8f7a              | 1.1.1.1    | app5     |
| b159cd07-8e68-49bc-83ee              | 1.1.1.1    | app6     |
| f1f31eef-97c6-4c68-b01a              | 1.1.1.1    | ruby1    |
| 64b7f0fd-8f2f-4d5f-8f89              | 1.1.1.1    | build3   |
| e320c051-b5cf-473a-9f96              | 1.1.1.1    | mysql2   |
| 4fddd022-59a8-4502-bf6e              | 1.1.1.1    | mysql1   |
| c9ad6951-f5f9-4351-b31d              | 1.1.1.1    | worker2  |
+--------------------------------------+------------+----------+

This is pretty useful for managing autoscale permissions, if you need to make sure your corporate network can be reached from your cloud servers when new cloud servers with new IPs are built out. Considerations like this are really important when putting together a solution, and the nice thing is that the tools are really quite simple and flexible. If I wanted, I could have pulled out the servicenet details instead (see the sketch below). I hope this helps make some folks' lives a bit easier and works to demystify the API for others that haven't had the opportunity to use it.
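For instance, a rough sketch of pulling the attached networks (including servicenet) instead of accessIPv4; note that 'networks' as a field name is an assumption based on the nova show output further down:

# list each server's name together with all attached networks (public + servicenet);
# 'networks' is assumed to be a valid field on your novaclient build
supernova lon list --tenant 100010101 --fields networks,name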

You are probably wondering, though: what field names can I use? A nova show against one of your server UUIDs will reveal this:

# supernova lon show someuuidgoeshere
+-------------------------------------+------------------------------------------------------------------+
| Property                            | Value                                                            |
+-------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                           |
| OS-EXT-SRV-ATTR:host                | censored                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname | censored                                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-734834278-sdfdsfds-                   |
| OS-EXT-STS:power_state              | 1                                                                |
| OS-EXT-STS:task_state               | -                                                                |
| OS-EXT-STS:vm_state                 | active                                                           |
| censorednet network                 | censored                                                     |
| accessIPv4                          | censored                                                 |
| accessIPv6                          | censored                      |
| created                             | 2015-12-11T14:12:08Z                                             |
| flavor                              | 15 GB I/O v1 (io1-15)                                            |
| hostId                              | 860...         |
| id                                  | 9f79a7dc-fd19-4f8f-9c26-72a335ed2be8                             |
| image                               | Debian 8 (Jessie) (PVHVM) (cf16c435-7bed-4dc3-b76e-57b09987866d) |
| metadata                            | {"build_config": "", "rax_service_level_automation": "Complete"} |
| name                                | elastic3                                                         |
| private network                     |                                                 |
| progress                            | 100                                                              |
| public network                      |          |
| status                              | ACTIVE                                                           |
| tenant_id                           |                                                    |
| updated                             | 2016-02-27T09:30:20Z                                             |
| user_id                             |                             |
+-------------------------------------+------------------------------------------------------------------+

I censored some of the fields, but you can see all of the property names. So, if you wanted to see only the metadata and progress, together with the server UUID and name:

nova list --fields name,metadata,progress

This could be pretty handy for detecting when a process has finished building, or detecting once automation has completed (a rough sketch follows below). The possibilities with the API are quite endless. The API is certainly the future, and there is no reason why, in years to come, people won't be building and deploying websites through the API only, with some sophisticated UI wrapper like nova on top.
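As a sketch of that build-detection idea (the UUID placeholder is from the earlier example; the grep keys off the rax_service_level_automation metadata shown in the nova show output above):

# poll a server until Rackspace's post-build automation reports "Complete"
SERVER_UUID=someuuidgoeshere
until supernova lon show "$SERVER_UUID" | grep rax_service_level_automation | grep -q Complete; do
  echo "still building..."
  sleep 15
done
echo "automation complete"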

Admittedly, this is still very far away, but that is what future technology will be made of; things like Lambda and serverless architecture will be the future.

Adding some excludes for Lsyncd

A customer was having some issues with their syncing, as shown by this inotify error:

Error: Terminating since out of inotify watches.
Consider increasing /proc/sys/fs/inotify/max_user_watches
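As the error itself suggests, one option is to raise the inotify watch limit; a quick sketch (the value here is arbitrary):

# temporary bump, lost on reboot
sysctl fs.inotify.max_user_watches=524288
# persist across reboots
echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf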

In this case, though, the cleaner fix was simply to remove unnecessary folders from the sync.

Add this line to /etc/lsyncd.conf:

excludeFrom="/etc/lsyncd-excludes.txt",

Then create the 'excludes' file for lsyncd, i.e. the folders you want to ignore; in this case we wanted to ignore an old httpdocs.OLD backup.

# cat /etc/lsyncd-excludes.txt
somewebsite.com/httpdocs.OLD/

A shockingly simple fix.

Please note that the path in lsyncd-excludes.txt is relative to the source path configured in lsyncd (do not give the full path; give the relative path inside the parent).
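After adding the exclude, restart lsyncd so it picks up the new configuration (service name and log path may vary by distro and config):

service lsyncd restart
# then watch the log to confirm the excluded folder is no longer synced
tail -f /var/log/lsyncd/lsyncd.log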

Moving Rackspace Cloud Servers between Regions with automation II

Hey folks. So, recently I have been doing a bit of work on the Rackspace community, specifically trying to document and make as easy as possible the importing and exporting of cloud server VHDs between Rackspace regions. This might be really useful if you are designing an HA, multi-region and/or load-balancing solution that uses autoscale and other kinds of redundancy, where moving your 'golden image' between regions can be quite difficult if you do the entire process manually, step by step, as I documented in the two articles below:

Exporting Cloud server images from a Rackspace Region https://community.rackspace.com/products/f/25/t/7089

Importing Cloud Server Images to a Rackspace Region https://community.rackspace.com/products/f/25/t/7186

In this article I finish writing the 'automation demo' of how to move images between regions without changing much at all, apart from one 'serverID' variable and the source and destination. The script isn't finished yet; the last time I posted this on my blog I was so excited that I actually forgot to include the import function (which is kind of important!). Sorry about that.


#!/bin/bash

USERNAME='yourmycloudusernamehere'
APIKEY='yourapikeyhere'
API_ENDPOINT='https://lon.servers.api.rackspacecloud.com/v2/1000000'
SERVER_ID='94157dc7-924a-424a-8825-c5ffbd341622'
TENANT='1000000'
CUSTOMER_ID='1000000'

#### DO NOT CHANGE BELOW THIS LINE

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
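# (note: the grep/cut parsing above is fragile; if jq is installed, the same
#  token can be pulled from the v2.0 identity response more robustly, e.g.
#  TOKEN=$(curl -s ... | jq -r .access.token.id) -- jq shown here only as an option)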

# START IMAGE CREATION
echo "Creating Image at Local Datacentre"

curl -v -D export-headers \
-H "X-Auth-Token: $TOKEN" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-d '{"createImage" : {"name" : "RA-'$SERVER_ID'", "metadata": { "ImageType": "Rackspace Automation Image Exported from '$TENANT'", "ImageVersion": "2.0"}}}' \
-X POST "$API_ENDPOINT/servers/$SERVER_ID/action" -o /tmp/export-file

echo "export headers"
cat export-headers

# Retrieve correct ImageID and use to check status of image
IMAGEID=$(cat export-headers | grep -i location | sed 's/\// /g' | awk '{print $7}')
sleep 5
echo "image id"
echo $IMAGEID

API_ENDPOINT='https://lon.images.api.rackspacecloud.com/v2/images/'
URL=$API_ENDPOINT$IMAGEID
URL=${URL%$'\r'}

curl -v \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

echo "imagestatus: $imagestatus"

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

## WAIT FOR IMAGE TO EXIT SAVE STATE

echo "Waiting for image to complete..."
sleep 5
while [ "$STATUS" != "active" ]; do
echo "image $IMAGEID is still saving..."
sleep 10
curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-H "content-type: application/json" \
-X GET "$URL" | python -mjson.tool > imagestatus

STATUS=$(cat imagestatus | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

## PREPARE/CREATE CLOUD FILES CONTAINER for EXPORT

echo "Preparing/Creating Cloud Files Container for Export"
API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 1000000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/export"
sleep 5

## EXPORT VHD TO CLOUD FILES

echo "Exporting VHD to Cloud Files"
# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

echo "IMAGEID detected as $IMAGEID"
# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
# > export-cloudfiles

echo "THE IMAGE ID IS: $IMAGEID"
IMAGEID=${IMAGEID%$'\r'}
curl -v "https://lon.images.api.rackspacecloud.com/v2/$TENANT/tasks" -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"type": "export", "input": {"image_uuid": "'$IMAGEID'" , "receiving_swift_container": "export"}}' -o export-cloudfiles
echo "Export looks like"

cat export-cloudfiles

sleep 15

echo "export cloud-files looks like:"
cat export-cloudfiles

TASKID_EXPORT=$(cat export-cloudfiles | python -mjson.tool | grep '"id"' | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

echo "task ID export looks like"
echo "$TASKID_EXPORT"

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

sleep 15

echo "Waiting for Task to complete..."
## WAIT FOR TASKID EXPORT TO COMPLETE TO CLOUD FILES

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# This section requests the Glance API to copy the cloud server image uuid to a cloud files container called export
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status

EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')

while [ "$EXPORT_STATUS" = "processing" ]; do
sleep 15
curl "https://lon.images.api.rackspacecloud.com/v2/1000000/tasks/$TASKID_EXPORT" -X GET -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" | python -mjson.tool > export-status
EXPORT_STATUS=$(cat export-status | grep status | awk '{print $2}' | sed 's/"//g' | sed 's/,//g')
done

# SET CORRECT CLOUD FILES NAME
CLOUD_FILES_NAME=$(cat export-cloudfiles | python -mjson.tool | grep image_uuid | awk '{print $2}' | sed 's/,//g' | sed 's/"//g')
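# (the Glance export task writes the VHD into the container as <image_uuid>.vhd,
#  which is why the image_uuid field doubles as the file name in the
#  download step below)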

## Download VHD Cloud from Cloud Files to this server

API_ENDPOINT='https://storage101.lon3.clouddrive.com/v1/MossoCloudFS_1000000'

# GET FILE FROM SOURCE CLOUD FILES

URL="$API_ENDPOINT/export/$CLOUD_FILES_NAME.vhd"
URL=${URL%$'\r'}

curl -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: $TENANT" \
-H "Accept: application/json" \
-X GET "$URL" > "$CLOUD_FILES_NAME.vhd"

## NEW API USER/PASS REQUIRED FOR 2ND REGION

### DO NOT CHANGE ANYTHING ABOVE THIS POINT

USERNAME='yourmycloudusernamegoeshere'
APIKEY='yourapikeyfromsecondregiongoeshere'

### DO NOT CHANGE ANYTHING BELOW THIS POINT

## Now for uploading the VHD to Cloud Files to Destination REGION

API_ENDPOINT='https://storage101.ord1.clouddrive.com/v1/MossoCloudFS_900000'
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import"

## Upload VHD Image to Cloud Files destination for import
curl -v -s \
-H "X-Auth-Token: $TOKEN" \
-H "X-Project-Id: 900000" \
-H "Accept: application/json" \
-X PUT "$API_ENDPOINT/import/$CLOUD_FILES_NAME.vhd" -T "$CLOUD_FILES_NAME.vhd"

# Build the Glance import endpoint for the destination region from the Customer ID
IMPORT_IMAGE_ENDPOINT=https://ord.images.api.rackspacecloud.com/v2/$CUSTOMER_ID

# This section simply retrieves the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

VHD_NOTES="autoimport-$SERVER_ID"
IMPORT_CONTAINER=import
VHD_FILENAME="$CLOUD_FILES_NAME.vhd"

curl -X POST "$IMPORT_IMAGE_ENDPOINT/tasks" \
-H "X-Auth-Token: $TOKEN" \
-H "Content-Type: application/json" \
-d "{\"type\":\"import\",\"input\":{\"image_properties\":{\"name\":\"$VHD_NOTES\"},\"import_from\":\"$IMPORT_CONTAINER/$VHD_FILENAME\"}}" |\
python -mjson.tool

As you can probably see, my code is still rather rough, but it's just so darn exciting that this script works from start to finish that I just HAD to share it a bit early! The plan now is to add command-line handling so that you can specify ./moveregion {SOURCE_REGION} {DEST_REGION} {SERVER_ID} {TENANT_ID}. Then a customer or a Racker would only need these four variables to import and export images in an automated way.
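A minimal sketch of that argument handling (all names hypothetical) might look like:

#!/bin/bash
# hypothetical wrapper: ./moveregion SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID
if [ "$#" -ne 4 ]; then
    echo "usage: $0 SOURCE_REGION DEST_REGION SERVER_ID TENANT_ID" >&2
    exit 1
fi
SOURCE_REGION="$1"
DEST_REGION="$2"
SERVER_ID="$3"
TENANT="$4"
echo "Moving $SERVER_ID from $SOURCE_REGION to $DEST_REGION for tenant $TENANT"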

I can rewrite the script in such a way that it would accept a .txt file of a couple of hundred cloud server UUIDs: it would take each server UUID, use it to create an image of that server, export it to Cloud Files, upload it to the destination region's Cloud Files, and then import it into the Glance image store of the second region. Which, naturally, would save hundreds of hours of human time doing this manually... which is nice 😀
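Fed by a file of UUIDs, the batch version could be as simple as (again, hypothetical names, building on the ./moveregion wrapper above):

# hypothetical batch driver: one cloud server UUID per line in servers.txt
while read -r uuid; do
    ./moveregion lon ord "$uuid" "$TENANT_ID"
done < servers.txt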

I would really like to make a UI frontend using something like Django, and utilize some form of 'light' database that keeps track of all the API imports/exports and even provides an estimated time for completion, but my UI skills are really limited to XHTML, CSS, PHP and MySQL. I need a Python or Django guy to help out with some of this. If anyone is interested, please reach out to me.

This project will be available on GitHub soon.

Configuring Basic NFS Server+Client on RHEL7

So, you want to configure NFS? This isn't too difficult to do. In the simplest setup you will need two servers: one acting as the NFS server, which hosts the content and attached disks, and a second acting as the client, which mounts the NFS server's filesystem over the network at a local mount point. In RHEL 7 this is remarkably easy to do.

Install and Configure NFS on the Server

Install dependencies

yum -y install nfs-utils rpcbind

Create a directory on the server

This is the directory we will share

 mkdir -p /opt/nfs

Configure access for the client server on IP 10.0.0.2

vi /etc/exports

# alternatively you can write the configuration directly, but I don't recommend it
# (note that > overwrites any existing exports; use >> to append)
echo "/opt/nfs 10.0.0.2(no_root_squash,rw,sync)" > /etc/exports

Open Firewall ports used by NFS

firewall-cmd --zone=public --add-port=2049/tcp --permanent
firewall-cmd --reload
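Note that port 2049 is enough for NFSv4; if your clients mount with NFSv3, rpcbind and mountd must also be reachable. firewalld ships service definitions that should cover this:

# NFSv3 additionally requires rpcbind (111) and mountd
firewall-cmd --zone=public --add-service=rpc-bind --add-service=mountd --permanent
firewall-cmd --reload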

Restart NFS services & check NFS status

service rpcbind start; service nfs start
service nfs status 
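You can confirm the share is actually being exported with showmount (also part of nfs-utils):

# list the exports as a client would see them
showmount -e localhost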

Install and configure NFS on the Client

Install dependencies & start rpcbind

yum install nfs-utils rpcbind
service rpcbind start

Create directory to mount NFS

# Directory we will mount our network filesystem on, on the client
mkdir -p /mnt/nfs
# The server IP address is 10.0.0.1 and the exported path is /opt/nfs; we mount it
# on the client at /mnt/nfs. This could be anything, like /mnt/randomdata-1234,
# as long as the folder exists.
mount 10.0.0.1:/opt/nfs /mnt/nfs/
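Before writing any test files, you can confirm the mount itself took:

# the NFS share should appear with its remote source and size
df -h /mnt/nfs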

Check that the NFS works

echo "meh testing.." > /mnt/nfs/testing.txt
cat /mnt/nfs/testing.txt
ls -al /mnt/nfs

You should see that the filesystem now has testing.txt on it, confirming you set up NFS correctly.

Make the NFS mount permanent by enabling the NFS service on the server and adding the mount to fstab on the client

This will cause the client to automount the filesystem at boot time

systemctl enable nfs-server
vi /etc/fstab
10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0

# OR you could simply append the configuration to the file (this is risky though,
# unless you are absolutely sure of what you are doing)
echo "10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0" >> /etc/fstab

If you reboot the client now, you should see that the NFS mount comes back.