Installing Nova Agent Linux on Xen Guest VM

So, every now and then a customer wants to use a custom image with our services. The thing is, for the build to successfully complete and for the VM to get networking, it needs to be able to communicate with lil ole nova-agent.

PLEASE ALSO SEE http://www.haxed.me.uk/index.php/2016/10/06/rackspace-cloud-server-not-coming-building/

1. Download the nova-agent-linux

cd ~/
mkdir nova-agent
cd nova-agent
wget http://boot.rackspace.com/files/nova-agent/nova-agent-Linux-x86_64-1.39.0.tar.gz

2. Extract and run installer script

tar xzf nova-agent-Linux-x86_64-1.39.0.tar.gz
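
The tarball ships with an install script; running it from wherever the archive unpacked is usually along these lines (the directory and script name here are what I'd expect for this version, so check what tar actually extracted):

cd nova-agent-Linux-x86_64-1.39.0
./installer.sh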

3. Inject LSB headers into the script (if not already there)

 
sed '1i### BEGIN INIT INFO\n# Provides: Nova-Agent\n# Required-Start: $remote_fs $syslog\n# Required-Stop: $remote_fs $syslog\n# Default-Start: 2 3 4 5\n# Default-Stop: 0 1 6\n# Short-Description: Start daemon at boot time\n# Description: Enable service provided by daemon.\n### END INIT INFO\n' /usr/share/nova-agent/1.39.0/etc/generic/nova-agent > /usr/share/nova-agent/1.39.0/etc/generic/nova-agent.lsb
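
A quick sanity check that the headers actually landed at the top of the new file:

head -n 12 /usr/share/nova-agent/1.39.0/etc/generic/nova-agent.lsb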

4. Copy the init script into place and make it executable

cp -av /usr/share/nova-agent/1.39.0/etc/generic/nova-agent.lsb /etc/init.d/nova-agent
chmod +x /etc/init.d/nova-agent

5. Set the script to start automatically in the event of a reboot.

# RHEL, CentOS, Fedora, OpenSuse
chkconfig nova-agent on

# Debian, Ubuntu
update-rc.d -f nova-agent defaults
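
To double-check it took, list the runlevels on the RHEL family, or just look for the rc symlinks on Debian/Ubuntu:

# RHEL, CentOS, Fedora, OpenSuse
chkconfig --list nova-agent

# Debian, Ubuntu
ls /etc/rc*.d/ | grep nova-agent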


Testing if Nova Agent is available on a server when an image build is not getting networking

So, we occasionally get customers who are having issues creating a new server from an image of a previous server. Normally this is caused by nova-agent not being set to start on boot, or xe-linux-distribution being missing from the VM. It's possible to check whether a virtual machine is configured correctly, and I put together this little piece with the help of my colleague and friend Zoltan.

1. Checking if Nova Agent is installed and can be started

# /etc/init.d/nova-agent start

2. Check if nova-agent and xe-linux-distribution are running on the VM

# ps auxf | grep nova
# ps auxf | grep xe-daemon

If processes called nova-agent or xe-daemon come back, then you know they are running OK.

3. Ensure that both services do start during boot

# chkconfig nova-agent on
# chkconfig xe-linux-distribution on

For Debian and Ubuntu systems you may need to use:

update-rc.d -f nova-agent defaults

Once you confirm that these services are running, it's safe to take an image and create a new VM with it. These 2 processes need to be running because, when the new server is built, the VM gets its networking set up through xenstore and nova-agent, which retrieve the correct IP, subnet and gateway and write them into the network interfaces file.
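
If you're curious what the hypervisor is actually handing over, you can poke around xenstore from inside the guest once the xe guest utilities are installed. Something along these lines should show it (vm-data/networking is my assumption of where the network config is published, so adjust if your xenstore layout differs):

xenstore-ls vm-data
xenstore-ls vm-data/networking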

– A

Manually Creating a Bootable CBS using NOVA

A customer was getting a bad error: Block Device Mapping is Invalid.

It was because the CBS volume wasn't building from the image in time, and the build was timing out. So the solution was pretty simple: add the CBS first:


 supernova customer volume-create 55 --volume-type=SSD --display-name=starating --image-id=5674345-dfgegdf-34553531123
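
Once the volume shows up as available, the server can be booted straight from it rather than from the image, so nova never has to clone the image into the volume during the build. A rough sketch (the flavor name and device mapping here are examples, adjust to the customer's setup):

supernova customer boot --flavor performance1-4 --block-device-mapping vda=<volume-uuid>:::0 newserver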

Oh, thanks Aaron dude. You rock.

Deleting All the Files in a Cloud Container

Hey. So if only I had a cake for every customer that asked if we could delete all of their cloud files in a single container for them (I'd be really really really fat so maybe that is a bad idea). A dollar though, now there's a thought.

On that note, here is a dollar. Probably the best dollar you’ll see today. You could probably do this with php, bash or swiftly, but doing it *THIS* way is also awesome, and I learnt (although some might say learned) something. Here is how I did it. I should also importantly thank Matt Dorn for his contributions to this article. Without him this wouldn’t exist.

Step 1. Install Python, pip

yum install python python-pip
apt-get install python python-pip

Step 2. Install Pyrax (the Rackspace Python OpenStack library)

pip install pyrax

Step 3. Install Libevent (the --prefix below assumes you are working inside a virtualenv, so $VIRTUAL_ENV is set; otherwise drop it or point it somewhere sensible)

curl -L -O https://github.com/downloads/libevent/libevent/libevent-2.0.21-stable.tar.gz
tar xzf libevent-2.0.21-stable.tar.gz
cd libevent-2.0.21-stable
./configure --prefix="$VIRTUAL_ENV"
make && make install
cd $VIRTUAL_ENV/..

Step 4. Install Greenlet and Gevent


pip install greenlet
pip install gevent

Step 5. Check gevent library loading in Python Shell

python
import gevent

If the import comes back without errors, the gevent lib works OK.

Step 6. Create the code to delete all the files

#!/usr/bin/python
# -*- coding: utf-8 -*-
# gevent's monkey patching must happen before pyrax is imported,
# so that its network calls become cooperative.
from gevent import monkey
from gevent.pool import Pool
from gevent import Timeout
monkey.patch_all()
import pyrax


def delete_object(obj):
    # added timeout of 5 seconds just in case a delete hangs
    with Timeout(5, False):
        try:
            obj.delete()
        except Exception:
            # ignore failures; re-running the script will catch stragglers
            pass


if __name__ == '__main__':
    # pool of 100 concurrent delete jobs
    pool = Pool(100)

    pyrax.set_setting('identity_type', 'rackspace')
    pyrax.set_setting('verify_ssl', False)
    # Rackspace credentials go here. Region: LON, username: mycloudusername, apikey: myrackspaceapikey.
    pyrax.set_setting('region', 'LON')
    pyrax.set_credentials('mycloudusername', 'myrackspaceapikey')

    cf = pyrax.cloudfiles
    # Remember to set the container correctly (which container to delete all files within?)
    container = cf.get_container('testing')
    objects = container.get_objects(full_listing=True)

    # spawn a delete for every object, then wait for the pool to drain
    for obj in objects:
        pool.spawn(delete_object, obj)
    pool.join()

It's well worth noting that this can also be used to list all of the objects, but that is something for later…

Step 7. Execute (not me the script!)

The timeout can be adjusted, and the script can be run several times so that any files that were missed get another deletion attempt.
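
Assuming you saved the code above as something like delete-container-files.py (the filename is just an example), kicking it off is simply:

python delete-container-files.py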

Retrieving Image Meta Data and setting vm_mode

So, today we had a customer that was using a custom VHD/VDI with his server and it wasn't working. I knew it would be one of 3 things:

1) Incorrect vm_mode flag
2) Image size too big for the flavor's 'min disk'
3) Image using journaling

As it turned out, this customer was using journaling. However, if the image had been a PVHVM type, here is how I would correctly set the mode:

supernova customer image-meta vfd09a81-g431-4279-9467-5e4284944b53 set vm_mode=hvm
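
To actually retrieve the image metadata and see what is set (before or after the change), image-show does the job on the nova client, though newer clients push you towards glance for this:

supernova customer image-show vfd09a81-g431-4279-9467-5e4284944b53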

Pretty simple fix.

Extracting IPv4 addresses from a list of machine IDs

for id in $(cat list.txt); do supernova lon show $id | awk '/accessIPv4/ {print $4}'; done >> iplist.txt

It's a pretty simple hack. Thanks to Jan for this.

Of course you need to extract the machine IDs to run the above statement. Here is how I did that:

nova list --tenant 10010101 > list.txt
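
nova list prints a whole ASCII table rather than just IDs, so if you want list.txt to contain only the UUIDs (instead of relying on the loop quietly choking on the table borders), a small awk filter like this works, assuming the default layout with the ID in the second column:

nova list --tenant 10010101 | awk '/\|/ && $2 != "ID" {print $2}' > list.txt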

Pretty cool. And yeah I know, I put step 1 after step 2, but you get the idea!