Configuring Basic NFS Server+Client on RHEL7

So, you want to configure NFS? This isn’t too difficult to do. In the simplest setup you will need two servers: one acting as the NFS server, which hosts the content and attached disks, and a second acting as the client, which mounts the NFS server’s filesystem over the network onto a local mount point. In RHEL 7 this is remarkably easy to do.

Install and Configure NFS on the Server

Install dependencies

yum -y install nfs-utils rpcbind

Create a directory on the server

This is the directory we will share

mkdir -p /opt/nfs

Configure access for the client server on IP 10.0.0.2

vi /etc/exports

# alternatively you can write the configuration directly with a redirect, but I don't recommend it ('>' overwrites any existing exports)
echo "/opt/nfs 10.0.0.2(no_root_squash,rw,sync)" > /etc/exports

Open Firewall ports used by NFS

firewall-cmd --zone=public --add-port=2049/tcp --permanent
firewall-cmd --reload
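
Alternatively, firewalld on RHEL 7 ships service definitions for NFS, which is handy if you also need the NFSv3 helper ports (mountd and rpcbind) rather than just 2049:

firewall-cmd --zone=public --add-service=nfs --permanent
# Only needed for NFSv3 clients:
firewall-cmd --zone=public --add-service=mountd --permanent
firewall-cmd --zone=public --add-service=rpc-bind --permanent
firewall-cmd --reload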

Restart NFS services & check NFS status

service rpcbind start; service nfs start
service nfs status 
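
The service commands above work through RHEL 7's compatibility wrapper; the native systemd equivalents look like this:

systemctl start rpcbind nfs-server
systemctl status nfs-server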

Install and configure NFS on the Client

Install dependencies & start rpcbind

yum install nfs-utils rpcbind
service rpcbind start

Create directory to mount NFS

# Directory where we will mount the network filesystem on the client
mkdir -p /mnt/nfs
# The server IP address is 10.0.0.1 and the exported path is /opt/nfs. We want to mount it on the client at /mnt/nfs;
# the mount point could be anything (e.g. /mnt/randomdata-1234) as long as the directory exists.
mount 10.0.0.1:/opt/nfs /mnt/nfs/

Check that the NFS works

echo "meh testing.." > /mnt/nfs/testing.txt
cat /mnt/nfs/testing.txt
ls -al /mnt/nfs

You should see the filesystem now has testing.txt on it, confirming you set up NFS correctly.

Make the NFS mount permanent by enabling the service at boot, and adding the mount to fstab

Enabling nfs-server (on the server) makes the export persist across reboots, and the fstab entry (on the client) mounts the filesystem automatically at boot time

systemctl enable nfs-server
vi /etc/fstab
10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0

# OR you could simply append the configuration to the file with a redirect (this is really dangerous though,
# unless you are absolutely sure what you are doing)
echo "10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0" >> /etc/fstab

If you reboot the client now, you should see that the NFS mount comes back.

Using Rackspace Cloud Files, swiftly and cron to Backup Data to multiple data-centres cheaply

So, you have some really important data, so important that 99.99% durability is not enough for you. One solution is to keep multiple copies in multiple datacentres. Most enterprise backup schemes have an on-site copy, an off-site copy, and an archival copy. What I’m going to show here is how to make 4 copies of your data, in 4 different datacentres around the world. This provides very high storage redundancy and greatly reduces the likelihood of data loss. Although it costs a bit more, this kind of solution may suit many small, medium and large businesses, depending on the size of the data and the importance of redundancy. If you don’t have many files to back up, perhaps a small CD’s worth, it will be very inexpensive. However, because of the way Cloud Files is billed, writing from a server in London to Cloud Files in Sydney, Chicago or Dallas costs bandwidth, so it’s very important to consider bandwidth costs when using three additional Cloud Files endpoints outside the local datacentre region. Which is essentially what we are doing in this guide.

Setup swiftly

yum install python-devel python-pip -y
pip install swiftly eventlet 

Create your swiftly environments (setting the name for each file)

==> /root/.swiftly-dfw.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = dfw

==> /root/.swiftly-iad.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = iad

==> /root/.swiftly-ord.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = ord

==> /root/.swiftly-syd.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = syd
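
Before wiring these into a script, you can sanity-check each environment file. As I recall the swiftly CLI, a bare get performs a GET on the account and lists your containers (hedged, from memory of the tool):

# Should list the containers visible in each region
swiftly --conf ~/.swiftly-dfw.conf get
swiftly --conf ~/.swiftly-syd.conf get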

Create your Script

#!/bin/bash
# Adam Bull, Rackspace UK
# May 17, 2016


# The uploads can run sequentially or in parallel; I'm not sure which is better yet. Append & to each command for parallel.
# This backs up the /documents directory into the 'managed_backup' Cloud Files container in 4 datacentres: DFW, IAD, ORD and SYD

swiftly --verbose --conf ~/.swiftly-dfw.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-iad.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-ord.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-syd.conf --concurrency 100 put -i /documents /managed_backup

Because the other 3 endpoints are in different datacentres, we can't use ServiceNet, so we pass the --no-snet option to swiftly for those, as above. The first command omits it, which assumes the server running the backup is itself in DFW.

Execute your script

chmod +x multibackup.sh
./multibackup.sh

This is obviously a basic backup system and script, and it is not for production use (yet); it's an alpha project I started today. The cool thing is that it works, and quite nicely, although it is far from a finished, workable script.

Once the script is made, you can simply add it to crontab -e as you would usually. Make sure the user cron executes it as has access to the .conf files in their home directory!
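
For example, a root crontab entry like the following would run the backup nightly at 02:30 (the script path and log file here are just examples):

# m h dom mon dow command
30 2 * * * /root/multibackup.sh >> /var/log/multibackup.log 2>&1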

Checking Load Balancer Connectivity & Automating it in some interesting ways

So, in a dream last night, I woke up realising I had forgotten to write my automated load balancer connectivity checker.

Basically, sometimes a customer will complain their site is down because their ‘load balancer is broken’! In many cases this is actually due to a firewall on one of the nodes behind the load balancer, or an issue with the webserver application listening on the port. So I wrote a little piece of automation in the form of a BASH script that accepts a Load Balancer ID, uses the API to pull the server nodes behind that Load Balancer (including the ports being used to communicate), and then uses either netcat or nmap to check each port for connectivity. There were a few ways to achieve this, but the below is what I was happiest with.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusernamegoeshere'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY="apikeygoeshere"

# Your Rackspace account number (the number that is in the URL of the control panel after logging in)
ACCOUNT=100101010

# Your Rackspace loadbalancerID
LOADBALANCERID=157089

# Rackspace LoadBalancer Endpoint
ENDPOINT="https://lon.loadbalancers.api.rackspacecloud.com/v1.0"

# This section simply retrieves and sets the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

#   (UNUSED) METHOD 1: Extract IP addresses (currently assuming port 80 only)
#curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | xargs -i nmap -p 80 {}
#   (UNUSED) Extract ports
# curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port | xargs -i nmap -p 80 {}


# I opted for using this method to extract the important detail
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | sed 's/"//g' > address.txt
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port > port.txt

# Loop through both output files in lockstep; order is important
# WARNING: the script does not ignore whitespace

while read addressfile1 <&3 && read portfile2 <&4; do
   ncat $addressfile1 $portfile2
done 3<address.txt 4<port.txt

Output looks a bit like;

# ./lbtest.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5143 100 5028 100 115 4731 108 0:00:01 0:00:01 --:--:-- 4734
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 488 0 --:--:-- --:--:-- --:--:-- 488
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 679 0 --:--:-- --:--:-- --:--:-- 681
Ncat: No route to host.
Ncat: Connection timed out.
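
As noted above, the loop doesn't ignore whitespace. Here's a sketch of a more tolerant variant that pairs the two files with paste and tests each port with bash's built-in /dev/tcp instead of ncat (it assumes the same address.txt and port.txt files produced above):

paste address.txt port.txt | while read addr port; do
  # Skip blank or partial lines
  [ -z "$addr" ] || [ -z "$port" ] && continue
  if timeout 3 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
    echo "$addr:$port open"
  else
    echo "$addr:$port closed or filtered"
  fi
done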

I plan to add some additional support that will check the load balancer is up, AND the servicenet connection between the cloud servers.

Please note that this script must be run on a machine with access to the ServiceNet network, in the same Rackspace datacentre, to be able to check ServiceNet connectivity of the servers. The script can give false positives if strict firewall rules are set up on the cloud server nodes behind the load balancer. It's kind of an alpha-draft, but I thought I would share it as a proof of concept.

You will need to download and install jq to use it. To download jq please see; https://stedolan.github.io/jq/download/
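
On CentOS 7 you can also install jq from the EPEL repository rather than downloading a binary (on RHEL proper you may need to enable EPEL manually):

yum install epel-release -y
yum install jq -y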

Windows Password reset for Rackspace Cloud Servers

In the previous articles, Using API and BASH to validate changing conditions and Reset windows administrator password using rescue mode without nova-agent, I explained the steps to reset the password of a Windows VM instance by modifying the SAM file from a Linux ‘rescue’ image in the cloud, and how to automate the checks around it in BASH through the API. The checks specifically waited until the server entered rescue, then retrieved the IPv4 address, connecting only when the rescue server had finished building.

That way the automation handles the delay, as well as setting and retrieving the access credentials and IP address to use each time. Here is the complete script. Please note that backticks are deprecated, but I’m a bit ‘oldskool’. This is a rough alpha, but it works really nicely. In testing it consistently allows ourselves, or our customers, to reset a Windows Cloud Server password when a customer loses access to it and cannot use other Rackspace services to do the reset. This effectively turns a useless server back into a usable one again and saves a lot of time.

#!/bin/bash
# Adam Bull, Rackspace UK
# This script automates the resetting of windows passwords
# Arguments $1 == username
# Arguments $2 == apikey
# Arguments $3 == ddi
# Arguments $4 == instanceid

echo "Rackspace windows cloud server Password Reset"
echo "written by Adam Bull, Rackspace UK"
sleep 2
PASSWORD=39fdfgk4d3fdovszc932456j2oZ

# Provide an instance uuid to rescue and reset windows password

# These can also be passed as arguments $1..$4 (see header); the values below are the defaults
USERNAME=${1:-mycloudusernamehere}
APIKEY=${2:-myapikeyhere}
# DDI is the 'customer ID'; if you don't know this, log in to the control panel and check the number in the URL
DDI=${3:-10010101}
# The instance uuid you want to rescue
INSTANCE=${4:-ca371a8b-748e-46da-9e6d-8c594691f71c}

# INITIATE RESCUE PROCESS

nova  --os-username $USERNAME --os-auth-system=rackspace  --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure rescue --password "$PASSWORD" --image 7fade26a-0cca-415f-a988-49c021768fca $INSTANCE

# LOOP UNTIL STATE DETECTED AS RESCUED

STATE=0
until [[ $STATE == rescued ]]; do
echo "start rescue check"
STATE=`nova --os-username $USERNAME --os-auth-system=rackspace  --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure show $INSTANCE | grep rescued | awk '{print $4}'`

echo "STATE =" $STATE
echo "sleeping.."
sleep 5
done

# EXTRACT PUBLIC ipv4 FROM INSTANCE

IP=`nova --os-username $USERNAME --os-auth-system=rackspace  --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure show $INSTANCE | grep public | awk '{print $5}' | grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])'`
echo "IP = $IP"

# UPDATE AND INSTALL RESCUE TOOLS AND RESET WINDOWS PASS
# Set environment locally
yum install sshpass -y

# Execute environment remotely
echo "Performing Rescue..."
sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no root@"$IP" 'yum update -y; yum install ntfs-3g -y; mount /dev/xvdb1 /mnt; curl li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm -o /root/nux.rpm; rpm -Uvh /root/nux.rpm; yum install chntpw -y; cd /mnt/Windows/System32/config; echo -e "1\ny\n" | chntpw -u "Administrator" SAM'

echo "Unrescuing in 100 seconds..."
sleep 100
nova  --os-username $USERNAME --os-auth-system=rackspace  --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure unrescue $INSTANCE
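
With the defaults wired up as above, you can run it as-is or pass your own values; assuming you saved it as reset-windows.sh (the filename here is just an example):

chmod +x reset-windows.sh
./reset-windows.sh mycloudusername myapikey 10010101 ca371a8b-748e-46da-9e6d-8c594691f71c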

Thanks again to my friend Cory who gave me the instructions, I simply automated the process to make it easier and learned something in the process 😉

Disabling SELinux

Today we had a customer who needed to perform a first generation to next generation server migration; however, they cannot have SELinux enabled during this process.

I explained to the customer how to disable it; it’s pretty simple.

vi /etc/sysconfig/selinux

SELINUX=enforcing

Needs to be changed to

SELINUX=disabled

Job done. A simple one, but nonetheless important stuff. (On RHEL/CentOS, /etc/sysconfig/selinux is a symlink to /etc/selinux/config, so both paths refer to the same file.) If you wanted to automate this, it would look something like this;

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

This sed one-liner simply swaps SELINUX=enforcing to SELINUX=disabled, pretty simple stuff. It works on CentOS 6 and 7 anyway; it should work on CentOS 5 too, but I can’t guarantee it.
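
Bear in mind the config file change only takes effect at the next boot. If you need SELinux out of enforcing mode immediately, you can switch to permissive on the fly:

# Permissive until reboot (the config file controls the boot-time state)
setenforce 0
getenforce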