Simple way to perform a body check on a website

So, I was testing with curl today. I know it’s possible to redirect output to /dev/null to suppress the page body, but that’s not much use if you want to check whether an HTML page actually loads, so I came up with some better body checks to use.

A basic body check using wc -l to count the lines of the page

 time curl https://www.google.com/ > 1; echo "non zero indicates server up and served content of n lines"; cat 1 | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  79771      0 --:--:--  0:00:02 --:--:-- 79756

real	0m2.162s
user	0m0.042s
sys	0m0.126s
non zero indicates server up and served content of n lines
2134
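As a related trick (not part of the original test, just a sketch of what curl itself can report): curl’s -w write-out flag gives you the status code, body size and timing in one go, with no temporary file. The URL is only an example.

```shell
#!/bin/bash
# Sketch: let curl report status, body size and timing itself,
# avoiding the temporary file. The URL is just an example.
URL="https://www.google.com/"
curl -s -o /dev/null \
     -w 'status=%{http_code} size=%{size_download} bytes time=%{time_total}s\n' \
     "$URL"
```

A non-zero size alongside a 200 status tells you the same thing as the line count, but in a single request.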

A body check for Google Analytics

$ time curl https://www.groundworkjobs.com/ > 1; echo "Checking for google analytics html elements string"; cat 1 | grep "www.google-analytics.com/analytics.js"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  76143      0 --:--:--  0:00:02 --:--:-- 76152

real	0m2.265s
user	0m0.042s
sys	0m0.133s
Checking for google analytics html elements string
				})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

Such commands might be useful when troubleshooting a cluster, for instance where one server is serving a more up-to-date version of the site (a different number of lines). There’s probably a better way to do this with ls and awk using the HTML file size, since the number of lines isn’t a very precise measure.

Check file size from the request

$ time curl https://www.groundworkjobs.com/ > 1; var=$(ls -al 1 | awk '{print $5}') ; echo "Page size is: $var bytes"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  79467      0 --:--:--  0:00:02 --:--:-- 79461

real	0m2.170s
user	0m0.048s
sys	0m0.111s
Page size is: 171876 bytes

Pretty simple, but you could take the one-liner even further: populate a variable called $var with the file size using ls and awk, and then use an if statement to check that $var is not 0, indicating the page is answering positively, or alternatively not answering at all.

Check the file size, populate a variable with it, then validate the variable

$ time curl https://www.groundworkjobs.com/ > 1; var=$(ls -al 1 | awk '{print $5}') ; echo "Page size is: $var bytes"; if [ "$var" -gt 0 ] ; then echo "The filesize was greater than 0, which indicates box is up but may be giving an error page"; fi
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  78915      0 --:--:--  0:00:02 --:--:-- 78950

real	0m2.185s
user	0m0.041s
sys	0m0.132s
Page size is: 171876 bytes
The filesize was greater than 0, which indicates box is up but may be giving an error page

The second exercise is not particularly useful or practical as a test: if the site were timing out, the script would take ages to reply, making the whole check pointless. But as a learning exercise, being able to assemble one-liners on the fly like this is an enjoyable, rewarding and useful investment of time and effort. Understanding such things is fundamental to automating tasks, in this case output filtering, variable creation, and subsequent validation logic. It’s a simple test, but the concept is exactly the same for any advanced automation procedure too.
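One way to tackle the timeout problem (a sketch with illustrative values, not part of the original one-liner) is to cap the request with curl’s --max-time flag and branch on the HTTP status code:

```shell
#!/bin/bash
# Sketch of a timeout-aware check. The URL and the 5-second budget
# are illustrative values only.
URL="https://www.groundworkjobs.com/"
code=$(curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$URL")
if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
    echo "Server up, answered with HTTP $code"
else
    # curl prints 000 when it never got a response (timeout, DNS failure, refused)
    echo "No usable answer (HTTP code: $code)"
fi
```

This way a dead site costs you at most five seconds rather than hanging the whole test.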

Whitelisting IPs in ModSecurity 1 and ModSecurity 2

Hey folks, so I have noticed that the new ModSecurity version 2 supports ‘chained’ rules, which means the way you whitelist IPs has been altered slightly.

Previously, in ModSecurity v1, whitelisting an IP was simpler, like:
SecRule REMOTE_ADDR "^11.22.33.44" phase:1,nolog,allow,ctl:ruleEngine=off

Now, in ModSecurity v2, the whitelist configuration must look something like this (note the mandatory unique id action):

SecRule REMOTE_ADDR "^11\.22\.33\.44$" phase:1,log,allow,ctl:ruleEngine=Off,id:999945

It’s kind of weird, but I hear that chains are much more powerful, so in that regard maybe v2 has something awesome to offer. I was head-scratching on this one for a good 20 minutes!

If you are receiving an error like ‘configtest failed’ when restarting Apache2 with ModSecurity enabled, this is probably the v2 fix you need.
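Since chained rules came up: here is a sketch of what a chain actually buys you in v2 syntax. The second rule adds an extra condition, so the whitelist only kicks in for a given path. The id, IP and path are made up for illustration, not taken from a live config:

```apache
# Hypothetical chained whitelist: disable the rule engine for 11.22.33.44,
# but only on requests under /api/ (all values are illustrative)
SecRule REMOTE_ADDR "@ipMatch 11.22.33.44" "id:999946,phase:1,t:none,nolog,pass,chain"
    SecRule REQUEST_URI "@beginsWith /api/" "ctl:ruleEngine=Off"
```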

Obscene Redundancy utilizing Rackspace Cloud Files

So, you may have noticed over the past weeks and months I have been a little bit quieter about the articles I have been writing. That’s mainly because I’ve been working on a new GitHub project which, although simple and lightweight, is actually rather outrageously powerful.

https://github.com/aziouk/obsceneredundancy

Imagine being able to take 15+ redundant replica copies of your files across 5 or 6 different datacentres: powered by the Rackspace Cloud Files API, but with a lot of the flexibility of the Bourne Again Shell (BASH).

This was actually quite a neat achievement, and I am pleased with the results. There are still some limitations to this redundant replica application, and there are a few bugs, but it is a great proof of concept showing what you can do with the API both quickly and cheaply (ish). Filesystems as a service will be the future, given some further innovation in worldwide network infrastructure, and it would only take a small breakthrough to rapidly alter the way that operating systems and machines boot and back up.

If you want to see the project and read the source code before I lay out and explain the entire process of writing this software, as well as how to deploy it with cron on Linux, then you need wait no longer. Revision 1 alpha is now tested, ready and working in 5 different datacentres.

You can actually toggle which datacentres you wish to use as well, so it is somewhat flexible. The important considerations are that there is no de-duplication yet, and that it uses tars and swiftly instead of querying the API directly. Since uploading a tar file directly through the API is relatively simple, I will probably implement it that way, as I have before, and get rid of swiftly in future iterations. Such a project is ideal for learning more about BASH, cron, APIs and the programmatic automation of filesystems, using functional programming and a division of labour between workers.

https://github.com/aziouk/obsceneredundancy

Test it (please note it may be a little buggy on different environments and there are no instructions yet):

git clone https://github.com/aziouk/obsceneredundancy

Cheers &

Best wishes,
Adam

Configuring Basic NFS Server+Client on RHEL7

So, you want to configure NFS? This isn’t too difficult to do. In the simplest setup you will need two servers: the NFS server, which hosts the content and attached disks, and the client, which mounts the filesystem of the NFS server over the network to a local mount point. In RHEL 7 this is remarkably easy to do.

Install and Configure NFS on the Server

Install dependencies

yum -y install nfs-utils rpcbind

Create a directory on the server

This is the directory we will share

 mkdir -p /opt/nfs

Configure access for the client server on IP 10.0.0.2

vi /etc/exports

# alternatively you can write the configuration directly like this, but be careful: > overwrites the whole file
echo "/opt/nfs 10.0.0.2(no_root_squash,rw,sync)" > /etc/exports

Open Firewall ports used by NFS

# 2049/tcp covers NFSv4; for NFSv3 you would also need rpcbind (111) and mountd (20048)
firewall-cmd --zone=public --add-port=2049/tcp --permanent
firewall-cmd --reload

Restart NFS services & check NFS status

service rpcbind start; service nfs start
service nfs status 

Install and configure NFS on the Client

Install dependencies & start rpcbind

yum install nfs-utils rpcbind
service rpcbind start

Create directory to mount NFS

# Directory we will mount our Network filesystem on the client
mkdir -p /mnt/nfs
# The server ip address is 10.0.0.1, with the path /opt/nfs, we want to mount it to the client on /mnt/nfs this could be anything like
# /mnt/randomdata-1234 etc as long as the folder exists;
mount 10.0.0.1:/opt/nfs /mnt/nfs/

Check that the NFS works

echo "meh testing.." > /mnt/nfs/testing.txt
cat /mnt/nfs/testing.txt
ls -al /mnt/nfs

You should see that the filesystem now has testing.txt on it, confirming you set up NFS correctly.

Make NFS mount permanent by enabling the service permanently, and adding the mount to fstab

This will cause the mount to come back automatically at boot time (enabling nfs-server is a server-side step, so the NFS service itself also survives a reboot)

systemctl enable nfs-server
vi /etc/fstab
10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0

# OR you could simply append the configuration to the file (take care to use >>
# and not >, which would wipe your existing fstab)
echo "10.0.0.1:/opt/nfs	/mnt/nfs	nfs	defaults 		0 0" >> /etc/fstab

If you reboot the client now, you should see that the NFS mount comes back.

Checking Load Balancer Connectivity & Automating it in some interesting ways

So, in a dream last night, I woke up realising I had forgotten to write my automated load balancer connectivity checker.

Basically, sometimes a customer will complain their site is down because their ‘load balancer is broken’! In many cases, this is actually due to a firewall on one of the nodes behind the load balancer, or an issue with the webserver application listening on the port. So I wrote a little piece of automation in the form of a BASH script that accepts a Load Balancer ID, uses the API to pull the server nodes behind that Load Balancer (including the ports being used to communicate), and then uses either netcat or nmap to check that port for connectivity. There were a few ways to achieve this, but the below is what I was happiest with.

#!/bin/bash

# Username used to login to control panel
USERNAME='mycloudusernamegoeshere'

# Find the APIKey in the 'account settings' part of the menu of the control panel
APIKEY="apikeygoeshere"

# Your Rackspace account number (the number that is in the URL of the control panel after logging in)
ACCOUNT=100101010

# Your Rackspace loadbalancerID
LOADBALANCERID=157089

# Rackspace LoadBalancer Endpoint
ENDPOINT="https://lon.loadbalancers.api.rackspacecloud.com/v1.0"

# This section simply retrieves and sets the TOKEN
TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

#   (UNUSED) METHOD 1: Extract IP addresses (currently assuming port 80 only)
#curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | xargs -i nmap -p 80 {}
#   (UNUSED) METHOD 2: Extract ports
# curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port


# I opted for using this method to extract the important detail
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].address | sed 's/"//g' > address.txt
curl -H "X-Auth-Token: $TOKEN" -H "Accept: application/json" -X GET "$ENDPOINT/$ACCOUNT/loadbalancers/$LOADBALANCERID/nodes" | jq .nodes[].port > port.txt

# Loop thru both output files sequentially, order is important
# WARNING script does not ignore whitespace

while read addressfile1 <&3 && read portfile2 <&4; do
   ncat "$addressfile1" "$portfile2"
done 3<address.txt 4<port.txt
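One fragile spot in the script above is the TOKEN extraction, which greps through pretty-printed JSON. A slightly sturdier sketch is to pull the field out with python directly; the access.token.id path is the Identity v2.0 response shape, and the response body below is a stand-in, not real output:

```shell
#!/bin/bash
# Sketch: parse the token from the Identity v2.0 JSON response directly.
# RESPONSE is a stand-in for the body returned by the tokens call.
RESPONSE='{"access":{"token":{"id":"example-token-id"}}}'
TOKEN=$(echo "$RESPONSE" | python3 -c \
    'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$TOKEN"
```

This survives changes in pretty-printer indentation, which the grep/cut approach does not.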

Output looks a bit like:

# ./lbtest.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5143 100 5028 100 115 4731 108 0:00:01 0:00:01 --:--:-- 4734
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 488 0 --:--:-- --:--:-- --:--:-- 488
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 225 100 225 0 0 679 0 --:--:-- --:--:-- --:--:-- 681
Ncat: No route to host.
Ncat: Connection timed out.

I plan to add some additional support that will check the load balancer is up, AND the servicenet connection between the cloud servers.

Please note that this script must be run on a machine with access to the servicenet network, in the same Rackspace datacentre, to be able to check servicenet connectivity of servers. The script can give false positives if strict firewall rules are set up on the cloud server nodes behind the load balancer. It's kind of an alpha draft, but I thought I would share it as a proof of concept.

You will need to download and install jq to use it. To download jq, please see: https://stedolan.github.io/jq/download/
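As a side note on the jq usage: the two temporary files must stay in the same order, which is fragile. jq can emit the address and port as one line per node, which removes the ordering and whitespace problems entirely. A sketch (the JSON shape is assumed to match the load balancer /nodes response):

```shell
#!/bin/bash
# Sketch: extract "address port" pairs in one jq pass instead of two files.
# The JSON shape is assumed to match the load balancer /nodes response.
extract_nodes() {
    jq -r '.nodes[] | "\(.address) \(.port)"'
}

# Example input standing in for the API response
echo '{"nodes":[{"address":"10.0.0.5","port":80},{"address":"10.0.0.6","port":443}]}' \
  | extract_nodes | while read -r address port; do
        echo "would check $address:$port"   # e.g. ncat -z "$address" "$port"
    done
```

Each line already pairs the right address with the right port, so no parallel files are needed.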

Testing your server’s available bandwidth & DDoS resiliency with iperf

So, if you buy a server with, say, a 1.6Gbps connection, as in this customer’s case, you might want to test that you actually have the bandwidth you need, for instance to be resilient against small DoS and DDoS attacks in the sub-500Mbit to 1000Mbit range.

Here is how I did it (quick summary):


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 53898 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50460 connected with somedestipiwanttospeedtest-censored port 80


[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85471 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85471 datagrams
[SUM]  0.0-10.0 sec   240 MBytes   201 Mbits/sec
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[  4] WARNING: did not receive ack of last datagram after 10 tries.
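Note that for UDP tests the client-side figures are the send rate, not what the far end actually received; the missing-ack warnings above hint at that. You can sanity-check iperf’s per-stream number from the datagram count it prints. A quick back-of-the-envelope in shell, using the figures from the run above:

```shell
#!/bin/bash
# Reconstruct the per-stream bandwidth from iperf's own numbers:
# 85471 datagrams of 1470 bytes sent over 10 seconds.
datagrams=85471
bytes_per_datagram=1470
seconds=10
bits=$((datagrams * bytes_per_datagram * 8))
echo "$((bits / seconds / 1000000)) Mbit/s"   # ~100, matching the ~101 Mbits/sec reported
```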


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 10 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 12] local someipsrc port 50725 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 40410 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 51075 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 58020 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50056 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 57017 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 49473 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 50491 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 40974 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 38348 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 12]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 12] Sent 81355 datagrams
[  5]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  5] Sent 81448 datagrams
[  6]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  6] Sent 81482 datagrams
[  4]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  4] Sent 81349 datagrams
[  3]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  3] Sent 81398 datagrams
[  7]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  7] Sent 81443 datagrams
[  8]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  8] Sent 81408 datagrams
[  9]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  9] Sent 81421 datagrams
[ 10]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 10] Sent 81404 datagrams
[ 11]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[ 11] Sent 81427 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec


It looks like you are getting the bandwidth you desire; when repeating the test with 20 connections, I can see the bandwidth hit a total of 2.01Gbit/sec:

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 20 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 22] local someipsrc port 44231 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 55259 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 49519 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 45301 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 48654 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 33666 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 33963 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 39593 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 36229 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 36331 connected with somedestipiwanttospeedtest-censored port 80
[ 14] local someipsrc port 54622 connected with somedestipiwanttospeedtest-censored port 80
[ 13] local someipsrc port 36159 connected with somedestipiwanttospeedtest-censored port 80
[ 12] local someipsrc port 53881 connected with somedestipiwanttospeedtest-censored port 80
[ 15] local someipsrc port 43221 connected with somedestipiwanttospeedtest-censored port 80
[ 16] local someipsrc port 60284 connected with somedestipiwanttospeedtest-censored port 80
[ 17] local someipsrc port 49735 connected with somedestipiwanttospeedtest-censored port 80
[ 18] local someipsrc port 43866 connected with somedestipiwanttospeedtest-censored port 80
[ 19] local someipsrc port 44631 connected with somedestipiwanttospeedtest-censored port 80
[ 20] local someipsrc port 56852 connected with somedestipiwanttospeedtest-censored port 80
[ 21] local someipsrc port 59338 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 22]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 22] Sent 85471 datagrams
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85449 datagrams
[  7]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  7] Sent 85448 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85448 datagrams
[  6]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  6] Sent 85449 datagrams
[  5]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  5] Sent 85448 datagrams
[  8]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  8] Sent 85453 datagrams
[  9]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  9] Sent 85453 datagrams
[ 10]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 10] Sent 85454 datagrams
[ 11]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 11] Sent 85456 datagrams
[ 14]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 14] Sent 85457 datagrams
[ 13]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 13] Sent 85457 datagrams
[ 12]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 12] Sent 85457 datagrams
[ 15]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 15] Sent 85460 datagrams
[ 16]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 16] Sent 85461 datagrams
[ 17]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 17] Sent 85462 datagrams
[ 18]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 18] Sent 85464 datagrams
[ 19]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 19] Sent 85467 datagrams
[ 20]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 20] Sent 85467 datagrams
[ 21]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 21] Sent 85467 datagrams
[SUM]  0.0-10.0 sec  2.34 GBytes  2.01 Gbits/sec

The last test I did used only 2 connections at 500Mbit each:

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 500m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 60841 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 51495 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  4] Sent 406935 datagrams
[  3]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  3] Sent 406933 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec

Custom Error pages for Linux Apache2 and the Rackspace Load Balancer

This article describes how to configure custom error pages for Apache2 and the Zeus Load Balancer through the Rackspace API.

I have noticed that this question comes up every now and then. Most people will define the error page within Apache2, but if you’re not able to do that, and want a more helpful custom error page, perhaps in the same XHTML layout as your website, then a custom error page on the Load Balancer may be useful to you.

In apache2 this is traditionally set using the ErrorDocument directive.

The most common error pages are:

400 — Bad Request
The server did not understand the request due to bad syntax.

401 — Unauthorized
The visitor must be authorized (e.g., have a password) to access the page.

403 — Forbidden
The server understood the request but was unable to fulfil it. This could be due to an incorrect username and/or password, or because the server requires different input.

404 — Not Found
The server cannot find a matching URL.

500 — Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.

In your Apache2 httpd.conf you will need to add some directives to configure custom error pages. If you have cloud servers behind a Load Balancer and one of them encounters an error, the custom error page is sent to the LB and then relayed to the customer, instead of being served directly to the customer as with a traditional Apache2 webserver. To form your error page directives, edit your httpd conf file (in sites-enabled, httpd.conf or similar) and add the following:

ErrorDocument 404 /my404page.html
ErrorDocument 500 /myphppage.php

ErrorDocument 403 /my-custom-forbidden-error-page.html
ErrorDocument 400 /my-bad-request-error-page.html

Then ensure that the error pages defined (i.e. my404page.html, my-custom-forbidden-error-page.html) are placed in the correct directory, i.e. the website’s root.

I.e. if your DocumentRoot is /var/www/html, then my404page.html should go in there.
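Once the pages are in place and Apache has been reloaded, you can check that the custom page is actually served with curl. A quick sketch; the hostname, path and marker string are all placeholders for your own site and page:

```shell
#!/bin/bash
# Sketch: request a non-existent path and check the custom 404 page comes back.
# The hostname and the marker string are placeholders.
URL="http://www.example.com/this-page-does-not-exist"
body=$(mktemp)
curl -s -o "$body" --max-time 10 -w 'HTTP %{http_code}\n' "$URL"
grep -q "some text unique to my404page.html" "$body" && echo "custom 404 page served"
rm -f "$body"
```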

For some people, for example those using Load Balancer ACLs, the blocked customer/client/visitor won’t be able to see or contact your server once it’s added to the LB DENY list. Therefore you might want to set up another error page on the LB to tell customers who are accidentally or wrongly blocked what process to follow to remove the block, i.e. contact the admin email, etc.

To set a custom error page on the load balancer you can use the API like so. You will need two files to achieve this, which are detailed below:

errorpage.json (file)

{"errorpage":
{"content":"\n\n   Warning- Your IP has been blocked by the Load Balancer ACL, or there is an error contacting the servers behind the load balancer. Please contact [email protected] if you believe you have been blocked in error. \n\n"}
}

customerror.sh (file)

#!/bin/bash
# Make sure to set your customer account number, if you don't know what it is, it's the number that appears in the URL
# after logging in to mycloud.rackspace.com
# ALSO set your username and apikey, visible from the 'account settings' part of rackspace mycloud control panel

USERNAME='mycloudusernamehere'
APIKEY='mycloudapikeygoeshere'
ACCOUNTNUMBER='1001100'

# The load balancer ID, shown on the load balancer details page in the mycloud control panel
LOADBALANCERID='157089'


API_ENDPOINT="https://lon.loadbalancers.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"


# Store the API auth password response in the variable TOKEN

TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`

# Execute the API call to update the load balancer error page, submitting as a file the errorpage.json to be used

curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-d @errorpage.json -X PUT -H "content-type: application/json" \
"$API_ENDPOINT/loadbalancers/$LOADBALANCERID/errorpage"

That’s it; this concludes the discussion/tutorial on how to set basic error pages, both in Apache2 (without a load balancer) and on the load balancer as described above. Please note, it appears you can only set a single error page for the load balancer, as opposed to one error page for each HTTP code.

If anyone sees any differences or errors though, please let me know!

Best wishes,
Adam