Troubleshooting Rackspace CDN not serving files

A customer came to us with a strange issue with their CDN. I wanted to document it so that it's understood why this happened.

The customer is using two TLS origins and HTTP/2; why that is a problem will become clear shortly. This is also a general method of troubleshooting: replicate the behaviour of the CDN against the origin using host headers. It can be applied whatever the problem, to understand the HTTP code returned by the origin, which at least half of the time turns out to be the cause. (The origin is the cloud server your CDN is backed by.)

Question
Hi,
We are currently experiencing some issues with the Cloud CDN. We are using it for our CSS and images, and now everything is returning HTTP/503 SERVICE UNAVAILABLE. If you want to test, you may use this URL:
https://cdn.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

This is supposed to deliver this file:
https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css

Is something misconfigured, or are there some issues on the appliance?

Answer

First, we confirm the origin is up:

# curl -I https://originserver.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css
HTTP/1.1 200 OK
Date: Tue, 11 Oct 2016 08:45:42 GMT
Server: Apache
Last-Modified: Tue, 11 Oct 2016 06:57:53 GMT
ETag: "ed26-53e91653c61d0"
Accept-Ranges: bytes
Content-Length: 60710
Vary: Accept-Encoding
Cache-Control: max-age=31536000, public
Expires: Wed, 11 Oct 2017 08:45:42 GMT
Access-Control-Allow-Origin: *
X-Frame-Options: SAMEORIGIN
Content-Type: text/css

The origin is the cloud server the CDN pulls from, and as we can see, the site is up. So what is causing the issue? When the CDN fetches from the origin, it sends a Host header for the CDN domain, so the origin has to serve both hostnames. The reason is that the CDN uses CNAME hostnames to identify which CDN is which, i.e. which path such as /media/ directs to which static origin subdomain.

The best way to look further at the situation is to query the origin (the subdomain you've associated with the CDN subdomain that Rackspace gives you) while sending the Host header for the CDN URL. When we do, we get:

root@myweb:~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdn.customerdomain.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:38 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

As we can see, we get an odd HTTP 421 Misdirected Request.

# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: mycdnname1.scdn4.secure.raxcdn.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:18:06 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

~# curl -I https://origin.customerdomain.com/static/version1476169182/adminhtml/Magento/backend/nb_NO/extjs/resources/css/ext-all.min.css -H 'host: cdncustomercname.customerdomain.com'
HTTP/1.1 421 Misdirected Request
Date: Tue, 11 Oct 2016 08:17:45 GMT
Server: Apache
Content-Type: text/html; charset=iso-8859-1

https://httpd.apache.org/docs/2.4/mod/mod_http2.html

Looking at the mod_http2 documentation, this issue is caused by different TLS configurations across your domains: HTTP/2 tries to reuse the same connection for both hosts, which will not work if the TLS configurations are not identical on the origin cloud-server side, so Apache answers with 421 Misdirected Request.

You just need to disable HTTP/2, or make the TLS configuration identical across the virtual hosts on the Apache2 side. I hope this clarifies things; of course, if you have additional questions, comments or concerns, please don't hesitate to reach out to us, we are here to help!
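For reference, here is a minimal sketch of the second option. The virtual host names and certificate paths are assumptions rather than the customer's real config; the point is simply that both TLS vhosts carry identical TLS parameters, so mod_http2 can safely reuse a connection across them:

<VirtualHost *:443>
    ServerName origin.customerdomain.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/customerdomain.crt
    SSLCertificateKeyFile /etc/ssl/private/customerdomain.key
    SSLProtocol           all -SSLv3 -TLSv1
    SSLCipherSuite        HIGH:!aNULL:!MD5
    Protocols             h2 http/1.1
</VirtualHost>

<VirtualHost *:443>
    ServerName cdn.customerdomain.com
    SSLEngine on
    # identical TLS parameters to the vhost above -- this is what avoids the 421
    SSLCertificateFile    /etc/ssl/certs/customerdomain.crt
    SSLCertificateKeyFile /etc/ssl/private/customerdomain.key
    SSLProtocol           all -SSLv3 -TLSv1
    SSLCipherSuite        HIGH:!aNULL:!MD5
    Protocols             h2 http/1.1
</VirtualHost>

Alternatively, setting Protocols http/1.1 in the affected vhosts disables HTTP/2 there and sidesteps connection reuse entirely.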

This shows the importance of debugging a CDN by sending the host header to the origin that the CDN uses, to replicate the issue the customer was experiencing. Essentially, the CDN edge nodes (the machines around the world that pull content from the origin for worldwide distribution) weren't able to retrieve the files from the origin using the host header domain defined in the control panel.

In this case the customer needed to adjust their Apache2 configuration; the problem was likely introduced by an Apache2 update or similar.

Securing your WordPress with chmod 644 and chmod 755 the easy (but pro) way

It's worth noting that the exact instructions will vary from environment to environment, depending on which user runs apache2, etc. Let's say we have a document root like:

/var/www/mysite.com/htdocs

Make all files readable/writable only by the web-server user and owned by that user (apache2 here; see the Debian note below)

root@meine:/var/www/mysite.com/htdocs# find . -type f -exec chown apache2:apache2 {} \; 
root@meine:/var/www/mysite.com/htdocs# find . -type f -exec chmod 644 {} \;

Make all folders accessible Read + Execute, but no write permissions

root@meine:/var/www/mysite.com/htdocs# find . -type d -exec chmod 755 {} \;
root@meine:/var/www/mysite.com/htdocs# find . -type d -exec chown apache2:apache2 {} \;

PLEASE NOTE: THIS BREAKS WORDPRESS'S ABILITY TO AUTO-UPDATE ITSELF, BUT IT IS MORE SECURE 😀

Note: Debian users may need to use www-data:www-data instead.
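To verify the result afterwards, a quick sketch that lists anything which didn't pick up the intended modes (both commands should print nothing):

root@meine:/var/www/mysite.com/htdocs# find . -type f ! -perm 644
root@meine:/var/www/mysite.com/htdocs# find . -type d ! -perm 755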

Fixing InnoDB "Registration as a STORAGE ENGINE failed"

So you have a nice WordPress (or similar) site running, but then all of a sudden you get a nasty message telling you that the InnoDB registration as a storage engine failed.

# tail /var/log/mariadb/mariadb.log
InnoDB: If that is the case, please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/error-creating-innodb.html
160914  6:45:29 [ERROR] Plugin 'InnoDB' init function returned error.
160914  6:45:29 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
160914  6:45:29 [ERROR] Failed to initialize plugins.
160914  6:45:29 [ERROR] Aborting
160914  6:45:29 [Note] /usr/libexec/mysqld: Shutdown complete

160914 06:45:29 mysqld_safe mysqld from pid file /var/lib/mysql/wpstack.localdomain.pid ended

# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/mariadb.service.d
           └─limits.conf
   Active: failed (Result: exit-code) since Wed 2016-09-14 06:25:38 UTC; 2min 32s ago
  Process: 1434 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=1/FAILURE)
  Process: 1433 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
  Process: 1305 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 1433 (code=exited, status=0/SUCCESS)

Sep 14 06:25:35 wpstack.localdomain systemd[1]: Starting MariaDB database se....
Sep 14 06:25:37 wpstack.localdomain mysqld_safe[1433]: 160914 06:25:37 mysqld...
Sep 14 06:25:37 wpstack.localdomain mysqld_safe[1433]: 160914 06:25:37 mysqld...
Sep 14 06:25:38 wpstack.localdomain systemd[1]: mariadb.service: control pro...1
Sep 14 06:25:38 wpstack.localdomain systemd[1]: Failed to start MariaDB data....
Sep 14 06:25:38 wpstack.localdomain systemd[1]: Unit mariadb.service entered....
Sep 14 06:25:38 wpstack.localdomain systemd[1]: mariadb.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@wpstack ~]# systemctl start mariadb
Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.
[root@wpstack ~]# systemctl start mariadb
Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.

This is naturally a problem, since until it is resolved you have no database service running. In most cases it is fairly easy to resolve by moving InnoDB's log and data files out of /var/lib/mysql:

mv /var/lib/mysql/ib_logfile0{,.bak}  	   	 	   
mv /var/lib/mysql/ib_logfile1{,.bak}  	   	 	   
mv /var/lib/mysql/ibdata1{,.bak} 

These commands simply move the files aside, which lets InnoDB recreate them and continue operating. Please note that this may not always fix your issue: ib_logfile0 and ib_logfile1 are redo logs that InnoDB rebuilds automatically, but ibdata1 is the system tablespace, and moving it aside can result in data loss in some situations. It is advisable to take a backup of the database filesystem before proceeding.
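A minimal sketch of the safer order of operations (assuming the systemd MariaDB layout shown above):

systemctl stop mariadb
# take a full copy of the data directory first -- cheap insurance
cp -a /var/lib/mysql /var/lib/mysql.bak-$(date +%F)
# move only the redo logs aside; InnoDB recreates them on startup
mv /var/lib/mysql/ib_logfile0{,.bak}
mv /var/lib/mysql/ib_logfile1{,.bak}
systemctl start mariadb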

Automating Backups in Public Cloud using Cloud Files

Hey folks, I know it's been a little while since I put an article together. However, I have been putting together a really useful article explaining how to write bespoke backup systems for the Rackspace Community. It's a proof of concept/demonstration/tutorial as opposed to a production application, but people looking to create custom cloud backup scripts may benefit from reading thru it.

You can see the article at the below URL:

https://community.rackspace.com/products/f/25/t/7857

Grabbing network activity from a server without network utilities

So, is it possible to look at a network interface's activity without bwm-ng, iptraf, or other tools? Yes.

INTERFACE=${INTERFACE:-eth0}   # set this to the interface you want to watch
RX2=$(cat /sys/class/net/${INTERFACE}/statistics/rx_bytes)
TX2=$(cat /sys/class/net/${INTERFACE}/statistics/tx_bytes)
while true; do
  sleep 1
  RX1=$(cat /sys/class/net/${INTERFACE}/statistics/rx_bytes)
  TX1=$(cat /sys/class/net/${INTERFACE}/statistics/tx_bytes)
  DOWN=$((RX1-RX2))               # bytes received in the last second
  UP=$((TX1-TX2))                 # bytes sent in the last second
  DOWN_Bits=$((DOWN * 8))
  UP_Bits=$((UP * 8))
  DOWNmbps=$((DOWN_Bits >> 20))   # shift by 20 = divide by 2^20 for Mb/s
  UPmbps=$((UP_Bits >> 20))
  echo -e "RX:${DOWN}\tTX:${UP} B/s | RX:${DOWNmbps}\tTX:${UPmbps} Mb/s"
  RX2=$RX1; TX2=$TX1
done

I found this little gem yesterday, but couldn't understand why they hadn't used clear between samples. I guess they wanted to log activity or something... still, this was a really nice find. I can't remember where I found it, but googling part of it should lead you to the original source 😀
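On the logging point: if you save the loop above as a script (netwatch.sh is just a name I'm assuming here), you can timestamp and append its output like this:

INTERFACE=eth0 ./netwatch.sh | while read -r line; do
    echo "$(date '+%F %T') $line"
done >> /var/log/netwatch.log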

Simple way to perform a body check on a website

So, I was testing with curl today. I know it's possible to redirect output to /dev/null to suppress the page, but that's not very handy if you are checking whether an HTML page actually loads, so I came up with some better body checks to use.

A Basic body check using wc -l to count the lines of the site

 time curl https://www.google.com/ > 1; echo "non zero indicates server up and served content of n lines"; cat 1 | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  79771      0 --:--:--  0:00:02 --:--:-- 79756

real	0m2.162s
user	0m0.042s
sys	0m0.126s
non zero indicates server up and served content of n lines
2134

A body check for Google analytics

$ time curl https://www.groundworkjobs.com/ > 1; echo "Checking for google analytics html elements string"; cat 1 | grep "www.google-analytics.com/analytics.js"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  76143      0 --:--:--  0:00:02 --:--:-- 76152

real	0m2.265s
user	0m0.042s
sys	0m0.133s
Checking for google analytics html elements string
				})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

Such commands might be useful when troubleshooting a cluster, for instance, where one server shows a more up-to-date version (a different number of lines). There's probably a better way to do this with ls and awk using the HTML file size, since the number of lines wouldn't be so accurate.

Check Filesize from request

$ time curl https://www.groundworkjobs.com/ > 1; var=$(ls -al 1 | awk '{print $5}') ; echo "Page size is: $var bytes"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  79467      0 --:--:--  0:00:02 --:--:-- 79461

real	0m2.170s
user	0m0.048s
sys	0m0.111s
Page size is: 171876 bytes

Pretty simple... but you could take the one-liner even further: populate a variable called $var with the file size using ls and awk, and then use an if statement to check that $var is not 0, indicating the page answered with content, or alternatively didn't answer at all.

Check Filesize and populate a variable with the filesize, then validate variable

$ time curl https://www.groundworkjobs.com/ > 1; var=$(ls -al 1 | awk '{print $5}') ; echo "Page size is: $var bytes"; if [ "$var" -gt 0 ] ; then echo "The filesize was greater than 0, which indicates box is up but may be giving an error page"; fi
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  167k    0  167k    0     0  78915      0 --:--:--  0:00:02 --:--:-- 78950

real	0m2.185s
user	0m0.041s
sys	0m0.132s
Page size is: 171876 bytes
The filesize was greater than 0, which indicates box is up but may be giving an error page

The second exercise is not particularly useful or practical as a means of testing, since if the site were timing out the script would take ages to reply, making the whole test pointless. But as a learning exercise, assembling one-liners on the fly like this is an enjoyable, rewarding and useful investment of time and effort. Understanding such things is fundamental to automating tasks, in this case with output filtering, variable creation, and subsequent validation logic. It's a simple test, but the concept is exactly the same for any advanced automation procedure too.
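On the timeout caveat: curl can bound the wait itself and report the download size directly, which makes the check both faster and tidier. A sketch, with a placeholder URL and timeouts you'd tune to taste:

# fail fast: 5 seconds to connect, 10 seconds for the whole transfer
size=$(curl -s --connect-timeout 5 --max-time 10 -o /dev/null -w '%{size_download}' https://www.example.com/)
if [ "${size:-0}" -gt 0 ]; then
    echo "Server responded with ${size} bytes of content"
else
    echo "No content returned (down, timed out, or an empty response)"
fi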

Checking requests to apache2 webserver during downtime

A customer of ours was having serious disruptions to his webserver, with 15-minute outages happening here and there. He said he couldn't see an increase in traffic and therefore didn't understand why it reached MaxClients. Here is a quick way to prove whether traffic really increased or not: grep the access logs directly for the day and hour in question, use wc -l to count the hits, and use a for loop to step through the minutes of the hour between the events.

Proud of this simple one... much simpler than a lot of other scripts that do the same thing I've seen out there! (One caveat about zero-padding is noted after the output below.)

root@anonymousbox:/var/log/apache2# for i in `seq 01 60`;  do  printf "total visits: 13:$i\n\n"; grep "12/Jul/2016:13:$i" access.log | wc -l; done

total visits: 13:1

305
total visits: 13:2

474
total visits: 13:3

421
total visits: 13:4

411
total visits: 13:5

733
total visits: 13:6

0
total visits: 13:7

0
total visits: 13:8

0
total visits: 13:9

0
total visits: 13:10

30
total visits: 13:11

36
total visits: 13:12

30
total visits: 13:13

29
total visits: 13:14

28
total visits: 13:15

26
total visits: 13:16

26
total visits: 13:17

32
total visits: 13:18

37
total visits: 13:19

31
total visits: 13:20

42
total visits: 13:21

47
total visits: 13:22

65
total visits: 13:23

51
total visits: 13:24

57
total visits: 13:25

38
total visits: 13:26

40
total visits: 13:27

51
total visits: 13:28

51
total visits: 13:29

32
total visits: 13:30

56
total visits: 13:31

37
total visits: 13:32

36
total visits: 13:33

32
total visits: 13:34

36
total visits: 13:35

36
total visits: 13:36

39
total visits: 13:37

70
total visits: 13:38

52
total visits: 13:39

27
total visits: 13:40

38
total visits: 13:41

46
total visits: 13:42

46
total visits: 13:43

47
total visits: 13:44

39
total visits: 13:45

36
total visits: 13:46

39
total visits: 13:47

49
total visits: 13:48

41
total visits: 13:49

30
total visits: 13:50

57
total visits: 13:51

68
total visits: 13:52

99
total visits: 13:53

52
total visits: 13:54

92
total visits: 13:55

66
total visits: 13:56

75
total visits: 13:57

70
total visits: 13:58

87
total visits: 13:59

67
total visits: 13:60

root@anonymousbox:/var/log/apache2# for i in `seq 01 60`; do printf "total visits: 12:$i\n\n"; grep "12/Jul/2016:12:$i" access.log | wc -l; done
total visits: 12:1

169
total visits: 12:2

248
total visits: 12:3

298
total visits: 12:4

200
total visits: 12:5

341
total visits: 12:6

0
total visits: 12:7

0
total visits: 12:8

0
total visits: 12:9

0
total visits: 12:10

13
total visits: 12:11

11
total visits: 12:12

30
total visits: 12:13

11
total visits: 12:14

11
total visits: 12:15

13
total visits: 12:16

16
total visits: 12:17

28
total visits: 12:18

26
total visits: 12:19

10
total visits: 12:20

19
total visits: 12:21

35
total visits: 12:22

12
total visits: 12:23

19
total visits: 12:24

28
total visits: 12:25

25
total visits: 12:26

30
total visits: 12:27

43
total visits: 12:28

13
total visits: 12:29

24
total visits: 12:30

39
total visits: 12:31

35
total visits: 12:32

25
total visits: 12:33

22
total visits: 12:34

33
total visits: 12:35

21
total visits: 12:36

31
total visits: 12:37

31
total visits: 12:38

22
total visits: 12:39

39
total visits: 12:40

11
total visits: 12:41

18
total visits: 12:42

11
total visits: 12:43

28
total visits: 12:44

19
total visits: 12:45

27
total visits: 12:46

18
total visits: 12:47

17
total visits: 12:48

22
total visits: 12:49

29
total visits: 12:50

22
total visits: 12:51

31
total visits: 12:52

44
total visits: 12:53

38
total visits: 12:54

38
total visits: 12:55

41
total visits: 12:56

38
total visits: 12:57

32
total visits: 12:58

26
total visits: 12:59

31
total visits: 12:60
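One wrinkle worth noting in the loops above: seq 01 60 does not zero-pad, so for single-digit minutes grep ":13:1" actually matches minutes 10 through 19 rather than minute 01 (hence the inflated counts for minutes 1-5 and the zeros for 6-9), minute 00 is never checked, and minute 60 never exists in the log. A corrected sketch using zero-padded minutes and grep -c:

for i in $(seq -w 0 59); do
    printf "total visits 13:%s: " "$i"
    # the trailing colon anchors the match to the whole minute field
    grep -c "12/Jul/2016:13:$i:" access.log
done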

Checking file integrity with Cloud Files after uploading a file

So, as you may already be aware, I am working on a lightweight backup script called 'obscene redundancy': a redundant backup tool capable of keeping 18 replicas of data in the Rackspace Cloud Files API service. It's so redundant... it's obscene redundancy.

For more details visit the project URL:
https://github.com/aziouk/obsceneredundancy/

Today I was discussing with my colleague that it's all very well uploading your tar to Cloud Files, but wouldn't you really like to know that the file you uploaded is completely identical, bit for bit? Enter the Cloud Files 'HEAD' request and its Etag header: our MD5 friend.

What I did to improve the obscene redundancy script was quite simple here:

# We define a variable that takes the 'Etag' (MD5 sum) value for the Cloud Files archive
cfmd5sum=$(swiftly --conf swiftly-configs/swiftly-${SHORT_REGION,,}.conf head \
"${BACKUP_DEST}/${FILE}" | grep -i Etag | awk '{print $2}')

# We define a variable that generates an MD5 sum for the local file archive
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE")

echo "Checking Data integrity of Cloud Files upload to $REGION"
echo "Cloud Files Archive MD5:  $cfmd5sum  ....... Local File Archive MD5: $localmd5sum"

# If these values differ, the upload does not match the local archive
if [[ "$cfmd5sum" != "$localmd5sum" ]];
then
echo "VALUES NOT EQUAL"
echo "$REGION CRC missing, in error, or NOT OK..."
else
echo "VALUES EQUAL"
echo "$REGION CRC OK..."
fi

After all this, I found that the script still wasn't working properly... so I did some debugging, starting with checking the length of each variable:

   if [[ "$cfmd5sum" == "$localmd5sum" ]]; then
                        echo "VALUES EQUAL, (local md5sum length given first)"
                        echo "$localmd5sum"| wc -L
                        echo "$cfmd5sum"| wc -L


                        echo "$REGION CRC OK..."
                else
                        echo "VALUES NOT EQUAL"
                        echo "$localmd5sum"|wc -L
                        echo "$cfmd5sum"|wc -L
                        echo "$REGION CRC missing, in error, or NOT OK..."
                fi

The output showed me that the variable lengths were different. At this stage I've no idea why, but will add updates here. I'm going to commit this to obsceneredundancy anyway, because the proof of concept is working and valid, as shown by the output of the script (i.e. the method is fine; it's just the way the string is compared in the if statement, and I suspect special characters or \n characters, as I've seen before). So, when I made this addition to the multi-dc-backup.sh script, the output looked like:

Creating Container in LON for obsceneredundancy

LON: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://LON/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz

Checking Data integrity of Cloud Files upload to BACKUP_TO_LON
Cloud Files Archive MD5:  65147eb66f8bbeff03a229570b0a1be7  ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7  /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_LON CRC missing, in error, or NOT OK...
lon: COMPLETED OK 15504796/15504796
ORD: Not backing up ...



Creating Container in IAD for obsceneredundancy

IAD: Backing up ...
Source: /var/www/ ---> Dest: cloudfiles://IAD/obsceneredundancy/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz

Checking Data integrity of Cloud Files upload to BACKUP_TO_IAD
Cloud Files Archive MD5:  65147eb66f8bbeff03a229570b0a1be7  ....... Local File Archive MD5: 65147eb66f8bbeff03a229570b0a1be7  /var/backup/varwww-2016-07-06-6bd657e9-d268-4883-9f40-3859f690aadb.tar.gz
VALUES NOT EQUAL
107
32
BACKUP_TO_IAD CRC missing, in error, or NOT OK...
iad: COMPLETED OK 15504796/15504796
DFW: Not backing up ...

As we can see, the 107 (local md5sum length) and the 32 (Cloud Files md5sum length) are different! I've no idea why, since when echoing the variables they look the same. I suspect gremlins and trolls. A fresh head tomorrow will probably solve this in a few minutes!
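Update: the likely culprit is visible in the output above. md5sum prints the checksum followed by the filename, so $localmd5sum holds the 32-character hash plus two spaces plus the 73-character path (107 characters in total), while the Etag is the bare 32-character hash. A likely one-line fix is to keep only the first field:

# keep only the first field so the variable holds just the 32-character hash
localmd5sum=$(md5sum "$BACKUP_DIR"/"$FILE" | awk '{print $1}')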

Cheers &
Best wishes,
Adam

Obscene Redundancy utilizing Rackspace Cloud Files

So, you may have noticed over the past weeks and months I have been a little quieter about the articles I have been writing. That's mainly because I've been working on a new github project, which, although simple and lightweight, is actually rather outrageously powerful.

https://github.com/aziouk/obsceneredundancy

Imagine being able to take 15+ redundant replica copies of your files, across 5 or 6 different datacentres: Rackspace Cloud Files API powered, but with a lot of the flexibility of the Bourne Again Shell (BASH).

This was actually quite a neat achievement and I am pleased with the results. There are still some limitations to this redundant replica application, and there are a few bugs, but it is a great proof of concept which shows what you can do with the API quickly and cheaply(ish). I believe filesystems as a service will be the future, given some further innovation in worldwide network infrastructure; it would only take a small breakthrough to rapidly alter the way that operating systems and machines boot and back up.

If you want to see the project and read the source code before I lay out and describe/explain the entire process of writing this software as well as how to deploy it with cron on linux, then you need wait no longer. Revision 1 alpha is now tested, ready and working in 5 different datacentres.

You can actually toggle which datacentres you wish to utilize, so it is somewhat flexible. The important consideration is to understand the limitations, such as the lack of de-duplication, and that it uses tars and swiftly instead of querying the API directly. Since uploading a tar file directly through the API is relatively simple, I will probably implement it that way, as I have before, and get rid of swiftly in future iterations. Still, such a project is ideal for learning more about BASH, cron, APIs, and the programmatic automation of sequential filesystems utilizing functional programming and division of labour between workers.

https://github.com/aziouk/obsceneredundancy

Test it (please note it will be a little buggy on some environments, and there are no instructions yet):

git clone https://github.com/aziouk/obsceneredundancy

Cheers &

Best wishes,
Adam

Using Rackspace Cloud Files, swiftly and cron to Backup Data to multiple data-centres cheaply

So, you have some really important data: so much so that 99.99% redundancy is not enough for you. One solution is to keep multiple copies in multiple datacentres. Most enterprise backup strategies have an on-site, an off-site, and an archival copy. What I'm going to show here is how to make four different copies of your data, in four different datacentres around the world. This provides very high storage redundancy and greatly reduces the likelihood of data loss. Although it costs a bit more, this kind of solution may suit many small, medium and large businesses, depending on the size of the data and the importance of redundancy. If you don't have many files to back up, perhaps a small CD's worth, it will be very inexpensive. However, because of the way Cloud Files is billed, copying data costs bandwidth when writing from a server in London to Cloud Files in Sydney, Chicago or Dallas, for instance. So it's very important to consider the impact of bandwidth costs when utilizing three additional Cloud Files endpoints outside the local datacentre region, which is essentially what we are doing in this guide.

Setup swiftly

yum install python-devel python-pip -y
pip install swiftly eventlet 

Create your swiftly environments (setting the name for each file)

==> /root/.swiftly-dfw.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = dfw

==> /root/.swiftly-iad.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = iad

==> /root/.swiftly-ord.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = ord

==> /root/.swiftly-syd.conf <==
[swiftly]
auth_user = myusername
auth_key = censored
auth_url = https://identity.api.rackspacecloud.com/v2.0
region = syd

Create your Script

#!/bin/bash
# Adam Bull, Rackspace UK
# May 17, 2016

# This can be sequential or parallel; not sure which is better yet. Use & for parallel.
# This backs up the /documents directory into the 'managed_backup' Cloud Files container
# at the following 4 datacentres: DFW, IAD, ORD and SYD

swiftly --verbose --conf ~/.swiftly-dfw.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-iad.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-ord.conf --concurrency 100 put -i /documents /managed_backup
swiftly --verbose --no-snet --conf ~/.swiftly-syd.conf --concurrency 100 put -i /documents /managed_backup

Because the other three endpoints are in different datacentres, we can't use ServiceNet, so we pass the --no-snet option to swiftly, as above.

Execute your script

chmod +x multibackup.sh
./multibackup.sh

This is obviously a basic backup system and script, and it is not for production use (yet): it's an alpha project I started today. The cool thing is that it works, and quite nicely, although it is far from a finished, workable script.

Once the script is made, you can simply add it to crontab -e as you usually would. Make sure the user cron executes the script as has access to the .conf files in their home directory!
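For example, a sketch of a nightly crontab entry (the script path and log file are assumptions):

# m h dom mon dow command
30 2 * * * /root/multibackup.sh >> /var/log/multibackup.log 2>&1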