Manually change Rackspace root password for Cloud Server

If you lose the password to your machine and aren't able to reset it for your VM through the mycloud control panel, it's possible to do this manually by putting the server into rescue mode and chrooting. Here is how:

1. Put the server into rescue mode, noting the root password auto-generated for the rescue environment.

2. Log in to the server via the web console or SSH.
3. Mount the ‘old’ original disk (usually partition xvdb1).

 mount /dev/xvdb1 /mnt

4. Chroot to the 'old' original disk:

chroot /mnt

5. Change the root password:

 passwd
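
Before taking the server out of rescue mode, it's worth exiting the chroot and unmounting the disk cleanly (these two commands follow on from steps 3 and 4 above):

exit
umount /mnt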

6. Take the server out of rescue mode (done in the same way as putting it into rescue mode).
7. You should now be able to log in to the server using the new root password.

Testing your server's available bandwidth & DDoS resiliency with iperf

So, if you buy a server with, say, a 1.6Gbps connection (as in this customer's case), you might want to test that you actually have the bandwidth you need, for instance to be resilient against small DoS and DDoS attacks in the sub-500Mbit to 1000Mbit range.
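
One prerequisite worth noting: these tests assume an iperf server is already listening on the destination, in UDP mode to match the -b flag used below. On the destination host that would look something like:

# start iperf in server mode, UDP, on port 80
iperf -s -u -p 80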

Here is how I did it (quick summary):


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 53898 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50460 connected with somedestipiwanttospeedtest-censored port 80


[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85471 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85471 datagrams
[SUM]  0.0-10.0 sec   240 MBytes   201 Mbits/sec
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[  4] WARNING: did not receive ack of last datagram after 10 tries.


$ iperf -c somedestipiwanttospeedtest-censored -p 80 -P 10 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 12] local someipsrc port 50725 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 40410 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 51075 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 58020 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 50056 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 57017 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 49473 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 50491 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 40974 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 38348 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 12]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 12] Sent 81355 datagrams
[  5]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  5] Sent 81448 datagrams
[  6]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  6] Sent 81482 datagrams
[  4]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  4] Sent 81349 datagrams
[  3]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  3] Sent 81398 datagrams
[  7]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  7] Sent 81443 datagrams
[  8]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[  8] Sent 81408 datagrams
[  9]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[  9] Sent 81421 datagrams
[ 10]  0.0-10.0 sec   114 MBytes  95.7 Mbits/sec
[ 10] Sent 81404 datagrams
[ 11]  0.0-10.0 sec   114 MBytes  95.8 Mbits/sec
[ 11] Sent 81427 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec


It looks like you are getting the bandwidth you desire. When repeating the test with 20 connections, the bandwidth hits a total of 2.01 Gbits/sec:

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 20 -b 100m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[ 22] local someipsrc port 44231 connected with somedestipiwanttospeedtest-censored port 80
[  4] local someipsrc port 55259 connected with somedestipiwanttospeedtest-censored port 80
[  7] local someipsrc port 49519 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 45301 connected with somedestipiwanttospeedtest-censored port 80
[  6] local someipsrc port 48654 connected with somedestipiwanttospeedtest-censored port 80
[  5] local someipsrc port 33666 connected with somedestipiwanttospeedtest-censored port 80
[  8] local someipsrc port 33963 connected with somedestipiwanttospeedtest-censored port 80
[  9] local someipsrc port 39593 connected with somedestipiwanttospeedtest-censored port 80
[ 10] local someipsrc port 36229 connected with somedestipiwanttospeedtest-censored port 80
[ 11] local someipsrc port 36331 connected with somedestipiwanttospeedtest-censored port 80
[ 14] local someipsrc port 54622 connected with somedestipiwanttospeedtest-censored port 80
[ 13] local someipsrc port 36159 connected with somedestipiwanttospeedtest-censored port 80
[ 12] local someipsrc port 53881 connected with somedestipiwanttospeedtest-censored port 80
[ 15] local someipsrc port 43221 connected with somedestipiwanttospeedtest-censored port 80
[ 16] local someipsrc port 60284 connected with somedestipiwanttospeedtest-censored port 80
[ 17] local someipsrc port 49735 connected with somedestipiwanttospeedtest-censored port 80
[ 18] local someipsrc port 43866 connected with somedestipiwanttospeedtest-censored port 80
[ 19] local someipsrc port 44631 connected with somedestipiwanttospeedtest-censored port 80
[ 20] local someipsrc port 56852 connected with somedestipiwanttospeedtest-censored port 80
[ 21] local someipsrc port 59338 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[ 22]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 22] Sent 85471 datagrams
[  4]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  4] Sent 85449 datagrams
[  7]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  7] Sent 85448 datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  3] Sent 85448 datagrams
[  6]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  6] Sent 85449 datagrams
[  5]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  5] Sent 85448 datagrams
[  8]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  8] Sent 85453 datagrams
[  9]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[  9] Sent 85453 datagrams
[ 10]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 10] Sent 85454 datagrams
[ 11]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 11] Sent 85456 datagrams
[ 14]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 14] Sent 85457 datagrams
[ 13]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 13] Sent 85457 datagrams
[ 12]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 12] Sent 85457 datagrams
[ 15]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 15] Sent 85460 datagrams
[ 16]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 16] Sent 85461 datagrams
[ 17]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 17] Sent 85462 datagrams
[ 18]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 18] Sent 85464 datagrams
[ 19]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 19] Sent 85467 datagrams
[ 20]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 20] Sent 85467 datagrams
[ 21]  0.0-10.0 sec   120 MBytes   101 Mbits/sec
[ 21] Sent 85467 datagrams
[SUM]  0.0-10.0 sec  2.34 GBytes  2.01 Gbits/sec

The last test I did used just two connections at 500Mbit each:

# iperf -c somedestipiwanttospeedtest-censored -p 80 -P 2 -b 500m
WARNING: option -b implies udp testing
------------------------------------------------------------
Client connecting to somedestipiwanttospeedtest-censored, UDP port 80
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  4] local someipsrc port 60841 connected with somedestipiwanttospeedtest-censored port 80
[  3] local someipsrc port 51495 connected with somedestipiwanttospeedtest-censored port 80
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  4] Sent 406935 datagrams
[  3]  0.0-10.0 sec   570 MBytes   479 Mbits/sec
[  3] Sent 406933 datagrams
[SUM]  0.0-10.0 sec  1.11 GBytes   957 Mbits/sec

Disable TCP Offloading on a Linux NIC

This prompts for an interface name, finds every offload feature that ethtool reports as on, abbreviates the feature name to its short flag (e.g. generic-segmentation-offload becomes gso), and switches each one off:

# read -p "Interface: " iface; ethtool -k $iface | awk -F: '/offload: on$/{print$1}' | sed 's/^\(.\).*-\(.\).*-\(.\).*/\1\2\3/' | xargs --no-run-if-empty -n1 -I{} ethtool -K $iface {} off


Disable offloading for all interfaces:

# for iface in $(cd /sys/class/net; echo *); do ethtool -k $iface | awk -F: '/offload: on$/{print$1}' | sed 's/^\(.\).*-\(.\).*-\(.\).*/\1\2\3/' | xargs --no-run-if-empty -n1 -I{} ethtool -K $iface {} off; done
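
To confirm the change took, ethtool can list the offload settings again (eth0 here is just an example interface name):

# ethtool -k eth0 | grep offload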

A big thank you to Daniel C. for this!

How to speed test a Rackspace CDN?

So, today, a customer was asking if we could show speed tests to the CDN.

So I used my French server to test external connections from outside of Rackspace. For a CDN, it's fairly speedy!

#!/bin/bash
CSTATS=`curl -w '%{speed_download}\t%{time_namelookup}\t%{time_total}\n' -o /dev/null -s http://6281487ef0c74fc1485b-69e4500000000000dfasdcd1b6b.r12.cf1.rackcdn.com/bigfile-rackspace-testing`
SPEED=`echo $CSTATS | awk '{print $1}' | sed 's/\..*//'`
DNSTIME=`echo $CSTATS | awk '{print $2}'`
TOTALTIME=`echo $CSTATS | awk '{print $3}'`
echo "Transfered $SPEED bytes/sec in $TOTALTIME seconds."
echo "DNS Resolve Time was $DNSTIME seconds."
# ./speedtest.sh
Transferred 3991299 bytes/sec in 26.272 seconds.
DNS Resolve Time was 0.061 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 7046221 bytes/sec in 14.881 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 29586916 bytes/sec in 3.544 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 14539272 bytes/sec in 7.212 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 9060846 bytes/sec in 11.573 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 25551753 bytes/sec in 4.104 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 28225927 bytes/sec in 3.715 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 9036412 bytes/sec in 11.604 seconds.
DNS Resolve Time was 0.004 seconds.
root@ns310045:~# ./speedtest.sh
Transferred 32328623 bytes/sec in 3.243 seconds.
DNS Resolve Time was 0.004 seconds.
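
Since individual runs vary a lot, a small wrapper can average the download rate over several runs. Here is a quick sketch of one, assuming speedtest.sh is the script above:

#!/bin/bash
# Run speedtest.sh several times and average the download rate (bytes/sec).
RUNS=5
TOTAL=0
for i in $(seq 1 $RUNS); do
    SPEED=$(./speedtest.sh | awk '/bytes\/sec/ {print $2}')
    TOTAL=$(( TOTAL + SPEED ))
done
echo "Average: $(( TOTAL / RUNS )) bytes/sec over $RUNS runs."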

Perform a traceroute through the Rackspace monitoring API

So, I was thinking about the Rackspace traceroute monitoring API and wondering what I could do with it, when I came across this gem:

/monitoring_zones/mzsyd/traceroute

What is it, you ask? It's an API path for performing a traceroute from any of the six different region endpoints. This means you can use an API call to run traceroutes (for free!) through the Rackspace cloud monitoring API. This would be pretty handy for testing connectivity around the world to your chosen destination from each datacentre. Handy Andy.

Then you ask, what does the mzsyd mean? That's a region ID. Let's see about putting together a script to list the region IDs we can run the traceroutes from, first of all:

File: list-monitoring-zones.sh

#!/bin/bash

USERNAME='mycloudusername'
APIKEY='mycloudapikey'
ACCOUNTNUMBER='10010110'
API_ENDPOINT="https://monitoring.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"


TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`




curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-X GET  \
"$API_ENDPOINT/monitoring_zones"

Let's take a look at the response when I run this monitoring zone list.


chmod +x list-monitoring-zones.sh
./list-monitoring-zones.sh

Response

< Content-Type: application/json; charset=UTF-8
< Via: 1.1 Repose (Repose/7.3.0.0)
< Vary: Accept-Encoding
< X-LB: api1.dfw1.prod.cm.k1k.me
< Transfer-Encoding: chunked
<
{
    "values": [
        {
            "id": "mzdfw",
            "label": "Dallas Fort Worth (DFW)",
            "country_code": "US",
            "source_ips": [
                "2001:4800:7902:0001::/64",
                "50.56.142.128/26"
            ]
        },
        {
            "id": "mzhkg",
            "label": "Hong Kong (HKG)",
            "country_code": "HK",
            "source_ips": [
                "2401:1800:7902:0001::/64",
                "180.150.149.64/26"
            ]
        },
        {
            "id": "mziad",
            "label": "Northern Virginia (IAD)",
            "country_code": "US",
            "source_ips": [
                "2001:4802:7902:0001::/64",
                "69.20.52.192/26"
            ]
        },
        {
            "id": "mzlon",
            "label": "London (LON)",
            "country_code": "GB",
            "source_ips": [
                "2a00:1a48:7902:0001::/64",
                "78.136.44.0/26"
            ]
        },
        {
            "id": "mzord",
            "label": "Chicago (ORD)",
            "country_code": "US",
            "source_ips": [
                "2001:4801:7902:0001::/64",
                "50.57.61.0/26"
            ]
        },
        {
            "id": "mzsyd",
            "label": "Sydney (SYD)",
            "country_code": "AU",
            "source_ips": [
                "2401:1801:7902:0001::/64",
                "119.9.5.0/26"
            ]
        }
    ],
    "metadata": {
        "count": 6,
        "limit": 100,
        "marker": null,
        "next_marker": null,
        "next_href": null
    }
* Connection #0 to host monitoring.api.rackspacecloud.com left intact

We can see the many zones available to run our traceroute from:

id 'mzsyd' for Sydney SYD.
id 'mzdfw' for Dallas Fort Worth DFW
id 'mzhkg' for Hong Kong HKG
id 'mziad' for Northern Virginia IAD
id 'mzord' for Chicago ORD
id 'mzlon' for London LON

So now I know what the zone IDs are, as defined above. Now it's time to use them and run a traceroute to haxed.me.uk. Let's see:

File: perform-traceroute-from-monitoring-zone.sh

#!/bin/bash

USERNAME='mycloudusernamehere'
APIKEY='apikeyhere'
ACCOUNTNUMBER=10010110
API_ENDPOINT="https://monitoring.api.rackspacecloud.com/v1.0/$ACCOUNTNUMBER"



TOKEN=`curl https://identity.api.rackspacecloud.com/v2.0/tokens -X POST -d '{ "auth":{"RAX-KSKEY:apiKeyCredentials": { "username":"'$USERNAME'", "apiKey": "'$APIKEY'" }} }' -H "Content-type: application/json" |  python -mjson.tool | grep -A5 token | grep id | cut -d '"' -f4`




curl -s -v  \
-H "X-Auth-Token: $TOKEN"  \
-H "X-Project-Id: $ACCOUNTNUMBER" \
-H "Accept: application/json"  \
-d @ip.json -H "content-type: application/json" -X POST  \
"$API_ENDPOINT/monitoring_zones/mzsyd/traceroute"

You also need the ip.json file. It's easy to make; put it in the same directory as the shell script.

File: ip.json

{
        "target":               "haxed.me.uk",
        "target_resolver":      "IPv4"
}

We're going to refer to the ip.json file, which contains our destination data. You can do this with IPv6 addresses too if you want! That is pretty cool!
It is possible to do this without including the file and just pass the JSON directly, with -d '{ "target": "haxed.me.uk", "target_resolver": "IPv4" }', but let's do it properly 😀


chmod +x perform-traceroute-from-monitoring-zone.sh
./perform-traceroute-from-monitoring-zone.sh

The response: a nice traceroute, of course, from SYD to my LON server.

Response

 Accept: application/json
> content-type: application/json
> Content-Length: 55
>
* upload completely sent off: 55 out of 55 bytes
< HTTP/1.1 200 OK
< Date: Wed, 13 Jan 2016 11:19:14 GMT
< Server: Jetty(9.2.z-SNAPSHOT)
< X-RateLimit-Type: traceroute
< X-RateLimit-Remaining: 296
< X-RateLimit-Window: 24 hours
< x-trans-id: eyJyZXF1ZXN0SWQiOiI5MTNhNTY1Mi05ODAyLTQ5MmQtOTAwYS05NDU1M2ZhNDJmNzUiLCJvcmlnaW4
< X-RateLimit-Limit: 300
< X-Response-Id: .rh-TI8E.h-api1.ord1.prod.cm.k1k.me.r-4RFTh9up.c-28452540.ts-1452683954386.v-91eaf0a
< Content-Type: application/json; charset=UTF-8
< Via: 1.1 Repose (Repose/7.3.0.0)
< Vary: Accept-Encoding
< X-LB: api0.ord1.prod.cm.k1k.me
< Transfer-Encoding: chunked
<
{
    "result": [
        {
            "ip": "119.9.5.2",
            "hostname": null,
            "number": 1,
            "rtts": [
                0.421,
                0.384,
                0.442,
                0.457,
                0.455
            ]
        },
        {
            "ip": "119.9.0.30",
            "hostname": null,
            "number": 2,
            "rtts": [
                1.015,
                0.872,
                0.817,
                1.014,
                0.926
            ]
        },
        {
            "ip": "119.9.0.109",
            "hostname": null,
            "number": 3,
            "rtts": [
                1.203,
                1.179,
                1.185,
                1.232,
                1.182
            ]
        },
        {
            "ip": "202.84.223.2",
            "hostname": null,
            "number": 4,
            "rtts": [
                3.53,
                5.301,
                3.975,
                5.772,
                3.804
            ]
        },
        {
            "ip": "202.84.223.1",
            "hostname": null,
            "number": 5,
            "rtts": [
                3.437,
                3.522,
                2.837,
                4.274,
                2.805
            ]
        },
        {
            "ip": "202.84.140.206",
            "hostname": null,
            "number": 6,
            "rtts": [
                141.198,
                140.746,
                143.871,
                140.987,
                141.545
            ]
        },
        {
            "ip": "202.40.149.238",
            "hostname": null,
            "number": 7,
            "rtts": [
                254.354,
                175.559,
                176.787,
                176.701,
                175.634
            ]
        },
        {
            "ip": "134.159.63.18",
            "hostname": null,
            "number": 8,
            "rtts": [
                175.302,
                175.299,
                175.183,
                175.146,
                175.149
            ]
        },
        {
            "ip": "64.125.26.6",
            "hostname": null,
            "number": 9,
            "rtts": [
                175.395,
                175.408,
                175.469,
                175.49,
                175.475
            ]
        },
        {
            "ip": "64.125.30.184",
            "hostname": null,
            "number": 10,
            "rtts": [
                285.818,
                285.872,
                285.801,
                285.835,
                285.887
            ]
        },
        {
            "ip": "64.125.29.52",
            "hostname": null,
            "number": 11,
            "rtts": [
                285.864,
                285.938,
                285.826,
                285.922,
                303.125
            ]
        },
        {
            "ip": "64.125.28.98",
            "hostname": null,
            "number": 12,
            "rtts": [
                284.711,
                284.865,
                284.73,
                284.697,
                284.713
            ]
        },
        {
            "ip": "64.125.29.48",
            "hostname": null,
            "number": 13,
            "rtts": [
                287.341,
                310.82,
                287.33,
                287.359,
                287.455
            ]
        },
        {
            "ip": "64.125.29.130",
            "hostname": null,
            "number": 14,
            "rtts": [
                286.168,
                286.012,
                286.108,
                286.105,
                286.168
            ]
        },
        {
            "ip": "64.125.30.235",
            "hostname": null,
            "number": 15,
            "rtts": [
                284.61,
                284.681,
                284.667,
                284.892,
                286.069
            ]
        },
        {
            "ip": "64.125.20.97",
            "hostname": null,
            "number": 16,
            "rtts": [
                287.516,
                287.435,
                287.557,
                287.581,
                287.438
            ]
        },
        {
            "ip": "94.31.42.254",
            "hostname": null,
            "number": 17,
            "rtts": [
                288.156,
                288.019,
                288.034,
                288.08
            ]
        },
        {
            "ip": null,
            "hostname": null,
            "number": 18,
            "rtts": []
        },
        {
            "ip": "134.213.131.251",
            "hostname": null,
            "number": 19,
            "rtts": [
                292.687,
                293.72,
                295.335,
                293.981
            ]
        },
        {
            "ip": "162.13.232.1",
            "hostname": null,
            "number": 20,
            "rtts": [
                293.295,
                293.738,
                295.46,
                294.301
            ]
        },
        {
            "ip": "162.13.232.103",
            "hostname": null,
            "number": 21,
            "rtts": [
                294.733,
                294.996,
                298.884,
                295.056
            ]
        },
        {
            "ip": "162.13.136.211",
            "hostname": null,
            "number": 22,
            "rtts": [
                294.919,
                294.77,
                298.956,
                296.481
            ]
        }
    ]
* Connection #0 to host monitoring.api.rackspacecloud.com left intact

This is pretty cool. If we want to run a traceroute from, let's say, Chicago, we just swap out 'mzsyd' for 'mzord'. Wow, that's simple 🙂
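
And since the zone ID is just part of the URL, you can loop over all six zones in one go. A quick sketch reusing the same TOKEN, ACCOUNTNUMBER, API_ENDPOINT and ip.json from above:

#!/bin/bash
# Run the same traceroute from every monitoring zone listed earlier.
for zone in mzdfw mzhkg mziad mzlon mzord mzsyd; do
        echo "== $zone =="
        curl -s \
        -H "X-Auth-Token: $TOKEN"  \
        -H "X-Project-Id: $ACCOUNTNUMBER" \
        -H "Accept: application/json"  \
        -d @ip.json -H "content-type: application/json" -X POST  \
        "$API_ENDPOINT/monitoring_zones/$zone/traceroute"
done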

Securing your segmented network, and interpreting tcpdump on a dual-NIC segmented network

Howdy! So, here is a real life example of some basic networking security analysis.

Without giving too much away about the way my own home network works: I have a basic firewall-like setup segmenting my red (internet) router's network offering from the NIC port handling my local network. This offers a level of separation, or isolation, that prevents any old packet reaching the local network. By default I allow services to be routed out to the internet only when first requested by an application running on a box inside the local network. If an external client attempts to connect to my router on a particular port without it being part of an established connection, it won't be accepted.
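
In iptables terms, that policy looks roughly like the following. This is a sketch rather than my actual ruleset, assuming eth1 faces the internet and eth0 faces the LAN (as described below):

# allow replies to connections initiated from the LAN
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow the LAN to dial out
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
# drop unsolicited inbound connection attempts
iptables -A FORWARD -i eth1 -o eth0 -j DROP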

Another good thing is that it's easy to work out what the destination and source are, because eth1 shows me primarily all internet-bound traffic and traffic arriving from the internet, while the eth0 adapter shows me primarily all local traffic on its way to be internet-bound through the firewall and then out of the other NIC.

Today I saw some worrying traffic coming through on eth1: something called TeamViewer.

Here is what it looked like:

# tcpdump -i eth1 | grep teamview
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
16:07:05.641711 IP firewall.home.50490 > server19703.teamviewer.com.https: Flags [P.], seq 2473010817:2473010841, ack 788873912, win 255, length 24
16:07:05.676803 IP server19703.teamviewer.com.https > firewall.home.50490: Flags [P.], seq 1:25, ack 24, win 251, length 24
16:07:05.891233 IP firewall.home.50490 > server19703.teamviewer.com.https: Flags [.], ack 25, win 255, length 0

As we can see, the firewall just 'dialed out' to a remote server19703, and I am like 'wtf is this?'. So I start to panic, then I run:

# tcpdump -i eth0 | grep teamview
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
16:07:05.641686 IP 192.168.0.120.50490 > server19703.teamviewer.com.https: Flags [P.], seq 2473010817:2473010841, ack 788873912, win 255, length 24
16:07:05.676846 IP server19703.teamviewer.com.https > 192.168.0.120.50490: Flags [P.], seq 1:25, ack 24, win 251, length 24
16:07:05.891204 IP 192.168.0.120.50490 > server19703.teamviewer.com.https: Flags [.], ack 25, win 255, length 0

This allows me to see the nature of the request just before the firewall routed it out on the eth1 adapter. As such it shows me the local device's IP address on eth0, which is 192.168.0.120: in other words, my parents' new computer, specifically the one I bought them for Christmas from eBay.

What does this mean? It means I’ll be paying a visit to their box to make sure that this is disabled.

Fail2ban on CentOS 7 not working [and solution]

Because the configuration settings in fail2ban 0.9.0 have been completely refactored, hardening a CentOS 7 box with fail2ban is no longer as simple as running a yum install fail2ban.

It will also apparently no longer work if you just uncomment the sshd enabled jail in jail.local or jail.conf.

The newer refactored configuration suggests using a dedicated file for this, to prevent it being overwritten, as I have now set in my /etc/fail2ban/jail.d/sshd.local:

[sshd]
enabled = true
port = ssh
#action = firewallcmd-ipset
logpath = %(sshd_log)s
maxretry = 5
bantime = 86400

Do note that firewallcmd-ipset needs to be commented out, or fail2ban will not start.

Once it has been configured like this, it is happy again, and it worked straight away, banning my home IP! Whereas before, it was quite literally failing to ban :-)
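
To double-check that the jail is actually live, and to see its current bans, fail2ban-client is handy:

# fail2ban-client status sshd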

Of course you might need to install it first:

yum install -y epel-release
yum install -y fail2ban fail2ban-systemd

You might also want to start fail2ban, and also set it to run on startup:

systemctl enable fail2ban
systemctl start fail2ban

If you run SELinux, then you'll also need the following (running this command may have security implications):

yum update selinux-policy*

Generate SSH key pairs and copy the public key to guests the fast way

What it says on the tin!

 ssh-keygen -t dsa
ssh-copy-id root@iporhostnamehere
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

So simple. Thanks to my colleague Jan for this.

4-way NORAID mirror using ZFS

So I thought about a cool way to back up my files without using anything too fancy, and I started to think about ZFS. I don't know why I didn't before, because it's ultra, ultra resilient. Cheers, Oracle. This is on Debian 7 Wheezy.

Step 1. Install ZFS.

# apt-get install lsb-release
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
# dpkg -i zfsonlinux_6_all.deb

# apt-get update
# apt-get install debian-zfs

Step 2. Create the mirrored disk config with zpool.
Here I'm using 4 x 75GB SATA Cloud Block Storage devices to keep 4 copies of the same data, with ZFS's great error-checking abilities:

zpool create -f noraidpool mirror xvdb xvdd xvde xvdf

Step 3. Write a little disk write utility

#!/bin/bash

# Append an incrementing counter to file.txt forever,
# so we can tail it while disks are being pulled.
x=0
while :
do
        echo "Testing." $x >> file.txt
        sleep 0.02
        x=$(( x + 1 ))
done

Step 4 (Optional). Start killing the disks with fire, kill the iSCSI connection, etc., and see if file.txt is still tailing:

./write.sh & tail -f /noraidpool/file.txt

Step 5. Observe that as long as one of the 4 disks still has its virtual block device connection, your data stays up. So it will be OK even with simultaneous I/O errors on 3 of the 4 devices. Not baaaad.


root@zfs-noraid-testing:/noraidpool# /sbin/modprobe zfs
root@zfs-noraid-testing:/noraidpool# lsmod | grep zfs
zfs                  2375910  1
zunicode              324424  1 zfs
zavl                   13071  1 zfs
zcommon                35908  1 zfs
znvpair                46464  2 zcommon,zfs
spl                    62153  3 znvpair,zcommon,zfs
root@zfs-noraid-testing:/noraidpool# zpool status
  pool: noraidpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        noraidpool  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            xvdb    ONLINE       0     0     0
            xvdd    ONLINE       0     0     0
            xvde    ONLINE       0     0     0
            xvdf    ONLINE       0     0     0

errors: No known data errors

Step 6. Some more benchmark tests

time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync"

Step 7. Some concurrent fork tests

#!/bin/bash

# Fork a dd benchmark every 2 seconds alongside the writer loop,
# printing pool I/O stats as we go.
x=0
while :
do
        time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
        echo "Testing." $x >> file.txt
        sleep 2
        x=$(( x + 1 ))
        zpool iostat
        clear
done

or better

#!/bin/bash

# Three concurrent dd streams with different block sizes,
# plus the same writer loop and zpool iostat output.
time sh -c "dd if=/dev/zero of=ddfile bs=128k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=24k count=250000 && sync" &
time sh -c "dd if=/dev/zero of=ddfile bs=16k count=250000 && sync" &
x=0
while :
do
        echo "Testing." $x >> file.txt
        sleep 2
        x=$(( x + 1 ))
        zpool iostat
        clear
done

bwm-ng 'elegant' style output of disk I/O using zpool iostat:


#!/bin/bash

# Refresh zpool iostat every 2 seconds while a dd benchmark runs.
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=250000 && sync" &
while :
do
        clear
        zpool iostat
        sleep 2
done

To test the resiliency of ZFS, I removed 3 of the disks, completely unlatching them:


        NAME                      STATE     READ WRITE CKSUM
        noraidpool                DEGRADED     0     0     0
          mirror-0                DEGRADED     0     0     0
            1329894881439961679   UNAVAIL      0     0     0  was /dev/xvdb1
            12684627022060038255  UNAVAIL      0     0     0  was /dev/xvdd1
            4058956205729958166   UNAVAIL      0     0     0  was /dev/xvde1
            xvdf                  ONLINE       0     0     0

And I noticed that with just one remaining Cloud Block Storage device, I was still able to access the data on the disk as well as create new data:

cat file.txt  | tail
Testing. 135953
Testing. 135954
Testing. 135955
Testing. 135956
Testing. 135957
Testing. 135958
Testing. 135959
Testing. 135960
Testing. 135961
Testing. 135962

# mkdir test
root@zfs-noraid-testing:/noraidpool# ls -a
.  ..  ddfile  file.txt  forktest.sh  stat.sh  test  writetest.sh
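
Once the disks are reattached, something along these lines should bring the mirror back online and resilver it (a sketch; the device names assume the xvd* layout above):

zpool online noraidpool xvdb
zpool online noraidpool xvdd
zpool online noraidpool xvde
zpool status noraidpool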


That’s pretty flexible.