{"id":289,"date":"2015-11-29T18:37:22","date_gmt":"2015-11-29T18:37:22","guid":{"rendered":"http:\/\/www.haxed.me.uk\/?p=289"},"modified":"2015-11-29T20:46:01","modified_gmt":"2015-11-29T20:46:01","slug":"4-way-noraid-mirror-using-zfs","status":"publish","type":"post","link":"https:\/\/haxed.me.uk\/index.php\/2015\/11\/29\/4-way-noraid-mirror-using-zfs\/","title":{"rendered":"4 way NORAID mirror using ZFS"},"content":{"rendered":"<p>So I thought about a cool way to backup my files without using anything too fancy and I started to think about ZFS. Don&#8217;t know why I didn&#8217;t before because it&#8217;s ultra ultra resilient. Cheers Oracle. This is in Debian 7 Wheezy.<\/p>\n<p>Step 1 Install zfs<\/p>\n<pre>\r\n# apt-get install lsb-release\r\n# wget http:\/\/archive.zfsonlinux.org\/debian\/pool\/main\/z\/zfsonlinux\/zfsonlinux_6_all.deb\r\n# dpkg -i zfsonlinux_6_all.deb\r\n\r\n# apt-get update\r\n# apt-get install debian-zfs\r\n<\/pre>\n<p>Step 2 Create Mirrored Disk Config with Zpool.<br \/>\nHere i&#8217;m using 4 x 75GB SATA Cloud Block Storage Devices to have 4 copies of the same data with ZFS great error checking abilities<\/p>\n<pre>\r\nzpool create -f noraidpool mirror xvdb xvdd xvde xvdf\r\n<\/pre>\n<p>Step 3. Write a little disk write utility<\/p>\n<pre>\r\n#!\/bin\/bash\r\n\r\n\r\nwhile :\r\ndo\r\n\r\n        echo \"Testing.\" $x >> file.txt\r\n        sleep 0.02\r\n  x=$(( $x + 1 ))\r\ndone\r\n\r\n<\/pre>\n<p>Step 4 (Optional). Start killing the Disks with fire, kill iscsi connection etc, and see if file.txt is still tailing.<\/p>\n<pre>\r\n.\/write.sh & ; tail -f \/noraidpool\/file.txt\r\n<\/pre>\n<p>Step 5. Observe that as long as one of the 4 disks has it&#8217;s virtual block device connection your data is staying up. So it will be OK even if there is 3 or less I\/O errors simultaneously. 
Not baaaad.<\/p>\n<pre>\r\n\r\nroot@zfs-noraid-testing:\/noraidpool# \/sbin\/modprobe zfs\r\nroot@zfs-noraid-testing:\/noraidpool# lsmod | grep zfs\r\nzfs                  2375910  1\r\nzunicode              324424  1 zfs\r\nzavl                   13071  1 zfs\r\nzcommon                35908  1 zfs\r\nznvpair                46464  2 zcommon,zfs\r\nspl                    62153  3 znvpair,zcommon,zfs\r\nroot@zfs-noraid-testing:\/noraidpool# zpool status\r\n  pool: noraidpool\r\n state: ONLINE\r\n  scan: none requested\r\nconfig:\r\n\r\n        NAME        STATE     READ WRITE CKSUM\r\n        noraidpool  ONLINE       0     0     0\r\n          mirror-0  ONLINE       0     0     0\r\n            xvdb    ONLINE       0     0     0\r\n            xvdd    ONLINE       0     0     0\r\n            xvde    ONLINE       0     0     0\r\n            xvdf    ONLINE       0     0     0\r\n\r\nerrors: No known data errors\r\n\r\n<\/pre>\n<p>Step 6. Some more benchmark tests<\/p>\n<pre>\r\ntime sh -c \"dd if=\/dev\/zero of=ddfile bs=8k count=250000 && sync\"\r\n<\/pre>\n<p>Step 7. 
Some concurrent fork tests.<\/p>\n<pre>\r\n#!\/bin\/bash\r\n\r\nx=0\r\nwhile :\r\ndo\r\n        time sh -c \"dd if=\/dev\/zero of=ddfile bs=8k count=250000 && sync\" &\r\n        echo \"Testing.\" $x >> file.txt\r\n        sleep 2\r\n        x=$(( x + 1 ))\r\n        clear\r\n        zpool iostat\r\ndone\r\n<\/pre>\n<p>Or better:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n\r\ntime sh -c \"dd if=\/dev\/zero of=ddfile bs=128k count=250000 && sync\" &\r\ntime sh -c \"dd if=\/dev\/zero of=ddfile bs=24k count=250000 && sync\" &\r\ntime sh -c \"dd if=\/dev\/zero of=ddfile bs=16k count=250000 && sync\" &\r\nx=0\r\nwhile :\r\ndo\r\n        echo \"Testing.\" $x >> file.txt\r\n        sleep 2\r\n        x=$(( x + 1 ))\r\n        clear\r\n        zpool iostat\r\ndone\r\n<\/pre>\n<p>bwm-ng &#8216;elegant&#8217; style output of disk I\/O using zpool iostat:<\/p>\n<pre>\r\n#!\/bin\/bash\r\n\r\ntime sh -c \"dd if=\/dev\/zero of=ddfile bs=8k count=250000 && sync\" &\r\nwhile :\r\ndo\r\n        clear\r\n        zpool iostat\r\n        sleep 2\r\ndone\r\n<\/pre>\n<p>To test the resiliency of ZFS I removed 3 of the disks, completely unlatching them:<\/p>\n<pre>\r\n\r\n        NAME                      STATE     READ WRITE CKSUM\r\n        noraidpool                DEGRADED     0     0     0\r\n          mirror-0                DEGRADED     0     0     0\r\n            1329894881439961679   UNAVAIL      0     0     0  was \/dev\/xvdb1\r\n            12684627022060038255  UNAVAIL      0     0     0  was \/dev\/xvdd1\r\n            4058956205729958166   UNAVAIL      0     0     0  was \/dev\/xvde1\r\n            xvdf                  ONLINE       0     0     0\r\n\r\n<\/pre>\n<p>And noticed that with just one remaining cloud block storage device I was still able to read the existing data as well as create new data:<\/p>\n<pre>\r\ncat file.txt  | tail\r\nTesting. 135953\r\nTesting. 135954\r\nTesting. 135955\r\nTesting. 135956\r\nTesting. 135957\r\nTesting. 135958\r\nTesting. 135959\r\nTesting. 135960\r\nTesting. 135961\r\nTesting. 
135962\r\n\r\n# mkdir test\r\nroot@zfs-noraid-testing:\/noraidpool# ls -a\r\n.  ..  ddfile  file.txt  forktest.sh  stat.sh  test  writetest.sh\r\n\r\n\r\n<\/pre>\n<p>That&#8217;s pretty flexible.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>So I thought about a cool way to backup my files without using anything too fancy and I started to think about ZFS. Don&#8217;t know why I didn&#8217;t before because it&#8217;s ultra ultra resilient. Cheers Oracle. This is in Debian &hellip; <a href=\"https:\/\/haxed.me.uk\/index.php\/2015\/11\/29\/4-way-noraid-mirror-using-zfs\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19,15,10,9,7,25],"tags":[],"class_list":["post-289","post","type-post","status-publish","format-standard","hentry","category-bash","category-cloud","category-filesystem","category-linux","category-management-tools","category-zfs"],"_links":{"self":[{"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/posts\/289","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/comments?post=289"}],"version-history":[{"count":5,"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/posts\/289\/revisions"}],"predecessor-version":[{"id":294,"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/posts\/289\/revisions\/294"}],"wp:attachment":[{"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/media?parent=289"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/haxed.me.uk\/index.php\/wp-jso
n\/wp\/v2\/categories?post=289"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/haxed.me.uk\/index.php\/wp-json\/wp\/v2\/tags?post=289"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}