Converting a QEMU qcow2 cloud server image to a native disk image and putting it on a physical disk

I get this question at work a lot, and thought I'd finally get around to writing it down since it's come up for me. I've got a virtual machine using PCIe passthrough (see the next section), and I found that disk access via the qcow2 file is pretty naff.

sudo apt-get install qemu-kvm

qemu-img convert windows10cloudimage.qcow2 -O raw diskimage.img

dd if=diskimage.img of=/dev/sdc2 bs=4M status=progress

Please note that in my case the physical partition I'd made was sdc2; I'd actually resized another 5TB disk in my system using gparted, just so I could attach a physical partition with libvirt. Note that the dd writes the converted raw image (diskimage.img), not the qcow2 itself, onto the partition. Evidently virt-manager doesn't make this easy through the GUI, so I had to edit the domain XML at /etc/libvirt/qemu/win10-uefi.xml.
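If you'd rather not hand-edit the XML at all, virsh can attach the partition for you. A minimal sketch, assuming the domain is already defined as win10-uefi (as below) and that vdb is a free virtio target name in the guest:

virsh attach-disk win10-uefi /dev/sdc2 vdb --targetbus virtio --persistent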

 

root@adam:/etc/libvirt/qemu# virsh  define /etc/libvirt/qemu/win10-uefi.xml 
Domain win10-uefi defined from /etc/libvirt/qemu/win10-uefi.xml
root@adam:/etc/libvirt/qemu# virt-manager

yeah baby!


You could alternatively do it all in one step, as below, though you may want to keep a copy of the raw img file as well as writing it straight to the disk.

qemu-img convert windows10.qcow2 -O raw /dev/sdc
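If you want a quick sanity check that the raw data really landed on the device, the guest's partition table should now be visible on the host disk (a read-only check; don't mount the Windows partitions while the VM is using them):

sudo fdisk -l /dev/sdc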

Configuring PCIE Passthrough and SLI/Crossfire on Ubuntu 16.04 step by step (for real)

GREETINGS! Welcome to this unusual howto. So, thinking about setting up some PCIe passthrough on your Linux box, so you can feel less embarrassed about needing Windows to run your Steam games? Look no further!

Looking to set up SLI on a Windows virtual machine running on Ubuntu? Look no further!!!

Are you looking to overcome the mean issues presented by passing through two devices with the same vendor/device ID? (This is a big problem sometimes; if the device IDs are identical you can hit real trouble.)

Are you looking to discover why USB passthrough doesn't work properly (or at all) for your mouse/keyboard/USB audio? Look no further!! It's quite common for Ubuntu's AppArmor to prevent libvirt USB passthrough unless it is either disabled (insecurely) or given a proper rule, because AppArmor restricts what the qemu process is allowed to share. AppArmor itself goes a little beyond the scope of this tutorial. Setting up this basic (starter) VM with 2 RX 580s is really a cinch once you have it down, but there are some important considerations before beginning.

1. You will need an IGFX (integrated graphics) on your motherboard. Why, you ask? That's what Ubuntu itself is going to use, while the discrete cards get bound to pci-stub (or to vfio-pci in some setups; vfio is the more modern route than the pci-stub approach I use here). When mining with the 2 RX 580 Sapphire Nitro cards inside the virtual machine I got 22MH/s, and the same cards run at 23MH/s stock and unvirtualized in this machine, so correctly configured PCIe passthrough really is capable of roughly 95% of native performance. To keep the RX580s free for the guest, we must set IGFX as the primary adapter in the BIOS.

Step 1. Prepare BIOS display adapter/primary order


The first thing you need to do for Ubuntu PCIe passthrough is set the initial display output. Change this from the PCIe slot 1 (or whatever is set) to IGFX; some BIOSes put the IGFX enable under a different menu. What this setting does is make the integrated graphics the primary adapter at boot, which allows Ubuntu 16.04 to start without the AMD driver used by the RX580 cards. And because X on Ubuntu will automatically bind to the IGFX, you don't need to touch your Ubuntu configuration to achieve this wondrous task of passthrough.

Though, if you dual boot Windows 7 or 10, Windows may want the initial display output set back to PCIe slot 1 (or whatever it was before). So take special care to note (or save) your BIOS configuration before making these changes.

I keep 2 BIOS profiles, ‘virtualized igfx for ubuntu’ and ‘pcie slot for windows’, named that way for logical purposes!

Step 2. Understanding IGFX, VT-d and VFIO, and what the setup needs to work

Now you've set up the IGFX you can boot into Ubuntu normally. Installing Linux is beyond the scope of this tutorial; if you do not know how to install Linux, please find another howto and install Ubuntu 16.04 first, then continue from this step.


Make sure that VT-d or ‘Virtualization’ is also enabled in your BIOS; without it nothing works, and you will cry at your kernel config wondering why.

Ubuntu will not be using the RX580 cards, and their driver may or may not be loaded in the kernel, depending on whether the OS has claimed them. In my situation, I had already disabled the RX580 driver, set the blacklist in modprobe.d accordingly, and made sure the module order list was correct, so that the RX580 devices get claimed by the pci-stub kernel driver instead. pci-stub simply holds the devices so they can then be handed to the guest over VT-d (the IOMMU provided by your CPU) via VFIO, and that is how the graphics cards reach the passthrough virtual machine. Therefore, if you haven't done so already, make sure that VT-d or ‘virtualization’ is enabled in your BIOS. Please note that some CPUs do not support the VT-d or VT-x extensions; most modern Intel i3/i5/i7 parts do, but a few SKUs don't, so check if you're unsure. Some motherboards also have limited virtualization settings, so ensure you've properly looked into step 1 before going further here. It's very important everything is just right. There are a lot of turning wheels in PCI passthrough, but it's remarkably simple once you get the basics.
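Before changing anything, you can check which kernel driver currently owns each card; the -nnk flag shows the bound driver alongside the vendor/device IDs. Substitute your own bus addresses once you've found them in step 5 (01:00 and 02:00 are the slots on my box):

lspci -nnk -s 01:00
lspci -nnk -s 02:00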

Step 3: Final preparations for Linux Config. Stuff you may need

Remember GPU passthrough relies on specific hardware support. Providing your CPU, BIOS and PCIe graphics card (such as the RX580 I use in this tutorial) are reasonably modern, you should be OK.

If in the BIOS you do not see something similar to “VT-d” (Intel) or “AMD Virtualization” (AMD), you should probably consult your BIOS manual.

Once you know the hardware is suitable, you'll probably want to consider these additional factors, just to make sure you're not wasting your time with this howto;

  • You should have at least 2 graphics cards (i.e. one onboard gpu on the motherboard and another discrete graphics card in one of the PCIE slots of the motherboard).


I'm using the RX580 Sapphire NITRO+ 8GB graphics cards, 2 of them actually, plus the IGFX. Don't worry, this tutorial shows you how to configure 1 or 2 PCIe passthroughs, and in SLI or Crossfire too! How awesome is that?!

  • Consider a second monitor; it's quite annoying using the same monitor and having to switch between DVI and HDMI all the time (DVI being the IGFX output connected to the monitor's DVI input, and HDMI being the monitor input plugged into the discrete PCIe RX580 graphics card(s)).
  • You do need a copy of Windows. I downloaded the iso available here which you can use without a key temporarily.
  • Although not 100% necessary, it's much better to give the virtual machine a dedicated additional SSD. virtio is capable of passing through a physical disk or partition instead of relying on a vmdk or qcow2 image file sitting on top of the host filesystem, which has serious performance implications for reads and writes. The other advantage of passing an SSD to the machine is that the guest talks to the disk with far less abstraction in the way than with the fully emulated devices (cirrus/VGA, IDE and friends) we relied on before passthrough-style virtualization.
  • In my example I'm just using a qcow2 file, but adding an SSD is so easy to do that you'll have no difficulty if you're capable of the PCIe passthrough kernel config.

MY SYSTEM

  • Graphics: AMD Sapphire ‘Radeon’ RX580 8GB GDDR5 NITRO+
  • CPU: Intel Core i7-6700 4GHz
  • Motherboard: Gigabyte Z170X ultra gaming
  • 32GB System RAM
  • USB audio device, if you do not want to use your monitor's speakers over HDMI (yes, the virtual machine can use the built-in audio of your graphics card; the HDMI standard carries sound on the same link, which is why each card exposes an audio function). Link to the device I used below.
  • https://www.amazon.co.uk/Plugable-Headphone-Microphone-Aluminum-Compatibility-Black/dp/B00NMXY2MO/ref=sr_1_7?s=computers&ie=UTF8&qid=1517989287&sr=1-7&keywords=usb+audio

Step 4. Checking it all checks out

You might not see this output straight away, and you may need to change your module order in the later steps before some of these confirmation checks come good. Don't freak out. So many moving wheels, arrghh.

Check the motherboard is exposing the virtualization extensions (Intel systems)

dmesg | grep -e "Directed I/O"

For AMD Systems

dmesg | grep AMD-Vi

Output should be like:

AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40
AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi: Initialized for Passthrough Mode

HURRAH! We're probably OK. Now let's get down to it.

Step 5. Locating my PCIE Graphics Cards

lspci | grep VGA

In my setup you can see there are 3 adapters in total. The Intel Corporation Sky Lake integrated graphics adapter is the motherboard's HDMI output. The other two VGA entries below it are the 2 RX580s in PCIe slots 1 and 2, as denoted by their PCI bus IDs 01:00.0 and 02:00.0 respectively. These are the hardware addresses used by the local Linux ‘HOST OS’, that is to say the hypervisor machine, the one which is not virtualized and hosts our virtual machine; the guest knows nothing about them 🙁

Step 6 Instructing the Linux Kernel

We need to tell Linux what to do, so that the pci-stub kernel driver module can do its heavy lifting and simply hold the PCIe devices out of reach of the Ubuntu OS. Think about how awesome and fancy this is: the host operating system hands a device to a virtual OS and BIOS as if the graphics card were plugged into a bare-metal Windows box, and at roughly 95% efficiency too (see my mining tests!). Simply amazing. Let's tell Linux about it;

Step 6A: Identify the vendor/device IDs of the RX580 graphics cards to pass to pci-stub

lspci -nn | grep 01:00
lspci -nn | grep 02:00


Just showing that the ID is the same for my two RX580 cards, mainly because they are exactly the same model. I found other PCIe passthrough guides which basically said you can't do this properly because of the identical IDs; well, you can, and I did get 2 working RX580s in my Windows guest. I guess this could be useful for people who want to virtualize their mining operations. Note that each PCIe RX580 graphics card also exposes an AUDIO DEVICE!!!!!
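If you just want the bracketed vendor:device pairs (which is what pci-stub.ids expects later), here's a quick one-liner sketch; on my system it prints [1002:67df] for the GPU function and [1002:aaf0] for its HDMI audio function, the same two IDs used in the GRUB config below:

lspci -nn | grep -E '01:00|02:00' | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | sort -u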

It's a real pain having to drive out to wherever your mining rig lives, so the idea that you could run it on a virtualization layer as rock-solid as Ubuntu has to be good. At least if you weren't using something like ethOS already, which, by the way, is very good if you're into your mining. Probably irrelevant though. Most people just want to play their games, and me too!!

We’ll come back to the hardware ID’s in a sec.

Step 7A: Use the GRUB bootloader to blacklist kernel drivers (we don't want our host Ubuntu Linux operating system stealing devices from the bus; only one host or one guest can own a device at a time, and if they both grab it you will crash both OSes and your computer will lock up)


I had to experiment around a bit. radeon.modeset=0 and amdgpu.runpm=0 are apparently effective ways to disable the Radeon drivers Ubuntu automatically installed when you installed it (at least if the RX580s were plugged in at the time). If they weren't, there is a small chance these radeon defaults aren't necessary at all.

The things that are definitely necessary are binding the cards to pci-stub (or vfio-pci) and:
intel_iommu=on, which tells the kernel to enable VT-d (amd_iommu=on for AMD CPUs)

I've bound the cards by adding pci-stub.ids=1002:67df,1002:aaf0 to the kernel command line.

Ignore the last ID in my screenshot, 8086:a170, sorry if it's confusing. Stupid me thought I needed pci-stub for the USB audio for some reason. You don't.
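Putting that together, the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub ends up looking roughly like this (a sketch: your IDs will differ, and the radeon/amdgpu options are the belt-and-braces ones discussed above):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on pci-stub.ids=1002:67df,1002:aaf0 radeon.modeset=0 amdgpu.runpm=0"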


Not the best diagram illustrating the setup, but you get the idea

The reason why there are two pci-stub IDs is simple. The PCI bus for each discrete Radeon RX580 graphics card actually exposes 2 devices: one audio device and one video device. They work together to feed the HDMI output (they mix the shit and throw it out over HDMI). Your computer does all this transparently normally, but to get the passthrough to work, because of the way the architecture hangs together, you need to assign both sub-devices of the card, if that makes sense.

About IOMMU Groups

Hence why both functions sit inside the same group; it's called an IOMMU group, and for a device to be passed through, every sub-ID in its group needs to go with it. This is exactly why 2 x RX580 setups are widely said not to work (due to laziness/stupidity/inability, probably): if you select the devices to stub purely by vendor ID, how do you distinguish a VGA device and an audio device across two cards in two PCIe slots that share exactly the same vendor IDs? Anyway, you get the idea. To understand how this stuff works you really need a fundamental understanding of why some combinations supposedly can't work, or why people think they might not, and then look at overcoming the problem. I was able to overcome it, so keep reading if you are trying to as well; it's a royal pain in the butt, and I hope my article can help you along. Hopefully it's as complete as it needs to be for you to get Crossfire RX580 passthrough working, but if it isn't, drop a message and I'll try to help you get it set up.


There's a lot of technology at work making the regular GPU on your Windows OS go. You'll see when configuring libvirt that these addresses really, really matter and essentially determine whether your PCIe passthrough setup is a muddled success, a failure, or a complete failure.

For most of us a muddled success is quite acceptable, providing we understand this isn't for use in production. This is my home setup.

Step 7B: Ensure Linux kernel modules (Ubuntu's equivalent of Windows drivers) are loaded

Also check that your GRUB_CMDLINE_LINUX_DEFAULT is saved too (as set earlier):


Apologies for the extra text there, ,8086:a170. You don't need the bit that says ‘,8086:a170’; it's harmless though, so don't worry if you're confused and just want it working 🙂

Step 7C: Making Linux use your changes, Sorting BOOT & INIT

Now we need to make sure that the GRUB cmdline default is actually being used. To do that we rebuild the GRUB configuration, so that the options get included in /boot/grub/grub.cfg or a similar file (basically boot options, like Windows safe mode but much more awesome). There is something important to know here: GRUB CAN BREAK YOUR WINDOWS BOOT IF YOU HAVE DUAL BOOT. It caused me much aggravation, so THINK BEFORE ACTING.

In my case I had 3 disks: /dev/sda, /dev/sdb and /dev/sdc. /dev/sdb1 was my Ubuntu partition and /dev/sdb the physical device it lives on; /dev/sda was my Windows 10 disk, with an MBR BIOS-type bootloader written to the first blocks of the device (hence /dev/sda and not /dev/sda1; the bootloader isn't stored on a partition). Two different tools matter here: update-grub only regenerates the /boot/grub/grub.cfg config file, while grub-install writes the bootloader itself to the start of whichever physical device you point it at.

So while this shouldn't cause DATA LOSS itself, it can cause bootloader loss if you run grub-install against the wrong disk. It happened to me a few times, because I'm so noobish with Windows bootloaders. The way to avoid breaking your Windows MBR BIOS bootloader is simply to only ever point grub-install at the device Ubuntu boots from (after all, the only reason we're doing any of this is to get the pci-stub options applied early so the host drivers don't grab the PCIe devices, which would kill the guest OS if both ended up attached to the same bus).

So, to regenerate the config, and reinstall the bootloader only on my Ubuntu disk /dev/sdb:

sudo update-grub
sudo grub-install /dev/sdb   # only needed if you're reinstalling the bootloader itself; point it ONLY at the Ubuntu disk

You will most definitely also need to update the initramfs, mainly because the initramfs bakes in the module blacklist; we'll run the command for that at the end of this step.

If you haven't used modprobe.d before, it simply stops kernel drivers from loading. Here are the 3 lines I added to the end of my file /etc/modprobe.d/blacklist.conf (make the file if it doesn't exist, and ensure .conf is the file extension or it's ignored).
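In case the screenshot doesn't render, those three lines would look something like this, based on the drivers discussed in this post (your exact module names may differ):

blacklist radeon
blacklist amdgpu
blacklist snd_hda_intel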


These might not all be necessary, and it may be slightly different drivers for you to disable, etc. Use lspci -v to find out full details of which drivers your PCIe cards are using, and you'll be able to blacklist the correct driver.

In my case the reason I blacklisted snd_hda_intel was because the lspci -v output for that device revealed it as the kernel driver in use;

In my case it says Kernel driver in use: pci-stub because I'm writing this tutorial after having already configured it. Note that's what it will say for your graphics card too once it is bound to pci-stub. And then, and only then, will you be able to use the card in the virtual machine. Whew! I'm tired, I don't know about you!

Just to give you an idea of the IOMMU groups used by the VFIO/VT-d technologies, this is the structure of the tree of devices;

You can see our graphics card at 01:00 and its [01] IOMMU group containing the two sub-devices on that bus, 00.0 and 00.1.
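If the screenshot isn't showing for you, you can print the same grouping from a shell; a minimal sketch that walks /sys and labels each device with its IOMMU group:

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}            # group number from the path
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"                    # device description with vendor:device IDs
done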

We still need to make sure the pci-stub module itself gets loaded at boot. If it's not in there already, add pci_stub (note the underscore, not a dash) to /etc/initramfs-tools/modules. Yes, I put both spellings in my configuration, because the extra one does no harm and I had a hell of a time getting this to work in Ubuntu.
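For reference, the relevant lines in /etc/initramfs-tools/modules then look something like this (pci_stub with the underscore is the canonical module name; the dashed spelling is the harmless duplicate I mentioned):

pci_stub
pci-stub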

Once done, finally update initramfs-tools so this change to /etc/initramfs-tools/modules gets compiled into the Linux boot image. The boot image is used at boot time and contains the kernel modules that need to load early. Don't miss this bit.
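On Ubuntu that's a single command, regenerating the image for the currently running kernel:

sudo update-initramfs -u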

REBOOT!

Step 8: Checking it all worked right


This is what you should see: pci-stub ‘add’ lines against the device vendor IDs. Don't ask me exactly how, but as you can see both vendor IDs get added, yet not both IOMMU groups, and you will still find that this works with multiple RX580s sharing the same vendor ID (or whatever GPU you have). I'm wondering if that has simply been fixed in the latest 16.04 kernel.
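If the screenshot isn't showing for you, these two checks report the same thing: the dmesg lines show pci-stub claiming the devices at boot, and lspci -nnk should list Kernel driver in use: pci-stub under each RX580 function:

dmesg | grep pci-stub
lspci -nnk | grep -E -A 2 'VGA|Audio'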

For anyone having difficulty matching my environment;

root@adam:/home/adam# uname -a
Linux adam 4.13.0-32-generic #35~16.04.1-Ubuntu SMP Thu Jan 25 10:13:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Also, more is to come, including the libvirt installation and configuration. To get it working from here, just install libvirt as normal, and add the PCI host devices that now become available to the 440FX or Q35 machine types.
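As a teaser, on Ubuntu 16.04 that looks roughly like this (OVMF is only needed if you want a UEFI guest like my win10-uefi domain; treat this as a sketch, not the full write-up):

sudo apt-get install qemu-kvm libvirt-bin virt-manager ovmf
virsh nodedev-list --cap pci    # lists the host PCI devices you can hand to the guest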

HAVE FUN y'all! I did spend a little while getting my head around this and hope that others can benefit, or comment on anything I've missed or got wrong. It's a huge pile of turning wheels!

Configure Nested KVM for Intel & AMD based Machines

So, we are configuring some OpenStack and KVM stuff at work for some projects. We're ‘cloudy’ guys, what can I say? 😀 One issue I ran into was installing XenServer underneath KVM.

(Why would we do this?) In our testing environment we're using a single OnMetal v2 server and, instead of running XenServer directly on the metal and requiring additional servers, we use the one 128GB RAM machine as the hypervisor for the whole test environment. One issue though: Windows guests are only supported by XenServer when it runs directly on the ‘host’, and because our XenServer is itself running virtualized under KVM, we have a problem.

Enter nested virtualization support. With hardware virtualization assist passed through KVM to XenServer, I can boot Windows servers. YAY! uh.. 😉 kinda.

Check if Nested hardware virtualization assist is enabled

$ cat /sys/module/kvm_intel/parameters/nested
N

It wasn't 🙁 Let's enable it.

Enable nested hardware virtualization assist

sudo rmmod kvm-intel
sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
sudo modprobe kvm-intel

Ensure nested hardware virtualization is enabled

cat /sys/module/kvm_intel/parameters/nested
Y

modinfo kvm_intel | grep nested
parm:           nested:bool

It worked!

This can also be done on AMD systems by simply substituting kvm_amd.
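For completeness, a sketch of the AMD equivalent; the module is kvm_amd and its nested option takes 1/0, and I'm assuming the same dist.conf file as above:

sudo rmmod kvm-amd
sudo sh -c "echo 'options kvm-amd nested=1' >> /etc/modprobe.d/dist.conf"
sudo modprobe kvm-amd
cat /sys/module/kvm_amd/parameters/nested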

http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html

Installing KVM, libvirtd virt-manager and Xenserver for Rackspace onmetal using ZFS & X11 Forwarding

So, you want to run your own hypervisor using xenserver, but you want to have some of the flexibility of KVM too. This instructional guide explains how to install and configure KVM with virt-manager and with X11 forwarding. We will go step by step. In this case I am using a mac.

Step 1 – Create Rackspace onmetalv2 server

Screen Shot 2016-04-27 at 10.05.06 AM
In this case I’ll be using a 40 cpu 128GB machine as the host utilizing the new onmetalv2 server range offered by Rackspace public cloud.

Please note that this is a bare metal server, not a cloud server, however it is offered by the same cloud platform at mycloud.rackspace.co.uk

Step 2 – Install and configure KVM

sudo yum update -y
sudo yum -y install kvm virt-manager libvirt virt-install qemu-kvm xauth dejavu-lgc-sans-fonts

Step 3 – Start and configure libvirtd

chkconfig libvirtd
service libvirtd status
service libvirtd restart
service libvirtd status

Step 4 – MAC SYSTEMS – Install X Quartz

For mac users simply install X Quartz, which can be found at http://www.xquartz.org/

Step 4a – Windows Systems – Install Xming

Windows users can get in on the action too, using xming which can be found at https://sourceforge.net/projects/xming/

Step 5 – MAC SYSTEMS ONLY – Configure X11 Forwarding

Xming will work out of the box for windows, but for Mac users you need to make sure you have enabled X11 forwarding.

touch ~/.ssh/config
echo "ForwardX11 yes" >> ~/.ssh/config 

This simply enables X11 forwarding on the client side, which is what Mac users need. You can then run any application you like on the remote server, such as Firefox, or even a virtual machine console, and have its window appear on your local client. SSHv2 is beautiful. That's it, you've completed the most important steps.
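To use it, just SSH to the server as normal (the -X flag is redundant once ForwardX11 yes is in ~/.ssh/config, but it doesn't hurt); the address below is a placeholder for your own server:

ssh -X root@<your-onmetal-server-ip>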

Running virt-manager for the first time

 
[root@on-metal-test-2 ~]# virt-manager

After running the above command you will see something like the image below: an X window opens on your local client machine, attached to an application running on the remote server you're connected to via SSH. This is pretty damn cool.

Screen Shot 2016-04-27 at 10.26.53 AM

Let's take this further and install Firefox to demonstrate how awesome this is!

yum install firefox -y

Now we're using Firefox through SSH. It's much more convenient to use X11 forwarding for this than to set up a proxy, tunnel or VPN on the client.

Screen Shot 2016-04-27 at 10.33.23 AM

Nice!

Let's take it a bit further and start installing XenServer under KVM. I am very tempted to use ZFS for this since OnMetal v2 has two 1600GB disks…

Create partitions for KVM store

fdisk -l 
fdisk /dev/sdc

# type m , then type n, then type p, enter, enter, enter, enter, then type w

fdisk /dev/sdd

# type m , then type n, then type p, enter, enter, enter, enter, then type w
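If you'd rather not drive fdisk interactively, the same single-partition layout can be scripted with parted; a sketch, assuming each disk gets one partition spanning the whole device:

parted -s /dev/sdc mklabel msdos mkpart primary 0% 100%
parted -s /dev/sdd mklabel msdos mkpart primary 0% 100%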
 

Create filesystem for KVM store

[root@on-metal-test-2 ~]# mkfs.ext3 /dev/sdc1 && mkfs.ext3 /dev/sdd1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
97656832 inodes, 390624640 blocks
19531232 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11921 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
97656832 inodes, 390624640 blocks
19531232 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
11921 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Now we have created the filesystems. What about ZFS? (The zpool we create next will simply overwrite those ext3 filesystems, which is why the -f force flag appears below.) To get there we need to go through a fairly laborious process (at least if you don't know what you're doing). As I discovered, my yum installation wasn't automatically providing the matching kernel-devel source needed to build the ZFS DKMS module, since ZFS is an out-of-tree module rather than part of the mainline kernel.

One of the problems I had was this

Loading new spl-0.6.5.6 DKMS files...
Building for 3.10.0-327.10.1.el7.x86_64
Module build for kernel 3.10.0-327.10.1.el7.x86_64 was skipped since the
kernel source for this kernel does not seem to be installed.
  Installing : zfs-dkms-0.6.5.6-1.el7.centos.noarch                                                                                                                       4/6
Loading new zfs-0.6.5.6 DKMS files...
Building for 3.10.0-327.10.1.el7.x86_64
Module build for kernel 3.10.0-327.10.1.el7.x86_64 was skipped since the
kernel source for this kernel does not seem to be installed.

This can be checked in more detail by running:

yum search --show-duplicates kernel-devel
# and
rpm -qa | grep kernel

This gave me the exact kernel-devel version I needed, so the ZFS module could be built against my running kernel as opposed to completely recompiling the whole thing. Nice!

Install ZFS and kernel devel

sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm
sudo yum install epel-release

sudo yum install zfs kernel-devel-3.10.0-327.10.1.el7.x86_64
 

Enable ZFS

[root@on-metal-test-2 adam]# /sbin/modprobe zfs
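If you want the zfs module loaded automatically at every boot (the ZFS systemd units usually take care of this once a pool exists, so treat it as optional belt-and-braces):

echo zfs > /etc/modules-load.d/zfs.conf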

Create the 2 disk mirror using ZFS

[root@on-metal-test-2 adam]# zpool create -f kvmstore mirror sdc1 sdd1
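A quick sanity check that the mirrored pool came up healthy:

zpool status kvmstore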

Check KVM store disk

[root@on-metal-test-2 adam]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md126p1    220G  2.3G  209G   2% /
devtmpfs         63G     0   63G   0% /dev
tmpfs            63G     0   63G   0% /dev/shm
tmpfs            63G   26M   63G   1% /run
tmpfs            63G     0   63G   0% /sys/fs/cgroup
tmpfs            13G  4.0K   13G   1% /run/user/0
kvmstore        1.5T     0  1.5T   0% /kvmstore

Run Virt manager to create Xenserver VM

Now we've created the partitions and configured ZFS, we can run the virtual machines off the new kvmstore pool. Simples.

Click top left icon on corner to create new VM

Screen Shot 2016-04-27 at 11.27.52 AM

Download the Xenserver ISO to /root of hypervisor

root@on-metal-test-2 ~]# wget http://downloadns.citrix.com.edgesuite.net/10175/XenServer-6.5.0-xenserver.org-install-cd.iso
--2016-04-27 10:29:22--  http://downloadns.citrix.com.edgesuite.net/10175/XenServer-6.5.0-xenserver.org-install-cd.iso
Resolving downloadns.citrix.com.edgesuite.net (downloadns.citrix.com.edgesuite.net)... 104.86.110.32, 104.86.110.49
Connecting to downloadns.citrix.com.edgesuite.net (downloadns.citrix.com.edgesuite.net)|104.86.110.32|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 603744256 (576M) [application/octet-stream]
Saving to: ‘XenServer-6.5.0-xenserver.org-install-cd.iso’

100%[====================================================================================================================================>] 603,744,256 17.6MB/s   in 38s

Select Local Media (we’re going to use a Xenserver ISO)

Screen Shot 2016-04-27 at 11.28.28 AM

Screen Shot 2016-04-27 at 11.31.17 AM

Click browse, then press the bottom left + icon to add some pools. We're going to add /root, which has our iso in it, and we're also going to add the kvmstore as well.

Screen Shot 2016-04-27 at 11.32.17 AM

Screen Shot 2016-04-27 at 11.34.11 AM

Screen Shot 2016-04-27 at 11.34.21 AM

Screen Shot 2016-04-27 at 11.34.33 AM

Congratulations you have now added the stores. Now all we need to do is finish configuring the VM.

We want to select the root partition now we have set up the pool, and choose the xenserver iso we just recently downloaded.

Screen Shot 2016-04-27 at 11.36.51 AM

Screen Shot 2016-04-27 at 11.37.32 AM

We are almost there now! Let's set the number of CPUs and the RAM! Also let's make sure we use the kvmstore we just set up instead of the ‘main disk’ of the server.

Screen Shot 2016-04-27 at 11.38.19 AM

Select our KVM store ‘pool’ on the left hand side, and then press + to add the kvmstore.qcow2 volume, see the images for illustration.

Screen Shot 2016-04-27 at 11.39.46 AM

Screen Shot 2016-04-27 at 11.39.02 AM

Click ‘choose volume’ at the bottom left to confirm! And finally, name the server.

Screen Shot 2016-04-27 at 11.41.52 AM

Awwww crap, we got this error because the libvirt qemu processes aren't running as root.

Screen Shot 2016-04-27 at 11.44.33 AM

This can be quickly resolved by editing /etc/libvirt/qemu.conf and making sure user = “root” and group = “root” are present and uncommented.
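In other words, these two lines in /etc/libvirt/qemu.conf (they ship commented out by default), followed by a libvirtd restart so the change takes effect:

user = "root"
group = "root"

service libvirtd restart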

Screen Shot 2016-04-27 at 11.47.36 AM

Job done!

Install KVM and virt-manager on CentOS 7

So, you wanna install KVM on CentOS 7. First we want to check that the CPU supports the hardware virtualization extensions (VT-x/AMD-V); these matter a lot for performance, and the check below tells you whether they're missing.

$ egrep -c '(vmx|svm)' /proc/cpuinfo
2

If the result comes back 0, you don’t have it!

Installing KVM

sudo yum install kvm virt-manager libvirt virt-install qemu-kvm xauth dejavu-lgc-sans-fonts
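After the install you'll also want libvirtd enabled and running (CentOS 7 is systemd-based); a quick sketch:

sudo systemctl enable libvirtd
sudo systemctl start libvirtd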