The Book of Xen Part 11


#echo"xenbackendd=YES">>/etc/rc.conf #echo"xend=YES">>/etc/rc.conf Finally, to get networking to work, create /etc/ifconfig.bridge0 /etc/ifconfig.bridge0 with these contents: with these contents: create !brconfig$intaddfxp0up At this point you're most likely done. Reboot to test, or start the Xen services manually: #/etc/rc.d/xenbackenddstart Startingxenbackendd.

# /etc/rc.d/xend start
Starting xend.

You should now be able to run xm list:

# xm list
Name           ID   Mem VCPUs   State   Time(s)
Domain-0        0    64     1   r-----    282.1

Installing NetBSD as a DomU

Installing NetBSD as a domU is easy, even with a Linux dom0. In fact, because NetBSD's INSTALL kernels include a ramdisk with everything necessary to complete the installation, we can even do it without modifying the configuration from the dom0, given a sufficiently versatile PyGRUB or PV-GRUB setup.

For this discussion, we assume that you've got a domU of some sort already set up, perhaps one of the generic prgmr.com Linux domains. In this domU, you'll need to have a small boot partition that GRUB[50] can read. This is where we'll store the kernel and GRUB configuration.

First, from within your domain, download the NetBSD kernels:

# wget http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-INSTALL_XEN3_DOMU.gz
# wget http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-XEN3_DOMU.gz

Then, edit the domain's GRUB menu (most likely at /boot/grub/menu.lst) to load the INSTALL kernel on next reboot. (On the reboot after that, when the installation's done, you'll select the NetBSD run option.)

title NetBSD install
    root (hd0,0)
    kernel /boot/netbsd-INSTALL_XEN3_DOMU

title NetBSD run
    root (hd0,0)
    kernel /boot/netbsd-XEN3_DOMU root=xbd1a

Reboot, selecting the NetBSD install option.



As if by magic, your domain will begin running the NetBSD installer, and you'll end up in a totally ordinary NetBSD install session. Go through the steps of the NetBSD FTP install. There's some very nice documentation at http://netbsd.org/docs/guide/en/chap-exinst.html.

Note: At this point you have to be careful not to overwrite your boot device. For example, prgmr.com gives you only a single physical block device, from which you'll need to carve a /boot partition in addition to the normal filesystem layout.

The only sticky point is that you have to be careful to set up a boot device that PyGRUB can read, in the place where PyGRUB expects it. (If you have multiple physical devices, PyGRUB will try to boot from the first one.) Since we're installing within the standard prgmr.com domU setup, we only have a single physical block device to work with, which we'll carve into separate /boot and / partitions. Our disklabel, with a 32 MB FFS /boot partition, looks like this:

We now have your BSD-disklabel partitions as:
This is your last chance to change them.

     Start MB   End MB   Size MB   FS type            Newfs  Mount  Mount point
     --------------------------------------------------------------------------
 a:        31     2912      2882   FFSv1              Yes    Yes    /
 b:      2913     3040       128   swap
 c:         0     3071      3072   NetBSD partition
 d:         0     3071      3072   Whole disk
 e:         0       30        31   Linux Ext2
 f:         0        0         0   unused
 g: Show all unused partitions
 h: Change input units (sectors/cylinders/MB)
>x: Partition sizes ok

Once the install's done, reboot. Select the regular kernel in PyGRUB, and your domU should be ready to go.

After NetBSD's booted, if you want to change the bootloader configuration, you can mount the ext2 partition thus:

# mount_ext2fs /dev/xbd0d /mnt

This will allow you to upgrade the domU kernel. Just remember that, whenever you want to upgrade the kernel, you need to mount the partition that PyGRUB loads the kernel from and make sure to update that kernel and menu.lst. It would also be a good idea to install the NetBSD kernel in the usual place, in the root of the domU filesystem, but it isn't strictly necessary.
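For example, a kernel upgrade from within the running domU might look something like this sketch. The mirror URL is the one used earlier; the exact locations of menu.lst and the kernel inside the boot partition are assumptions, so check them before overwriting anything:

# mount_ext2fs /dev/xbd0d /mnt
# ftp -o /mnt/netbsd-XEN3_DOMU.gz http://mirror.planetunix.net/pub/NetBSD/NetBSD-5.0/amd64/binary/kernel/netbsd-XEN3_DOMU.gz
# gunzip /mnt/netbsd-XEN3_DOMU.gz
# vi /mnt/grub/menu.lst
# umount /mnt

Then make sure the "NetBSD run" entry in menu.lst points at the new kernel name if it changed.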

And there you have it: a complete, fully functional NetBSD domU, without any intervention from the dom0 at all. (If you have dom0 access, you can specify the install kernel on the kernel= line of the domain config file in the usual way, but what would be the fun of that?)

[50] More accurately, of course, your GRUB simulator. If this is PyGRUB, it relies on libfsimage.

Beyond Paravirtualization: HVM

In this chapter, we have outlined the general steps necessary to use Solaris and NetBSD as both dom0 and domU operating systems. This isn't meant to exhaustively list the operating systems that work with Xen (in particular, we haven't mentioned Plan 9 or FreeBSD at all), but it does give you a good idea of the sort of differences that you might encounter and easy recipes for using at least two systems other than Linux.

Furthermore, they each have their own advantages: NetBSD is a very lightweight operating system, much better than Linux at handling low-memory conditions. This comes in handy with Xen. Solaris isn't as light, but it is extremely robust and has interesting technologies, such as ZFS. Both of these OSs can support any OS as domU, as long as it's been modified to work with Xen. That's virtualization in action, if you like.

Note: The new Linux paravirt_ops functionality that is included in the kernel.org kernels requires Xen hypervisor version 3.3 or later, so it works with NetBSD but not OpenSolaris.

Finally, the addition of hardware virtualization extensions to recent processors means that virtually any OS can be used as a domU, even if it hasn't been modified specifically to work with Xen. We discuss Xen's support for these extensions in Chapter 12 and then describe using HVM to run Windows under Xen in Chapter 13. Stay tuned.

Chapter 9. XEN MIGRATION

In these situations the combination of virtualization and migration significantly improves manageability.
    -Clark et al., "Live Migration of Virtual Machines"

So let's review: Xen, in poetic terms, is an abstraction, built atop other abstractions, wrapped around still further abstractions. The goal of all this abstraction is to ensure that you, in your snug and secure domU, never even have to think about the messy, noisy, fallible hardware that actually sends electrical pulses out the network ports.

Of course, once in a while the hardware becomes, for reasons of its own, unable to run Xen. Perhaps it's overloaded, or maybe it needs some preventive maintenance. So long as you have advance warning, even this need not interrupt your virtual machine. One benefit of the sort of total hardware independence offered by Xen is the ability to move an entire virtual machine instance to another machine and transparently resume operation, a process referred to as migration.

Xen migration transfers the entire virtual machine: the in-memory state of the kernel, all processes, and all application states. From the user's perspective, a live migration isn't even noticeable; at most, a few packets are dropped. This has the potential to make scheduled downtime a thing of the past. (Unscheduled downtime, like death and taxes, shows every sign of being inescapable.[51]) Migration may be either live or cold,[52] with the distinction based on whether the instance is running at the time of migration. In a live migration, the domain continues to run during transfer, and downtime is kept to a minimum. In a cold migration, the virtual machine is paused, saved, and sent to another physical machine.

In either of these cases, the saved machine will expect its IP address and ARP cache to work on the new subnet. This is no surprise, considering that the in-memory state of the network stack persists unchanged. Attempts to initiate live migration between different layer 2 subnets will fail outright. Cold migration between different subnets will work, in that the VM will successfully transfer but will most likely need to have its networking reconfigured. We'll mention these characteristics again later in our discussion of live migration.

First, though, let's examine a basic, manual method for moving a domain from one host to another.

Migration for Troglodytes

The most basic, least elegant way to move a Xen instance from one physical machine to another is to stop it completely, move its backing storage, and re-create the domain on the remote host. This requires a full shutdown and reboot cycle for the VM. It isn't even "migration" in the formal Xen sense, but you may find it necessary if, for example, you need to change out the underlying block device or if certain machine-specific attributes change, say, if you're moving a VM between different CPU architectures or from a machine that uses PAE to one that doesn't.[53]

Begin by shutting down the virtual machine normally, either from within the operating system or by doing an xm shutdown from the dom0. Copy its backing store, kernel image (if necessary), and config file over, and finally xm create the machine as usual on the new host.
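A minimal sketch of that sequence, assuming an LVM-backed domU named mydomU with its config at /etc/xen/mydomU.cfg and an identically sized volume already created on the new host (all of these names are assumptions), might be:

# xm shutdown -w mydomU
# scp /etc/xen/mydomU.cfg newhost:/etc/xen/
# dd if=/dev/vg0/mydomU bs=1M | ssh newhost "dd of=/dev/vg0/mydomU bs=1M"
# ssh newhost "xm create /etc/xen/mydomU.cfg"

The -w flag makes xm wait until the shutdown completes before returning, so you don't copy a disk that's still being written.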

It's primitive, but at least it's almost certain to work and doesn't require any sort of complex infrastructure. We mention it mostly for completeness; this is a way to move a Xen domain from one physical machine to another.

[51] Maybe not; see Project Kemari or Project Remus at http://www.osrg.net/kemari/ and http://dsg.cs.ubc.ca/remus/ for work being done on adding hardware redundancy to Xen.

[52] We also like the terms hot and dead, which are the less commonly used parallels of the more common terms.

[53] For example, NetBurst (Pentium 4 and friends) to Core (Core 2 et al.). Xen offers no ability to move a VM from, say, x86 to PPC.

Migration with xm save and xm restore

This "cowboy" method aside, all forms of migration are based on the basic idea of saving the domain on one machine and restoring it on another. You can do this manually using the xm save and xm restore commands, simulating the automatic process.

The Xen documentation likens the xm save and restore cycle to hibernation on a physical machine. When a machine hibernates, it enters a power-saving mode that saves the memory image to disk and physically powers off the machine. When the machine turns on again, the operating system loads the saved memory image from the disk and picks up where it left off. xm save behaves exactly the same way. Just like with physical hibernation, the saved domain drops its network connections, takes some time to pause and resume, and consumes no CPU or memory until it is restored.

Even if you're not planning to do anything fancy involving migration, you may still find yourself saving machines when the physical Xen server reboots. Xen includes an init script to save domains automatically when the system shuts down and restore them on boot. To accommodate this, we suggest making sure that /var is large enough to hold the complete contents of the server's memory (in addition to logs, DNS databases, etc.).
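As a rough sanity check, you can compare the dom0's view of total memory against the free space where savefiles go. The /var/lib/xen/save path below is typical of the xendomains init script on Red Hat-style distributions, but it's an assumption; check your distro's script:

# xm info | grep total_memory
# df -h /var/lib/xen/save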

To save the machine, issue:

# xm save <domain> <savefile>

This command tells the domain to suspend itself; the domain releases its resources back to domain 0, detaches its interrupt handlers, and converts its physical memory mappings back to domain-virtual mappings (because the physical memory mappings will almost certainly change when the domain is restored).

Note: Those of you who maintain a constant burning focus on implementation will notice that this implies domU OS-level support for Xen. HVM save and restore, that is, when the guest can't be counted on to be Xen-aware, are done slightly differently. See Chapter 12 for details.

At this point, domain 0 takes over, stops the domU, and checkpoints the domain state to a file. During this process it makes sure that all memory page references are canonical (that is, domain virtual, because references to machine memory pages will almost certainly be invalid on restore). Then it writes the contents of pages to disk, reclaiming pages as it goes.

After this process is complete, the domain has stopped running. The entire contents of its memory are in a savefile approximately the size of its memory allocation, which you can restore at will. In the meantime, you can run other domains, reboot the physical machine, back up the domain's virtual disks, or do whatever else required you to take the domain offline in the first place.

Note: Although xm save ordinarily stops the domain while saving it, you can also invoke it with the -c option, for checkpoint. This tells xm to leave the domain running. It's a bit complex to set up, though, because you also need some way to snapshot the domain's storage during the save. This usually involves an external device migration script.

When that's done, restoring the domain is easy:

# xm restore <savefile>

Restoration operates much like saving in reverse; the hypervisor allocates memory for the domain, writes out pages from the savefile to the newly allocated memory, and translates shadow page table entries to point at the new physical addresses. When this is accomplished, the domain resumes execution, reinstates everything it removed when it suspended, and begins functioning as if nothing happened.
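Putting the two together, a save-and-restore round trip on a single host might look like this (the savefile path is hypothetical; "orlando" is the domain used in the examples later in this chapter):

# xm save orlando /var/lib/xen/save/orlando.save
# xm list
# xm restore /var/lib/xen/save/orlando.save

The xm list in the middle simply confirms that the domain has disappeared while it's saved; after the restore it reappears and picks up where it left off.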

Note: The savefile remains intact; if something goes wrong with the restarted machine, you can restore the savefile and try again.

This ability to save and restore on the local machine works as the backbone of the more complex forms of migration supported by Xen.

Cold Migration

Before we get into Xen's automated migration, we'll give an outline of a manual cold migration process that approximates the flow of live migration to get an idea of the steps involved.

In this case, migration begins by saving the domain. The administrator manually moves the save file and the domain's underlying storage over to the new machine and restores the domain state. Because the underlying block device is moved over manually, there's no need to have the same filesystem accessible from both machines, as would be necessary for live migration. All that matters is transporting the content of the Xen virtual disk.

Here are some steps to cold migrate a Xen domain:

# xm save <domain> <savefile>
# scp <savefile> <target host>:

Perform the appropriate steps to copy the domain's storage to the target computer: rsync, scp, dd piped into ssh, whatever floats your boat. Whatever method you choose, ensure that it copies the disk in such a way that it is bit-for-bit the same and has the same path on both physical machines. In particular, do not mount the domU filesystem on machine A and copy its files over to the new domU filesystem on machine B. This will cause the VM to crash upon restoration.
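For instance, with LVM-backed storage, one way to do the block-level copy is dd piped into ssh. The volume group and volume names here are assumptions, and the target volume must already exist and be at least as large as the source:

# dd if=/dev/vg0/mydomU bs=1M | ssh targethost "dd of=/dev/vg0/mydomU bs=1M"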

Finally, restart the domain on the new machine:

# xm restore <savefile>

There's no need to copy the domain config file over to the new machine; the savefile contains all the configuration information necessary to start the machine. Conversely, this also means that you can't change the parameters of the machine between save and restore and expect that to have any effect at all.[54]

[54] To forestall the inevitable question, we did try using a hex editor on the savefile. The result was an immediate crash.

Live Migration

Cold migration has its place, but one of the absolute neatest features of Xen is the ability to move a domain from one physical machine to another transparently, that is, imperceptibly to the outside world. This feature is live migration.

As with cold migration, live migration transfers the domain's configuration as part of its state; it doesn't require the administrator to manually copy over a config file. Manual copying is, in fact, not required at all. Of course, you will still need the config file if you want to recreate the domain from scratch on the new machine.

Live migration has some extra prerequisites. It relies on the domain's storage being accessible from both machines and on the machines being on the same subnet. Finally, because the copy phase occurs automatically over the network, the machines must run a network service (the xend relocation server, described later in this chapter).

How It Works

We would really like to say that live migration works by magic. In reality, however, it works by the application of sufficiently advanced technology.

Live migration is based on the basic idea of save and restore only in the most general sense. The machine doesn't hibernate until the very last phase of the migration, and it comes back out of its virtual hibernation almost immediately.

As shown in Figure 9-1, Xen live migration begins by sending a request, or reservation, to the target specifying the resources the migrating domain will need. If the target accepts the request, the source begins the iterative precopy phase of migration. During this step, Xen copies pages of memory over a TCP connection to the destination host. While this is happening, pages that change are marked as dirty and then recopied. The machine iterates this until only very frequently changed pages remain, at which point it begins the stop and copy phase. Now Xen stops the VM and copies over any pages that change too frequently to copy efficiently during the previous phase. In practice, our testing suggests that Xen usually reaches this point after four to eight iterations. Finally the VM starts executing on the new machine.

By default, Xen will iterate up to 29 times and stop if the number of dirty pages falls below a certain threshold. You can specify this threshold and the number of iterations at compile time, but the defaults should work fine.

Figure 9-1. Overview of live migration

Making Xen Migration Work

First, note that migration won't work unless the domain is using some kind of network-accessible storage, as described later in this chapter. If you haven't got such a thing, set that up first and come back when it's done.

Second, xend has to be set up to listen for migration requests on both physical machines. Note that both machines need to listen; if only the target machine has the relocation server running, the source machine won't be able to shut down its Xen instance at the correct time, and the restarted domain will reboot as if it hadn't shut down cleanly.

Enable the migration server by uncommenting the following in /etc/xend-config.sxp:

(xend-relocation-server yes)

This will cause xend to listen for migration requests on port 8002, which can be changed with the (xend-relocation-port) directive. Note that this is somewhat of a security risk. You can mitigate this to some extent by adding lines like the following:

(xend-relocation-address 192.168.1.1)
(xend-relocation-hosts-allow '^localhost$' '^host.example.org$')

The xend-relocation-address line confines xend to listen for migration requests on that address so that you can restrict migration to, for example, an internal subnet or a VPN. The second line specifies a list of hosts to allow migration from as a space-separated list of quoted regular expressions. Although the idea of migrating from the localhost seems odd, it does have some value for testing. Xen migration to and from other hosts will operate fine without localhost in the allowed-hosts list, so feel free to remove it if desired.
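After editing the file, restart xend and confirm that the relocation server is actually listening. The restart command below is a common sketch; the exact service invocation varies by distribution:

# /etc/init.d/xend restart
# netstat -lnt | grep 8002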

On distributions that include a firewall, you'll have to open port 8002 (or another port that you've specified using the xend-relocation-port directive). Refer to your distro's documentation if necessary.
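On an iptables-based firewall, a rule along these lines is one sketch; the source subnet is an assumption and should match wherever your dom0s live, and on RHEL/CentOS you'd also want to persist it in /etc/sysconfig/iptables:

# iptables -I INPUT -p tcp --dport 8002 -s 192.168.1.0/24 -j ACCEPT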

With live migration, Xen can maintain network connections while migrating so that clients don't have to reconnect. The domain, after migration, sends an unsolicited ARP (Address Resolution Protocol) reply to advertise its new location. (Usually this will work. In some network configurations, depending on your switch configuration, it'll fail horribly. Test it first.) The migrating instance can only maintain its network connections if it's migrating to a machine on the same physical subnet because its IP address remains the same.

The commands are simple:

# xm migrate --live <domain> <destination host>

The domain's name in xm list changes to migrating-[domain] while the VM copies itself over to the remote host. At this time it also shows up in the xm list output on the target machine. On our configuration, this copy and run phase took around 1 second per 10MB of domU memory, followed by about 6 seconds of service interruption.
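For instance, migrating the domain shown in the listings below to a second host might look like this (the destination hostname is hypothetical):

# xm migrate --live orlando target.example.com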

Note: If you, for whatever reason, want the migration to take less total time (at the expense of greater downtime), you can eliminate the repeated incremental copies by simply removing the --live option:

# xm migrate <domain> <destination host>

This automatically stops the domain, saves it as normal, sends it to the destination machine, and restores it. Just as with --live, the final product is a migrated domain.

Here's a domain list on the target machine while the migration is in process. Note that the memory usage goes up as the migrating domain transfers more data:

Name           ID  Mem(MiB)  VCPUs   State   Time(s)
Domain-0        0      1024      8   r-----    169.2
orlando         3       307      0   -bp---      0.0

About 30 seconds later, the domain's transferred a few hundred more MB:

Name           ID  Mem(MiB)  VCPUs   State   Time(s)
Domain-0        0      1024      8   r-----    184.8
orlando         3       615      0   -bp---      0.0

Another 30 seconds further on, the domain's completely transferred and running:

Name           ID  Mem(MiB)  VCPUs   State   Time(s)
Domain-0        0      1024      8   r-----    216.0
orlando         3      1023      1   -b----      0.0

We also pinged the domain as it was migrating. Note that response times go up dramatically while the domain moves its data:

PING (69.12.128.195) 56(84) bytes of data.

64 bytes from 69.12.128.195: icmp_seq=1 ttl=56 time=15.8 ms
64 bytes from 69.12.128.195: icmp_seq=2 ttl=56 time=13.8 ms
64 bytes from 69.12.128.195: icmp_seq=3 ttl=56 time=53.0 ms
64 bytes from 69.12.128.195: icmp_seq=4 ttl=56 time=179 ms
64 bytes from 69.12.128.195: icmp_seq=5 ttl=56 time=155 ms
64 bytes from 69.12.128.195: icmp_seq=6 ttl=56 time=247 ms
64 bytes from 69.12.128.195: icmp_seq=7 ttl=56 time=239 ms

After most of the domain's memory has been moved over, there's a brief hiccup as the domain stops, copies over its last few pages, and restarts on the destination host:

64 bytes from 69.12.128.195: icmp_seq=107 ttl=56 time=14.2 ms
64 bytes from 69.12.128.195: icmp_seq=108 ttl=56 time=13.0 ms
64 bytes from 69.12.128.195: icmp_seq=109 ttl=56 time=98.0 ms
64 bytes from 69.12.128.195: icmp_seq=110 ttl=56 time=15.4 ms
64 bytes from 69.12.128.195: icmp_seq=111 ttl=56 time=14.2 ms

--- 69.12.128.195 ping statistics ---
111 packets transmitted, 110 received, 0% packet loss, time 110197ms
rtt min/avg/max/mdev = 13.081/226.999/382.360/101.826 ms

At this point the domain is completely migrated.

However, the migration tools don't make any guarantees that the migrated domain will actually run on the target machine. One common problem occurs when migrating from a newer CPU to an older one. Because the guest kernel decides which CPU instructions to use at boot time, it's quite possible for the migrated kernel to attempt to execute instructions that simply no longer exist.

For example, the sfence instruction is used to explicitly serialize out-of-order memory writes; any writes issued before sfence must complete before writes after the fence. This instruction is part of SSE, so it isn't supported on all Xen-capable machines. A domain started on a machine that supports sfence will try to keep using it after migration, and it'll crash in short order. This may change in upcoming versions of Xen, but at present, all production Xen environments that we know of migrate only between homogeneous hardware.
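One quick, if imperfect, way to spot such mismatches before migrating is to compare CPU feature flags on the two dom0s. This only shows what the dom0 kernel sees, but it catches obvious gaps such as a missing sse flag:

# grep flags /proc/cpuinfo | sort -u

Run it on both hosts and diff the output.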

Migrating Storage

Live migration only copies the RAM and processor state; ensuring that the migrated domain can access its disk is up to the administrator. As such, the storage issue boils down to a question of capabilities. The migrated domain will expect its disks to be exactly consistent and to retain the same device names on the new machine as on the old machine. In most cases, that means the domU, to be capable of migration, must pull its backing storage over the network. Two popular ways to attain this in the Xen world are ATA over Ethernet (AoE) and iSCSI. We also discussed NFS in Chapter 4. Finally, you could just throw a suitcase of money at NetApp.

There are a lot of options beyond these; you may also want to consider cLVM (with some kind of network storage enclosure) and DRBD.

With all of these storage methods, we'll discuss an approach that uses a storage server to export a block device to a dom0, which then makes the storage available to a domU.

Note that both iSCSI and AoE limit themselves to providing simple block devices. Neither allows multiple clients to share the same filesystem without filesystem-level support! This is an important point. Attempts to export a single ext3 filesystem and run domUs out of file-backed VBDs on that filesystem will cause almost immediate corruption. Instead, configure your network storage technology to export a block device for each domU. However, the exported devices don't have to correspond to physical devices; we can as easily export files or LVM volumes.
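For example, on an LVM-based storage server you might carve one logical volume per domU and export each one separately (the volume group name, volume names, and sizes are assumptions):

# lvcreate -L 10G -n orlando-disk vg0
# lvcreate -L 10G -n boston-disk vg0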

ATA over Ethernet

ATA over Ethernet is easy to set up, reasonably fast, and popular. It's not routable, but that doesn't really matter in the context of live migration because live migration always occurs within a layer 2 broadcast domain.

People use AoE to fill the same niche as a basic SAN setup: to make centralized storage available over the network. It exports block devices that can then be used like locally attached disks. For the purposes of this example, we'll export one block device via AoE for each domU.

Let's start by setting up the AoE server. This is the machine that exports disk devices to dom0s, which in their turn host domUs that rely on the devices for backing storage. The first thing you'll need to do is make sure that you've got the kernel AoE driver, which is located in the kernel configuration at:

Device Drivers --->
    Block devices --->
        <*> ATA over Ethernet support

You can also make it a module (m). If you go that route, load the module:

# modprobe aoe

Either way, make sure that you can access the device nodes under /dev/etherd. They should be created by udev. If they aren't, try installing the kernel source and running the Documentation/aoe/udev-install.sh script that comes in the kernel source tree. This script will generate rules and place them in an appropriate location, in our case /etc/udev/rules.d/50-udev.rules. You may need to tune these rules for your udev version. The configurations that we used on CentOS 5.3 were:

SUBSYSTEM=="aoe", KERNEL=="discover", NAME="etherd/%k", GROUP="disk", MODE="0220"
SUBSYSTEM=="aoe", KERNEL=="err", NAME="etherd/%k", GROUP="disk", MODE="0440"
SUBSYSTEM=="aoe", KERNEL=="interfaces", NAME="etherd/%k", GROUP="disk", MODE="0220"
SUBSYSTEM=="aoe", KERNEL=="revalidate", NAME="etherd/%k", GROUP="disk", MODE="0220"

# aoe block devices
KERNEL=="etherd*", NAME="%k", GROUP="disk"

AoE also requires some support software. The server package is called vblade and can be obtained from http://aoetools.sourceforge.net/. You'll also need the client tools aoetools on both the server and client machines, so make sure to get those.

First, run the aoe-interfaces command on the storage server to tell vblade what interfaces to export on:

# aoe-interfaces <ethN>

vblade can export most forms of storage, including SCSI, MD, or LVM. Despite the name ATA over Ethernet, it's not limited to exporting ATA devices; it can export any seekable device file or any ordinary filesystem image. Just specify the filename on the command line. (This is yet another instance where UNIX's everything is a file philosophy comes in handy.)

Although vblade has a configuration file, it's simple enough to specify the options on the command line. The syntax is:

# vblade <shelf> <slot> <ethN> <device>

So, for example, to export a file:

# dd if=/dev/zero of=/path/file.img bs=1024M count=1
# vblade 0 0 <ethN> /path/file.img &

This exports /path/file.img as /dev/etherd/e0.0.

Note: For whatever reason, the new export is not visible from the server. The AoE maintainers note that this is not actually a bug because it was never a design goal.

AoE may expect the device to have a partition table, or at least a valid partition signature. If necessary, you can partition it locally by making a partition that spans the entire disk:

# losetup /dev/loop0 test.img
# fdisk /dev/loop0

When you've done that, make a filesystem and detach the loop:

# mkfs /dev/loop0
# losetup -d /dev/loop0

Alternately, if you want multiple partitions on the device, fdisk the device and create multiple partitions as normal. The new partitions will show up on the client with names like /dev/etherd/e0.0p1. To access the devices from the AoE server, performing kpartx -a on an appropriately set up loop device should work.

Now that we've got a functional server, let's set up the client. Large chunks of the AoE client are implemented as a part of the kernel, so you'll need to make sure that AoE's included in the dom0 kernel just as with the storage server. If it's a module, you'll most likely want to ensure it loads on boot. If you're using CentOS, you'll probably also need to fix your udev rules, again just as with the server.

Since we're using the dom0 to arbitrate the network storage, we don't need to include the AoE driver in the domU kernel. All Xen virtual disk devices are accessed via the domU xenblk driver, regardless of what technology they're using for storage.[55]

Download aoetools from your distro's package management system or http://aoetools.sourceforge.net/. If necessary, build and install the package.
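Once the module and tools are in place on the dom0, a hedged sketch of attaching an exported device and handing it to a domU looks like this; the shelf and slot numbers match the vblade example above, while the domU device name is an assumption:

# modprobe aoe
# aoe-discover
# aoe-stat

The export should show up as e0.0, and the domU config file can then reference it like any other physical device:

disk = [ 'phy:/dev/etherd/e0.0,xvda,w' ]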
