The Book of Xen Part 21

Appendix B. THE STRUCTURE OF THE XEN CONFIG FILE

The domain config file is the conventional way to define a Xen domain (and the method that we've used throughout this book). It works by specifying Python variables in a config file, conventionally kept in /etc/xen/. When the domain is created, xend executes this file and uses it to set variables that will eventually control the output of the domain builder.

Note also that you can override values in the config file from the xm command line. For example, to create the domain coriolanus with a different name:

xm create coriolanus name=menenius

The config file (and it would be difficult to overstate this point) is executed as a standard Python script. Thus, you can embed arbitrary Python in the config file, making it easy to autogenerate configurations based on external constraints. You can see a simple example of this in the example HVM config shipped with Xen, /etc/xen/xmexample.hvm. In this case, the library path is selected based on the processor type (i386 or x86_64).

The xmexample2 file takes this technique even further, using a single config file to handle many domains, which are differentiated by a passed-in vmid variable.
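To make the xmexample2 technique concrete, here is a minimal sketch of a vmid-driven config. The volume group and naming scheme are invented for illustration, and xmexample2 itself does more validation; this only shows the shape of the approach.

```python
# A sketch of a single config file that serves many domains, keyed by
# a vmid variable passed on the command line, e.g.:
#   xm create /etc/xen/myconfig vmid=3
# xmexample2 raises an error if vmid is unset; for this sketch we
# simply default it to 1.
try:
    vmid
except NameError:
    vmid = 1
vmid = int(vmid)

# Derive per-domain values from the single vmid parameter.
name = "domain-%d" % vmid
memory = 64
disk = ['phy:/dev/vg0/domain-%d-root,sda1,w' % vmid]  # invented VG name
vif = ['mac=00:16:3e:00:00:%02x' % vmid]  # 00:16:3e is Xen's OUI
```

With vmid=1, this yields the name domain-1 and a matching disk and MAC; every domain on the host can then share one config file.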

Python in the config file isn't limited to domain configuration, either. If you're using Xen for hosting, for example, we might suggest tying the domain configuration to the billing and support-ticketing systems, using some Python glue to keep them in sync. By embedding this logic in the config files, or in a separate module included by the config files, you can build a complex infrastructure around the Xen domains.
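As a purely illustrative sketch of that idea (the customer table and plan values here are invented; a real setup would query its billing database or ticketing system rather than an inline dict):

```python
# Look up per-customer resource limits from an external data source.
# Here that source is an inline dict for illustration only; in
# production it could be a database query or an API call to a
# billing system, imported from a shared Python module.
customer_plans = {
    'coriolanus': {'memory': 512, 'vcpus': 2},
    'menenius':   {'memory': 256, 'vcpus': 1},
}

name = 'coriolanus'
plan = customer_plans.get(name, {'memory': 128, 'vcpus': 1})

memory = plan['memory']
vcpus = plan['vcpus']
kernel = '/boot/linux-2.6-xen'
```

Because the config file is ordinary Python, changing a customer's plan in the external source changes the domain's resources on its next restart, with no edits to the config file itself.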



First, let's start with the basic elements of a domain configuration. Here's a basic config file, specifying the VM name, kernel image, three network cards, a block device, and a kernel parameter:

name="coriolanus"

kernel="/boot/linux-2.6-xen"

vif=['','','']

disk=['phy:/dev/corioles/coriola.n.u.s-root,sda,rw']

root="/dev/sda ro"

Here we're setting some variables (name, kernel, disk, and so on) to strings or lists. You can easily identify the lists because they're enclosed in square brackets.

String quoting follows the standard Python conventions: single and double quotes both delimit ordinary strings (Python, unlike the shell, treats them identically), and three quote characters begin and end a multiline string.
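To make the quoting conventions concrete, here's a fragment showing each style (the values are placeholders, not a working domain config):

```python
# Single- and double-quoted strings are interchangeable; pick
# whichever avoids escaping inside the string.
name = 'coriolanus'
kernel = "/boot/linux-2.6-xen"

# A triple-quoted string spans multiple lines.
extra = '''console=hvc0
root=/dev/sda ro'''

# Lists, as used by disk= and vif=, are enclosed in square brackets.
disk = ['phy:/dev/corioles/coriolanus-root,sda,rw']
```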

Whitespace has significance just as in standard Python: newlines are significant, and spacing doesn't matter except when used as indentation.

Note Although these syntax rules are usually true, some external tools that parse the config file may have stricter rules. pypxeboot is an example.

Here's another, more complex example, with an NFS root. In addition, we'll specify a couple of parameters for the vif:

name="coriolanus"

kernel="/boot/linux-2.6-xen"

initrd="/boot/initrd-xen-domU"

memory=256

vif=['mac=08:de:ad:be:ef:00,bridge=xenbr0','mac=08:de:ad:be:ef:01,bridge=xenbr1']

netmask='255.255.255.0'

gateway='192.168.2.1'

ip='192.168.2.47'

broadcast='192.168.2.255'

root="/dev/nfs"

nfs_server='192.168.2.42'

nfs_root='/export/domains/coriolanus'

Note Your kernel must have NFS support, and your kernel or initrd needs to include xennet for this to work.

Finally, HVM domains take some other options. Here's a config file that we might use to install an HVM FreeBSD domU.

import os, re

arch = os.uname()[4]

if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

kernel = "/usr/lib/xen/boot/hvmloader"

builder='hvm'

memory=1024

name="coriolanus"

vcpus=1

pae=1

acpi=0

vif=['type=ioemu,bridge=xenbr0']

disk=['phy:/dev/corioles/coriolanus_root,hda,w', 'file:/root/8.0-CURRENT-200809-i386-disc1.iso,hdc:cdrom,r']

device_model='/usr/'+arch_libdir+'/xen/bin/qemu-dm'

boot="cd"

vnc=1

vnclisten="192.168.1.102"

serial='pty'

Here we've added options to specify the QEMU-based backing device model and to control certain aspects of its behavior. Now we pass in a boot option that tells it to boot from CD, and options for a virtual framebuffer and serial device.

List of Directives

Here we've tried to list every directive we know about, whether we use it or not, with notes indicating where we cover it in the main text of this book. We have, however, left out stuff that's marked deprecated as of Xen version 3.3.

There are some commands that work with the Xen.org version of Xen but not with the version of Xen included with Red Hat Enterprise Linux/CentOS 5.x. We've marked these with an asterisk (*).

Any Boolean parameters expect values of true or false; 0, 1, yes, and no will also work.

bootargs=string This is a list of arguments to pass to the boot loader. For example, to tell PyGRUB to load a particular kernel image, you can specify bootargs='kernel=vmlinuz-2.6.24'.

bootloader=string The bootloader line specifies a program that will be run within dom0 to load and initialize the domain kernel. For example, you can specify bootloader=pygrub to get a domain that, on startup, presents a GRUB-like boot menu. We discuss PyGRUB and pypxeboot in Chapter 7 and Chapter 3.

builder=string This defaults to "Linux", which is the paravirtualized Linux (and other Unix-like OSs) domain builder. Ordinarily you will either leave this option blank or specify HVM. Other domain builders are generally regarded as historical curiosities.

cpu_cap=int *

This specifies a maximum share of the CPU time for the domain, expressed in hundredths of a CPU.

cpu=int This option specifies the physical CPU that the domain should run VCPU0 on.

cpu_weight=int *

This specifies the domain's weight for the credit scheduler, just like the xm sched-credit -w command. For example, cpu_weight = 1024 will give the domain four times as much weight as the default of 256. We talk more about CPU weight in Chapter 7.

cpus=string The cpus option specifies a list of CPUs that the domain may use. The syntax of the list is fairly expressive. For example, cpus = "0-3,5,^1" specifies CPUs 0, 2, 3, and 5 while excluding CPU 1.
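To see how that syntax decomposes, here is our own sketch of expanding such a string. This is not Xen's actual parser, just an illustration of the semantics: comma-separated terms, ranges with a hyphen, and exclusions with a caret.

```python
def expand_cpus(spec):
    """Expand a Xen-style CPU list such as "0-3,5,^1" into a sorted
    list of CPU numbers. Terms are single CPUs ("5"), ranges ("0-3"),
    and exclusions ("^1"). A sketch for illustration only."""
    include, exclude = set(), set()
    for term in spec.split(','):
        term = term.strip()
        target = include
        if term.startswith('^'):
            target = exclude
            term = term[1:]
        if '-' in term:
            lo, hi = term.split('-')
            target.update(range(int(lo), int(hi) + 1))
        else:
            target.add(int(term))
    return sorted(include - exclude)
```

For instance, expand_cpus("0-3,5,^1") returns [0, 2, 3, 5], matching the example above.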

dhcp=bool This directive is only needed if the kernel is getting its IP at boot, usually because you're using an NFS root device. Ordinary DHCP is handled from within the domain by standard userspace daemons, and so the dhcp directive is not required.

disk=list The disk line specifies one (or more) virtual disk devices. Almost all domains will need at least one, although it's not a requirement as far as Xen's concerned. Each definition is a stanza in the list, each of which has at least three terms: backend device, frontend device, and mode. We go into considerably more detail on the meaning of these terms and the various types of storage in Chapter 4.

extra=string The extra option specifies a string that is appended, unchanged, to the domU kernel options. For example, to boot the domU in single user mode:

extra="s"

Many of the other options listed here actually append to the kernel command-line options.

hpet This option enables a virtual high-precision event timer.

kernel=string This option specifies the kernel image that Xen will load and boot. It is required if no bootloader line is specified. Its value should be the absolute path to the kernel, from the dom0's perspective, unless you've also specified a bootloader. If you're using a bootloader and specify a kernel, the domain creation script will pass the kernel value to the bootloader for further action. For example, PyGRUB will try to load the specified file from the boot media.

maxmem=int This specifies the amount of memory given to the domU. From the guest's perspective, this is the amount of memory plugged in when it boots.

memory=int This is the target memory allocation for the domain. If maxmem isn't specified, the memory= line will also set the domain's maximum memory. Because we don't oversubscribe memory, we use this directive rather than maxmem. We go into a little more detail on memory oversubscription in Chapter 14.

name=string This is a unique name for the domain. Make it whatever you like, but we recommend keeping it under 15 characters, because Red Hat's (and possibly other distros') xendomains script has trouble with longer names. This is one of the few non-optional directives. Every domain needs a name.

nfs_root=path, nfs_server=IP These two arguments are used by the kernel when booting via NFS. We describe setting up an NFS root in Chapter 4.

nics=int This option is deprecated, but you may see it referenced in other documentation. It specifies the number of virtual NICs allocated to the domain. In practice, we always just rely on the number of vif stanzas to implicitly declare the NICs.

on_crash=string, on_reboot=string, on_shutdown=string These three commands control how the domain will react to various halt states: on_shutdown for graceful shutdowns, on_reboot for graceful reboot, and on_crash for when the domain crashes. Allowed values are:

destroy: Clean up after the domain as usual.

restart: Restart the domain.

preserve: Keep the domain as-is until you destroy it manually.

rename-restart: Preserve the domain, while re-creating another instance with a different name.
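For example, a config file might set all three explicitly (the particular choices here are an illustration, not a recommendation):

```python
# Restart on graceful reboots, clean up on graceful shutdowns,
# and keep a crashed domain around for post-mortem inspection.
on_reboot = 'restart'
on_shutdown = 'destroy'
on_crash = 'preserve'
```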

on_xend_start=ignore|start, on_xend_stop=ignore|shutdown|suspend Similarly, these two items control how the domain will react to xend exiting. Because xend sometimes needs to be restarted, and we prefer to minimize disruption of the domUs, we leave these at the default: ignore.

pci=BUS:DEV.FUNC This adds a PCI device to the domain using the given parameters, which can be found with lspci in the dom0. We give an example of PCI forwarding in Chapter 14.
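As a quick sketch (the address here is made up; substitute one from your own lspci output):

```python
# Forward the PCI device at bus 01, slot 00, function 0 to the
# domain; the BUS:DEV.FUNC address comes from lspci in the dom0.
pci = ['01:00.0']
```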

ramdisk=string The ramdisk option functions like the initrd line in GRUB; it specifies an initial ramdisk, which usually contains drivers and scripts used to access hardware required to mount the root filesystem.

Many distros won't require an initrd when installed as domUs, because the domU only needs drivers for extremely simple virtual devices. However, because the distro expects to have an initrd, it's often easier to create one. We go into more detail on that subject in Chapter 14.

root=string This specifies the root device for the domain. We usually specify the root device on the extra line.

rtc_offset The rtc_offset option allows you to specify an offset from the machine's real-time clock for the guest domain.

sdl=bool Xen supports an SDL console as well as the VNC console, although not both at the same time. Set this option to true to enable a framebuffer console over SDL. Again, we prefer the vfb syntax.

shadow_memory=int This is the domain shadow memory in MB. PV domains will default to none. Xen uses shadow memory to keep copies of domain-specific page tables. We go into more detail on the role of page table shadows in Chapter 12.

uuid=string The XenStore requires a UUID to, as the name suggests, uniquely identify a domain. If you don't specify one, it'll be generated for you. The odds of collision are low enough that we don't bother, but you may find it useful if, for example, you want to encode additional information into your UUID.

vcpu_avail=int These are active VCPUs. If you're using CPU hotplugging, this number may differ from the total number of VCPUs, just as maxmem and memory may differ.

vcpus=int This specifies the number of virtual CPUs to report to the domain. For performance reasons, we strongly recommend that this be equal to or fewer than the number of physical CPU cores that the domain has available.

vfb=list The vfb directive configures a virtual framebuffer, for example:

vfb=['type=vnc,vncunused=1']

In this case, we specify a VNC virtual framebuffer, which uses the first unused port in the VNC range. (The default behavior is to use the base VNC port plus domain ID as the listen port for each domain's virtual framebuffer.)

Valid options for the vfb line are vnclisten, vncunused, vncdisplay, display, videoram, xauthority, type, vncpasswd, opengl, and keymap. We discuss virtual framebuffers in more detail in Chapter 14 and a bit in Chapter 12. See the vnc= and sdl= options for an alternative syntax.

videoram=int The videoram option specifies the maximum amount of memory that a PV domain may use for its frame buffer.

vif=list The vif directive tells Xen about the domain's virtual network devices. Each vif specification can include many options, including bridge, ip, and mac. For more information on these, see Chapter 5.

Allowable options in the vif line are backend, bridge, ip, mac, script, type, vifname, rate, model, accel, policy, and label.
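Combining a few of those options in one stanza (the MAC, bridge, and vifname values here are placeholders):

```python
# Two virtual NICs: the first with an explicit MAC (00:16:3e is
# Xen's OUI), bridge, and backend interface name; the second an
# empty stanza, leaving everything to the defaults.
vif = ['mac=00:16:3e:aa:bb:cc,bridge=xenbr0,vifname=web0',
       '']
```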

vnc=bool Set vnc to 1 to enable the VNC console. You'll also want to set some of the other VNC-related options, such as vncunused. We prefer the vfb syntax, which allows you to set options related to the vfb in a single place, with a similar syntax to the vif and disk lines.

vncconsole=bool If vncconsole is set to yes, xend automatically spawns a VNC viewer and connects to the domain console when the domain starts up.

vncdisplay=int This specifies a VNC display to use. By default, VNC will attach to the display number that corresponds to the domain ID.

vnclisten=IP This specifies an IP address on which to listen for incoming VNC connections. It overrides the value of the same name in xend-config.sxp.

vncpasswd=string, vncpasswd="Swordfish"[87]

These options set the password for the VNC console to the given value. Note that this is independent of any authentication that the domU does.

vscsi=PDEV,VDEV,DOM *

This adds a SCSI device to the domain. The paravirtualized SCSI devices are a mechanism for passing a physical SCSI generic device through to a domain. It's not meant to replace the Xen block driver. Rather, you can use pvSCSI, the SCSI pass-through mechanism, to access devices like tape drives or scanners that are hooked up to the machine's physical SCSI bus.

vtpm=['instance=INSTANCE,backend=DOM,type=TYPE']

The vtpm option, just like the vif or disk options, describes a virtual device, in this case a TPM. The TPM instance name is a simple identifier; something like 1 will do just fine. The backend is the domain with access to the physical TPM. Usually 0 is a good value. Finally, type specifies the type of the TPM emulation. This can be either pvm or hvm, for paravirtualized and HVM domains, respectively.

HVM Directives

Certain directives only apply if you're using Xen's hardware virtualization, HVM. Most of these enable or disable various hardware features.

acpi=bool The acpi option determines whether or not the domain will use ACPI, the Advanced Configuration and Power Interface. Turning it off may improve stability, and will enable some versions of the Windows installer to complete successfully.

apic=bool The APIC, or Advanced Programmable Interrupt Controller,[88] is a modern implementation of the venerable PIC. This is on by default. You may want to turn it off if your operating system has trouble with the simulated APIC.

builder=string With HVM domains, you'll use the HVM domain builder. With most paravirtualized domains, you'll want the default Linux domain builder. The domain builder is a bit lower level than the parts that we usually work with. For the most part, we are content to let it do its thing.

device_model=string The device_model directive specifies the full path of the executable being used to emulate devices for HVM domains (and for PV domains if the framebuffer is being used). In most situations, the default qemu-dm should work fine.

feature=string This is a pipe-separated list of features to enable in the guest kernel. The list of available features, fresh from the source, is as follows:

[XENFEAT_writable_page_tables]="writable_page_tables",
[XENFEAT_writable_descriptor_tables]="writable_descriptor_tables",
[XENFEAT_auto_translated_physmap]="auto_translated_physmap",
[XENFEAT_supervisor_mode_kernel]="supervisor_mode_kernel",
[XENFEAT_pae_pgdir_above_4gb]="pae_pgdir_above_4gb"

We have always had good luck using the defaults for this option.

hap=bool This directive tells the domain whether or not to take advantage of hardware-assisted paging on recent machines. Implementations include AMD's nested paging and Intel's extended page tables (EPT). If the hardware supports this feature, Xen can substantially improve HVM performance by taking advantage of it.

loader=string This is the path to HVM firmware. We've always been completely satisfied with the default.

pae=bool This enables or disables PAE on an HVM domain. Note that this won't enable a non-PAE kernel to run on a PAE or 64-bit box. This option is on by default.

Device Model Options

There are some directives that specify options for the device model. As far as we know, these are specific to the QEMU-based model, but, because no others exist, it seems safe to consider them part of Xen's configuration.

access_control_policy=POLICY,label=LABEL The access_control_policy directive defines the security policy and label to associate with the domain.


The Book of Xen, by Chris Takemura and Luke S. Crawford.