xl.cfg(5)			      Xen			     xl.cfg(5)

NAME
       xl.cfg -	xl domain configuration	file syntax

SYNOPSIS
	/etc/xen/xldomain

DESCRIPTION
       Creating	a VM (a	domain in Xen terminology, sometimes called a guest)
       with xl requires	the provision of a domain configuration	file.
       Typically, these	live in	/etc/xen/DOMAIN.cfg, where DOMAIN is the name
       of the domain.

SYNTAX
       A domain	configuration file consists of a series	of options, specified
       by using	"KEY=VALUE" pairs.

       Some "KEY"s are mandatory, some are general options which apply to any
       guest type, while others	relate only to specific	guest types (e.g. PV
       or HVM guests).

       A "VALUE" can be	one of:

       "STRING"
	   A string, surrounded	by either single or double quotes. But if the
	   STRING is part of a SPEC_STRING, the	quotes should be omitted.

       NUMBER
	   A number, in	either decimal,	octal (using a 0 prefix) or
	   hexadecimal (using a	"0x" prefix) format.

       BOOLEAN
	   A "NUMBER" interpreted as "False" (0) or "True" (any	other value).

       [ VALUE,	VALUE, ... ]
	   A list of "VALUE"s of the above types. Lists	can be heterogeneous
	   and nested.

       The semantics of	each "KEY" defines which type of "VALUE" is required.

       Pairs may be separated either by	a newline or a semicolon.  Both	of the
       following are valid:

	 name="h0"
	 type="hvm"

	 name="h0"; type="hvm"

OPTIONS
   Mandatory Configuration Items
       The following key is mandatory for any guest type.

       name="NAME"
	   Specifies the name of the domain.  Names of domains existing	on a
	   single host must be unique.

   Selecting Guest Type
       type="pv"
	   Specifies that this is to be	a PV domain, suitable for hosting Xen-
	   aware guest operating systems. This is the default on x86.

       type="pvh"
	   Specifies that this is to be a PVH domain. That is, a lightweight
	   HVM-like guest without a device model and without many of the
	   emulated devices available to HVM guests. Note that this mode
	   requires a PVH-aware kernel on x86. This is the default on Arm.

       type="hvm"
	   Specifies that this is to be	an HVM domain. That is,	a fully
	   virtualised computer	with emulated BIOS, disk and network
	   peripherals,	etc.

       Deprecated guest	type selection

       Note that the builder option is being deprecated	in favor of the	type
       option.

       builder="generic"
	   Specifies that this is to be	a PV domain, suitable for hosting Xen-
	   aware guest operating systems. This is the default.

       builder="hvm"
	   Specifies that this is to be	an HVM domain.	That is, a fully
	   virtualised computer	with emulated BIOS, disk and network
	   peripherals,	etc.

   General Options
       The following options apply to guests of	any type.

       CPU Allocation

       pool="CPUPOOLNAME"
	   Put the guest's vCPUs into the named	CPU pool.

       vcpus=N
	   Start the guest with	N vCPUs	initially online.

       maxvcpus=M
	   Allow the guest to bring up a maximum of M vCPUs. When starting the
	   guest, if vcpus=N is	less than maxvcpus=M then the first N vCPUs
	   will	be created online and the remainder will be created offline.
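
           For example, a guest that comes up with 2 vCPUs online but can be
           expanded to 4 vCPUs at run time might use:

             vcpus=2
             maxvcpus=4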

       cpus="CPULIST"
	   List	of host	CPUs the guest is allowed to use. Default is no
	   pinning at all (more	on this	below).	A "CPULIST" may	be specified
	   as follows:

	   "all"
	       To allow	all the	vCPUs of the guest to run on all the CPUs on
	       the host.

	   "0-3,5,^1"
	       To allow	all the	vCPUs of the guest to run on CPUs 0,2,3,5. It
	       is possible to combine this with	"all", meaning "all,^7"
	       results in all the vCPUs	of the guest being allowed to run on
	       all the CPUs of the host	except CPU 7.

	   "nodes:0-3,^node:2"
	       To allow	all the	vCPUs of the guest to run on the CPUs from
	       NUMA nodes 0,1,3	of the host. So, if CPUs 0-3 belong to node 0,
	       CPUs 4-7	belong to node 1, CPUs 8-11 to node 2 and CPUs 12-15
	       to node 3, the above would mean all the vCPUs of	the guest
	       would be	allowed	to run on CPUs 0-7,12-15.

	       Combining this notation with the	one above is possible. For
	       instance, "1,node:1,^6",	means all the vCPUs of the guest will
	       run on CPU 1 and	on all the CPUs	of NUMA	node 1,	but not	on CPU
	       6. Following the	same example as	above, that would be CPUs
	       1,4,5,7.

	       Combining this with "all" is also possible, meaning
	       "all,^node:1" results in	all the	vCPUs of the guest running on
	       all the CPUs on the host, except	for the	CPUs belonging to the
	       host NUMA node 1.

	   ["2", "3-8,^5"]
	       To ask for specific vCPU	mapping. That means (in	this example),
	       vCPU 0 of the guest will	run on CPU 2 of	the host and vCPU 1 of
	       the guest will run on CPUs 3,4,6,7,8 of the host	(excluding CPU
	       5).

	       More complex notation can be also used, exactly as described
	       above. So "all,^5-8", or	just "all", or
	       "node:0,node:2,^9-11,18-20" are all legal, for each element of
	       the list.

	   If this option is not specified, no vCPU to CPU pinning is
	   established,	and the	vCPUs of the guest can run on all the CPUs of
	   the host. If	this option is specified, the intersection of the vCPU
	   pinning mask, provided here,	and the	soft affinity mask, if
	   provided via	cpus_soft=, is utilized	to compute the domain node-
	   affinity for	driving	memory allocations.

       cpus_soft="CPULIST"
	   Exactly as cpus=, but specifies soft	affinity, rather than pinning
	   (hard affinity). When using the credit scheduler, this means	what
	   CPUs	the vCPUs of the domain	prefer.

	   A "CPULIST" is specified exactly as for cpus=, detailed earlier in
	   the manual.

	   If this option is not specified, the	vCPUs of the guest will	not
	   have	any preference regarding host CPUs. If this option is
	   specified, the intersection of the soft affinity mask, provided
	   here, and the vCPU pinning, if provided via cpus=, is utilized to
	   compute the domain node-affinity for	driving	memory allocations.

	   If this option is not specified (and	cpus= is not specified
	   either), libxl automatically	tries to place the guest on the	least
	   possible number of nodes. A heuristic approach is used for choosing
	   the best node (or set of nodes), with the goal of maximizing
	   performance for the guest and, at the same time, achieving
	   efficient utilization of host CPUs and memory. In that case,	the
	   soft	affinity of all	the vCPUs of the domain	will be	set to host
	   CPUs	belonging to NUMA nodes	chosen during placement.

	   For more details, see xl-numa-placement(7).
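
           For example, to pin the guest's vCPUs to host CPUs 0-3 while
           expressing a soft preference for the CPUs of NUMA node 0, one
           might use:

             cpus="0-3"
             cpus_soft="node:0"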

       CPU Scheduling

       cpu_weight=WEIGHT
	   A domain with a weight of 512 will get twice	as much	CPU as a
	   domain with a weight	of 256 on a contended host.  Legal weights
	   range from 1	to 65535 and the default is 256.  Honoured by the
	   credit and credit2 schedulers.

       cap=N
	   The cap optionally fixes the	maximum	amount of CPU a	domain will be
	   able	to consume, even if the	host system has	idle CPU cycles.  The
	   cap is expressed as a percentage of one physical CPU: 100 is	1
	   physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.	The default,
	   0, means there is no	cap.  Honoured by the credit and credit2
	   schedulers.

	   NOTE: Many systems have features that will scale down the computing
	   power of a CPU that is not 100% utilized.  This can be done in the
	   operating system, but can also sometimes be done below the
	   operating system, in	the BIOS.  If you set a	cap such that
	   individual cores are	running	at less	than 100%, this	may have an
	   impact on the performance of	your workload over and above the
	   impact of the cap. For example, if your processor runs at 2GHz, and
	   you cap a VM	at 50%,	the power management system may	also reduce
	   the clock speed to 1GHz; the	effect will be that your VM gets 25%
	   of the available power (50% of 1GHz)	rather than 50%	(50% of	2GHz).
	   If you are not getting the performance you expect, look at
	   performance and CPU frequency options in your operating system and
	   your	BIOS.
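
           For example, to give a guest twice the default weight but cap it
           at half of one physical CPU, one might use:

             cpu_weight=512
             cap=50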

       Memory Allocation

       memory=MBYTES
	   Start the guest with	MBYTES megabytes of RAM.

       maxmem=MBYTES
	   Specifies the maximum amount	of memory a guest can ever see.	 The
	   value of maxmem= must be equal to or	greater	than that of memory=.

	   In combination with memory= it will start the guest "pre-
	   ballooned", if the values of memory= and maxmem= differ.  A "pre-
	   ballooned" HVM guest needs a balloon driver; without a balloon
	   driver it will crash.

	   NOTE: Because of the	way ballooning works, the guest	has to
	   allocate memory to keep track of maxmem pages, regardless of	how
	   much	memory it actually has available to it.	 A guest with
	   maxmem=262144 and memory=8096 will report significantly less	memory
	   available for use than a system with	maxmem=8096 memory=8096	due to
	   the memory overhead of having to track the unused pages.
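
           For example, to start a guest "pre-ballooned" with 1024 megabytes
           of RAM while allowing it to balloon up to 2048 megabytes later,
           one might use:

             memory=1024
             maxmem=2048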

       Guest Virtual NUMA Configuration

       vnuma=[ VNODE_SPEC, VNODE_SPEC, ... ]
	   Specify virtual NUMA	configuration with positional arguments. The
	   nth VNODE_SPEC in the list specifies	the configuration of the nth
	   virtual node.

	   Note	that virtual NUMA is not supported for PV guests yet, because
	   there is an issue with the CPUID instruction	handling that affects
	   PV virtual NUMA. Furthermore, guests	with virtual NUMA cannot be
	   saved or migrated because the migration stream does not preserve
	   node	information.

	   Each	VNODE_SPEC is a	list, which has	a form of
	   "[VNODE_CONFIG_OPTION, VNODE_CONFIG_OPTION, ... ]"  (without	the
	   quotes).

	   For example,	vnuma =	[
	   ["pnode=0","size=512","vcpus=0-4","vdistances=10,20"] ] means vnode
	   0 is	mapped to pnode	0, has 512MB ram, has vcpus 0 to 4, the
	   distance to itself is 10 and	the distance to	vnode 1	is 20.

	   Each	VNODE_CONFIG_OPTION is a quoted	"KEY=VALUE" pair. Supported
	   VNODE_CONFIG_OPTIONs	are (they are all mandatory at the moment):

	   pnode=NUMBER
	       Specifies which physical	node this virtual node maps to.

	   size=MBYTES
	       Specifies the size of this virtual node.	The sum	of memory
	       sizes of	all vnodes will	become maxmem=.	If maxmem= is
	       specified separately, a check is	performed to make sure the sum
	       of all vnode memory matches maxmem=.

	   vcpus="CPUSTRING"
	       Specifies which vCPUs belong to this node. "CPUSTRING" is a
	       string of numerical values separated by a comma.	You can
	       specify a range and/or a	single CPU.  An	example	would be
	       "vcpus=0-5,8", which means you specified	vCPU 0 to vCPU 5, and
	       vCPU 8.

	   vdistances=NUMBER, NUMBER, ...
	       Specifies the virtual distance from this	node to	all nodes
	       (including itself) with positional arguments. For example,
	       "vdistances=10,20" for vnode 0 means the distance from vnode 0
	       to vnode	0 is 10, from vnode 0 to vnode 1 is 20.	The number of
	       arguments supplied must match the total number of vnodes.

	       Normally	you can	use the	values from xl info -n or numactl
	       --hardware to fill the vdistances list.
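
           A complete two-node example (the sizes, vCPU ranges and distances
           are purely illustrative) might look like:

             vnuma = [
                 ["pnode=0","size=512","vcpus=0-1","vdistances=10,20"],
                 ["pnode=1","size=512","vcpus=2-3","vdistances=20,10"]
             ]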

       Event Actions

       on_poweroff="ACTION"
	   Specifies what should be done with the domain if it shuts itself
	   down.  The ACTIONs are:

	   destroy
	       destroy the domain

	   restart
	       destroy the domain and immediately create a new domain with the
	       same configuration

	   rename-restart
	       rename the domain which terminated, and then immediately	create
	       a new domain with the same configuration	as the original

	   preserve
	       keep the	domain.	 It can	be examined, and later destroyed with
	       xl destroy.

	   coredump-destroy
	       write a "coredump" of the domain	to /var/lib/xen/dump/NAME and
	       then destroy the	domain.

	   coredump-restart
	       write a "coredump" of the domain	to /var/lib/xen/dump/NAME and
	       then restart the	domain.

	   soft-reset
	       Reset all Xen specific interfaces for the Xen-aware HVM domain
	       allowing	it to reestablish these	interfaces and continue
	       executing the domain. PV	and non-Xen-aware HVM guests are not
	       supported.

	   The default for on_poweroff is destroy.

       on_reboot="ACTION"
	   Action to take if the domain	shuts down with	a reason code
	   requesting a	reboot.	 Default is restart.

       on_watchdog="ACTION"
	   Action to take if the domain	shuts down due to a Xen	watchdog
	   timeout.  Default is	destroy.

       on_crash="ACTION"
	   Action to take if the domain	crashes.  Default is destroy.

       on_soft_reset="ACTION"
	   Action to take if the domain	performs a 'soft reset'	(e.g. does
	   kexec).  Default is soft-reset.
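
           For example, a guest that is destroyed on shutdown, restarted on
           reboot, and restarted after writing a coredump if it crashes might
           use:

             on_poweroff="destroy"
             on_reboot="restart"
             on_crash="coredump-restart"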

       Direct Kernel Boot

       Direct kernel boot allows booting guests	with a kernel and an initrd
       stored on a filesystem available	to the host physical machine, allowing
       command line arguments to be passed directly. PV	guest direct kernel
       boot is supported. HVM guest direct kernel boot is supported with some
       limitations: it works when using qemu-xen and the default BIOS
       'seabios', but is not supported when using stubdom-dm and the old
       'rombios'.

       kernel="PATHNAME"
	   Load	the specified file as the kernel image.

       ramdisk="PATHNAME"
	   Load	the specified file as the ramdisk.

       cmdline="STRING"
	   Append STRING to the	kernel command line. (Note: the	meaning	of
	   this	is guest specific). It can replace root="STRING" along with
	   extra="STRING" and is preferred. When cmdline="STRING" is set,
	   root="STRING" and extra="STRING" will be ignored.

       root="STRING"
	   Append root=STRING to the kernel command line (Note:	the meaning of
	   this	is guest specific).

       extra="STRING"
	   Append STRING to the	kernel command line. (Note: the	meaning	of
	   this	is guest specific).
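
       For example, a PV guest booted directly from a kernel and initrd
       stored on the host (the paths and command line shown are purely
       illustrative) might use:

         kernel="/boot/vmlinuz-guest"
         ramdisk="/boot/initrd-guest.img"
         cmdline="root=/dev/xvda1 ro console=hvc0"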

       Non-direct Kernel Boot

       Non-direct kernel boot allows booting guests with a firmware. This can
       be used by all types of guests, although	the selection of options is
       different depending on the guest	type.

       This option provides the flexibility of letting the guest decide which
       kernel it wants to boot, while avoiding the need to poke at the guest
       file system from the toolstack domain.

       PV guest	options

       firmware="pvgrub32|pvgrub64"
	   Boots a guest using a para-virtualized version of grub that runs
	   inside of the guest. The bitness of the guest needs to be known, so
	   that	the right version of pvgrub can	be selected.

	   Note	that xl	expects	to find	the pvgrub32.bin and pvgrub64.bin
	   binaries in /usr/local/lib/xen/boot.

       HVM guest options

       firmware="bios"
	   Boot	the guest using	the default BIOS firmware, which depends on
	   the chosen device model.

       firmware="uefi"
	   Boot	the guest using	the default UEFI firmware, currently OVMF.

       firmware="seabios"
	   Boot	the guest using	the SeaBIOS BIOS firmware.

       firmware="rombios"
	   Boot	the guest using	the ROMBIOS BIOS firmware.

       firmware="ovmf"
	   Boot	the guest using	the OVMF UEFI firmware.

       firmware="PATH"
	   Load	the specified file as firmware for the guest.

       PVH guest options

       Currently there's no firmware available for PVH guests; they should be
       booted using the Direct Kernel Boot method or the bootloader option.

       pvshim=BOOLEAN
	   Whether to boot this	guest as a PV guest within a PVH container.
	   I.e., the guest will experience a PV environment, but processor
	   hardware extensions are used	to separate its	address	space to
	   mitigate the	Meltdown attack	(CVE-2017-5754).

	   Default is false.

       pvshim_path="PATH"
	   The PV shim is a specially-built firmware-like executable
	   constructed from the	hypervisor source tree.	 This option specifies
	   to use a non-default shim.  Ignored if pvshim is false.

       pvshim_cmdline="STRING"
	   Command line	for the	shim.  Default is "pv-shim console=xen,pv".
	   Ignored if pvshim is false.

       pvshim_extra="STRING"
	   Extra command line arguments	for the	shim.  If supplied, appended
	   to the value for pvshim_cmdline.  Default is empty.  Ignored if
	   pvshim is false.

       Other Options

       uuid="UUID"
	   Specifies the UUID of the domain.  If not specified,	a fresh	unique
	   UUID	will be	generated.

       seclabel="LABEL"
	   Assign an XSM security label	to this	domain.

       init_seclabel="LABEL"
	   Specify an XSM security label used for this domain temporarily
	   during its build. The domain's XSM label will be changed to the
	   execution seclabel (specified by seclabel) once the build is
	   complete, prior to unpausing	the domain. With a properly
	   constructed security	policy (such as	nomigrate_t in the example
	   policy), this can be	used to	build a	domain whose memory is not
	   accessible to the toolstack domain.

       max_grant_frames=NUMBER
	   Specify the maximum number of grant frames the domain is allowed to
	   have.  This value controls how many pages the domain	is able	to
	   grant access	to for other domains, needed e.g. for the operation of
	   paravirtualized devices.  The default is settable via xl.conf(5).

       max_maptrack_frames=NUMBER
	   Specify the maximum number of grant maptrack	frames the domain is
	   allowed to have. This value controls	how many pages of foreign
	   domains can be accessed via the grant mechanism by this domain. The
	   default value is settable via xl.conf(5).

       nomigrate=BOOLEAN
	   Disable migration of	this domain.  This enables certain other
	   features which are incompatible with	migration. Currently this is
	   limited to enabling the invariant TSC feature flag in CPUID results
	   when	TSC is not emulated.

       driver_domain=BOOLEAN
	   Specify that	this domain is a driver	domain.	This enables certain
	   features needed in order to run a driver domain.

       device_tree=PATH
	   Specify a partial device tree (compiled via the Device Tree
	   Compiler).  Everything under	the node "/passthrough"	will be	copied
	   into	the guest device tree. For convenience,	the node "/aliases" is
	   also	copied to allow	the user to define aliases which can be	used
	   by the guest	kernel.

	   Given the complexity	of verifying the validity of a device tree,
	   this	option should only be used with	a trusted device tree.

	   Note	that the partial device	tree should avoid using	the phandle
	   65000 which is reserved by the toolstack.

       passthrough="STRING"
	   Specify whether IOMMU mappings are enabled for the domain and hence
	   whether it will be enabled for passthrough hardware.	Valid values
	   for this option are:

	   disabled
	       IOMMU mappings are disabled for the domain and so hardware may
	       not be passed through.

	       This option is the default if no	passthrough hardware is
	       specified in the	domain's configuration.

	   enabled
	       This option enables IOMMU mappings and selects an appropriate
	       default operating mode (see below for details of	the operating
	       modes). For HVM/PVH domains running on platforms	where the
	       option is available, this is equivalent to share_pt. Otherwise,
	       and also	for PV domains,	this option is equivalent to sync_pt.

	       This option is the default if passthrough hardware is specified
	       in the domain's configuration.

	   sync_pt
	       This option means that IOMMU mappings will be synchronized with
	       the domain's P2M	table as follows:

	       For a PV	domain,	all writable pages assigned to the domain are
	       identity	mapped by MFN in the IOMMU page	table. Thus a device
	       driver running in the domain may	program	passthrough hardware
	       for DMA using MFN values	(i.e. host/machine frame numbers)
	       looked up in its	P2M.

	       For an HVM/PVH domain, all non-foreign RAM pages	present	in its
	       P2M will	be mapped by GFN in the	IOMMU page table. Thus a
	       device driver running in	the domain may program passthrough
	       hardware	using GFN values (i.e. guest physical frame numbers)
	       without any further translation.

	       This option is not currently available on Arm.

	   share_pt
	       This option is unavailable for a	PV domain. For an HVM/PVH
	       domain, this option means that the IOMMU	will be	programmed to
	       directly	reference the domain's P2M table as its	page table.
	       From the	point of view of a device driver running in the	domain
	       this is functionally equivalent to sync_pt but places less load
	       on the hypervisor and so	should generally be selected in
	       preference. However, the	availability of	this option is
	       hardware	specific. If xl	info reports virt_caps containing
	       iommu_hap_pt_share then this option may be used.

	   default
	       The default, which chooses between disabled and enabled
	       according to whether passthrough	devices	are enabled in the
	       config file.
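
           For example, to explicitly request shared page tables for an
           HVM/PVH guest on hardware whose virt_caps include
           iommu_hap_pt_share, one might use:

             passthrough="share_pt"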

       xend_suspend_evtchn_compat=BOOLEAN
	   If this option is true the xenstore path for	the domain's suspend
	   event channel will not be created. Instead the old xend behaviour
	   of making the whole xenstore	device sub-tree	writable by the	domain
	   will	be re-instated.

	   The existence of the	suspend	event channel path can cause problems
	   with	certain	PV drivers running in the guest	(e.g. old Red Hat PV
	   drivers for Windows).

	   If this option is not specified then	it will	default	to false.

   Devices
       The following options define the	paravirtual, emulated and physical
       devices which the guest will contain.

       disk=[ "DISK_SPEC_STRING", "DISK_SPEC_STRING", ...]
	   Specifies the disks (both emulated disks and	Xen virtual block
	   devices) which are to be provided to	the guest, and what objects on
	   the host they should	map to.	 See xl-disk-configuration(5) for more
	   details.

       vif=[ "NET_SPEC_STRING",	"NET_SPEC_STRING", ...]
	   Specifies the network interfaces (both emulated network adapters,
	   and Xen virtual interfaces) which are to be provided	to the guest.
	   See xl-network-configuration(5) for more details.
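
           For example, a guest with a single writable virtual disk backed by
           a host block device and one network interface attached to a host
           bridge (the device path, MAC address and bridge name are purely
           illustrative; see the referenced manual pages for the full spec
           syntax) might use:

             disk=[ '/dev/vg/guest-volume,raw,xvda,rw' ]
             vif=[ 'mac=00:16:3e:00:00:01,bridge=xenbr0' ]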

       vtpm=[ "VTPM_SPEC_STRING", "VTPM_SPEC_STRING", ...]
	   Specifies the Virtual Trusted Platform module to be provided	to the
	   guest. See xen-vtpm(7) for more details.

	   Each	VTPM_SPEC_STRING is a comma-separated list of "KEY=VALUE"
	   settings from the following list:

	   backend=domain-id
	       Specifies the backend domain name or id.	This value is
	       required!  If this domain is a guest, the backend should	be set
	       to the vTPM domain name.	If this	domain is a vTPM, the backend
	       should be set to	the vTPM manager domain	name.

	   uuid=UUID
	       Specifies the UUID of this vTPM device. The UUID	is used	to
	       uniquely	identify the vTPM device. You can create one using the
	       uuidgen(1) program on unix systems. If left unspecified,	a new
	       UUID will be randomly generated every time the domain boots.
	       If this is a vTPM domain, you should specify a value. The value
	       is optional if this is a	guest domain.

       p9=[ "9PFS_SPEC_STRING",	"9PFS_SPEC_STRING", ...]
	   Creates a Xen 9pfs connection to share a filesystem from the
	   backend to the frontend.

	   Each	9PFS_SPEC_STRING is a comma-separated list of "KEY=VALUE"
	   settings, from the following	list:

	   tag=STRING
	       9pfs tag	to identify the	filesystem share. The tag is needed on
	       the guest side to mount it.

	   security_model="none"
	       Only "none" is supported	today, which means that	the files are
	       stored using the	same credentials as those they have in the
	       guest (no user ownership	squash or remap).

	   path=STRING
	       Filesystem path on the backend to export.

	   backend=domain-id
	       Specify the backend domain name or id, defaults to dom0.
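
           For example, sharing a host directory with the guest (the tag and
           path shown are purely illustrative) might look like:

             p9=[ "tag=share0,security_model=none,path=/srv/guestshare,backend=0" ]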

       pvcalls=[ "backend=domain-id", ... ]
	   Creates a Xen pvcalls connection to handle pvcalls requests from
	   frontend to backend.	It can be used as an alternative networking
	   model.  For more information	about the protocol, see
	   https://xenbits.xenproject.org/docs/unstable/misc/pvcalls.html.

       vfb=[ "VFB_SPEC_STRING",	"VFB_SPEC_STRING", ...]
	   Specifies the paravirtual framebuffer devices which should be
	   supplied to the domain.

	   This	option does not	control	the emulated graphics card presented
	   to an HVM guest. See	Emulated VGA Graphics Device below for how to
	   configure the emulated device. If Emulated VGA Graphics Device
	   options are used in a PV guest configuration, xl will pick up vnc,
	   vnclisten, vncpasswd, vncdisplay, vncunused,	sdl, opengl and	keymap
	   to construct	the paravirtual	framebuffer device for the guest.

	   Each	VFB_SPEC_STRING	is a comma-separated list of "KEY=VALUE"
	   settings, from the following	list:

	   vnc=BOOLEAN
	       Allow access to the display via the VNC protocol.  This enables
	       the other VNC-related settings.	Default	is 1 (enabled).

	   vnclisten=ADDRESS[:DISPLAYNUM]
	       Specifies the IP	address, and optionally	the VNC	display
	       number, to use.

	       Note: if	you specify the	display	number here, you should	not
	       use the vncdisplay option.

	   vncdisplay=DISPLAYNUM
	       Specifies the VNC display number	to use.	 The actual TCP	port
	       number will be DISPLAYNUM+5900.

	       Note: you should	not use	this option if you set the DISPLAYNUM
	       in the vnclisten	option.

	   vncunused=BOOLEAN
	       Requests	that the VNC display setup searches for	a free TCP
	       port to use.  The actual	display	used can be accessed with xl
	       vncviewer.

	   vncpasswd=PASSWORD
	       Specifies the password for the VNC server. If the password is
	       set to an empty string, authentication on the VNC server	will
	       be disabled, allowing any user to connect.

	   sdl=BOOLEAN
	       Specifies that the display should be presented via an X window
	       (using Simple DirectMedia Layer). The default is	0 (not
	       enabled).

	   display=DISPLAY
	       Specifies the X Window display that should be used when the sdl
	       option is used.

	   xauthority=XAUTHORITY
	       Specifies the path to the X authority file that should be used
	       to connect to the X server when the sdl option is used.

	   opengl=BOOLEAN
	       Enable OpenGL acceleration of the SDL display. Only affects
	       machines	using device_model_version="qemu-xen-traditional" and
	       only if the device-model	was compiled with OpenGL support. The
	       default is 0 (disabled).

	   keymap=LANG
	       Configure the keymap to use for the keyboard associated with
	       this display. If	the input method does not easily support raw
	       keycodes	(e.g. this is often the	case when using	VNC) then this
	       allows us to correctly map the input keys into keycodes seen by
	       the guest. The specific values which are	accepted are defined
	       by the version of the device-model which	you are	using. See
	       Keymaps below or	consult	the qemu(1) manpage. The default is
	       en-us.
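
           For example, a paravirtual framebuffer reachable over VNC on any
           free port, listening on all host addresses, might be configured
           as:

             vfb=[ "vnc=1,vnclisten=0.0.0.0,vncunused=1,keymap=en-us" ]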

       channel=[ "CHANNEL_SPEC_STRING",	"CHANNEL_SPEC_STRING", ...]
	   Specifies the virtual channels to be	provided to the	guest. A
	   channel is a	low-bandwidth, bidirectional byte stream, which
	   resembles a serial link. Typical uses for channels include
	   transmitting	VM configuration after boot and	signalling to in-guest
	   agents. Please see xen-pv-channel(7)	for more details.

	   Each	CHANNEL_SPEC_STRING is a comma-separated list of "KEY=VALUE"
	   settings. Leading and trailing whitespace is	ignored	in both	KEY
	   and VALUE. Neither KEY nor VALUE may	contain	',', '=' or '"'.
	   Defined values are:

	   backend=domain-id
	       Specifies the backend domain name or id.	This parameter is
	       optional. If this parameter is omitted then the toolstack
	       domain will be assumed.

	   name=NAME
	       Specifies the name for this device. This	parameter is
	       mandatory!  This	should be a well-known name for	a specific
	       application (e.g.  guest	agent) and should be used by the
	       frontend	to connect the application to the right	channel
	       device. There is	no formal registry of channel names, so
	       application authors are encouraged to make their	names unique
	       by including the	domain name and	a version number in the	string
	       (e.g. org.mydomain.guestagent.1).

	   connection=CONNECTION
	       Specifies how the backend will be implemented. The following
	       options are available:

	       SOCKET
		   The backend will bind a Unix	domain socket (at the path
		   given by path=PATH),	listen for and accept connections. The
		   backend will	proxy data between the channel and the
		   connected socket.

	       PTY The backend will create a pty and proxy data	between	the
		   channel and the master device. The command xl channel-list
		   can be used to discover the assigned	slave device.
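
           For example, a socket-backed channel intended for a guest agent
           (the socket path and channel name are purely illustrative) might
           look like:

             channel=[ "connection=socket,path=/tmp/agent0,name=org.example.agent" ]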

       rdm="RDM_RESERVATION_STRING"
	   HVM/x86 only! Specifies information about Reserved Device Memory
	   (RDM), which	is necessary to	enable robust device passthrough. One
	   example of RDM is reporting through the ACPI	Reserved Memory	Region
	   Reporting (RMRR) structure on the x86 platform.

	   RDM_RESERVATION_STRING is a comma separated list of "KEY=VALUE"
	   settings, from the following	list:

	   strategy=STRING
	       Currently there is only one valid type, and that	is "host".

	       host
		   If set to "host" it means all reserved device memory	on
		   this	platform should	be checked to reserve regions in this
		   VM's	address	space. This global RDM parameter allows	the
		   user	to specify reserved regions explicitly,	and using
		   "host" includes all reserved	regions	reported on this
		   platform, which is useful when doing	hotplug.

		   By default this isn't set so	we don't check all RDMs.
		   Instead, we just check the RDM specific to a	given device
		   if we're assigning this kind	of a device.

		   Note: this option is	not recommended	unless you can make
		   sure	that no	conflicts exist.

		   For example,	you're trying to set "memory = 2800" to
		   allocate memory to one given	VM but the platform owns two
		   RDM regions like:

		   Device A [sbdf_A]: RMRR region_A: base_addr ac6d3000
		   end_address ac6e6fff

		   Device B [sbdf_B]: RMRR region_B: base_addr ad800000
		   end_address afffffff

		   In this conflict case,

		   #1. If strategy is set to "host", for example:

		   rdm = "strategy=host,policy=strict" or rdm =
		   "strategy=host,policy=relaxed"

		   it means all	conflicts will be handled according to the
		   policy introduced by	policy as described below.

		   #2. If strategy is not set at all, but

		   pci = [ 'sbdf_A, rdm_policy=xxxxx' ]

		   it means only one conflict of region_A will be handled
		   according to	the policy introduced by rdm_policy=STRING as
		   described inside pci	options.

	   policy=STRING
	       Specifies how to	deal with conflicts when reserving already
	       reserved	device memory in the guest address space.

	       strict
		   Specifies that in case of an	unresolved conflict the	VM
		   can't be created, or	the associated device can't be
		   attached in the case	of hotplug.

	       relaxed
		   Specifies that in case of an	unresolved conflict the	VM is
		   allowed to be created but may cause the VM to crash if a
		   pass-through	device accesses	RDM.  For example, the Windows
		   IGD GFX driver always accesses RDM regions so it leads to a
		   VM crash.

		   Note: this may be overridden	by the rdm_policy option in
		   the pci device configuration.

       usbctrl=[ "USBCTRL_SPEC_STRING",	"USBCTRL_SPEC_STRING", ...]
	   Specifies the USB controllers created for this guest.

	   Each	USBCTRL_SPEC_STRING is a comma-separated list of "KEY=VALUE"
	   settings, from the following	list:

	   type=TYPE
	       Specifies the usb controller type.

	       pv  Specifies a kernel based PVUSB backend.

	       qusb
		   Specifies a QEMU based PVUSB	backend.

	       devicemodel
		   Specifies a USB controller emulated by QEMU.	 It will show
		   up as a PCI-device in the guest.

	       auto
		   Determines whether a	kernel based backend is	installed.  If
		   this	is the case, pv	is used, otherwise qusb	will be	used.
		   For HVM domains devicemodel will be selected.

		   This	option is the default.

	   version=VERSION
	       Specifies the usb controller version.  Possible values include
	       1 (USB1.1), 2 (USB2.0) and 3 (USB3.0).  Default is 2 (USB2.0).
	       Value 3 (USB3.0)	is available for the devicemodel type only.

	   ports=PORTS
	       Specifies the total number of ports of the usb controller. The
	       maximum number is 31. The default is 8.	With the type
	       devicemodel the number of ports is more limited:	a USB1.1
	       controller always has 2 ports, a	USB2.0 controller always has 6
	       ports and a USB3.0 controller can have up to 15 ports.

	       USB controller ids start	from 0.	 In line with the USB
	       specification, however, ports on	a controller start from	1.

	       EXAMPLE

		 usbctrl=["version=1,ports=4", "version=2,ports=8"]

		 The first controller is USB1.1	and has:

		 controller id = 0, and	ports 1,2,3,4.

		 The second controller is USB2.0 and has:

		 controller id = 1, and	ports 1,2,3,4,5,6,7,8.

       usbdev=[	"USBDEV_SPEC_STRING", "USBDEV_SPEC_STRING", ...]
	   Specifies the USB devices to	be attached to the guest at boot.

	   Each	USBDEV_SPEC_STRING is a	comma-separated	list of	"KEY=VALUE"
	   settings, from the following	list:

	   type=hostdev
	       Specifies USB device type. Currently only "hostdev" is
	       supported.

	   hostbus=busnum
	       Specifies busnum	of the USB device from the host	perspective.

	   hostaddr=devnum
	       Specifies devnum	of the USB device from the host	perspective.

	   controller=CONTROLLER
	       Specifies the USB controller id,	to which controller the	USB
	       device is attached.

	       If no controller	is specified, an available controller:port
	       combination will	be used.  If there are no available
	       controller:port combinations, a new controller will be created.

	   port=PORT
	       Specifies the USB port to which the USB device is attached. The
	       port option is valid only when the controller option is
	       specified.
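
           For example, attaching the host USB device at bus 2, device 5 (as
           reported on the host; the numbers are purely illustrative) might
           look like:

             usbdev=[ "type=hostdev,hostbus=2,hostaddr=5" ]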

       pci=[ "PCI_SPEC_STRING",	"PCI_SPEC_STRING", ...]
	   Specifies the host PCI devices to passthrough to this guest.	 Each
	   PCI_SPEC_STRING has the form	of
	   [DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,... where:

	   [DDDD:]BB:DD.F
	       Identifies the PCI device from the host perspective in the
	       domain (DDDD), Bus (BB),	Device (DD) and	Function (F) syntax.
	       This is the same	scheme as used in the output of	lspci(1) for
	       the device in question.

	       Note: by	default	lspci(1) will omit the domain (DDDD) if	it is
	       zero and	it is optional here also. You may specify the function
	       (F) as *	to indicate all	functions.

	   @VSLOT
	       Specifies the virtual slot where	the guest will see this
	       device. This is equivalent to the DD which the guest sees. In a
	       guest DDDD and BB are "0000:00".

	   permissive=BOOLEAN
	       By default pciback only allows PV guests	to write "known	safe"
	       values into PCI configuration space, likewise QEMU (both	qemu-
	       xen and qemu-xen-traditional) imposes the same constraint on
	       HVM guests.  However, many devices require writes to other
	       areas of	the configuration space	in order to operate properly.
	       This option tells the backend (pciback or QEMU) to allow	all
	       writes to the PCI configuration space of	this device by this
	       domain.

	       This option should be enabled with caution: it gives the	guest
	       much more control over the device, which	may have security or
	       stability implications.	It is recommended to only enable this
	       option for trusted VMs under administrator's control.

	   msitranslate=BOOLEAN
	       Specifies that MSI-INTx translation should be turned on for the
	       PCI device. When	enabled, MSI-INTx translation will always
	       enable MSI on the PCI device regardless of whether the guest
	       uses INTx or MSI. Some device drivers, such as NVIDIA's,	detect
	       an inconsistency	and do not function when this option is
	       enabled.	Therefore the default is false (0).

	   seize=BOOLEAN
	       Tells xl	to automatically attempt to re-assign a	device to
	       pciback if it is	not already assigned.

	       WARNING:	If you set this	option,	xl will	gladly re-assign a
	       critical system device, such as a network or a disk controller
	       being used by dom0, without confirmation.  Please use with
	       care.

	   power_mgmt=BOOLEAN
	       (HVM only) Specifies that the VM	should be able to program the
	       D0-D3hot	power management states	for the	PCI device. The
	       default is false	(0).

	   rdm_policy=STRING
	       (HVM/x86	only) This is the same as the policy setting inside
	       the rdm option but just specific	to a given device. The default
	       is "relaxed".

	       Note: this overrides the global rdm option.
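
           For example, passing through two host PCI devices, one of them in
           permissive mode (the addresses shown are purely illustrative),
           might look like:

             pci=[ '03:00.0,permissive=1', '0000:04:00.0' ]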

       pci_permissive=BOOLEAN
	   Changes the default value of	permissive for all PCI devices passed
	   through to this VM. See permissive above.

       pci_msitranslate=BOOLEAN
	   Changes the default value of	msitranslate for all PCI devices
	   passed through to this VM. See msitranslate above.

       pci_seize=BOOLEAN
	   Changes the default value of	seize for all PCI devices passed
	   through to this VM. See seize above.

       pci_power_mgmt=BOOLEAN
	   (HVM	only) Changes the default value	of power_mgmt for all PCI
	   devices passed through to this VM. See power_mgmt above.

       gfx_passthru=BOOLEAN|"STRING"
	   Enable graphics device PCI passthrough. This	option makes an
	   assigned PCI	graphics card become the primary graphics card in the
	   VM. The QEMU	emulated graphics adapter is disabled and the VNC
	   console for the VM will not have any	graphics output. All graphics
	   output, including boot time QEMU BIOS messages from the VM, will go
	   to the physical outputs of the passed through physical graphics
	   card.

	   The graphics	card PCI device	to pass	through	is chosen with the pci
	   option, in exactly the same way a normal Xen	PCI device
	   passthrough/assignment is done.  Note that gfx_passthru does	not do
	   any kind of sharing of the GPU, so you can assign the GPU to	only
	   one single VM at a time.

	   gfx_passthru	also enables various legacy VGA	memory ranges, BARs,
	   MMIOs, and ioports to be passed through to the VM, since those are
	   required for	correct	operation of things like VGA BIOS, text	mode,
	   VBE,	etc.

	   Enabling the	gfx_passthru option also copies	the physical graphics
	   card	video BIOS to the guest	memory,	and executes the VBIOS in the
	   guest to initialize the graphics card.

	   Most	graphics adapters require vendor specific tweaks for properly
	   working graphics passthrough. See the
	   XenVGAPassthroughTestedAdapters
	   <https://wiki.xenproject.org/wiki/XenVGAPassthroughTestedAdapters>
	   wiki	page for graphics cards	currently supported by gfx_passthru.

	   gfx_passthru	is currently supported both with the qemu-xen-
	   traditional device-model and	upstream qemu-xen device-model.

	   When	given as a boolean the gfx_passthru option either disables
	   graphics card passthrough or	enables	autodetection.

	   When	given as a string the gfx_passthru option describes the	type
	   of device to	enable.	Note that this behavior	is only	supported with
	   the upstream	qemu-xen device-model. With qemu-xen-traditional IGD
	   (Intel Graphics Device) is always assumed and options other than
	   autodetect or explicit IGD will result in an	error.

	   Currently, valid values for the option are:

	   0   Disables	graphics device	PCI passthrough.

	   1, "default"
	       Enables graphics	device PCI passthrough and autodetects the
	       type of device which is being used.

	   "igd"
	       Enables graphics device PCI passthrough but forces the type of
	       device to Intel Graphics Device.

	   Note	that some graphics cards (AMD/ATI cards, for example) do not
	   necessarily require the gfx_passthru	option,	so you can use the
	   normal Xen PCI passthrough to assign	the graphics card as a
	   secondary graphics card to the VM. The QEMU-emulated	graphics card
	   remains the primary graphics	card, and VNC output is	available from
	   the QEMU-emulated primary adapter.

	   More	information about the Xen gfx_passthru feature is available on
	   the XenVGAPassthrough
	   <https://wiki.xenproject.org/wiki/XenVGAPassthrough>	wiki page.

       rdm_mem_boundary=MBYTES
	   Number of megabytes to set for a boundary when checking for RDM
	   conflicts.

	   When	RDM conflicts with RAM,	RDM is probably	scattered over the
	   whole RAM space. Having multiple RDM	entries	would worsen this and
	   lead	to a complicated memory	layout.	Here we're trying to figure
	   out a simple	solution to avoid breaking the existing	layout.	When a
	   conflict occurs,

	       #1. Above a predefined boundary
		   - move lowmem_end below the reserved	region to solve	the conflict;

	       #2. Below a predefined boundary
		   - Check if the policy is strict or relaxed.
		   A "strict" policy leads to a	fail in	libxl.
		   Note	that when both policies	are specified on a given region,
		   "strict" is always preferred.
		   The "relaxed" policy	issues a warning message and also masks	this
		   entry INVALID to indicate we	shouldn't expose this entry to
		   hvmloader.

	   The default value is	2048.

       dtdev=[ "DTDEV_PATH", "DTDEV_PATH", ...]
	   Specifies the host device tree nodes to pass through to this guest.
	   Each	DTDEV_PATH is an absolute path in the device tree.

       ioports=[ "IOPORT_RANGE", "IOPORT_RANGE", ...]
	   Allow the guest to access specific legacy I/O ports.	Each
	   IOPORT_RANGE	is given in hexadecimal	format and may either be a
	   range, e.g. "2f8-2ff" (inclusive), or a single I/O port, e.g.
	   "2f8".

	   It is recommended to	only use this option for trusted VMs under
	   administrator's control.

       iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]",
       ...]
	   Allow auto-translated domains to access specific hardware I/O
	   memory pages.

	   IOMEM_START is a physical page number. NUM_PAGES is the number of
	   pages, beginning with START_PAGE, to	allow access to. GFN specifies
	   the guest frame number where	the mapping will start in the guest's
	   address space. If GFN is not	specified, the mapping will be
	   performed using IOMEM_START as a start in the guest's address
	   space, therefore performing a 1:1 mapping by	default.  All of these
	   values must be given	in hexadecimal format.

	   Note	that the IOMMU won't be	updated	with the mappings specified
	   with	this option. This option therefore should not be used to pass
	   through any IOMMU-protected devices.

	   It is recommended to	only use this option for trusted VMs under
	   administrator's control.
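
           For example, a trusted guest given a legacy serial port range and
           one page of I/O memory (all values shown are purely illustrative)
           might use:

             ioports=[ "2f8-2ff" ]
             iomem=[ "fed00,1@90000" ]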

       irqs=[ NUMBER, NUMBER, ...]
	   Allow a guest to access specific physical IRQs.

	   It is recommended to	only use this option for trusted VMs under
	   administrator's control.

	   If the vuart console is enabled then irq 32 is reserved for it. See
	   vuart="uart" for how to enable the vuart console.

       max_event_channels=N
	   Limit the guest to using at most N event channels (PV interrupts).
	   Guests use hypervisor resources for each event channel they use.

	   The default of 1023 should be sufficient for	typical	guests.	 The
	   maximum value depends on what the guest supports.  Guests
	   supporting the FIFO-based event channel ABI support up to 131,071
	   event channels.  Other guests are limited to	4095 (64-bit x86 and
	   ARM)	or 1023	(32-bit	x86).

       vdispl=[	"VDISPL_SPEC_STRING", "VDISPL_SPEC_STRING", ...]
	   Specifies the virtual display devices to be provided	to the guest.

	   Each	VDISPL_SPEC_STRING is a	comma-separated	list of	"KEY=VALUE"
	   settings, from the following	list:

	   "backend=DOMAIN"
	       Specifies the backend domain name or id.	If not specified
	       Domain-0	is used.

	   "be-alloc=BOOLEAN"
	       Indicates if backend can	be a buffer provider/allocator for
	       this domain. See	display	protocol for details.

	   "connectors=CONNECTORS"
	       Specifies virtual connectors for the device in the following
	       format: <id>:<W>x<H>;<id>:<W>x<H>..., where:

	       "id"
		   Unique string connector id. Space and comma symbols are not
		   allowed.

	       "W" Connector width in pixels.

	       "H" Connector height in pixels.

	       EXAMPLE

		   connectors=id0:1920x1080;id1:800x600;id2:640x480
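
           A complete specification with a single virtual connector (the
           resolution and id are illustrative) might look like:

             vdispl=[ "backend=0,be-alloc=0,connectors=id0:1920x1080" ]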

       dm_restrict=BOOLEAN
	   Restrict the device model after startup, to limit the consequences
	   of security vulnerabilities in qemu.

	   See docs/features/qemu-depriv.pandoc	for more information on	Linux
	   and QEMU version requirements, device model user setup, and current
	   limitations.

	   This	feature	is a technology	preview.  See SUPPORT.md for a
	   security support statement.

       device_model_user=USERNAME
	   When	running	dm_restrict, run the device model as this user.

	   NOTE: Each domain MUST have a SEPARATE username.

	   See docs/features/qemu-depriv.pandoc	for more information.

       vsnd=[ VCARD_SPEC, VCARD_SPEC, ... ]
	   Specifies the virtual sound cards to	be provided to the guest.
	   Each	VCARD_SPEC is a	list, which has	a form of "[VSND_ITEM_SPEC,
	   VSND_ITEM_SPEC, ... ]" (without the quotes).	 The virtual sound
	   card	has hierarchical structure.  Every card	has a set of PCM
	   devices and streams,	each could be individually configured.

	   VSND_ITEM_SPEC describes individual item parameters.
	   VSND_ITEM_SPEC is a string of comma separated item parameters
	   headed by item identifier. Each item	parameter is "KEY=VALUE" pair:

	       "identifier, param = value, ...".

	   Identifier shall be one of the following values: "CARD", "PCM",
	   "STREAM".  A child item is treated as belonging to the previously
	   defined parent item.

	   All parameters are optional.

	   There is a group of parameters which are common to all items.  This
	   group can be defined at a higher level of the hierarchy and be
	   fully or partially re-used by the underlying layers. These
	   parameters are:

	       * number	of channels (min/max)

	       * supported sample rates

	       * supported sample formats

	   E.g. one can define these values for the whole card, device or
	   stream.  Every underlying layer in turn can re-define some or all
	   of them to better fit its needs. For example, a card may define the
	   number of channels to be in the [1; 8] range, while some particular
	   stream may be limited to the [1; 2] range only.  The rule is that
	   the underlying layer must be a subset of the upper layer range.

	   COMMON parameters:

	       sample-rates=RATES
		   List	of integer values separated by semicolon:
		   sample-rates=8000;22050;44100

	       sample-formats=FORMATS
		   List	of string values separated by semicolon:
		   sample-formats=s16_le;s8;u32_be

		   Supported formats: s8, u8, s16_le, s16_be, u16_le, u16_be,
		   s24_le, s24_be, u24_le, u24_be, s32_le, s32_be, u32_le,
		   u32_be, float_le, float_be, float64_le, float64_be,
		   iec958_subframe_le, iec958_subframe_be, mu_law, a_law,
		   ima_adpcm, mpeg, gsm

	       channels-min=NUMBER
		   The minimum amount of channels.

	       channels-max=NUMBER
		   The maximum amount of channels.

	       buffer-size=NUMBER
		   The maximum size in octets of the buffer to allocate	per
		   stream.

	   CARD	specification:

	       backend=domain-id
		   Specify the backend domain name or id, defaults to dom0.

	       short-name=STRING
		   Short name of the virtual sound card.

	       long-name=STRING
		   Long	name of	the virtual sound card.

	   PCM specification:

	       name=STRING
		   Name	of the PCM sound device	within the virtual sound card.

	   STREAM specification:

	       unique-id=STRING
		   Unique stream identifier.

	       type=TYPE
		   Stream type:	"p" - playback stream, "c" - capture stream.

	   EXAMPLE:

	       vsnd = [
		   ['CARD, short-name=Main, sample-formats=s16_le;s8;u32_be',
		       'PCM, name=Main',
			   'STREAM, id=0, type=p',
			   'STREAM, id=1, type=c, channels-max=2'
		   ],
		   ['CARD, short-name=Second',
		       'PCM, name=Second, buffer-size=1024',
			   'STREAM, id=2, type=p',
			   'STREAM, id=3, type=c'
		   ]
	       ]

       vkb=[ "VKB_SPEC_STRING",	"VKB_SPEC_STRING", ...]
	   Specifies the virtual keyboard device to be provided	to the guest.

	   Each	VKB_SPEC_STRING	is a comma-separated list of "KEY=VALUE"
	   settings from the following list:

	   unique-id=STRING
	       Specifies the unique input device id.

	   backend=domain-id
	       Specifies the backend domain name or id.

	   backend-type=type
	       Specifies the backend type: qemu	- for QEMU backend or linux -
	       for Linux PV domain.

	   feature-disable-keyboard=BOOLEAN
	       Indicates if keyboard device is disabled.

	   feature-disable-pointer=BOOLEAN
	       Indicates if pointer device is disabled.

	   feature-abs-pointer=BOOLEAN
	       Indicates if pointer device can return absolute coordinates.

	   feature-raw-pointer=BOOLEAN
	       Indicates if pointer device can return raw (unscaled) absolute
	       coordinates.

	   feature-multi-touch=BOOLEAN
	       Indicates if input device supports multi	touch.

	   multi-touch-width=MULTI_TOUCH_WIDTH
	       Set maximum width for multi touch device.

	   multi-touch-height=MULTI_TOUCH_HEIGHT
	       Set maximum height for multi touch device.

	   multi-touch-num-contacts=MULTI_TOUCH_NUM_CONTACTS
	       Set maximum contacts number for multi touch device.

	   width=WIDTH
	       Set maximum width for pointer device.

	   height=HEIGHT
	       Set maximum height for pointer device.
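
           For example, a Linux PV backend keyboard/pointer device reporting
           absolute coordinates (the unique id shown is illustrative) might
           look like:

             vkb=[ "backend-type=linux,unique-id=vkb0,feature-abs-pointer=1" ]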

       tee="STRING"
	   Arm only. Set TEE type for the guest. TEE is	a Trusted Execution
	   Environment -- separate secure OS found on some platforms. STRING
	   can be one of the following:

	   none
	       Don't allow the guest to use TEE if present on the platform.
	       This is the default value.

	   optee
	       Allow a guest to	access the host	OP-TEE OS. Xen will mediate
	       the access to OP-TEE and	the resource isolation will be
	       provided	directly by OP-TEE. OP-TEE itself may limit the	number
	       of guests that can concurrently use it. A virtualization-aware
	       OP-TEE is required for this to work.

	       You can refer to	OP-TEE documentation
	       <https://optee.readthedocs.io/en/latest/architecture/virtualization.html>
	       for more	information about how to enable	and configure
	       virtualization support in OP-TEE.

	       This feature is a technology preview.

   Paravirtualised (PV)	Guest Specific Options
       The following options apply only	to Paravirtual (PV) guests.

       bootloader="PROGRAM"
	   Run "PROGRAM" to find the kernel image and ramdisk to use.
	   Normally "PROGRAM" would be "pygrub", which is an emulation of
	   grub/grub2/syslinux.	Either kernel or bootloader must be specified
	   for PV guests.

       bootloader_args=[ "ARG",	"ARG", ...]
	   Append ARGs to the arguments	to the bootloader program.
	   Alternatively if the	argument is a simple string then it will be
	   split into words at whitespace (this	second option is deprecated).

       e820_host=BOOLEAN
	   Selects whether to expose the host e820 (memory map)	to the guest
	   via the virtual e820. When this option is false (0) the guest
	   pseudo-physical address space consists of a single contiguous RAM
	   region. When	this option is specified the virtual e820 instead
	   reflects the	host e820 and contains the same	PCI holes. The total
	   amount of RAM represented by the memory map is always the same;
	   this option configures only how it is laid out.

	   Exposing the	host e820 to the guest gives the guest kernel the
	   opportunity to set aside the	required part of its pseudo-physical
	   address space in order to provide address space to map
	   passed-through PCI devices. It is guest Operating System dependent
	   whether this	option is required, specifically it is required	when
	   using a mainline Linux ("pvops") kernel. This option	defaults to
	   true	(1) if any PCI passthrough devices are configured and false
	   (0) otherwise. If you do not	configure any passthrough devices at
	   domain creation time	but expect to hotplug devices later then you
	   should set this option. Conversely if your particular guest kernel
	   does	not require this behaviour then	it is safe to allow this to be
	   enabled but you may wish to disable it anyway.

   Fully-virtualised (HVM) Guest Specific Options
       The following options apply only	to Fully-virtualised (HVM) guests.

       Boot Device

       boot="STRING"
	   Specifies the emulated virtual device to boot from.

	   Possible values are:

	   c   Hard disk.

	   d   CD-ROM.

	   n   Network / PXE.

	   Note: multiple options can be given and will	be attempted in	the
	   order they are given, e.g. to boot from CD-ROM but fall back	to the
	   hard	disk you can specify it	as dc.

	   The default is cd, meaning try booting from the hard	disk first,
	   but fall back to the	CD-ROM.

       Emulated	disk controller	type

       hdtype=STRING
	   Specifies the hard disk type.

	   Possible values are:

	   ide If this mode is specified, xl adds an emulated IDE controller,
	       which is suitable even for older operating systems.

	   ahci
	       If this mode is specified, xl adds an ich9 disk controller in
	       AHCI mode and uses it with upstream QEMU	to emulate disks
	       instead of IDE. It decreases boot time but may not be supported
	       by default in older operating systems, e.g.  Windows XP.

	   The default is ide.

       Paging

       The following options control the mechanisms used to virtualise guest
       memory.	The defaults are selected to give the best results for the
       common cases so you should normally leave these options unspecified.

       hap=BOOLEAN
	   Turns "hardware assisted paging" (the use of	the hardware nested
	   page	table feature) on or off.  This	feature	is called EPT
	   (Extended Page Tables) by Intel and NPT (Nested Page	Tables)	or RVI
	   (Rapid Virtualisation Indexing) by AMD. If turned off, Xen will run
	   the guest in	"shadow	page table" mode where the guest's page	table
	   updates and/or TLB flushes etc. will	be emulated.  Use of HAP is
	   the default when available.

       oos=BOOLEAN
	   Turns "out of sync pagetables" on or	off.  When running in shadow
	   page	table mode, the	guest's	page table updates may be deferred as
	   specified in	the Intel/AMD architecture manuals.  However, this may
	   expose unexpected bugs in the guest,	or find	bugs in	Xen, so	it is
	   possible to disable this feature.  Use of out of sync page tables,
	   when	Xen thinks it appropriate, is the default.

       shadow_memory=MBYTES
	   Number of megabytes to set aside for	shadowing guest	pagetable
	   pages (effectively acting as	a cache	of translated pages) or	to use
	   for HAP state. By default this is 1MB per guest vCPU	plus 8KB per
	   MB of guest RAM. You	should not normally need to adjust this	value.
	   However, if you are not using hardware assisted paging (i.e.	you
	   are using shadow mode) and your guest workload consists of a	very
	   large number	of similar processes then increasing this value	may
	   improve performance.

       Processor and Platform Features

       The following options allow various processor and platform level
       features	to be hidden or	exposed	from the guest's point of view.	This
       can be useful when running older	guest Operating	Systems	which may
       misbehave when faced with more modern features. In general, you should
       accept the defaults for these options wherever possible.

       bios="STRING"
	   Select the virtual firmware that is exposed to the guest.  By
	   default, a guess is made based on the device	model, but sometimes
	   it may be useful to request a different one,	like UEFI.

	   rombios
	       Loads ROMBIOS, a	16-bit x86 compatible BIOS. This is used by
	       default when device_model_version=qemu-xen-traditional. This is
	       the only	BIOS option supported when
	       device_model_version=qemu-xen-traditional. This is the BIOS
	       used by all previous Xen	versions.

	   seabios
	       Loads SeaBIOS, a	16-bit x86 compatible BIOS. This is used by
	       default with device_model_version=qemu-xen.

	   ovmf
	       Loads OVMF, a standard UEFI firmware by the Tianocore project.
	       Requires	device_model_version=qemu-xen.
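
	   For example, to boot the guest with UEFI firmware (OVMF must be
	   available on the host):

	       bios="ovmf"
	       device_model_version="qemu-xen"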

       bios_path_override="PATH"
	   Override the	path to	the blob to be used as BIOS. The blob provided
	   here	MUST be	consistent with	the bios= which	you have specified.
	   You should not normally need	to specify this	option.

	   This	option does not	have any effect	if using bios="rombios"	or
	   device_model_version="qemu-xen-traditional".

       pae=BOOLEAN
	   Hide	or expose the IA32 Physical Address Extensions.	These
	   extensions make it possible for a 32	bit guest Operating System to
	   access more than 4GB of RAM. Enabling PAE also enables other
	   features such as NX. PAE is required if you wish to run a 64-bit
	   guest Operating System. In general, you should leave	this enabled
	   and allow the guest Operating System	to choose whether or not to
	   use PAE. (X86 only)

       acpi=BOOLEAN
	   Expose ACPI (Advanced Configuration and Power Interface) tables
	   from	the virtual firmware to	the guest Operating System. ACPI is
	   required by most modern guest Operating Systems. This option	is
	   enabled by default and usually you should omit it. However, it may
	   be necessary	to disable ACPI	for compatibility with some guest
	   Operating Systems.  By default this option is true (1) on x86 and
	   false (0) on Arm.

       acpi_s3=BOOLEAN
	   Include the S3 (suspend-to-ram) power state in the virtual firmware
	   ACPI	table. True (1)	by default.

       acpi_s4=BOOLEAN
	   Include S4 (suspend-to-disk)	power state in the virtual firmware
	   ACPI	table. True (1)	by default.

       acpi_laptop_slate=BOOLEAN
	   Include the Windows laptop/slate mode switch	device in the virtual
	   firmware ACPI table.	False (0) by default.

       apic=BOOLEAN
	   (x86	only) Include information regarding APIC (Advanced
	   Programmable	Interrupt Controller) in the firmware/BIOS tables on a
	   single processor guest. This	causes the MP (multiprocessor) and PIR
	   (PCI	Interrupt Routing) tables to be	exported by the	virtual
	   firmware. This option has no	effect on a guest with multiple
	   virtual CPUs	as they	must always include these tables. This option
	   is enabled by default and you should	usually	omit it	but it may be
	   necessary to	disable	these firmware tables when using certain older
	   guest Operating Systems. These tables have been superseded by newer
	   constructs within the ACPI tables.

       nx=BOOLEAN
	   (x86	only) Hides or exposes the No-eXecute capability. This allows
	   a guest Operating System to map pages in such a way that they
	   cannot be executed, which can enhance security. This option
	   requires that PAE also be enabled.

       hpet=BOOLEAN
	   (x86	only) Enables or disables HPET (High Precision Event Timer).
	   This	option is enabled by default and you should usually omit it.
	   It may be necessary to disable the HPET in order to improve
	   compatibility with guest Operating Systems.

       altp2m="MODE"
	   (x86	only) Specifies	the access mode	to the alternate-p2m
	   capability.	Alternate-p2m allows a guest to	manage multiple	p2m
	   guest physical "memory views" (as opposed to	a single p2m).	You
	   may want this option	if you want to access-control/isolate access
	   to specific guest physical memory pages accessed by the guest, e.g.
	   for domain memory introspection or for isolation/access-control of
	   memory between components within a single guest domain. This	option
	   is disabled by default.

	   The valid values are	as follows:

	   disabled
	       Altp2m is disabled for the domain (default).

	   mixed
	       The mixed mode allows access to the altp2m interface for both
	       in-guest and external tools.

	   external
	       Enables access to the alternate-p2m capability by external
	       privileged tools.

	   limited
	       Enables limited access to the alternate-p2m capability, i.e.
	       giving the guest access only to enable/disable the VMFUNC and
	       #VE features.
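
	   For example, to restrict the alternate-p2m interface to external
	   privileged tools only:

	       altp2m="external"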

       altp2mhvm=BOOLEAN
	   Enables or disables HVM guest access	to alternate-p2m capability.
	   Alternate-p2m allows	a guest	to manage multiple p2m guest physical
	   "memory views" (as opposed to a single p2m).	This option is
	   disabled by default and is available	only to	HVM domains.  You may
	   want	this option if you want	to access-control/isolate access to
	   specific guest physical memory pages	accessed by the	guest, e.g.
	   for HVM domain memory introspection or for isolation/access-control
	   of memory between components	within a single	guest HVM domain. This
	   option is deprecated, use the option	"altp2m" instead.

	   Note: While the option "altp2mhvm" is deprecated, legacy
	   applications	for x86	systems	will continue to work using it.

       nestedhvm=BOOLEAN
	   Enables or disables guest access to hardware virtualisation
	   features, e.g. it allows a guest Operating System to	also function
	   as a	hypervisor. You	may want this option if	you want to run
	   another hypervisor (including another copy of Xen) within a Xen
	   guest or to support a guest Operating System	which uses hardware
	   virtualisation extensions (e.g. Windows XP compatibility mode on
	   more	modern Windows OS).  This option is disabled by	default.
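
	   For example, to allow a guest to run its own hypervisor:

	       nestedhvm=1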

       cpuid="LIBXL_STRING" or cpuid=[ "XEND_STRING", "XEND_STRING" ]
	   Configure the value returned	when a guest executes the CPUID
	   instruction.	 Two versions of config	syntax are recognized: libxl
	   and xend.

	   Both	formats	use a common notation for specifying a single feature
	   bit.	 Possible values are:
	     '1' -> force the corresponding bit to 1
	     '0' -> force to 0
	     'x' -> get a safe value (pass through and mask with the
	            default policy)
	     'k' -> pass through the host bit value (at boot only - value
	            preserved on migrate)
	     's' -> legacy alias for 'k'

	   Libxl format:

	       The libxl format	is a single string, starting with the word
	       "host", and followed by a comma separated list of key=value
	       pairs.  A few keys take a numerical value, all others take a
	       single character	which describes	what to	do with	the feature
	       bit.  e.g.:

		   cpuid="host,tm=0,sse3=0"

	       List of keys taking a value:

		   apicidsize brandid clflush family localapicid maxleaf
		   maxhvleaf model nc proccount	procpkg	stepping

	       List of keys taking a character:

		   3dnow 3dnowext 3dnowprefetch	abm acpi adx aes altmovcr8
		   apic	arat avx avx2 avx512-4fmaps avx512-4vnniw avx512bw
		   avx512cd avx512dq avx512er avx512f avx512ifma avx512pf
		   avx512vbmi avx512vl bmi1 bmi2 clflushopt clfsh clwb cmov
		   cmplegacy cmpxchg16 cmpxchg8	cmt cntxid dca de ds dscpl
		   dtes64 erms est extapic f16c	ffxsr fma fma4 fpu fsgsbase
		   fxsr	hle htt	hypervisor ia64	ibs invpcid invtsc lahfsahf lm
		   lwp mca mce misalignsse mmx mmxext monitor movbe mpx	msr
		   mtrr	nodeid nx ospke	osvw osxsave pae page1gb pat pbe pcid
		   pclmulqdq pdcm perfctr_core perfctr_nb pge pku popcnt pse
		   pse36 psn rdrand rdseed rdtscp rtm sha skinit smap smep smx
		   ss sse sse2 sse3 sse4.1 sse4.2 sse4_1 sse4_2	sse4a ssse3
		   svm svm_decode svm_lbrv svm_npt svm_nrips svm_pausefilt
		   svm_tscrate svm_vmcbclean syscall sysenter tbm tm tm2
		   topoext tsc tsc-deadline tsc_adjust umip vme	vmx wdt	x2apic
		   xop xsave xtpr

	   Xend	format:

	       Xend format consists of an array	of one or more strings of the
	       form "leaf:reg=bitstring,...".  e.g. (matching the libxl
	       example above):

		   cpuid=["1:ecx=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx0,edx=xx0xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
		   ...]

	       "leaf" is an integer, either decimal or hex with	a "0x" prefix.
	       e.g. to specify something in the	AMD feature leaves, use
	       "0x80000001:ecx=...".

	       Some leaves have	subleaves which	can be specified as
	       "leaf,subleaf".	e.g. for the Intel structured feature leaf,
	       use "7,0:ebx=..."

	       The bitstring represents all bits in the register; its length
	       must be 32 characters.  Each successive character represents a
	       less significant bit.

	   Note: when specifying cpuid for hypervisor leaves (0x4000xxxx major
	   group) only the lowest 8 bits of leaf's 0x4000xx00 EAX register are
	   processed, the rest are ignored (these 8 bits signify maximum
	   number of hypervisor	leaves).

	   More	info about the CPUID instruction can be	found in the processor
	   manuals, and	on Wikipedia: <https://en.wikipedia.org/wiki/CPUID>

       acpi_firmware="STRING"
	   Specifies a path to a file that contains extra ACPI firmware	tables
	   to pass into	a guest. The file can contain several tables in	their
	   binary AML form concatenated	together. Each table self describes
	   its length so no additional information is needed. These tables
	   will	be added to the	ACPI table set in the guest. Note that
	   existing tables cannot be overridden	by this	feature. For example,
	   this	cannot be used to override tables like DSDT, FADT, etc.

       smbios_firmware="STRING"
	   Specifies a path to a file that contains extra SMBIOS firmware
	   structures to pass into a guest. The	file can contain a set of DMTF
	   predefined structures which will override the internal defaults.
	   Not all predefined structures can be	overridden, only the following
	   types: 0, 1,	2, 3, 11, 22, 39. The file can also contain any	number
	   of vendor defined SMBIOS structures (type 128 - 255). Since SMBIOS
	   structures do not present their overall size, each entry in the
	   file must be preceded by a 32-bit integer indicating the size of
	   the following structure.

       ms_vm_genid="OPTION"
	   Provide a VM	generation ID to the guest.

	   The VM generation ID	is a 128-bit random number that	a guest	may
	   use to determine if the guest has been restored from	an earlier
	   snapshot or cloned.

	   This	is required for	Microsoft Windows Server 2012 (and later)
	   domain controllers.

	   Valid options are:

	   generate
	       Generate	a random VM generation ID every	time the domain	is
	       created or restored.

	   none
	       Do not provide a	VM generation ID.

	   See also "Virtual Machine Generation	ID" by Microsoft:
	   <https://docs.microsoft.com/en-us/windows/win32/hyperv_v2/virtual-machine-generation-identifier>
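
	   For example, for a Windows Server 2012 domain controller guest:

	       ms_vm_genid="generate"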

       Guest Virtual Time Controls

       tsc_mode="MODE"
	   (x86	only) Specifies	how the	TSC (Time Stamp	Counter) should	be
	   provided to the guest. Specifying this option as a number is
	   deprecated.

	   Options are:

	   default
	       Guest rdtsc/p is	executed natively when monotonicity can	be
	       guaranteed and emulated otherwise (with frequency scaled	if
	       necessary).

	       If a HVM	container in default TSC mode is created on a host
	       that provides constant host TSC,	its guest TSC frequency	will
	       be the same as the host's. If it is later migrated to another
	       host that provides constant host TSC and supports Intel VMX
	       TSC scaling/AMD SVM TSC ratio, its guest TSC frequency will be
	       the same before and after migration, and guest rdtsc/p will be
	       executed natively after migration as well.

	   always_emulate
	       Guest rdtsc/p is	always emulated	and the	virtual	TSC will
	       appear to increment (kernel and user) at	a fixed	1GHz rate,
	       regardless of the pCPU HZ rate or power state. Although there
	       is an overhead associated with emulation, this will NOT affect
	       underlying CPU performance.

	   native
	       Guest rdtsc/p is always executed natively (no
	       monotonicity/frequency guarantees); if unsupported by the
	       hardware, guest rdtsc/p is emulated at the native frequency.

	   native_paravirt
	       This mode has been removed.

	   Please see xen-tscmode(7) for more information on this option.
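
	   For example, to force TSC emulation regardless of host support:

	       tsc_mode="always_emulate"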

       localtime=BOOLEAN
	   Set the real	time clock to local time or to UTC. False (0) by
	   default, i.e. set to	UTC.

       rtc_timeoffset=SECONDS
	   Set the real	time clock offset in seconds. No offset	(0) by
	   default.

       vpt_align=BOOLEAN
	   Specifies that periodic Virtual Platform Timers should be aligned
	   to reduce guest interrupts. Enabling	this option can	reduce power
	   consumption, especially when a guest uses a high timer interrupt
	   frequency (HZ) value. The default is true (1).

       timer_mode="MODE"
	   Specifies the mode for Virtual Timers. The valid values are as
	   follows:

	   delay_for_missed_ticks
	       Delay for missed	ticks. Do not advance a	vCPU's time beyond the
	       correct delivery	time for interrupts that have been missed due
	       to preemption. Deliver missed interrupts	when the vCPU is
	       rescheduled and advance the vCPU's virtual time stepwise	for
	       each one.

	   no_delay_for_missed_ticks
	       No delay	for missed ticks. As above, missed interrupts are
	       delivered, but guest time always	tracks wallclock (i.e.,	real)
	       time while doing	so. This is the	default.

	   no_missed_ticks_pending
	       No missed interrupts are	held pending. Instead, to ensure ticks
	       are delivered at	some non-zero rate, if we detect missed	ticks
	       then the	internal tick alarm is not disabled if the vCPU	is
	       preempted during	the next tick period.

	   one_missed_tick_pending
	       One missed tick pending.	Missed interrupts are collapsed
	       together	and delivered as one 'late tick'.  Guest time always
	       tracks wallclock	(i.e., real) time.
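
	   For example, to collapse missed ticks into a single late tick:

	       timer_mode="one_missed_tick_pending"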

       Memory layout

       mmio_hole=MBYTES
	   Specifies the size, in megabytes, of the MMIO hole below 4GiB.
	   Only valid for device_model_version="qemu-xen".

	   Cannot be smaller than 256. Cannot be larger	than 3840.

	   A known good large value is 3072.
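
	   For example, to enlarge the MMIO hole to that known good value:

	       device_model_version="qemu-xen"
	       mmio_hole=3072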

       Support for Paravirtualisation of HVM Guests

       The following options allow Paravirtualised features (such as devices)
       to be exposed to	the guest Operating System in an HVM guest.  Utilising
       these features requires specific	guest support but when available they
       will result in improved performance.

       xen_platform_pci=BOOLEAN
	   Enable or disable the Xen platform PCI device.  The presence	of
	   this	virtual	device enables a guest Operating System	(subject to
	   the availability of suitable	drivers) to make use of
	   paravirtualisation features such as disk and	network	devices	etc.
	   Enabling these drivers improves performance and is strongly
	   recommended when available. PV drivers are available	for various
	   Operating Systems including HVM Linux (out-of-the-box) and
	   Microsoft Windows <https://xenproject.org/windows-pv-drivers/>.

	   Setting xen_platform_pci=0 with the default device_model "qemu-xen"
	   requires at least QEMU 1.6.

       viridian=[ "GROUP", "GROUP", ...] or viridian=BOOLEAN
	   The groups of Microsoft Hyper-V (AKA	viridian) compatible
	   enlightenments exposed to the guest.	The following groups of
	   enlightenments may be specified:

	   base
	       This group incorporates the Hypercall MSRs, Virtual processor
	       index MSR, and APIC access MSRs.	These enlightenments can
	       improve performance of Windows Vista and	Windows	Server 2008
	       onwards and setting this	option for such	guests is strongly
	       recommended.  This group	is also	a pre-requisite	for all
	       others. If it is	disabled then it is an error to	attempt	to
	       enable any other	group.

	   freq
	       This group incorporates the TSC and APIC	frequency MSRs.	These
	       enlightenments can improve performance of Windows 7 and Windows
	       Server 2008 R2 onwards.

	   time_ref_count
	       This group incorporates Partition Time Reference	Counter	MSR.
	       This enlightenment can improve performance of Windows 8 and
	       Windows Server 2012 onwards.

	   reference_tsc
	       This set	incorporates the Partition Reference TSC MSR. This
	       enlightenment can improve performance of	Windows	7 and Windows
	       Server 2008 R2 onwards.

	   hcall_remote_tlb_flush
	       This set	incorporates use of hypercalls for remote TLB
	       flushing.  This enlightenment may improve performance of
	       Windows guests running on hosts with higher levels of
	       (physical) CPU contention.

	   apic_assist
	       This set	incorporates use of the	APIC assist page to avoid EOI
	       of the local APIC.  This	enlightenment may improve performance
	       of guests that make use of per-vCPU event channel upcall
	       vectors.	 Note that this	enlightenment will have	no effect if
	       the guest is using APICv	posted interrupts.

	   crash_ctl
	       This group incorporates the crash control MSRs. These
	       enlightenments allow Windows to write crash information such
	       that it can be logged by	Xen.

	   stimer
	       This set	incorporates the SynIC and synthetic timer MSRs.
	       Windows will use	synthetic timers in preference to emulated
	       HPET for	a source of ticks and hence enabling this group	will
	       ensure that ticks will be consistent with use of	an enlightened
	       time source (time_ref_count or reference_tsc).

	   hcall_ipi
	       This set	incorporates use of a hypercall	for interprocessor
	       interrupts.  This enlightenment may improve performance of
	       Windows guests with multiple virtual CPUs.

	   defaults
	       This is a special value that enables the	default	set of groups,
	       which is	currently the base, freq, time_ref_count, apic_assist,
	       crash_ctl and stimer groups.

	   all This is a special value that enables all	available groups.

	   Groups can be disabled by prefixing the name	with '!'. So, for
	   example, to enable all groups except	freq, specify:

	       viridian=[ "all", "!freq" ]

	   For details of the enlightenments see the latest version of
	   Microsoft's Hypervisor Top-Level Functional Specification.

	   The enlightenments should be	harmless for other versions of Windows
	   (although they will not give	any benefit) and the majority of other
	   non-Windows OSes.  However it is known that they are	incompatible
	   with	some other Operating Systems and in some circumstance can
	   prevent Xen's own paravirtualisation	interfaces for HVM guests from
	   being used.

	   The viridian	option can be specified	as a boolean. A	value of true
	   (1) is equivalent to	the list [ "defaults" ], and a value of	false
	   (0) is equivalent to	an empty list.
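
	   For example, the following two settings are equivalent ways of
	   requesting the default set of enlightenments:

	       viridian=1
	       viridian=[ "defaults" ]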

       Emulated	VGA Graphics Device

       The following options control the features of the emulated graphics
       device.	Many of	these options behave similarly to the equivalent key
       in the VFB_SPEC_STRING for configuring virtual frame buffer devices
       (see above).

       videoram=MBYTES
	   Sets	the amount of RAM which	the emulated video card	will contain,
	   which in turn limits	the resolutions	and bit	depths which will be
	   available.

	   When	using the qemu-xen-traditional device-model, the default as
	   well	as minimum amount of video RAM for stdvga is 8 MB, which is
	   sufficient for e.g.	1600x1200 at 32bpp. For	the upstream qemu-xen
	   device-model, the default and minimum is 16 MB.

	   When	using the emulated Cirrus graphics card	(vga="cirrus") and the
	   qemu-xen-traditional	device-model, the amount of video RAM is fixed
	   at 4	MB, which is sufficient	for 1024x768 at	32 bpp.	For the
	   upstream qemu-xen device-model, the default and minimum is 8	MB.

	   For QXL vga, both the default and minimum are 128MB.  If videoram
	   is set to less than 128MB, an error will be triggered.

       stdvga=BOOLEAN
	   Specifies a standard VGA card with VBE (VESA BIOS Extensions) as
	   the emulated	graphics device. If your guest supports	VBE 2.0	or
	   later (e.g. Windows XP onwards) then	you should enable this.
	   stdvga supports more	video ram and bigger resolutions than Cirrus.
	   The default is false	(0) which means	to emulate a Cirrus Logic
	   GD5446 VGA card.  This option is deprecated,	use vga="stdvga"
	   instead.

       vga="STRING"
	   Selects the emulated	video card.  Options are: none,	stdvga,	cirrus
	   and qxl.  The default is cirrus.

	   In general, QXL should work with the	Spice remote display protocol
	   for acceleration, and a QXL driver is necessary in the guest	in
	   that	case.  QXL can also work with the VNC protocol,	but it will be
	   like	a standard VGA card without acceleration.
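
	   For example, to select the standard VGA card with the default
	   amount of video memory for the upstream device model:

	       vga="stdvga"
	       videoram=16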

       vnc=BOOLEAN
	   Allow access	to the display via the VNC protocol.  This enables the
	   other VNC-related settings.	The default is (1) enabled.

       vnclisten="ADDRESS[:DISPLAYNUM]"
	   Specifies the IP address and, optionally, the VNC display number to
	   use.

       vncdisplay=DISPLAYNUM
	   Specifies the VNC display number to use. The	actual TCP port	number
	   will	be DISPLAYNUM+5900.

       vncunused=BOOLEAN
	   Requests that the VNC display setup searches	for a free TCP port to
	   use.	 The actual display used can be	accessed with xl vncviewer.

       vncpasswd="PASSWORD"
	   Specifies the password for the VNC server. If the password is set
	   to an empty string, authentication on the VNC server	will be
	   disabled allowing any user to connect.
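
	   For example, to enable VNC on all host addresses and let the
	   toolstack pick a free display automatically:

	       vnc=1
	       vnclisten="0.0.0.0"
	       vncunused=1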

       keymap="LANG"
	   Configure the keymap	to use for the keyboard	associated with	this
	   display. If the input method	does not easily	support	raw keycodes
	   (e.g. this is often the case	when using VNC)	then this allows us to
	   correctly map the input keys	into keycodes seen by the guest. The
	   specific values which are accepted are defined by the version of
	   the device-model which you are using. See Keymaps below or consult
	   the qemu(1) manpage.	The default is en-us.

       sdl=BOOLEAN
	   Specifies that the display should be	presented via an X window
	   (using Simple DirectMedia Layer). The default is (0)	not enabled.

       opengl=BOOLEAN
	   Enable OpenGL acceleration of the SDL display. Only affects
	   machines using device_model_version="qemu-xen-traditional" and only
	   if the device-model was compiled with OpenGL	support. Default is
	   (0) false.

       nographic=BOOLEAN
	   Enable or disable the virtual graphics device.  The default is to
	   provide a VGA graphics device but this option can be	used to
	   disable it.

       Spice Graphics Support

       The following options control the features of SPICE.

       spice=BOOLEAN
	   Allow access	to the display via the SPICE protocol.	This enables
	   the other SPICE-related settings.

       spicehost="ADDRESS"
	   Specifies the interface address to listen on	if given, otherwise
	   any interface.

       spiceport=NUMBER
	   Specifies the port to listen	on by the SPICE	server if SPICE	is
	   enabled.

       spicetls_port=NUMBER
	   Specifies the secure	port to	listen on by the SPICE server if SPICE
	   is enabled. At least	one of spiceport or spicetls_port must be
	   given if SPICE is enabled.

	   Note: the options depending on spicetls_port are not currently
	   supported.

       spicedisable_ticketing=BOOLEAN
	   Enable clients to connect without specifying	a password. When
	   disabled, spicepasswd must be set. The default is (0) false.

       spicepasswd="PASSWORD"
	   Specify the password	which is used by clients for establishing a
	   connection.

       spiceagent_mouse=BOOLEAN
	   Whether the SPICE agent is used for client mouse mode. The default
	   is (1) true.

       spicevdagent=BOOLEAN
	   Enables the SPICE vdagent. The SPICE	vdagent	is an optional
	   component for enhancing user	experience and performing guest-
	   oriented management tasks. Its features include: client mouse mode
	   (no need to grab the	mouse by the client, no	mouse lag), automatic
	   adjustment of screen	resolution, copy and paste (text and image)
	   between the client and the guest. It	also requires the vdagent
	   service installed on	the guest OS to	work.  The default is (0)
	   disabled.

       spice_clipboard_sharing=BOOLEAN
	   Enables SPICE clipboard sharing (copy/paste). It requires that
	   spicevdagent	is enabled. The	default	is (0) false.

       spiceusbredirection=NUMBER
	   Enables SPICE USB redirection. Creates a NUMBER of USB redirection
	   channels for	redirecting up to 4 USB	devices	from the SPICE client
	   to the guest's QEMU.  It requires a USB controller and, if none
	   is defined, one USB 2.0 controller will be added automatically.
	   The default is (0) disabled.

       spice_image_compression="COMPRESSION"
	   Specifies what image	compression is to be used by SPICE (if given),
	   otherwise the QEMU default will be used. Please see the
	   documentation of your QEMU version for more details.

	   Available options are: auto_glz, auto_lz, quic, glz,	lz, off.

       spice_streaming_video="VIDEO"
	   Specifies what streaming video setting is to	be used	by SPICE (if
	   given), otherwise the QEMU default will be used.

	   Available options are: filter, all, off.
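
	   For example, a minimal SPICE setup listening on the loopback
	   address without TLS and without ticketing (the port number is
	   purely illustrative) might look like:

	       spice=1
	       spicehost="127.0.0.1"
	       spiceport=6000
	       spicedisable_ticketing=1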

       Miscellaneous Emulated Hardware

       serial=[	"DEVICE", "DEVICE", ...]
	   Redirect virtual serial ports to DEVICEs. Please see	the -serial
	   option in the qemu(1) manpage for details of	the valid DEVICE
	   options. Default is vc when in graphical mode and stdio if
	   nographic=1 is used.

	   The form serial=DEVICE is also accepted for backwards
	   compatibility.
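
	   For example, to attach the first emulated serial port to a
	   pseudo-terminal and the second to the device model's standard I/O
	   (see qemu(1) for the available DEVICE values):

	       serial=[ "pty", "stdio" ]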

       soundhw="DEVICE"
	   Select the virtual sound card to expose to the guest. The valid
	   devices are defined by the device model configuration, please see
	   the qemu(1) manpage for details. The	default	is not to export any
	   sound device.

       vkb_device=BOOLEAN
	   Specifies that the HVM guest gets a vkbd (virtual keyboard
	   device). The default is true (1).

       usb=BOOLEAN
	   Enables or disables an emulated USB bus in the guest.

       usbversion=NUMBER
	   Specifies the type of the emulated USB bus in the guest: 1 for
	   USB 1.1, 2 for USB 2.0 and 3 for USB 3.0. It is available only
	   with upstream QEMU.  Due to implementation limitations this is
	   not compatible with the usb and usbdevice parameters.  The default
	   is (0), i.e. no USB controller defined.

       usbdevice=[ "DEVICE", "DEVICE", ...]
	   Adds	DEVICEs	to the emulated	USB bus. The USB bus must also be
	   enabled using usb=1.	The most common	use for	this option is
	   usbdevice=['tablet']	which adds a pointer device using absolute
	   coordinates.	Such devices function better than relative coordinate
	   devices (such as a standard mouse) since many methods of exporting
	   guest graphics (such	as VNC)	work better in this mode. Note that
	   this	is independent of the actual pointer device you	are using on
	   the host/client side.

	   Host	devices	can also be passed through in this way,	by specifying
	   host:USBID, where USBID is of the form xxxx:yyyy.  The USBID	can
	   typically be	found by using lsusb(1)	or usb-devices(1).

	   If you wish to use the "host:bus.addr" format, remove any leading
	   '0' from the	bus and	addr. For example, for the USB device on bus
	   008 dev 002,	you should write "host:8.2".

	   The form usbdevice=DEVICE is	also accepted for backwards
	   compatibility.

	   More	valid options can be found in the "usbdevice" section of the
	   QEMU	documentation.
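
	   For example, to enable the emulated USB bus and add an absolute
	   pointing device:

	       usb=1
	       usbdevice=[ "tablet" ]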

       vendor_device="VENDOR_DEVICE"
	   Selects which variant of the	QEMU xen-pvdevice should be used for
	   this	guest. Valid values are:

	   none
	       The xen-pvdevice	should be omitted. This	is the default.

	   xenserver
	       The xenserver variant of	the xen-pvdevice (device-id=C000) will
	       be specified, enabling the use of XenServer PV drivers in the
	       guest.

	   This	parameter only takes effect when
	   device_model_version=qemu-xen.  See xen-pci-device-reservations(7)
	   for more information.

   PVH Guest Specific Options
       nestedhvm=BOOLEAN
	   Enables or disables guest access to hardware virtualisation
	   features, e.g. it allows a guest Operating System to	also function
	   as a	hypervisor.  You may want this option if you want to run
	   another hypervisor (including another copy of Xen) within a Xen
	   guest or to support a guest Operating System	which uses hardware
	   virtualisation extensions (e.g. Windows XP compatibility mode on
	   more	modern Windows OS).

	   This	option is disabled by default.

       bootloader="PROGRAM"
	   Run "PROGRAM" to find the kernel image and ramdisk to use.
	   Normally "PROGRAM" would be "pygrub", which is an emulation of
	   grub/grub2/syslinux.	Either kernel or bootloader must be specified
	   for PV guests.

       bootloader_args=[ "ARG",	"ARG", ...]
	   Append ARGs to the arguments	to the bootloader program.
	   Alternatively if the	argument is a simple string then it will be
	   split into words at whitespace (this	second option is deprecated).

       timer_mode="MODE"
	   Specifies the mode for Virtual Timers. The valid values are as
	   follows:

	   delay_for_missed_ticks
	       Delay for missed	ticks. Do not advance a	vCPU's time beyond the
	       correct delivery	time for interrupts that have been missed due
	       to preemption. Deliver missed interrupts	when the vCPU is
	       rescheduled and advance the vCPU's virtual time stepwise	for
	       each one.

	   no_delay_for_missed_ticks
	       No delay	for missed ticks. As above, missed interrupts are
	       delivered, but guest time always	tracks wallclock (i.e.,	real)
	       time while doing	so. This is the	default.

	   no_missed_ticks_pending
	       No missed interrupts are	held pending. Instead, to ensure ticks
	       are delivered at	some non-zero rate, if we detect missed	ticks
	       then the	internal tick alarm is not disabled if the vCPU	is
	       preempted during	the next tick period.

	   one_missed_tick_pending
	       One missed tick pending.	Missed interrupts are collapsed
	       together	and delivered as one 'late tick'.  Guest time always
	       tracks wallclock	(i.e., real) time.

       Paging

       The following options control the mechanisms used to virtualise guest
       memory.	The defaults are selected to give the best results for the
       common cases so you should normally leave these options unspecified.

       hap=BOOLEAN
	   Turns "hardware assisted paging" (the use of	the hardware nested
	   page	table feature) on or off.  This	feature	is called EPT
	   (Extended Page Tables) by Intel and NPT (Nested Page	Tables)	or RVI
	   (Rapid Virtualisation Indexing) by AMD. If turned off, Xen will run
	   the guest in	"shadow	page table" mode where the guest's page	table
	   updates and/or TLB flushes etc. will	be emulated.  Use of HAP is
	   the default when available.

       oos=BOOLEAN
	   Turns "out of sync pagetables" on or	off.  When running in shadow
	   page	table mode, the	guest's	page table updates may be deferred as
	   specified in	the Intel/AMD architecture manuals.  However, this may
	   expose unexpected bugs in the guest,	or find	bugs in	Xen, so	it is
	   possible to disable this feature.  Use of out of sync page tables,
	   when	Xen thinks it appropriate, is the default.

       shadow_memory=MBYTES
	   Number of megabytes to set aside for	shadowing guest	pagetable
	   pages (effectively acting as	a cache	of translated pages) or	to use
	   for HAP state. By default this is 1MB per guest vCPU	plus 8KB per
	   MB of guest RAM. You	should not normally need to adjust this	value.
	   However, if you are not using hardware assisted paging (i.e.	you
	   are using shadow mode) and your guest workload consists of a	very
	   large number	of similar processes then increasing this value	may
	   improve performance.

   Device-Model	Options
       The following options control the selection of the device-model.	 This
       is the component	which provides emulation of the	virtual	devices	to an
       HVM guest.  For a PV guest a device-model is sometimes used to provide
       backends	for certain PV devices (most usually a virtual framebuffer
       device).

       device_model_version="DEVICE-MODEL"
	   Selects which variant of the	device-model should be used for	this
	   guest.

	   Valid values	are:

	   qemu-xen
	       Use the device-model merged into	the upstream QEMU project.
	       This device-model is the	default	for Linux dom0.

	   qemu-xen-traditional
	       Use the device-model based upon the historical Xen fork of
	       QEMU.  This device-model	is still the default for NetBSD	dom0.

	   It is recommended to	accept the default value for new guests.  If
	   you have existing guests then, depending on the nature of the guest
	   Operating System, you may wish to force them	to use the device
	   model which they were installed with.

       device_model_override="PATH"
	   Override the	path to	the binary to be used as the device-model
	   running in toolstack	domain.	The binary provided here MUST be
	   consistent with the device_model_version which you have specified.
	   You should not normally need	to specify this	option.

       stubdomain_kernel="PATH"
	   Override the	path to	the kernel image used as device-model
	   stubdomain.	The binary provided here MUST be consistent with the
	   device_model_version which you have specified.  In the case of
	   qemu-xen-traditional it is expected to be a MiniOS-based
	   stubdomain image; in the case of qemu-xen it is expected to be a
	   Linux-based stubdomain kernel.

       stubdomain_ramdisk="PATH"
	   Override the	path to	the ramdisk image used as device-model
	   stubdomain.	The binary provided here is to be used by the kernel
	   pointed to by stubdomain_kernel.  It is known to be used only by
	   Linux-based stubdomain kernels.

       stubdomain_memory=MBYTES
	   Start the stubdomain	with MBYTES megabytes of RAM. Default is 128.

       device_model_stubdomain_override=BOOLEAN
	   Override the	use of stubdomain based	device-model.  Normally	this
	   will	be automatically selected based	upon the other features	and
	   options you have selected.

       device_model_stubdomain_seclabel="LABEL"
	   Assign an XSM security label	to the device-model stubdomain.

       device_model_args=[ "ARG", "ARG", ...]
	   Pass	additional arbitrary options on	the device-model command line.
	   Each	element	in the list is passed as an option to the device-
	   model.

       device_model_args_pv=[ "ARG", "ARG", ...]
	   Pass	additional arbitrary options on	the device-model command line
	   for a PV device model only. Each element in the list	is passed as
	   an option to	the device-model.

       device_model_args_hvm=[ "ARG", "ARG", ...]
	   Pass	additional arbitrary options on	the device-model command line
	   for an HVM device model only. Each element in the list is passed as
	   an option to	the device-model.
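
	   For example, to select the upstream device model and pass one
	   extra, purely illustrative option to it for an HVM guest (see
	   qemu(1) for the options the device model actually accepts):

	       device_model_version="qemu-xen"
	       device_model_args_hvm=[ "-no-reboot" ]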

   Keymaps
       The keymaps available are defined by the	device-model which you are
       using. Commonly this includes:

	       ar  de-ch  es  fo     fr-ca  hu	ja  mk	   no  pt-br  sv
	       da  en-gb  et  fr     fr-ch  is	lt  nl	   pl  ru     th
	       de  en-us  fi  fr-be  hr	    it	lv  nl-be  pt  sl     tr

       The default is en-us.

       See qemu(1) for more information.

   Architecture	Specific options
       ARM

       gic_version="vN"
	   Version of the GIC emulated for the guest.

	   Currently, the following versions are supported:

	   v2  Emulate a GICv2

	   v3  Emulate a GICv3.	Note that the emulated GIC does	not support
	       the GICv2 compatibility mode.

	   default
	       Emulate the same	version	as the native GIC hardware used	by the
	       host where the domain was created.

	   This	requires hardware compatibility	with the requested version,
	   either natively or via hardware backwards compatibility support.
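
	   For example, to emulate a GICv3 for the guest (the host must be
	   compatible with that version):

	       gic_version="v3"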

       vuart="uart"
	   To enable the vuart console, the user must specify the following
	   option in the VM config file:

	   vuart = "sbsa_uart"

	   Currently, only the "sbsa_uart" model is supported for ARM.

       x86

       mca_caps=[ "CAP", "CAP",	... ]
	   (HVM only) Enable MCA capabilities besides the default ones
	   enabled by the Xen hypervisor for the HVM domain. "CAP" can be one
	   of the following:

	   "lmce"
	       Intel local MCE

	   default
	       No MCA capabilities in above list are enabled.
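
	   For example, to expose Intel local MCE to an HVM guest:

	       mca_caps=[ "lmce" ]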

SEE ALSO
       xl(1)
       xl.conf(5)
       xlcpupool.cfg(5)
       xl-disk-configuration(5)
       xl-network-configuration(5)
       xen-tscmode(7)

FILES
       /etc/xen/NAME.cfg /var/lib/xen/dump/NAME

BUGS
       This document may contain items which require further documentation.
       Patches to improve incomplete items (or any other item) are gratefully
       received	on the xen-devel@lists.xenproject.org mailing list. Please see
       <https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches> for
       information on how to submit a patch to Xen.

4.14.0				  2020-08-30			     xl.cfg(5)
