
FreeBSD Manual Pages


ZPOOLPROPS(8)		  BSD System Manager's Manual		 ZPOOLPROPS(8)

     zpoolprops	-- available properties	for ZFS	storage	pools

     Each pool has several properties associated with it.  Some	properties are
     read-only statistics while	others are configurable	and change the behav-
     ior of the	pool.

     The following are read-only properties:

     allocated
	     Amount of storage used within the pool.  See fragmentation and
	     free for more information.

     capacity
	     Percentage of pool space used.  This property can also be
	     referred to by its shortened column name, cap.
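For instance, the read-only space-accounting properties can be viewed together with zpool list (the pool name tank is hypothetical):

```shell
# Show size, allocated, free, capacity and fragmentation for one pool.
zpool list -o name,size,allocated,free,cap,frag tank

# Query a single property directly.
zpool get capacity tank
```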

     expandsize
	     Amount of uninitialized space within the pool or device that can
	     be used to increase the total capacity of the pool.  On whole-
	     disk vdevs, this is the space beyond the end of the GPT,
	     typically occurring when a LUN is dynamically expanded or a disk
	     is replaced with a larger one.  On partition vdevs, this is the
	     space appended to the partition after it was added to the pool,
	     most likely by resizing it in-place.  The space can be claimed
	     for the pool by bringing it online with autoexpand=on or using
	     zpool online -e.
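Assuming a pool named tank on device da0 (both names hypothetical), the new space could be claimed like this:

```shell
# Check how much uninitialized space is available after growing the LUN.
zpool get expandsize tank

# Claim the space immediately for one device...
zpool online -e tank da0

# ...or enable automatic expansion for future growth.
zpool set autoexpand=on tank
```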

     fragmentation
	     The amount of fragmentation in the pool.  As the amount of space
	     allocated increases, it becomes more difficult to locate free
	     space.  This may result in lower write performance compared to
	     pools with more unfragmented free space.

     free    The amount	of free	space available	in the pool.  By contrast, the
	     zfs(8) available property describes how much new data can be
	     written to	ZFS filesystems/volumes.  The zpool free property is
	     not generally useful for this purpose, and	can be substantially
	     more than the zfs available space.	This discrepancy is due	to
	     several factors, including	raidz parity; zfs reservation, quota,
	     refreservation, and refquota properties; and space	set aside by
	     spa_slop_shift (see zfs-module-parameters(5) for more
	     information).

     freeing
	     After a file system or snapshot is destroyed, the space it was
	     using is returned to the pool asynchronously.  freeing is the
	     amount of space remaining to be reclaimed.	 Over time freeing
	     will decrease while free increases.
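Asynchronous reclamation can be observed after a large destroy; tank is a hypothetical pool name:

```shell
# freeing should shrink toward zero while free grows correspondingly.
zpool get freeing,free tank
```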

     health  The current health of the pool.  Health can be one of ONLINE,
	     DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.

     guid    A unique identifier for the pool.

     load_guid
	     A unique identifier for the pool.  Unlike the guid property,
	     this identifier is generated every time the pool is loaded (i.e.
	     it does not persist across imports/exports) and never changes
	     while the pool is loaded (even if a reguid operation takes
	     place).

     size    Total size	of the storage pool.

     unsupported@feature_guid
	     Information about unsupported features that are enabled on the
	     pool.  See zpool-features(5) for details.

     The space usage properties	report actual physical space available to the
     storage pool.  The	physical space can be different	from the total amount
     of	space that any contained datasets can actually use.  The amount	of
     space used	in a raidz configuration depends on the	characteristics	of the
     data being	written.  In addition, ZFS reserves some space for internal
     accounting	that the zfs(8)	command	takes into account, but	the zpoolprops
     command does not.	For non-full pools of a	reasonable size, these effects
     should be invisible.  For small pools, or pools that are close to being
     completely	full, these discrepancies may become more noticeable.
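The discrepancy described above can be seen by comparing the two accounting views side by side (hypothetical pool tank); the zfs numbers reflect raidz parity, reservations and slop space, so they will usually be smaller:

```shell
# Pool-level physical space accounting.
zpool list -o name,size,free tank

# Dataset-level usable space accounting.
zfs list -o name,used,avail tank
```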

     The following property can	be set at creation time	and import time:

     altroot
	     Alternate root directory.  If set, this directory is prepended
	     to any mount points within the pool.  This can be used when
	     examining an unknown pool where the mount points cannot be
	     trusted, or in an alternate boot environment, where the typical
	     paths are not valid.  altroot is not a persistent property.  It
	     is valid only while the system is up.  Setting altroot defaults
	     to using cachefile=none, though this may be overridden using an
	     explicit cachefile.
     The following property can	be set only at import time:

     readonly
	     If set to on, the pool will be imported in read-only mode.  This
	     property can also be referred to by its shortened column name,
	     rdonly.

     The following properties can be set at creation time and import time, and
     later changed with	the zpool set command:

     ashift  Pool sector size exponent, to the power of 2 (internally
	     referred to as ashift).  Values from 9 to 16, inclusive, are
	     valid; also,
	     the value 0 (the default) means to	auto-detect using the kernel's
	     block layer and a ZFS internal exception list. I/O	operations
	     will be aligned to	the specified size boundaries. Additionally,
	     the minimum (disk)	write size will	be set to the specified	size,
	     so	this represents	a space	vs. performance	trade-off. For optimal
	     performance, the pool sector size should be greater than or equal
	     to	the sector size	of the underlying disks. The typical case for
	     setting this property is when performance is important and	the
	     underlying	disks use 4KiB sectors but report 512B sectors to the
	     OS	(for compatibility reasons); in	that case, set ashift=12
	     (which is 1<<12 = 4096). When set,	this property is used as the
	     default hint value	in subsequent vdev operations (add, attach and
	     replace). Changing	this value will	not modify any existing	vdev,
	     not even on disk replacement; however it can be used, for in-
	     stance, to	replace	a dying	512B sectors disk with a newer 4KiB
	     sectors device: this will probably	result in bad performance but
	     at	the same time could prevent loss of data.
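A sketch of the typical case described above, with hypothetical device names da0 through da2:

```shell
# Create a pool on 4KiB-sector disks that report 512B sectors;
# ashift=12 forces 2^12 = 4096-byte alignment.
zpool create -o ashift=12 tank mirror da0 da1

# The same value can be passed as a hint to a later vdev operation,
# e.g. when replacing a dying 512B disk with a 4KiB device.
zpool replace -o ashift=12 tank da0 da2
```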

     autoexpand
	     Controls automatic pool expansion when the underlying LUN is
	     grown.  If	set to on, the pool will be resized according to the
	     size of the expanded device.  If the device is part of a mirror
	     or	raidz then all devices within that mirror/raidz	group must be
	     expanded before the new space is made available to	the pool.  The
	     default behavior is off.  This property can also be referred to
	     by	its shortened column name, expand.
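Enabling and verifying the behavior might look like this (hypothetical pool tank):

```shell
# Resize the pool automatically when its LUNs grow.
zpool set autoexpand=on tank

# expandsize shows any space still waiting to be claimed.
zpool get autoexpand,expandsize tank
```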

     autoreplace
	     Controls automatic device replacement.  If set to off, device
	     replacement must be initiated by the administrator by using the
	     zpool replace command.  If	set to on, any new device, found in
	     the same physical location	as a device that previously belonged
	     to	the pool, is automatically formatted and replaced.  The	de-
	     fault behavior is off.  This property can also be referred	to by
	     its shortened column name,	replace.  Autoreplace can also be used
	     with virtual disks	(like device mapper) provided that you use the
	     /dev/disk/by-vdev paths setup by vdev_id.conf. See	the vdev_id(8)
	     man page for more details.	 Autoreplace and autoonline require
	     the ZFS Event Daemon be configured	and running.  See the zed(8)
	     man page for more details.

     autotrim
	     When set to on, space which has been recently freed, and is no
	     longer allocated by the pool, will be periodically trimmed.
	     This
	     allows block device vdevs which support BLKDISCARD, such as SSDs,
	     or	file vdevs on which the	underlying file	system supports	hole-
	     punching, to reclaim unused blocks.  The default setting for this
	     property is off.

	     Automatic TRIM does not immediately reclaim blocks	after a	free.
	     Instead, it will optimistically delay allowing smaller ranges to
	     be aggregated into a few larger ones.  These can then be issued
	     more efficiently to the storage.  TRIM on L2ARC devices is	en-
	     abled by setting l2arc_trim_ahead > 0.

	     Be	aware that automatic trimming of recently freed	data blocks
	     can put significant stress	on the underlying storage devices.
	     This will vary depending on how well the specific device handles
	     these commands.  For lower	end devices it is often	possible to
	     achieve most of the benefits of automatic trimming	by running an
	     on-demand (manual) TRIM periodically using the zpool trim
	     command.
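Both approaches can be sketched as follows, assuming a hypothetical pool tank:

```shell
# Continuous trimming of freed blocks.
zpool set autotrim=on tank

# One-off manual TRIM, often preferable for lower-end devices.
zpool trim tank

# Show per-vdev TRIM progress.
zpool status -t tank
```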

     bootfs  Identifies the default bootable dataset for the root pool.  This
	     property is expected to be	set mainly by the installation and up-
	     grade programs.  Not all Linux distribution boot processes	use
	     the bootfs	property.

     cachefile
	     Controls the location of where the pool configuration is cached.
	     Discovering all pools on system startup requires a	cached copy of
	     the configuration data that is stored on the root file system.
	     All pools in this cache are automatically imported	when the sys-
	     tem boots.	 Some environments, such as install and	clustering,
	     need to cache this	information in a different location so that
	     pools are not automatically imported.  Setting this property
	     caches the	pool configuration in a	different location that	can
	     later be imported with zpool import -c.  Setting it to the	value
	     none creates a temporary pool that	is never cached, and the ""
	     (empty string) uses the default location.
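For example, a clustered pool can be kept out of the default cache so it is not auto-imported at boot (pool name and cache path are hypothetical):

```shell
# Never cache this pool's configuration.
zpool create -o cachefile=none tank da0

# Or cache it in a non-default location...
zpool set cachefile=/etc/zfs/cluster.cache tank

# ...and import from that file later.
zpool import -c /etc/zfs/cluster.cache tank
```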

	     Multiple pools can	share the same cache file.  Because the	kernel
	     destroys and recreates this file when pools are added and re-
	     moved, care should	be taken when attempting to access this	file.
	     When the last pool	using a	cachefile is exported or destroyed,
	     the file will be empty.

     comment
	     A text string consisting of printable ASCII characters that will
	     be	stored such that it is available even if the pool becomes
	     faulted.  An administrator	can provide additional information
	     about a pool using	this property.
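A minimal example, with hypothetical pool name and comment text:

```shell
# Attach a human-readable note that survives even a faulted pool.
zpool set comment="rack 4, shelf 2, replaced da3 2019-08" tank
zpool get comment tank
```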

     dedupditto
	     This property is deprecated and no longer has any effect.

     delegation
	     Controls whether a non-privileged user is granted access based
	     on the dataset permissions defined on the dataset.  See zfs(8)
	     for more information on ZFS delegated administration.

     failmode
	     Controls the system behavior in the event of catastrophic pool
	     failure.  This condition is typically a result of a loss of con-
	     nectivity to the underlying storage device(s) or a	failure	of all
	     devices within the	pool.  The behavior of such an event is	deter-
	     mined as follows:

	     wait      Blocks all I/O access until the device connectivity is
		       recovered and the errors	are cleared.  This is the de-
		       fault behavior.

	     continue  Returns EIO to any new write I/O	requests but allows
		       reads to	any of the remaining healthy devices.  Any
		       write requests that have	yet to be committed to disk
		       would be	blocked.

	     panic     Prints out a message to the console and generates a
		       system crash dump.
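Switching away from the default blocking behavior is a one-line change (hypothetical pool tank):

```shell
# Return EIO on new writes instead of blocking all I/O when the pool
# loses connectivity to its devices.
zpool set failmode=continue tank
```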

     feature@feature_name
	     The value of this property is the current state of feature_name.
	     The only valid value when setting this property is enabled,
	     which moves feature_name to the enabled state.  See
	     zpool-features(5) for details on feature states.
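A sketch of enabling one feature flag; lz4_compress is used here only as an example feature name:

```shell
# Move a single feature to the enabled state.
zpool set feature@lz4_compress=enabled tank

# List the state of all feature flags on the pool.
zpool get all tank | grep 'feature@'
```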

     listsnapshots
	     Controls whether information about snapshots associated with
	     this pool is output when zfs list is run without the -t option.
	     The default value is off.  This property can also be referred to
	     by its shortened name, listsnaps.

     multihost
	     Controls whether a pool activity check should be performed
	     during zpool import.  When a pool is determined to be active it
	     cannot be imported, even with the -f option.  This property is
	     intended to be used in failover configurations where multiple
	     hosts have access to a pool on shared storage.

	     Multihost provides	protection on import only.  It does not	pro-
	     tect against an individual	device being used in multiple pools,
	     regardless of the type of vdev.  See the discussion under zpool
	     import.
	     When this property	is on, periodic	writes to storage occur	to
	     show the pool is in use.  See zfs_multihost_interval in the
	     zfs-module-parameters(5) man page.  In order to enable this
	     property each host must set a unique hostid.  See genhostid(1),
	     zgenhostid(8), and spl-module-parameters(5) for additional
	     details.  The default value is off.
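A typical setup on each node of a failover pair might look like this (hypothetical pool tank):

```shell
# Generate a unique hostid once per host (persisted in /etc/hostid).
zgenhostid

# Enable the activity check so the other node cannot import a live pool.
zpool set multihost=on tank
```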

     version
	     The current on-disk version of the pool.  This can be increased,
	     but never decreased.  The preferred method	of updating pools is
	     with the zpool upgrade command, though this property can be used
	     when a specific version is	needed for backwards compatibility.
	     Once feature flags	are enabled on a pool this property will no
	     longer have a value.
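The preferred upgrade workflow mentioned above, with tank as a hypothetical pool name:

```shell
# With no arguments, list pools running older on-disk formats.
zpool upgrade

# Upgrade one pool to the latest supported version/feature set.
zpool upgrade tank
```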

BSD				August 9, 2019				   BSD

