PACEMAKER-SCHEDULER(7)	    Pacemaker Configuration	    PACEMAKER-SCHEDULER(7)

NAME
       pacemaker-schedulerd - Pacemaker scheduler options

SYNOPSIS
       [no-quorum-policy=enum] [symmetric-cluster=boolean]
       [maintenance-mode=boolean] [start-failure-is-fatal=boolean]
       [enable-startup-probes=boolean] [shutdown-lock=boolean]
       [shutdown-lock-limit=time] [stonith-enabled=boolean]
       [stonith-action=enum] [stonith-timeout=time] [have-watchdog=boolean]
       [concurrent-fencing=boolean] [startup-fencing=boolean]
       [priority-fencing-delay=time] [cluster-delay=time]
       [batch-limit=integer] [migration-limit=integer]
       [stop-all-resources=boolean] [stop-orphan-resources=boolean]
       [stop-orphan-actions=boolean] [remove-after-stop=boolean]
       [pe-error-series-max=integer] [pe-warn-series-max=integer]
       [pe-input-series-max=integer] [node-health-strategy=enum]
       [node-health-base=integer] [node-health-green=integer]
       [node-health-yellow=integer] [node-health-red=integer]
       [placement-strategy=enum]

DESCRIPTION
       Cluster options used by Pacemaker's scheduler (formerly called pengine)

SUPPORTED PARAMETERS
       no-quorum-policy = enum [stop]
           What to do when the cluster does not have quorum

           Allowed values: stop, freeze, ignore, suicide
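
           For example, to make resources freeze in place when quorum is
           lost, the cluster property can be set with the crm_attribute
           tool that ships with Pacemaker (a minimal sketch; adjust for
           your preferred configuration front-end):

               # crm_attribute --type crm_config --name no-quorum-policy \
                     --update freeze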

       symmetric-cluster = boolean [true]
           Whether resources can run on any node by default

       maintenance-mode = boolean [false]
	   Whether the cluster should refrain from monitoring, starting, and
	   stopping resources
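
           For example, the whole cluster can be placed in maintenance
           mode before system updates (a minimal sketch using
           crm_attribute, which ships with Pacemaker):

               # crm_attribute --type crm_config --name maintenance-mode \
                     --update true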

       start-failure-is-fatal = boolean [true]
           Whether a start failure should prevent a resource from being
           recovered on the same node

           When true, the cluster will immediately ban a resource from a
           node if it fails to start there. When false, the cluster will
           instead check the resource's fail count against its
           migration-threshold.
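
           For example, to allow a resource to be restarted on the same
           node until its migration-threshold is reached (a sketch using
           tools shipped with Pacemaker; "myrsc" is a placeholder
           resource name):

               # crm_attribute --type crm_config \
                     --name start-failure-is-fatal --update false
               # crm_resource --resource myrsc --meta \
                     --set-parameter migration-threshold --parameter-value 3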

       enable-startup-probes = boolean [true]
           Whether the cluster should check for active resources during
	   start-up

       shutdown-lock = boolean [false]
           Whether to lock resources to a cleanly shut down node

           When true, resources active on a node when it is cleanly shut
           down are kept "locked" to that node (not allowed to run
           elsewhere) until they start again on that node after it
           rejoins (or for at most shutdown-lock-limit, if set). Stonith
           resources and Pacemaker Remote connections are never locked.
           Clone and bundle instances and the master role of promotable
           clones are currently never locked, though support could be
           added in a future release.

       shutdown-lock-limit = time [0]
           Do not lock resources to a cleanly shut down node longer than
           this

           If shutdown-lock is true and this is set to a nonzero time
           duration, shutdown locks will expire after this much time has
           passed since the shutdown was initiated, even if the node has
           not rejoined.
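
           For example, to lock resources to a cleanly shut down node for
           at most 30 minutes (a sketch; both options are cluster
           properties, set here with crm_attribute):

               # crm_attribute --type crm_config --name shutdown-lock \
                     --update true
               # crm_attribute --type crm_config \
                     --name shutdown-lock-limit --update 30min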

       stonith-enabled = boolean [true]
           *** Advanced Use Only *** Whether nodes may be fenced as part
           of recovery

           If false, unresponsive nodes are immediately assumed to be
           harmless, and resources that were active on them may be
           recovered elsewhere. This can result in a "split-brain"
           situation, potentially leading to data loss and/or service
           unavailability.

       stonith-action = enum [reboot]
           Action to send to fence device when a node needs to be fenced
           ("poweroff" is a deprecated alias for "off")

           Allowed values: reboot, off, poweroff

       stonith-timeout = time [60s]
           *** Advanced Use Only *** Unused by Pacemaker

           This value is not used by Pacemaker, but is kept for backward
           compatibility, and certain legacy fence agents might use it.

       have-watchdog = boolean [false]
           Whether watchdog integration is enabled

           This is set automatically by the cluster according to whether
           SBD is detected to be in use. User-configured values are
           ignored.

       concurrent-fencing = boolean [false]
	   Allow performing fencing operations in parallel

       startup-fencing = boolean [true]
           *** Advanced Use Only *** Whether to fence unseen nodes at
           start-up

           Setting this to false may lead to a "split-brain" situation,
           potentially leading to data loss and/or service
           unavailability.

       priority-fencing-delay = time [0]
           Apply fencing delay targeting the lost nodes with the highest
           total resource priority

           Apply the specified delay to fencing actions that target the
           lost nodes with the highest total resource priority, in case
           this partition does not hold the majority of the cluster's
           nodes, so that the more significant nodes are likelier to win
           any fencing match. This is especially meaningful in a
           split-brain situation in a two-node cluster. A promoted
           resource instance counts as its base priority plus 1 when
           calculating the total, unless the base priority is 0. Any
           static or random delays introduced by `pcmk_delay_base/max`
           configured for the corresponding fencing resources will be
           added to this delay; this delay should be significantly
           greater than (safely, twice) the maximum
           `pcmk_delay_base/max`. By default, priority fencing delay is
           disabled.
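
           For example, to give the partition running the more important
           resources a 60-second head start in a fencing match (a
           sketch; "vip" is a placeholder resource, and priority is a
           resource meta-attribute):

               # crm_attribute --type crm_config \
                     --name priority-fencing-delay --update 60s
               # crm_resource --resource vip --meta \
                     --set-parameter priority --parameter-value 10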

       cluster-delay = time [60s]
           Maximum time for node-to-node communication

           The node elected Designated Controller (DC) will consider an
           action failed if it does not get a response from the node
           executing the action within this time (after considering the
           action's own timeout). The "correct" value will depend on the
           speed and load of your network and cluster nodes.

       batch-limit = integer [0]
	   Maximum number of jobs that the cluster may execute in parallel
	   across all nodes

	   The "correct" value will depend on the speed	and load of your
	   network and cluster nodes. If set to	0, the cluster will impose a
	   dynamically calculated limit	when any node has a high load.

       migration-limit = integer [-1]
           The number of live migration actions that the cluster is
           allowed to execute in parallel on a node (-1 means no limit)

       stop-all-resources = boolean [false]
	   Whether the cluster should stop all active resources

       stop-orphan-resources = boolean [true]
           Whether to stop resources that were removed from the
           configuration

       stop-orphan-actions = boolean [true]
           Whether to cancel recurring actions removed from the
           configuration

       remove-after-stop = boolean [false]
           *** Advanced Use Only *** Whether to remove stopped resources
           from the executor

           Values other than default are poorly tested and potentially
           dangerous.

       pe-error-series-max = integer [-1]
           The number of scheduler inputs resulting in errors to save

           Zero to disable, -1 to store unlimited.

       pe-warn-series-max = integer [5000]
           The number of scheduler inputs resulting in warnings to save

           Zero to disable, -1 to store unlimited.

       pe-input-series-max = integer [4000]
           The number of scheduler inputs without errors or warnings to
           save

           Zero to disable, -1 to store unlimited.

       node-health-strategy = enum [none]
           How cluster should react to node health attributes

           Requires external entities to create node attributes (named
           with the prefix "#health") with values "red", "yellow" or
           "green". Allowed values: none, migrate-on-red, only-green,
           progressive, custom
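
           For example, an external health monitor could enable a
           strategy and then mark a node unhealthy with the
           attrd_updater tool that ships with Pacemaker (a sketch;
           "#health-disk" and "node1" are placeholders):

               # crm_attribute --type crm_config \
                     --name node-health-strategy --update migrate-on-red
               # attrd_updater --name "#health-disk" --update red \
                     --node node1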

       node-health-base = integer [0]
           Base health score assigned to a node

           Only used when node-health-strategy is set to progressive.

       node-health-green = integer [0]
           The score to use for a node health attribute whose value is
           "green"

           Only used when node-health-strategy is set to custom or
           progressive.

       node-health-yellow = integer [0]
           The score to use for a node health attribute whose value is
           "yellow"

           Only used when node-health-strategy is set to custom or
           progressive.

       node-health-red = integer [-INFINITY]
           The score to use for a node health attribute whose value is
           "red"

           Only used when node-health-strategy is set to custom or
           progressive.

       placement-strategy = enum [default]
           How the cluster should allocate resources to nodes

           Allowed values: default, utilization, minimal, balanced
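
           For example, to have the scheduler balance load according to
           node capacity (a sketch; the utilization and balanced
           strategies additionally require utilization attributes to be
           defined on nodes and resources in the CIB):

               # crm_attribute --type crm_config \
                     --name placement-strategy --update balanced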

AUTHOR
       Andrew Beekhof <andrew@beekhof.net>
	   Author.

Pacemaker Configuration		  08/29/2020		PACEMAKER-SCHEDULER(7)
