COLLECTD.CONF(5)		   collectd		      COLLECTD.CONF(5)

NAME
       collectd.conf - Configuration for the system statistics collection
       daemon collectd

SYNOPSIS
	 BaseDir "/var/db/collectd"
	 PIDFile "/run/collectd.pid"
	 Interval 10.0

	 LoadPlugin cpu
	 LoadPlugin load

	 <LoadPlugin df>
	   Interval 3600
	 </LoadPlugin>
	 <Plugin df>
	   ValuesPercentage true
	 </Plugin>

	 LoadPlugin ping
	 <Plugin ping>
	   Host	"example.org"
	   Host	"provider.net"
	 </Plugin>

DESCRIPTION
       This config file	controls how the system	statistics collection daemon
       collectd	behaves. The most significant option is	LoadPlugin, which
       controls	which plugins to load. These plugins ultimately	define
       collectd's behavior. If the AutoLoadPlugin option has been enabled, the
       explicit	LoadPlugin lines may be	omitted	for all	plugins	with a
       configuration block, i.e. a "<Plugin ...>" block.

       The syntax of this config file is similar to the	config file of the
       famous Apache webserver.	Each line contains either an option (a key and
       a list of one or	more values) or	a section-start	or -end. Empty lines
       and everything after a non-quoted hash-symbol ("#") are ignored.	Keys
       are unquoted strings, consisting	only of	alphanumeric characters	and
       the underscore ("_") character. Keys are handled case-insensitively by
       collectd itself and all plugins included with it. Values can either be
       an unquoted string, a quoted string (enclosed in double quotes), a
       number or a boolean expression. Unquoted strings consist of only
       alphanumeric characters and underscores ("_") and do not need to be
       quoted. Quoted strings are enclosed in double quotes ("). You can use
       the backslash character ("\") to	include	double quotes as part of the
       string. Numbers can be specified	in decimal and floating	point format
       (using a	dot "."	as decimal separator), hexadecimal when	using the "0x"
       prefix and octal	with a leading zero (0).  Boolean values are either
       true or false.
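
       For illustration, a brief sketch showing a comment, an unquoted string,
       a quoted string, numbers and a boolean (the values are examples only):

         # global options
         Hostname    client42
         BaseDir     "/var/db/collectd"
         Interval    10.0
         ReadThreads 5
         FQDNLookup  true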

       Lines may be wrapped by using "\" as the	last character before the
       newline.	 This allows long lines	to be split into multiple lines.
       Quoted strings may be wrapped as well. However, they are treated
       specially in that whitespace at the beginning of the following lines is
       ignored, which allows for nicely indenting the wrapped lines.
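
       For illustration, a short sketch of a wrapped line (the path and
       pattern are examples only):

         Include "/etc/collectd.d" \
                 "*.conf"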

       The configuration is read and processed in order, i.e. from top to
       bottom. So the plugins are loaded in the	order listed in	this config
       file. It	is a good idea to load any logging plugins first in order to
       catch messages from plugins during configuration. Also, unless
       AutoLoadPlugin is enabled, the LoadPlugin option	must occur before the
       appropriate "<Plugin ...>" block.
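
       A minimal sketch of this ordering, assuming the logfile plugin is used
       for logging (the log file path is an example only):

         LoadPlugin logfile
         <Plugin logfile>
           LogLevel info
           File "/var/log/collectd.log"
         </Plugin>

         LoadPlugin cpu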

GLOBAL OPTIONS
       BaseDir Directory
	   Sets	the base directory. This is the	directory beneath which	all
	   RRD-files are created. Possibly more	subdirectories are created.
	   This	is also	the working directory for the daemon.

       LoadPlugin Plugin
	   Loads the plugin Plugin. This is required to	load plugins, unless
	   the AutoLoadPlugin option is	enabled	(see below). Without any
	   loaded plugins, collectd will be mostly useless.

	   Only	the first LoadPlugin statement or block	for a given plugin
	   name	has any	effect.	This is	useful when you	want to	split up the
	   configuration into smaller files and	want each file to be "self
	   contained", i.e. it contains	a Plugin block and the appropriate
	   LoadPlugin statement. The downside is that if you have multiple
	   conflicting LoadPlugin blocks, e.g. when they specify different
	   intervals, only one of them (the first one encountered) will	take
	   effect and all others will be silently ignored.

	   LoadPlugin may either be a simple configuration statement or	a
	   block with additional options, affecting the	behavior of
	   LoadPlugin. A simple	statement looks	like this:

	    LoadPlugin "cpu"

	   Options inside a LoadPlugin block can override default settings and
	   influence the way plugins are loaded, e.g.:

	    <LoadPlugin	perl>
	      Interval 60
	    </LoadPlugin>

	   The following options are valid inside LoadPlugin blocks:

	   Globals true|false
	       If enabled, collectd will export	all global symbols of the
	       plugin (and of all libraries loaded as dependencies of the
               plugin) and, thus, make those symbols available for resolving
	       unresolved symbols in subsequently loaded plugins if that is
	       supported by your system.

	       This is useful (or possibly even	required), e.g., when loading
	       a plugin	that embeds some scripting language into the daemon
	       (e.g. the Perl and Python plugins). Scripting languages usually
	       provide means to	load extensions	written	in C. Those extensions
	       require symbols provided	by the interpreter, which is loaded as
	       a dependency of the respective collectd plugin.	See the
	       documentation of	those plugins (e.g., collectd-perl(5) or
	       collectd-python(5)) for details.

	       By default, this	is disabled. As	a special exception, if	the
	       plugin name is either "perl" or "python", the default is
	       changed to enabled in order to keep the average user from ever
	       having to deal with this	low level linking stuff.

	   Interval Seconds
	       Sets a plugin-specific interval for collecting metrics. This
	       overrides the global Interval setting. If a plugin provides its
	       own support for specifying an interval, that setting will take
	       precedence.

	   FlushInterval Seconds
	       Specifies the interval, in seconds, to call the flush callback
	       if it's defined in this plugin. By default, this	is disabled.

	   FlushTimeout	Seconds
	       Specifies the value of the timeout argument of the flush
	       callback.
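
           For example, a sketch of a block combining these options (the
           values are arbitrary):

            <LoadPlugin python>
              Globals true
              Interval 30
            </LoadPlugin>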

       AutoLoadPlugin false|true
	   When	set to false (the default), each plugin	needs to be loaded
	   explicitly, using the LoadPlugin statement documented above.	If a
	   <Plugin ...>	block is encountered and no configuration handling
	   callback for	this plugin has	been registered, a warning is logged
	   and the block is ignored.

	   When	set to true, explicit LoadPlugin statements are	not required.
	   Each	<Plugin	...> block acts	as if it was immediately preceded by a
	   LoadPlugin statement. LoadPlugin statements are still required for
	   plugins that	don't provide any configuration, e.g. the Load plugin.
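
           A brief sketch of both cases:

            AutoLoadPlugin true

            # Loaded implicitly because of the configuration block:
            <Plugin df>
              ValuesPercentage true
            </Plugin>

            # Has no configuration block, so it still needs LoadPlugin:
            LoadPlugin load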

       CollectInternalStats false|true
	   When	set to true, various statistics	about the collectd daemon will
	   be collected, with "collectd" as the	plugin name. Defaults to
	   false.

	   The following metrics are reported:

	   "collectd-write_queue/queue_length"
	       The number of metrics currently in the write queue. You can
	       limit the queue length with the WriteQueueLimitLow and
	       WriteQueueLimitHigh options.

	   "collectd-write_queue/derive-dropped"
	       The number of metrics dropped due to a queue length limitation.
	       If this value is	non-zero, your system can't handle all
	       incoming	metrics	and protects itself against overload by
	       dropping	metrics.

	   "collectd-cache/cache_size"
	       The number of elements in the metric cache (the cache you can
	       interact	with using collectd-unixsock(5)).

       Include Path [pattern]
	   If Path points to a file, includes that file. If Path points	to a
	   directory, recursively includes all files within that directory and
	   its subdirectories. If the "wordexp"	function is available on your
	   system, shell-like wildcards	are expanded before files are
	   included. This means	you can	use statements like the	following:

	     Include "/etc/collectd.d/*.conf"

	   Starting with version 5.3, this may also be a block in which
	   further options affecting the behavior of Include may be specified.
	   The following option	is currently allowed:

	     <Include "/etc/collectd.d">
	       Filter "*.conf"
	     </Include>

	   Filter pattern
	       If the "fnmatch"	function is available on your system, a	shell-
	       like wildcard pattern may be specified to filter	which files to
	       include.	This may be used in combination	with recursively
	       including a directory to	easily be able to arbitrarily mix
	       configuration files and other documents (e.g. README files).
	       The given example is similar to the first example above but
	       includes	all files matching "*.conf" in any subdirectory	of
	       "/etc/collectd.d".

	   If more than	one file is included by	a single Include option, the
	   files will be included in lexicographical order (as defined by the
	   "strcmp" function). Thus, you can e.	g. use numbered	prefixes to
	   specify the order in	which the files	are loaded.
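
           For example, with a layout like the following (the file names are
           placeholders), the files are read in the order shown:

             /etc/collectd.d/00-logging.conf
             /etc/collectd.d/10-plugins.conf
             /etc/collectd.d/99-thresholds.conf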

           To prevent loops and shooting yourself in the foot in interesting
           ways, the nesting is limited to a depth of 8 levels, which should be
	   sufficient for most uses. Since symlinks are	followed it is still
	   possible to crash the daemon	by looping symlinks. In	our opinion
	   significant stupidity should	result in an appropriate amount	of
	   pain.

	   It is no problem to have a block like "<Plugin foo>"	in more	than
	   one file, but you cannot include files from within blocks.

       PIDFile File
	   Sets	where to write the PID file to.	This file is overwritten when
	   it exists and deleted when the program is stopped. Some init-
	   scripts might override this setting using the -P command-line
	   option.

       PluginDir Directory
	   Path	to the plugins (shared objects)	of collectd.

       TypesDB File [File ...]
	   Set one or more files that contain the data-set descriptions. See
	   types.db(5) for a description of the	format of this file.

	   If this option is not specified, a default file is read. If you
	   need	to define custom types in addition to the types	defined	in the
	   default file, you need to explicitly	load both. In other words, if
	   the TypesDB option is encountered the default behavior is disabled
	   and if you need the default types you have to also explicitly load
	   them.
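
           For example, to keep the default types and add custom ones, list
           both files explicitly (the paths are typical but depend on your
           installation):

             TypesDB "/usr/share/collectd/types.db" "/etc/collectd/my_types.db"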

       Interval	Seconds
	   Configures the interval in which to query the read plugins.
	   Obviously smaller values lead to a higher system load produced by
	   collectd, while higher values lead to more coarse statistics.

	   Warning: You	should set this	once and then never touch it again. If
	   you do, you will have to delete all your RRD	files or know some
	   serious RRDtool magic! (Assuming you're using the RRDtool or
	   RRDCacheD plugin.)

       MaxReadInterval Seconds
	   A read plugin doubles the interval between queries after each
	   failed attempt to get data.

           This option limits the maximum value of the interval. The default
	   value is 86400.

       Timeout Iterations
	   Consider a value list "missing" when	no update has been read	or
	   received for	Iterations iterations. By default, collectd considers
	   a value list	missing	when no	update has been	received for twice the
	   update interval. Since this setting uses iterations,	the maximum
	   allowed time	without	update depends on the Interval information
	   contained in	each value list. This is used in the Threshold
	   configuration to dispatch notifications about missing values, see
	   collectd-threshold(5) for details.

       ReadThreads Num
	   Number of threads to	start for reading plugins. The default value
	   is 5, but you may want to increase this if you have more than five
	   plugins that	take a long time to read. Mostly those are plugins
	   that	do network-IO. Setting this to a value higher than the number
	   of registered read callbacks	is not recommended.

       WriteThreads Num
	   Number of threads to	start for dispatching value lists to write
	   plugins. The	default	value is 5, but	you may	want to	increase this
	   if you have more than five plugins that may take relatively long to
	   write to.

       WriteQueueLimitHigh HighNum
       WriteQueueLimitLow LowNum
	   Metrics are read by the read	threads	and then put into a queue to
	   be handled by the write threads. If one of the write	plugins	is
	   slow	(e.g. network timeouts,	I/O saturation of the disk) this queue
	   will	grow. In order to avoid	running	into memory issues in such a
	   case, you can limit the size	of this	queue.

	   By default, there is	no limit and memory may	grow indefinitely.
	   This	is most	likely not an issue for	clients, i.e. instances	that
	   only	handle the local metrics. For servers it is recommended	to set
	   this	to a non-zero value, though.

	   You can set the limits using	WriteQueueLimitHigh and
	   WriteQueueLimitLow.	Each of	them takes a numerical argument	which
	   is the number of metrics in the queue. If there are HighNum metrics
           in the queue, any new metrics will be dropped. If there are fewer
           than LowNum metrics in the queue, all new metrics will be enqueued.
	   If the number of metrics currently in the queue is between LowNum
	   and HighNum,	the metric is dropped with a probability that is
	   proportional	to the number of metrics in the	queue (i.e. it
	   increases linearly until it reaches 100%.)

	   If WriteQueueLimitHigh is set to non-zero and WriteQueueLimitLow is
	   unset, the latter will default to half of WriteQueueLimitHigh.

	   If you do not want to randomly drop values when the queue size is
	   between LowNum and HighNum, set WriteQueueLimitHigh and
	   WriteQueueLimitLow to the same value.

	   Enabling the	CollectInternalStats option is of great	help to	figure
	   out the values to set WriteQueueLimitHigh and WriteQueueLimitLow
	   to.
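
           A sketch for a server instance (the numbers are arbitrary starting
           points, not recommendations):

             CollectInternalStats true
             WriteQueueLimitHigh 1000000
             WriteQueueLimitLow   800000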

       Hostname	Name
	   Sets	the hostname that identifies a host. If	you omit this setting,
	   the hostname	will be	determined using the gethostname(2) system
	   call.

       FQDNLookup true|false
	   If Hostname is determined automatically this	setting	controls
	   whether or not the daemon should try	to figure out the "fully
	   qualified domain name", FQDN.  This is done using a lookup of the
	   name	returned by "gethostname". This	option is enabled by default.

       PreCacheChain ChainName
       PostCacheChain ChainName
           Configure the name of the "pre-cache chain" and the "post-cache
           chain". Please see "FILTER CONFIGURATION" below for information on
           chains and how these settings change the daemon's behavior.
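
           A minimal sketch; the chain itself is defined in a <Chain> block as
           described under "FILTER CONFIGURATION" (the chain name is a
           placeholder):

             PostCacheChain "PostCache"
             <Chain "PostCache">
               Target "write"
             </Chain>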

PLUGIN OPTIONS
       Some plugins may register their own options. These options must be
       enclosed in a "Plugin" section. Which options exist depends on the
       plugin used. Some plugins require external configuration, too. The
       "apache" plugin, for example, requires "mod_status" to be configured in
       the webserver you're going to collect data from. These plugins are
       listed below as well, even if they don't require any configuration
       within collectd's configuration file.

       A list of all plugins and a short summary for each plugin can be found
       in the README file shipped with the source code and hopefully binary
       packages as well.

   Plugin "aggregation"
       The Aggregation plugin makes it possible	to aggregate several values
       into one	using aggregation functions such as sum, average, min and max.
       This can	be put to a wide variety of uses, e.g. average and total CPU
       statistics for your entire fleet.

       The grouping is powerful	but, as	with many powerful tools, may be a bit
       difficult to wrap your head around. The grouping	will therefore be
       demonstrated using an example: The average and sum of the CPU usage
       across all CPUs of each host is to be calculated.

       To select all the affected values for our example, set "Plugin cpu" and
       "Type cpu". The other values are	left unspecified, meaning "all
       values".	The Host, Plugin, PluginInstance, Type and TypeInstance
       options work as if they were specified in the "WHERE" clause of a
       "SELECT" SQL statement.

	 Plugin	"cpu"
	 Type "cpu"

       Although	the Host, PluginInstance (CPU number, i.e. 0, 1, 2, ...)  and
       TypeInstance (idle, user, system, ...) fields are left unspecified in
       the example, the	intention is to	have a new value for each host / type
       instance	pair. This is achieved by "grouping" the values	using the
       "GroupBy" option.  It can be specified multiple times to	group by more
       than one	field.

	 GroupBy "Host"
	 GroupBy "TypeInstance"

       We neither specify nor group by the plugin instance (the CPU number),
       so all metrics that differ only in the CPU number will be aggregated.
       Each
       aggregation needs at least one such field, otherwise no aggregation
       would take place.

       The full	example	configuration looks like this:

	<Plugin	"aggregation">
	  <Aggregation>
	    Plugin "cpu"
	    Type "cpu"

	    GroupBy "Host"
	    GroupBy "TypeInstance"

	    CalculateSum true
	    CalculateAverage true
	  </Aggregation>
	</Plugin>

       There are a couple of limitations you should be aware of:

       o   The Type cannot be left unspecified,	because	it is not reasonable
	   to add apples to oranges. Also, the internal	lookup structure won't
	   work	if you try to group by type.

       o   There must be at least one unspecified, ungrouped field. Otherwise
	   nothing will	be aggregated.

       As you can see in the example above, each aggregation has its own
       Aggregation block. You can have multiple	aggregation blocks and
       aggregation blocks may match the	same values, i.e. one value list can
       update multiple aggregations. The following options are valid inside
       Aggregation blocks:

       Host Host
       Plugin Plugin
       PluginInstance PluginInstance
       Type Type
       TypeInstance TypeInstance
	   Selects the value lists to be added to this aggregation. Type must
	   be a	valid data set name, see types.db(5) for details.

           If the string starts and ends with a slash ("/"), the string is
           interpreted as a regular expression. The regex flavor used is POSIX
           extended regular expressions, as described in regex(7).
	   Example usage:

	    Host "/^db[0-9]\\.example\\.com$/"

       GroupBy Host|Plugin|PluginInstance|TypeInstance
           Group values by the specified field. The GroupBy option may be
	   repeated to group by	multiple fields.

       SetHost Host
       SetPlugin Plugin
       SetPluginInstance PluginInstance
       SetTypeInstance TypeInstance
	   Sets	the appropriate	part of	the identifier to the provided string.

	   The PluginInstance should include the placeholder "%{aggregation}"
	   which will be replaced with the aggregation function, e.g.
	   "average". Not including the	placeholder will result	in duplication
	   warnings and/or messed up values if more than one aggregation
	   function are	enabled.

	   The following example calculates the	average	usage of all "even"
	   CPUs:

	    <Plugin "aggregation">
	      <Aggregation>
		Plugin "cpu"
		PluginInstance "/[0,2,4,6,8]$/"
		Type "cpu"

		SetPlugin "cpu"
		SetPluginInstance "even-%{aggregation}"

		GroupBy	"Host"
		GroupBy	"TypeInstance"

		CalculateAverage true
	      </Aggregation>
	    </Plugin>

	   This	will create the	files:

	   o   foo.example.com/cpu-even-average/cpu-idle

	   o   foo.example.com/cpu-even-average/cpu-system

	   o   foo.example.com/cpu-even-average/cpu-user

	   o   ...

       CalculateNum true|false
       CalculateSum true|false
       CalculateAverage	true|false
       CalculateMinimum	true|false
       CalculateMaximum	true|false
       CalculateStddev true|false
	   Boolean options for enabling	calculation of the number of value
	   lists, their	sum, average, minimum, maximum and / or	standard
	   deviation. All options are disabled by default.

   Plugin "amqp"
       The AMQP	plugin can be used to communicate with other instances of
       collectd	or third party applications using an AMQP message broker.
       Values are sent to or received from the broker, which handles routing,
       queueing	and possibly filtering out messages.

       Synopsis:

	<Plugin	"amqp">
	  # Send values	to an AMQP broker
	  <Publish "some_name">
	    Host "localhost"
	    Port "5672"
	    VHost "/"
	    User "guest"
	    Password "guest"
	    Exchange "amq.fanout"
	#   ExchangeType "fanout"
	#   RoutingKey "collectd"
	#   Persistent false
	#   ConnectionRetryDelay 0
	#   Format "command"
	#   StoreRates false
	#   GraphitePrefix "collectd."
	#   GraphiteEscapeChar "_"
	#   GraphiteSeparateInstances false
	#   GraphiteAlwaysAppendDS false
	#   GraphitePreserveSeparator false
	  </Publish>

	  # Receive values from	an AMQP	broker
	  <Subscribe "some_name">
	    Host "localhost"
	    Port "5672"
	    VHost "/"
	    User "guest"
	    Password "guest"
	    Exchange "amq.fanout"
	#   ExchangeType "fanout"
	#   Queue "queue_name"
	#   QueueDurable false
	#   QueueAutoDelete true
	#   RoutingKey "collectd.#"
	#   ConnectionRetryDelay 0
	  </Subscribe>
	</Plugin>

       The plugin's configuration consists of a	number of Publish and
       Subscribe blocks, which configure sending and receiving of values
       respectively. The two blocks are	very similar, so unless	otherwise
       noted, an option can be used in either block. The name given in the
       block's starting tag is only used for reporting messages, but may be
       used to support flushing of certain Publish blocks in the future.

       Host Host
	   Hostname or IP-address of the AMQP broker. Defaults to the default
	   behavior of the underlying communications library, rabbitmq-c,
	   which is "localhost".

       Port Port
	   Service name	or port	number on which	the AMQP broker	accepts
	   connections.	This argument must be a	string,	even if	the numeric
	   form	is used. Defaults to "5672".

       VHost VHost
	   Name	of the virtual host on the AMQP	broker to use. Defaults	to
	   "/".

       User User
       Password	Password
	   Credentials used to authenticate to the AMQP	broker.	By default
	   "guest"/"guest" is used.

       Exchange	Exchange
	   In Publish blocks, this option specifies the	exchange to send
	   values to.  By default, "amq.fanout"	will be	used.

	   In Subscribe	blocks this option is optional.	If given, a binding
	   between the given exchange and the queue is created,	using the
	   routing key if configured. See the Queue and	RoutingKey options
	   below.

       ExchangeType Type
	   If given, the plugin	will try to create the configured exchange
	   with	this type after	connecting. When in a Subscribe	block, the
	   queue will then be bound to this exchange.

       Queue Queue (Subscribe only)
	   Configures the queue	name to	subscribe to. If no queue name was
	   configured explicitly, a unique queue name will be created by the
	   broker.

       QueueDurable true|false (Subscribe only)
	   Defines if the queue	subscribed to is durable (saved	to persistent
	   storage) or transient (will disappear if the	AMQP broker is
	   restarted). Defaults	to "false".

	   This	option should be used in conjunction with the Persistent
	   option on the publish side.

       QueueAutoDelete true|false (Subscribe only)
	   Defines if the queue	subscribed to will be deleted once the last
	   consumer unsubscribes. Defaults to "true".

       RoutingKey Key
	   In Publish blocks, this configures the routing key to set on	all
	   outgoing messages. If not given, the	routing	key will be computed
	   from	the identifier of the value. The host, plugin, type and	the
           two instances are concatenated together using dots as the separator,
           and any dots they contain are replaced with slashes. For example
	   "collectd.host/example/com.cpu.0.cpu.user". This makes it possible
	   to receive only specific values using a "topic" exchange.

	   In Subscribe	blocks,	configures the routing key used	when creating
	   a binding between an	exchange and the queue.	The usual wildcards
	   can be used to filter messages when using a "topic" exchange. If
	   you're only interested in CPU statistics, you could use the routing
	   key "collectd.*.cpu.#" for example.

       Persistent true|false (Publish only)
	   Selects the delivery	method to use. If set to true, the persistent
	   mode	will be	used, i.e. delivery is guaranteed. If set to false
	   (the	default), the transient	delivery mode will be used, i.e.
	   messages may	be lost	due to high load, overflowing queues or
	   similar issues.

       ConnectionRetryDelay Delay
	   When	the connection to the AMQP broker is lost, defines the time in
	   seconds to wait before attempting to	reconnect. Defaults to 0,
	   which implies collectd will attempt to reconnect at each read
	   interval (in	Subscribe mode)	or each	time values are	ready for
	   submission (in Publish mode).

       Format Command|JSON|Graphite (Publish only)
	   Selects the format in which messages	are sent to the	broker.	If set
	   to Command (the default), values are	sent as	"PUTVAL" commands
	   which are identical to the syntax used by the Exec and UnixSock
	   plugins. In this case, the "Content-Type" header field will be set
	   to "text/collectd".

           If set to JSON, the values are encoded in JavaScript Object
           Notation, an easy and straightforward exchange format. The
	   "Content-Type" header field will be set to "application/json".

	   If set to Graphite, values are encoded in the Graphite format,
	   which is "<metric> <value> <timestamp>\n". The "Content-Type"
	   header field	will be	set to "text/graphite".

	   A subscribing client	should use the "Content-Type" header field to
	   determine how to decode the values. Currently, the AMQP plugin
	   itself can only decode the Command format.

       StoreRates true|false (Publish only)
	   Determines whether or not "COUNTER",	"DERIVE" and "ABSOLUTE"	data
	   sources are converted to a rate (i.e. a "GAUGE" value). If set to
	   false (the default),	no conversion is performed. Otherwise the
	   conversion is performed using the internal value cache.

	   Please note that currently this option is only used if the Format
	   option has been set to JSON.

       GraphitePrefix (Publish and Format=Graphite only)
	   A prefix can	be added in the	metric name when outputting in the
	   Graphite format.  It's added	before the Host	name.  Metric name
	   will	be "<prefix><host><postfix><plugin><type><name>"

       GraphitePostfix (Publish	and Format=Graphite only)
	   A postfix can be added in the metric	name when outputting in	the
	   Graphite format.  It's added	after the Host name.  Metric name will
	   be "<prefix><host><postfix><plugin><type><name>"

       GraphiteEscapeChar (Publish and Format=Graphite only)
	   Specify a character to replace dots (.) in the host part of the
	   metric name.	 In Graphite metric name, dots are used	as separators
	   between different metric parts (host, plugin, type).	 Default is
	   "_" (Underscore).

       GraphiteSeparateInstances true|false
	   If set to true, the plugin instance and type	instance will be in
	   their own path component, for example "host.cpu.0.cpu.idle".	If set
	   to false (the default), the plugin and plugin instance (and
	   likewise the	type and type instance)	are put	into one component,
	   for example "host.cpu-0.cpu-idle".

       GraphiteAlwaysAppendDS true|false
	   If set to true, append the name of the Data Source (DS) to the
	   "metric" identifier.	If set to false	(the default), this is only
	   done	when there is more than	one DS.

       GraphitePreserveSeparator false|true
	   If set to false (the	default) the "." (dot) character is replaced
	   with	GraphiteEscapeChar. Otherwise, if set to true, the "." (dot)
	   character is	preserved, i.e.	passed through.
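
           Putting the Graphite-related options together, a sketch of a
           Publish block (the exchange name and prefix are placeholders):

            <Publish "graphite">
              Host "localhost"
              Exchange "graphite_exchange"
              ExchangeType "fanout"
              Format "Graphite"
              GraphitePrefix "collectd."
              GraphiteSeparateInstances true
            </Publish>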

   Plugin "apache"
       To configure the "apache" plugin you first need to configure the Apache
       webserver correctly. The Apache module "mod_status" needs to be loaded
       and working and the "ExtendedStatus" directive needs to be enabled. You
       can use the following snippet to base your Apache config upon:

	 ExtendedStatus	on
	 <IfModule mod_status.c>
	   <Location /mod_status>
	     SetHandler	server-status
	   </Location>
	 </IfModule>

       Since its "mod_status" module is	very similar to	Apache's, lighttpd is
       also supported. It introduces a new field, called "BusyServers",	to
       count the number	of currently connected clients.	This field is also
       supported.

       The configuration of the	Apache plugin consists of one or more
       "<Instance />" blocks. Each block requires one string argument as the
       instance	name. For example:

	<Plugin	"apache">
	  <Instance "www1">
	    URL	"http://www1.example.com/mod_status?auto"
	  </Instance>
	  <Instance "www2">
	    URL	"http://www2.example.com/mod_status?auto"
	  </Instance>
	</Plugin>

       The instance name will be used as the plugin instance. To emulate the
       old (version 4) behavior, you can use an	empty string (""). In order
       for the plugin to work correctly, each instance name must be unique.
       This is not enforced by the plugin and it is your responsibility	to
       ensure it.

       The following options are accepted within each Instance block:

       URL http://host/mod_status?auto
	   Sets	the URL	of the "mod_status" output. This needs to be the
	   output generated by "ExtendedStatus on" and it needs	to be the
	   machine readable output generated by	appending the "?auto"
	   argument. This option is mandatory.

       User Username
	   Optional user name needed for authentication.

       Password	Password
	   Optional password needed for	authentication.

       VerifyPeer true|false
	   Enable or disable peer SSL certificate verification.	See
	   <http://curl.haxx.se/docs/sslcerts.html> for	details. Enabled by
	   default.

       VerifyHost true|false
	   Enable or disable peer host name verification. If enabled, the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity	check fails, the connection is
           aborted. Obviously, this only works when connecting to an
           SSL-enabled server. Enabled by default.

       CACert File
	   File	that holds one or more SSL certificates. If you	want to	use
	   HTTPS you will possibly need	this option. What CA certificates come
	   bundled with	"libcurl" and are checked by default depends on	the
	   distribution	you use.

       SSLCiphers list of ciphers
	   Specifies which ciphers to use in the connection. The list of
	   ciphers must	specify	valid ciphers. See
	   <http://www.openssl.org/docs/apps/ciphers.html> for details.

       Timeout Milliseconds
	   The Timeout option sets the overall timeout for HTTP	requests to
	   URL,	in milliseconds. By default, the configured Interval is	used
	   to set the timeout.

   Plugin "apcups"
       Host Hostname
	   Hostname of the host	running	apcupsd. Defaults to localhost.	Please
	   note	that IPv6 support has been disabled unless someone can confirm
	   or decline that apcupsd can handle it.

       Port Port
	   TCP-Port to connect to. Defaults to 3551.

       ReportSeconds true|false
	   If set to true, the time reported in	the "timeleft" metric will be
	   converted to	seconds. This is the recommended setting. If set to
	   false, the default for backwards compatibility, the time will be
	   reported in minutes.

       PersistentConnection true|false
           The plugin is designed to keep the connection to apcupsd open
           between reads. If the plugin's poll interval is greater than 15
           seconds (the hardcoded socket close timeout in apcupsd NIS), this
           option defaults to false.

	   You can instruct the	plugin to close	the connection after each read
	   by setting this option to false or force keeping the	connection by
	   setting it to true.

	   If apcupsd appears to close the connection due to inactivity	quite
	   quickly, the	plugin will try	to detect this problem and switch to
	   an open-read-close mode.
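
           Putting the options above together, a minimal sketch (the values
           shown are the documented defaults except for ReportSeconds):

            <Plugin apcups>
              Host "localhost"
              Port "3551"
              ReportSeconds true
            </Plugin>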

   Plugin "aquaero"
       This plugin collects the	value of the available sensors in an Aquaero 5
       board. Aquaero 5	is a water-cooling controller board, manufactured by
       Aqua Computer GmbH <http://www.aquacomputer.de/>, with a	USB2
       connection for monitoring and configuration. The	board can handle
       multiple	temperature sensors, fans, water pumps and water level sensors
       and adjust the output settings such as fan voltage or power used	by the
       water pump based	on the available inputs	using a	configurable
       controller included in the board.  This plugin collects all the
       available inputs	as well	as some	of the output values chosen by this
       controller. The plugin is based on the libaquaero5 library provided by
       aquatools-ng.

       Device DevicePath
	   Device path of the Aquaero 5's USB HID (human interface device),
           usually in the form "/dev/usb/hiddevX". If this option is not set,
	   the plugin will try to auto-detect the Aquaero 5 USB	device based
	   on vendor-ID	and product-ID.

   Plugin "ascent"
       This plugin collects information	about an Ascent	server,	a free server
       for the "World of Warcraft" game. This plugin gathers the information
       by fetching the XML status page using "libcurl" and parses it using
       "libxml2".

       The configuration options are the same as for the "apache" plugin
       above:

       URL http://localhost/ascent/status/
	   Sets	the URL	of the XML status output.

       User Username
	   Optional user name needed for authentication.

       Password	Password
	   Optional password needed for	authentication.

       VerifyPeer true|false
	   Enable or disable peer SSL certificate verification.	See
	   <http://curl.haxx.se/docs/sslcerts.html> for	details. Enabled by
	   default.

       VerifyHost true|false
	   Enable or disable peer host name verification. If enabled, the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity	check fails, the connection is
           aborted. Obviously, this only works when connecting to an
           SSL-enabled server. Enabled by default.

       CACert File
	   File	that holds one or more SSL certificates. If you	want to	use
	   HTTPS you will possibly need	this option. What CA certificates come
	   bundled with	"libcurl" and are checked by default depends on	the
	   distribution	you use.

       Timeout Milliseconds
	   The Timeout option sets the overall timeout for HTTP	requests to
	   URL,	in milliseconds. By default, the configured Interval is	used
	   to set the timeout.

   Plugin "barometer"
       This plugin reads absolute air pressure using a digital barometer
       sensor on an I2C bus. Supported sensors are:

       o   MPL115A2 from Freescale, see
           <http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MPL115A>.

       o   MPL3115 from Freescale, see
           <http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=MPL3115A2>.

       o   BMP085 from Bosch Sensortec.

       The sensor type - one of	the above - is detected	automatically by the
       plugin and indicated in the plugin_instance (you	will see subdirectory
       "barometer-mpl115" or "barometer-mpl3115", or "barometer-bmp085"). The
       order of	detection is BMP085 -> MPL3115 -> MPL115A2, the	first one
       found will be used (only	one sensor can be used by the plugin).

       The plugin provides absolute barometric pressure, air pressure reduced
       to sea level (several possible approximations) and as an	auxiliary
       value also internal sensor temperature. It uses (expects/provides)
       typical metric units - pressure in [hPa], temperature in	[C], altitude
       in [m].

       It was developed	and tested under Linux only. The only platform
       dependency is the standard Linux	i2c-dev	interface (the particular bus
       driver has to support the SM Bus	command	subset).

       The reduction or normalization to mean sea level pressure requires,
       depending on the selected method/approximation, the altitude and a
       reference to temperature sensor(s). When multiple temperature sensors
       are configured, the minimum of their values is always used (expecting
       that the warmer ones are affected by e.g. direct sunlight at that
       moment).

       Synopsis:

	 <Plugin "barometer">
            Device            "/dev/i2c-0"
	    Oversampling      512
	    PressureOffset    0.0
	    TemperatureOffset 0.0
	    Normalization     2
	    Altitude	      238.0
	    TemperatureSensor "myserver/onewire-F10FCA000800/temperature"
	 </Plugin>

       Device device
	   The only mandatory configuration parameter.

	   Device name of the I2C bus to which the sensor is connected.	Note
	   that	typically you need to have loaded the i2c-dev module.  Using
	   i2c-tools you can check/list	i2c buses available on your system by:

	     i2cdetect -l

           Then you can scan for devices on a given bus. E.g. to scan the whole
	   bus 0 use:

	     i2cdetect -y -a 0

	   This	way you	should be able to verify that the pressure sensor
	   (either type) is connected and detected on address 0x60.

       Oversampling value
           Optional parameter controlling the oversampling/accuracy. The
           default value is 1, providing the fastest and least accurate
           reading.

           For MPL115 this is the size of the averaging window. To filter out
           sensor noise, a simple averaging using a floating window of this
           configurable size is used. The plugin will use the average of the
           last "value" measurements (a value of 1 means no averaging). The
           minimum size is 1, the maximum 1024.

           For MPL3115 this is the oversampling value. The actual oversampling
           is performed by the sensor; the higher the value, the higher the
           accuracy and the longer the conversion time (although nothing to
           worry about in the collectd context). Supported values are: 1, 2,
           4, 8, 16, 32, 64 and 128. Any other value is adjusted by the plugin
           to the closest supported one.

           For BMP085 this is the oversampling value. The actual oversampling
           is performed by the sensor; the higher the value, the higher the
           accuracy and the longer the conversion time (although nothing to
           worry about in the collectd context). Supported values are: 1, 2,
           4, 8. Any other value is adjusted by the plugin to the closest
           supported one.

       PressureOffset offset
	   Optional parameter for MPL3115 only.

           You can further calibrate the sensor by supplying pressure and/or
           temperature offsets. This is added to the measured/calculated value
           (i.e. if the measured value is too high then use a negative
           offset). In hPa, the default is 0.0.

       TemperatureOffset offset
	   Optional parameter for MPL3115 only.

           You can further calibrate the sensor by supplying pressure and/or
           temperature offsets. This is added to the measured/calculated value
           (i.e. if the measured value is too high then use a negative
           offset). In C, the default is 0.0.

       Normalization method
	   Optional parameter, default value is	0.

           Normalization method - what approximation/model is used to compute
           the mean sea level pressure from the absolute air pressure.

           Supported values of the "method" (an integer from 0 to 2) are:

           0 - no conversion, the absolute pressure is simply copied over. For
           this method you do not need to configure "Altitude" or
           "TemperatureSensor".

           1 - international formula for conversion, see
           <http://en.wikipedia.org/wiki/Atmospheric_pressure#Altitude_atmospheric_pressure_variation>.
           For this method you have to configure "Altitude" but do not need
           "TemperatureSensor" (a fixed global temperature average is used
           instead).

           2 - formula as recommended by the Deutscher Wetterdienst (German
           Meteorological Service), see
           <http://de.wikipedia.org/wiki/Barometrische_H%C3%B6henformel#Theorie>.
           For this method you have to configure both "Altitude" and
           "TemperatureSensor".

       Altitude	altitude
           The altitude (in meters) of the location where you measure the
           pressure.

       TemperatureSensor reference
           Temperature sensor(s) which should be used as a reference when
           normalizing the pressure using "Normalization" method 2. When more
           than one sensor is specified, the minimum of their values is used
           each time. The temperature reading directly from this pressure
           sensor/plugin is typically not suitable, as the pressure sensor
           will probably be inside while we want the outside temperature. The
           collectd reference name is something like
           <hostname>/<plugin_name>-<plugin_instance>/<type>-<type_instance>
           (<type_instance> is usually omitted when there is just a single
           value type). Or you can figure it out from the path of the output
           data files.

   Plugin "battery"
       The battery plugin reports the remaining	capacity, power	and voltage of
       laptop batteries.

       ValuesPercentage	false|true
	   When	enabled, remaining capacity is reported	as a percentage, e.g.
	   "42%	capacity remaining". Otherwise the capacity is stored as
	   reported by the battery, most likely	in "Wh". This option does not
	   work	with all input methods,	in particular when only	"/proc/pmu" is
	   available on	an old Linux system.  Defaults to false.

       ReportDegraded false|true
	   Typical laptop batteries degrade over time, meaning the capacity
	   decreases with recharge cycles. The maximum charge of the previous
	   charge cycle	is tracked as "last full capacity" and used to
	   determine that a battery is "fully charged".

	   When	this option is set to false, the default, the battery plugin
	   will	only report the	remaining capacity. If the ValuesPercentage
	   option is enabled, the relative remaining capacity is calculated as
	   the ratio of	the "remaining capacity" and the "last full capacity".
	   This	is what	most tools, such as the	status bar of desktop
	   environments, also do.

	   When	set to true, the battery plugin	will report three values:
	   charged (remaining capacity), discharged (difference	between	"last
	   full	capacity" and "remaining capacity") and	degraded (difference
	   between "design capacity" and "last full capacity").

       QueryStateFS false|true
	   When	set to true, the battery plugin	will only read statistics
	   related to battery performance as exposed by	StateFS	at /run/state.
	   StateFS is used in Mer-based	Sailfish OS, for example.
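
           A minimal sketch enabling percentage reporting (the other options
           keep their documented defaults):

            <Plugin battery>
              ValuesPercentage true
              ReportDegraded false
            </Plugin>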

   Plugin "bind"
       Starting	with BIND 9.5.0, the most widely used DNS server software
       provides	extensive statistics about queries, responses and lots of
       other information. The bind plugin retrieves this information, which
       is encoded in XML and provided via HTTP, and submits the values to
       collectd.

       To use this plugin, you first need to tell BIND to make this
       information available. This is done with	the "statistics-channels"
       configuration option:

	statistics-channels {
	  inet localhost port 8053;
	};

       The configuration follows the grouping that can be seen when looking at
       the data	with an	XSLT compatible	viewer,	such as	a modern web browser.
       It's probably a good idea to make yourself familiar with	the provided
       values, so you can understand what the collected	statistics actually
       mean.

       Synopsis:

	<Plugin	"bind">
	  URL "http://localhost:8053/"
	  ParseTime	  false
	  OpCodes	  true
	  QTypes	  true

	  ServerStats	  true
	  ZoneMaintStats  true
	  ResolverStats	  false
	  MemoryStats	  true

	  <View	"_default">
	    QTypes	  true
	    ResolverStats true
	    CacheRRSets	  true

	    Zone "127.in-addr.arpa/IN"
	  </View>
	</Plugin>

       The bind	plugin accepts the following configuration options:

       URL URL
	   URL from which to retrieve the XML data. If not specified,
	   "http://localhost:8053/" will be used.

       ParseTime true|false
	   When	set to true, the time provided by BIND will be parsed and used
	   to dispatch the values. When	set to false, the local	time source is
	   queried.

	   This	setting	is set to true by default for backwards	compatibility;
	   setting this	to false is recommended	to avoid problems with
	   timezones and localization.

       OpCodes true|false
	   When	enabled, statistics about the "OpCodes", for example the
	   number of "QUERY" packets, are collected.

	   Default: Enabled.

       QTypes true|false
	   When	enabled, the number of incoming	queries	by query types (for
	   example "A",	"MX", "AAAA") is collected.

	   Default: Enabled.

       ServerStats true|false
	   Collect global server statistics, such as requests received over
	   IPv4	and IPv6, successful queries, and failed updates.

	   Default: Enabled.

       ZoneMaintStats true|false
	   Collect zone	maintenance statistics,	mostly information about
	   notifications (zone updates)	and zone transfers.

	   Default: Enabled.

       ResolverStats true|false
	   Collect resolver statistics,	i. e. statistics about outgoing
	   requests (e.	g. queries over	IPv4, lame servers). Since the global
	   resolver counters apparently	were removed in	BIND 9.5.1 and 9.6.0,
	   this	is disabled by default.	Use the	ResolverStats option within a
	   View	"_default" block instead for the same functionality.

	   Default: Disabled.

       MemoryStats
	   Collect global memory statistics.

	   Default: Enabled.

       Timeout Milliseconds
	   The Timeout option sets the overall timeout for HTTP	requests to
	   URL,	in milliseconds. By default, the configured Interval is	used
	   to set the timeout.

       View Name
           Collect statistics about a specific "view". BIND can behave
           differently, mostly depending on the source IP-address of the
           request. These different configurations are called "views". If you
	   don't use this feature, you most likely are only interested in the
	   "_default" view.

	   Within a <View name>	block, you can specify which information you
	   want	to collect about a view. If no View block is configured, no
	   detailed view statistics will be collected.

	   QTypes true|false
	       If enabled, the number of outgoing queries by query type	(e. g.
	       "A", "MX") is collected.

	       Default:	Enabled.

	   ResolverStats true|false
	       Collect resolver	statistics, i. e. statistics about outgoing
	       requests	(e. g. queries over IPv4, lame servers).

	       Default:	Enabled.

	   CacheRRSets true|false
	       If enabled, the number of entries ("RR sets") in	the view's
	       cache by	query type is collected. Negative entries (queries
	       which resulted in an error, for example names that do not
	       exist) are reported with	a leading exclamation mark, e. g.
	       "!A".

	       Default:	Enabled.

	   Zone	Name
               When given, collect detailed information about the given zone
               in the view. The information collected is very similar to the
               global ServerStats information (see above).

	       You can repeat this option to collect detailed information
	       about multiple zones.

	       By default no detailed zone information is collected.

   Plugin "ceph"
       The ceph plugin collects values from JSON data retrieved from ceph
       daemon admin sockets and parsed by libyajl
       (<https://lloyd.github.io/yajl/>).

       A separate Daemon block must be configured for each ceph daemon to be
       monitored. The following example will read daemon statistics from four
       separate ceph daemons running on the same device (two OSDs, one MON,
       one MDS):

	 <Plugin ceph>
	   LongRunAvgLatency false
	   ConvertSpecialMetricTypes true
	   <Daemon "osd.0">
	     SocketPath	"/var/run/ceph/ceph-osd.0.asok"
	   </Daemon>
	   <Daemon "osd.1">
	     SocketPath	"/var/run/ceph/ceph-osd.1.asok"
	   </Daemon>
	   <Daemon "mon.a">
	     SocketPath	"/var/run/ceph/ceph-mon.ceph1.asok"
	   </Daemon>
	   <Daemon "mds.a">
	     SocketPath	"/var/run/ceph/ceph-mds.ceph1.asok"
	   </Daemon>
	 </Plugin>

       The ceph	plugin accepts the following configuration options:

       LongRunAvgLatency true|false
           If enabled, latency values (sum, count pairs) are calculated as the
	   long	run average - average since the	ceph daemon was	started	= (sum
	   / count).  When disabled, latency values are	calculated as the
	   average since the last collection = (sum_now	- sum_last) /
	   (count_now -	count_last).

	   Default: Disabled

       ConvertSpecialMetricTypes true|false
	   If enabled, special metrics (metrics	that differ in type from
	   similar counters) are converted to the type of those	similar
	   counters. This currently only applies to filestore.journal_wr_bytes
	   which is a counter for OSD daemons. The ceph	schema reports this
	   metric type as a sum,count pair while similar counters are treated
	   as derive types. When converted, the	sum is used as the counter
	   value and is	treated	as a derive type.  When	disabled, all metrics
	   are treated as the types received from the ceph schema.

	   Default: Enabled

       Each Daemon block must have a string argument for the plugin instance
       name.  A	SocketPath is also required for	each Daemon block:

       Daemon DaemonName
	   Name	to be used as the instance name	for this daemon.

       SocketPath SocketPath
	   Specifies the path to the UNIX admin	socket of the ceph daemon.

   Plugin "cgroups"
       This plugin collects the	CPU user/system	time for each cgroup by
       reading the cpuacct.stat	files in the first cpuacct-mountpoint
       (typically /sys/fs/cgroup/cpu.cpuacct on	machines using systemd).

       CGroup Directory
	   Select cgroup based on the name. Whether only matching cgroups are
	   collected or	if they	are ignored is controlled by the
	   IgnoreSelected option; see below.

       IgnoreSelected true|false
	   Invert the selection: If set	to true, all cgroups except the	ones
	   that	match any one of the criteria are collected. By	default	only
	   selected cgroups are	collected if a selection is made. If no
	   selection is	configured at all, all cgroups are selected.
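
           For example, to collect only a selection of cgroups (the cgroup
           names are placeholders):

            <Plugin cgroups>
              CGroup "libvirt"
              CGroup "docker"
              IgnoreSelected false
            </Plugin>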

   Plugin "chrony"
       The "chrony" plugin collects ntp	data from a chronyd server, such as
       clock skew and per-peer stratum.

       For talking to chronyd, it mimics what the chronyc control program does
       on the wire.

       Available configuration options for the "chrony"	plugin:

       Host Hostname
	   Hostname of the host	running	chronyd. Defaults to localhost.

       Port Port
	   UDP-Port to connect to. Defaults to 323.

       Timeout Timeout
	   Connection timeout in seconds. Defaults to 2.
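
           A minimal sketch using the documented defaults:

            <Plugin chrony>
              Host "localhost"
              Port "323"
              Timeout "2"
            </Plugin>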

   Plugin "conntrack"
       This plugin collects IP conntrack statistics.

       OldFiles
	   Assume the conntrack_count and conntrack_max	files to be found in
	   /proc/sys/net/ipv4/netfilter	instead	of /proc/sys/net/netfilter/.

   Plugin "cpu"
       The CPU plugin collects CPU usage metrics. By default, CPU usage	is
       reported	as Jiffies, using the "cpu" type. Two aggregations are
       available:

       o   Sum,	per-state, over	all CPUs installed in the system; and

       o   Sum,	per-CPU, over all non-idle states of a CPU, creating an
	   "active" state.

       The two aggregations can	be combined, leading to	collectd only emitting
       a single	"active" metric	for the	entire system. As soon as one of these
       aggregations (or	both) is enabled, the cpu plugin will report a
       percentage, rather than Jiffies.	In addition, you can request
       individual, per-state, per-CPU metrics to be reported as	percentage.

       The following configuration options are available:

       ReportByState true|false
	   When	set to true, the default, reports per-state metrics, e.g.
	   "system", "user" and	"idle".	 When set to false, aggregates (sums)
	   all non-idle	states into one	"active" metric.

       ReportByCpu true|false
	   When	set to true, the default, reports per-CPU (per-core) metrics.
	   When	set to false, instead of reporting metrics for individual
	   CPUs, only a	global sum of CPU states is emitted.

       ValuesPercentage	false|true
           This option is only considered when both ReportByCpu and
           ReportByState are set to true. In this case, by default, metrics
	   will	be reported as Jiffies.	By setting this	option to true,	you
	   can request percentage values in the	un-aggregated (per-CPU,	per-
	   state) mode as well.

       ReportNumCpu false|true
	   When	set to true, reports the number	of available CPUs.  Defaults
	   to false.
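
           For example, to report a single system-wide "active" percentage
           plus the number of CPUs (a sketch):

            <Plugin cpu>
              ReportByCpu false
              ReportByState false
              ReportNumCpu true
            </Plugin>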

   Plugin "cpufreq"
       This plugin doesn't have	any options. It	reads
       /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq (for the first
       CPU installed) to get the current CPU frequency.	If this	file does not
       exist, make sure cpufreqd (<http://cpufreqd.sourceforge.net/>) or a
       similar tool is installed and a "cpu governor" (that's a kernel module)
       is loaded.

   Plugin "cpusleep"
       This plugin doesn't have	any options. It	reads CLOCK_BOOTTIME and
       CLOCK_MONOTONIC and reports the difference between these	clocks.	Since
       BOOTTIME	clock increments while device is suspended and MONOTONIC clock
       does not, the derivative	of the difference between these	clocks gives
       the relative amount of time the device has spent	in suspend state. The
       recorded value is in milliseconds of sleep per second of wall clock.

   Plugin "csv"
       DataDir Directory
	   Set the directory to	store CSV-files	under. Per default CSV-files
	   are generated beneath the daemon's working directory, i. e. the
	   BaseDir.  The special strings stdout	and stderr can be used to
	   write to the	standard output	and standard error channels,
	   respectively. This, of course, only makes much sense	when collectd
	   is running in foreground- or	non-daemon-mode.

       StoreRates true|false
	   If set to true, convert counter values to rates. If set to false
	   (the	default) counter values	are stored as is, i. e.	as an
	   increasing integer number.
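
           A minimal sketch (the directory is a placeholder):

            <Plugin csv>
              DataDir "/var/db/collectd/csv"
              StoreRates true
            </Plugin>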

   cURL	Statistics
       All cURL-based plugins support collection of generic, request-based
       statistics. These are disabled by default and can be enabled
       selectively for each page or URL	queried	from the curl, curl_json, or
       curl_xml	plugins. See the documentation of those	plugins	for specific
       information. This section describes the available metrics that can be
       configured for each plugin. All options are disabled by default.

       See <http://curl.haxx.se/libcurl/c/curl_easy_getinfo.html> for more
       details.

       TotalTime true|false
	   Total time of the transfer, including name resolving, TCP connect,
	   etc.

       NamelookupTime true|false
	   Time	it took	from the start until name resolving was	completed.

       ConnectTime true|false
	   Time	it took	from the start until the connect to the	remote host
	   (or proxy) was completed.

       AppconnectTime true|false
	   Time	it took	from the start until the SSL/SSH connect/handshake to
	   the remote host was completed.

       PretransferTime true|false
	   Time	it took	from the start until just before the transfer begins.

       StarttransferTime true|false
	   Time	it took	from the start until the first byte was	received.

       RedirectTime true|false
           Time it took for all redirection steps, including name lookup,
           connect, pre-transfer and transfer, before the final transaction
           was started.

       RedirectCount true|false
	   The total number of redirections that were actually followed.

       SizeUpload true|false
	   The total amount of bytes that were uploaded.

       SizeDownload true|false
	   The total amount of bytes that were downloaded.

       SpeedDownload true|false
	   The average download	speed that curl	measured for the complete
	   download.

       SpeedUpload true|false
	   The average upload speed that curl measured for the complete
	   upload.

       HeaderSize true|false
	   The total size of all the headers received.

       RequestSize true|false
	   The total size of the issued	requests.

       ContentLengthDownload true|false
	   The content-length of the download.

       ContentLengthUpload true|false
	   The specified size of the upload.

       NumConnects true|false
	   The number of new connections that were created to achieve the
	   transfer.
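
           These options are placed in a <Statistics> block inside the
           plugin's Page or URL block; for example, for the curl plugin
           documented below (a sketch, the URL is a placeholder):

            <Page "example">
              URL "http://example.org/"
              <Statistics>
                TotalTime true
                NamelookupTime true
                ConnectTime true
              </Statistics>
            </Page>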

   Plugin "curl"
       The curl plugin uses libcurl (<http://curl.haxx.se/>) to read web
       pages and the match infrastructure (the same code used by the tail
       plugin) to apply regular expressions to the received data.

       The following example will read the current value of AMD	stock from
       Google's	finance	page and dispatch the value to collectd.

	 <Plugin curl>
	   <Page "stock_quotes">
	     URL "http://finance.google.com/finance?q=NYSE%3AAMD"
	     User "foo"
	     Password "bar"
	     Digest false
	     VerifyPeer	true
	     VerifyHost	true
	     CACert "/path/to/ca.crt"
	     Header "X-Custom-Header: foobar"
	     Post "foo=bar"

	     MeasureResponseTime false
	     MeasureResponseCode false

	     <Match>
	       Regex "<span +class=\"pr\"[^>]*>	*([0-9]*\\.[0-9]+) *</span>"
	       DSType "GaugeAverage"
	       # Note: `stock_value' is	not a standard type.
	       Type "stock_value"
	       Instance	"AMD"
	     </Match>
	   </Page>
	 </Plugin>

       In the Plugin block, there may be one or	more Page blocks, each
       defining	a web page and one or more "matches" to	be performed on	the
       returned	data. The string argument to the Page block is used as plugin
       instance.

       The following options are valid within Page blocks:

       URL URL
	   URL of the web site to retrieve. Since a regular expression will be
	   used	to extract information from this data, non-binary data is a
	   big plus here ;)

       User Name
	   Username to use if authorization is required	to read	the page.

       Password	Password
	   Password to use if authorization is required	to read	the page.

       Digest true|false
	   Enable HTTP digest authentication.

       VerifyPeer true|false
	   Enable or disable peer SSL certificate verification.	See
	   <http://curl.haxx.se/docs/sslcerts.html> for	details. Enabled by
	   default.

       VerifyHost true|false
	   Enable or disable peer host name verification. If enabled, the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity check fails, the connection is
	   aborted. Obviously, this only works when connecting to an
	   SSL-enabled server. Enabled by default.

       CACert file
	   File	that holds one or more SSL certificates. If you	want to	use
	   HTTPS you will possibly need	this option. What CA certificates come
	   bundled with	"libcurl" and are checked by default depends on	the
	   distribution	you use.

       Header Header
	   An HTTP header to add to the request. Multiple headers are added
	   if this option is specified more than once.

       Post Body
	   Specifies that the HTTP operation should be a POST instead of a
	   GET.	The complete data to be	posted is given	as the argument.  This
	   option will usually need to be accompanied by a Header option to
	   set an appropriate "Content-Type" for the post body (e.g. to
	   "application/x-www-form-urlencoded").

       MeasureResponseTime true|false
	   Measure response time for the request. If this setting is enabled,
	   Match blocks	(see below) are	optional. Disabled by default.

	   Beware that requests	will get aborted if they take too long to
	   complete. Adjust Timeout accordingly	if you expect
	   MeasureResponseTime to report such slow requests.

	   This	option is similar to enabling the TotalTime statistic but it's
	   measured by collectd	instead	of cURL.

       MeasureResponseCode true|false
	   Measure response code for the request. If this setting is enabled,
	   Match blocks	(see below) are	optional. Disabled by default.

       <Statistics>
	   One Statistics block	can be used to specify cURL statistics to be
	   collected for each request to the remote web	site. See the section
	   "cURL Statistics" above for details.	If this	setting	is enabled,
	   Match blocks	(see below) are	optional.

       <Match>
	   One or more Match blocks that define	how to match information in
	   the data returned by	"libcurl". The "curl" plugin uses the same
	   infrastructure that's used by the "tail" plugin, so please see the
	   documentation of the	"tail" plugin below on how matches are
	   defined. If the MeasureResponseTime or MeasureResponseCode options
	   are set to true, Match blocks are optional.

       Timeout Milliseconds
	   The Timeout option sets the overall timeout for HTTP	requests to
	   URL,	in milliseconds. By default, the configured Interval is	used
	   to set the timeout. Prior to	version	5.5.0, there was no timeout
	   and requests	could hang indefinitely. This legacy behaviour can be
	   achieved by setting the value of Timeout to 0.

	   If Timeout is 0 or bigger than the Interval,	keep in	mind that each
	   slow	network	connection will	stall one read thread. Adjust the
	   ReadThreads global setting accordingly to prevent this from
	   blocking other plugins.

   Plugin "curl_json"
       The curl_json plugin collects values from JSON data to be parsed	by
       libyajl (<https://lloyd.github.io/yajl/>) retrieved via either libcurl
       (<http://curl.haxx.se/>)	or read	directly from a	unix socket. The
       former can be used, for example, to collect values from CouchDB
       documents (which are stored in JSON notation), and the latter to
       collect values from a uWSGI stats socket.

       The following example will collect several values from the built-in
       "_stats"	runtime	statistics module of CouchDB
       (<http://wiki.apache.org/couchdb/Runtime_Statistics>).

	 <Plugin curl_json>
	   <URL	"http://localhost:5984/_stats">
	     Instance "httpd"
	     <Key "httpd/requests/count">
	       Type "http_requests"
	     </Key>

	     <Key "httpd_request_methods/*/count">
	       Type "http_request_methods"
	     </Key>

	     <Key "httpd_status_codes/*/count">
	       Type "http_response_codes"
	     </Key>
	   </URL>
	 </Plugin>

       This example will collect data directly from a uWSGI "Stats Server"
       socket.

	 <Plugin curl_json>
	   <Sock "/var/run/uwsgi.stats.sock">
	     Instance "uwsgi"
	     <Key "workers/*/requests">
	       Type "http_requests"
	     </Key>

	     <Key "workers/*/apps/*/requests">
	       Type "http_requests"
	     </Key>
	   </Sock>
	 </Plugin>

       In the Plugin block, there may be one or	more URL blocks, each defining
       a URL to	be fetched via HTTP (using libcurl) or Sock blocks defining a
       unix socket to read JSON	from directly.	Each of	these blocks may have
       one or more Key blocks.

       The Key string argument must be in a path format. Each component is
       used to match the key from a JSON map or the index of a JSON array.
       If a path component of a Key is a * wildcard, the values for all map
       keys or array indices will be collected.

       The following options are valid within URL blocks:

       Host Name
	   Use Name as the host	name when submitting values. Defaults to the
	   global host name setting.

       Instance	Instance
	   Sets	the plugin instance to Instance.

       Interval	Interval
	   Sets	the interval (in seconds) in which the values will be
	   collected from this URL. By default the global Interval setting
	   will	be used.

       User Name
       Password	Password
       Digest true|false
       VerifyPeer true|false
       VerifyHost true|false
       CACert file
       Header Header
       Post Body
       Timeout Milliseconds
	   These options behave exactly like the corresponding options of the
	   curl plugin above. Please see there for a detailed description.

       <Statistics>
	   One Statistics block	can be used to specify cURL statistics to be
	   collected for each request to the remote URL. See the section "cURL
	   Statistics" above for details.

       The following options are valid within Key blocks:

       Type Type
	   Sets	the type used to dispatch the values to	the daemon. Detailed
	   information about types and their configuration can be found	in
	   types.db(5).	This option is mandatory.

       Instance	Instance
	   Type-instance to use. Defaults to the current map key or current
	   string array	element	value.

   Plugin "curl_xml"
       The curl_xml plugin uses	libcurl	(<http://curl.haxx.se/>) and libxml2
       (<http://xmlsoft.org/>) to retrieve XML data via	cURL.

	<Plugin	"curl_xml">
	  <URL "http://localhost/stats.xml">
	    Host "my_host"
	    Instance "some_instance"
	    User "collectd"
	    Password "thaiNg0I"
	    VerifyPeer true
	    VerifyHost true
	    CACert "/path/to/ca.crt"
	    Header "X-Custom-Header: foobar"
	    Post "foo=bar"

	    <XPath "table[@id=\"magic_level\"]/tr">
	      Type "magic_level"
	      #InstancePrefix "prefix-"
	      InstanceFrom "td[1]"
	      ValuesFrom "td[2]/span[@class=\"level\"]"
	    </XPath>
	  </URL>
	</Plugin>

       In the Plugin block, there may be one or	more URL blocks, each defining
       a URL to	be fetched using libcurl. Within each URL block	there are
       options which specify the connection parameters,	for example
       authentication information, and one or more XPath blocks.

       Each XPath block	specifies how to get one type of information. The
       string argument must be a valid XPath expression	which returns a	list
       of "base	elements". One value is	dispatched for each "base element".
       The type	instance and values are	looked up using	further	XPath
       expressions that	should be relative to the base element.

       Within the URL block the	following options are accepted:

       Host Name
	   Use Name as the host	name when submitting values. Defaults to the
	   global host name setting.

       Instance	Instance
	   Use Instance	as the plugin instance when submitting values.
	   Defaults to an empty	string (no plugin instance).

       Namespace Prefix	URL
	   If an XPath expression references namespaces, they must be
	   specified with this option. Prefix is the "namespace	prefix"	used
	   in the XML document.  URL is the "namespace name", a URI reference
	   uniquely identifying the namespace. The option can be repeated to
	   register multiple namespaces.

	   Examples:

	     Namespace "s" "http://schemas.xmlsoap.org/soap/envelope/"
	     Namespace "m" "http://www.w3.org/1998/Math/MathML"

       User User
       Password	Password
       Digest true|false
       VerifyPeer true|false
       VerifyHost true|false
       CACert CA Cert File
       Header Header
       Post Body
       Timeout Milliseconds
	   These options behave exactly like the corresponding options of the
	   curl plugin above. Please see there for a detailed description.

       <Statistics>
	   One Statistics block	can be used to specify cURL statistics to be
	   collected for each request to the remote URL. See the section "cURL
	   Statistics" above for details.

       <XPath XPath-expression>
	   Within each URL block, there	must be	one or more XPath blocks. Each
	   XPath block specifies how to	get one	type of	information. The
	   string argument must	be a valid XPath expression which returns a
	   list	of "base elements". One	value is dispatched for	each "base
	   element".

	   Within the XPath block the following	options	are accepted:

	   Type	Type
	       Specifies the Type used for submitting values. This determines
	       the number of values that are required / expected and whether
	       the strings are parsed as signed or unsigned integer or as
	       double values. See types.db(5) for details.  This option is
	       required.

	   InstancePrefix InstancePrefix
	       Prefix the type instance	with InstancePrefix. The values	are
	       simply concatenated together without any	separator.  This
	       option is optional.

	   InstanceFrom	InstanceFrom
	       Specifies an XPath expression to use for determining the type
	       instance. The XPath expression must return exactly one
	       element. The element's value is then used as type instance,
	       possibly prefixed with InstancePrefix (see above).

	       This value is required. As a special exception, if the "base
	       XPath expression" (the argument to the XPath block) returns
	       exactly one element, then this option may be omitted.

	   ValuesFrom ValuesFrom [ValuesFrom ...]
	       Specifies one or more XPath expressions to use for reading the
	       values. The number of XPath expressions must match the number
	       of data sources in the type specified with Type (see above).
	       Each XPath expression must return exactly one element. The
	       element's value is then parsed as a number and used as value
	       for the appropriate value in the	value list dispatched to the
	       daemon.

   Plugin "dbi"
       This plugin uses	the dbi	library	(<http://libdbi.sourceforge.net/>) to
       connect to various databases, execute SQL statements and	read back the
       results.	dbi is an acronym for "database	interface" in case you were
       wondering about the name. You can configure how each column is to be
       interpreted and the plugin will generate	one or more data sets from
       each row	returned according to these rules.

       Because the plugin is very generic, the configuration is	a little more
       complex than those of other plugins. It usually looks something like
       this:

	 <Plugin dbi>
	   <Query "out_of_stock">
	     Statement "SELECT category, COUNT(*) AS value FROM	products WHERE in_stock	= 0 GROUP BY category"
	     # Use with	MySQL 5.0.0 or later
	     MinVersion	50000
	     <Result>
	       Type "gauge"
	       InstancePrefix "out_of_stock"
	       InstancesFrom "category"
	       ValuesFrom "value"
	     </Result>
	   </Query>
	   <Database "product_information">
	     Driver "mysql"
	     Interval 120
	     DriverOption "host" "localhost"
	     DriverOption "username" "collectd"
	     DriverOption "password" "aZo6daiw"
	     DriverOption "dbname" "prod_info"
	     SelectDB "prod_info"
	     Query "out_of_stock"
	   </Database>
	 </Plugin>

       The configuration above defines one query with one result and one
       database. The query is then linked to the database with the Query
       option within the <Database> block. You can have	any number of queries
       and databases and you can also use the Include statement	to split up
       the configuration file in multiple, smaller files. However, the <Query>
       block must precede the <Database> blocks, because the file is
       interpreted from	top to bottom!

       The following is	a complete list	of options:

       Query blocks

       Query blocks define SQL statements and how the returned data should be
       interpreted. They are identified	by the name that is given in the
       opening line of the block. Thus the name	needs to be unique. Other than
       that, the name is not used in collectd.

       In each Query block, there is one or more Result	blocks.	Result blocks
       define which column holds which value or	instance information. You can
       use multiple Result blocks to create multiple values from one returned
       row. This is especially useful, when queries take a long	time and
       sending almost the same query again and again is	not desirable.

       Example:

	 <Query	"environment">
	   Statement "select station, temperature, humidity from environment"
	   <Result>
	     Type "temperature"
	     # InstancePrefix "foo"
	     InstancesFrom "station"
	     ValuesFrom	"temperature"
	   </Result>
	   <Result>
	     Type "humidity"
	     InstancesFrom "station"
	     ValuesFrom	"humidity"
	   </Result>
	 </Query>

       The following options are accepted:

       Statement SQL
	   Sets	the statement that should be executed on the server. This is
	   not interpreted by collectd,	but simply passed to the database
	   server. Therefore, the SQL dialect that's used depends on the
	   server collectd is connected	to.

	   The query has to return at least two columns, one for the instance
	   and one for the value. You cannot omit the instance, even if the
	   statement is guaranteed to always return exactly one line. In that
	   case, you can usually specify something like this:

	     Statement "SELECT \"instance\", COUNT(*) AS value FROM table"

	   (That works with MySQL but may not be valid SQL according to	the
	   spec. If you	use a more strict database server, you may have	to
	   select from a dummy table or	something.)

	   Please note that some databases, for	example	Oracle,	will fail if
	   you include a semicolon at the end of the statement.

       MinVersion Version
       MaxVersion Value
	   Only	use this query for the specified database version. You can use
	   these options to provide multiple queries with the same name	but
	   with	a slightly different syntax. The plugin	will use only those
	   queries, where the specified	minimum	and maximum versions fit the
	   version of the database in use.

	   The database	version	is determined by
	   "dbi_conn_get_engine_version", see the libdbi documentation
	   <http://libdbi.sourceforge.net/docs/programmers-guide/reference-
	   conn.html#DBI-CONN-GET-ENGINE-VERSION> for details. Basically, each
	   part	of the version is assumed to be	in the range from 00 to	99 and
	   all dots are	removed. So version "4.1.2" becomes "40102", version
	   "5.0.42" becomes "50042".

	   Warning: The plugin will use all matching queries, so if you
	   specify multiple queries with the same name and overlapping ranges,
	   weird stuff will happen. Don't do it! A valid example would be
	   something along these lines:

	     MinVersion	40000
	     MaxVersion	49999
	     ...
	     MinVersion	50000
	     MaxVersion	50099
	     ...
	     MinVersion	50100
	     # No maximum

	   In the above	example, there are three ranges	that don't overlap.
	   The last one	goes from version "5.1.0" to infinity, meaning "all
	   later versions". Versions before "4.0.0" are	not specified.

       Type Type
	   The type that's used	for each line returned.	See types.db(5)	for
	   more	details	on how types are defined. In short: A type is a
	   predefined layout of	data and the number of values and type of
	   values has to match the type	definition.

	   If you specify "temperature"	here, you need exactly one gauge
	   column. If you specify "if_octets", you will	need two counter
	   columns. See	the ValuesFrom setting below.

	   There must be exactly one Type option inside	each Result block.

       InstancePrefix prefix
	   Prepends prefix to the type instance. If InstancesFrom (see below)
	   is not given, the string is simply copied. If InstancesFrom is
	   given, prefix and all strings returned in the appropriate columns
	   are concatenated together, separated	by dashes ("-").

       InstancesFrom column0 [column1 ...]
	   Specifies the columns whose values will be used to create the
	   "type-instance" for each row. If you	specify	more than one column,
	   the value of	all columns will be joined together with dashes	("-")
	   as separation characters.

	   The plugin itself does not check whether or not all built instances
	   are different. It's your responsibility to assure that each is
	   unique. This	is especially true, if you do not specify
	   InstancesFrom: You have to make sure	that only one row is returned
	   in this case.

	   If neither InstancePrefix nor InstancesFrom is given, the type-
	   instance will be empty.

       ValuesFrom column0 [column1 ...]
	   Names the columns whose content is used as the actual data for the
	   data	sets that are dispatched to the	daemon.	How many such columns
	   you need is determined by the Type setting above. If	you specify
	   too many or not enough columns, the plugin will complain about that
	   and no data will be submitted to the	daemon.

	   The actual data type in the columns is not that important. The
	   plugin will automatically cast the values to the right type if it
	   knows how to do that. So it should be able to handle integer and
	   floating point types, as well as strings (if they include a number
	   at the beginning).

	   There must be at least one ValuesFrom option	inside each Result
	   block.

       MetadataFrom [column0 column1 ...]
	   Names the columns whose content is used as metadata for the data
	   sets	that are dispatched to the daemon.

	   The actual data type in the columns is not that important. The
	   plugin will automatically cast the values to the right type if it
	   knows how to do that. So it should be able to handle integer and
	   floating point types, as well as strings (if they include a number
	   at the beginning).

       Database	blocks

       Database	blocks define a	connection to a	database and which queries
       should be sent to that database.	Since the used "dbi" library can
       handle a	wide variety of	databases, the configuration is	very generic.
       If in doubt, refer to libdbi's documentation - we stick as closely as
       possible to the terminology used there.

       Each database needs a "name" as string argument in the starting tag of
       the block. This name will be used as "PluginInstance" in	the values
       submitted to the	daemon.	Other than that, that name is not used.

       Interval	Interval
	   Sets	the interval (in seconds) in which the values will be
	   collected from this database. By default the	global Interval
	   setting will	be used.

       Driver Driver
	   Specifies the driver	to use to connect to the database. In many
	   cases those drivers are named after the database they can connect
	   to, but this	is not a technical necessity. These drivers are
	   sometimes referred to as "DBD", DataBase Driver, and	some
	   distributions ship them in separate packages. Drivers for the "dbi"
	   library are developed by the	libdbi-drivers project at
	   <http://libdbi-drivers.sourceforge.net/>.

	   You need to give the	driver name as expected	by the "dbi" library
	   here. You should be able to find that in the	documentation for each
	   driver. If you mistype the driver name, the plugin will dump	a list
	   of all known	driver names to	the log.

       DriverOption Key	Value
	   Sets driver-specific options. What options a driver supports can
	   be found in the documentation for each driver, somewhere at
	   <http://libdbi-drivers.sourceforge.net/>. However, the options
	   "host", "username", "password", and "dbname" seem to be de facto
	   standards.

	   DBDs	can register two types of options: String options and numeric
	   options. The	plugin will use	the "dbi_conn_set_option" function
	   when	the configuration provides a string and	the
	   "dbi_conn_require_option_numeric" function when the configuration
	   provides a number. So these two lines will actually result in
	   different calls being used:

	     DriverOption "Port" 1234	   # numeric
	     DriverOption "Port" "1234"	   # string

	   Unfortunately, drivers are not too keen to report errors when an
	   unknown option is passed to them, so	invalid	settings here may go
	   unnoticed. This is not the plugin's fault, it will report errors if
	   it gets them	from the library / the driver. If a driver complains
	   about an option, the	plugin will dump a complete list of all
	   options understood by that driver to	the log. There is no way to
	   programmatically find out if	an option expects a string or a
	   numeric argument, so	you will have to refer to the appropriate
	   DBD's documentation to find this out. Sorry.

       SelectDB	Database
	   In some cases, the database name you	connect	with is	not the
	   database name you want to use for querying data. If this option is
	   set,	the plugin will	"select" (switch to) that database after the
	   connection is established.

       Query QueryName
	   Associates the query	named QueryName	with this database connection.
	   The query needs to be defined before	this statement,	i. e. all
	   query blocks	you want to refer to must be placed above the database
	   block you want to refer to them from.

       Host Hostname
	   Sets	the host field of value	lists to Hostname when dispatching
	   values. Defaults to the global hostname setting.

   Plugin "df"
       Device Device
	   Select partitions based on the devicename.

       MountPoint Directory
	   Select partitions based on the mountpoint.

       FSType FSType
	   Select partitions based on the filesystem type.

       IgnoreSelected true|false
	   Invert the selection: If set	to true, all partitions	except the
	   ones	that match any one of the criteria are collected. By default
	   only	selected partitions are	collected if a selection is made. If
	   no selection	is configured at all, all partitions are selected.

       ReportByDevice true|false
	   Report using the device name rather than the mountpoint, i.e. with
	   this set to false (the default) a disk is reported as "root", while
	   with it set to true it is reported as "sda1" (or whichever device
	   it actually is).

       ReportInodes true|false
	   Enables or disables reporting of free, reserved and used inodes.
	   Defaults to inode collection	being disabled.

	   Enable this option if inodes are a scarce resource for you, usually
	   because many small files are stored on the disk. This is a common
	   scenario for mail transfer agents and web caches.

       ValuesAbsolute true|false
	   Enables or disables reporting of free and used disk space in
	   1K-blocks.  Defaults	to true.

       ValuesPercentage	false|true
	   Enables or disables reporting of free and used disk space in
	   percentage.	Defaults to false.

	   This is useful for deploying collectd in the cloud, where machines
	   with different disk sizes may exist. Then it is more practical to
	   configure thresholds based on relative disk size.
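
       For example, a short sketch that collects absolute values, percentages
       and inode counts for the root filesystem only (the mountpoint is just
       an illustration):

	 LoadPlugin df
	 <Plugin df>
	   MountPoint "/"
	   IgnoreSelected false
	   ReportInodes true
	   ValuesAbsolute true
	   ValuesPercentage true
	 </Plugin>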

   Plugin "disk"
       The "disk" plugin collects information about the	usage of physical
       disks and logical disks (partitions). Values collected are the number
       of octets written to and	read from a disk or partition, the number of
       read/write operations issued to the disk	and a rather complex "time" it
       took for	these commands to be issued.

       Using the following two options you can ignore some disks or configure
       the collection only of specific disks.

       Disk Name
	   Select the disk Name. Whether it is collected or ignored depends on
	   the IgnoreSelected setting, see below. As with other	plugins	that
	   use the daemon's ignorelist functionality, a	string that starts and
	   ends	with a slash is	interpreted as a regular expression. Examples:

	     Disk "sdd"
	     Disk "/hda[34]/"

       IgnoreSelected true|false
	   Sets whether selected disks, i. e. the ones matched by any of the
	   Disk statements, are ignored or if all other disks are ignored. The
	   behavior (hopefully) is intuitive: If no Disk option is configured,
	   all disks are collected. If at least one Disk option is given and
	   IgnoreSelected is not set or set to false, only matching disks will
	   be collected. If IgnoreSelected is set to true, all disks are
	   collected except the ones matched.

       UseBSDName true|false
	   Whether to use the device's "BSD Name", on Mac OS X,	instead	of the
	   default major/minor numbers.	Requires collectd to be	built with
	   Apple's IOKitLib support.

       UdevNameAttr Attribute
	   Attempt to override disk instance name with the value of a
	   specified udev attribute when built with libudev.  If the attribute
	   is not defined for the given	device,	the default name is used.
	   Example:

	     UdevNameAttr "DM_NAME"
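
       Putting these options together, a sketch that collects only two
       example devices might look like this (the device names are
       illustrative):

	 LoadPlugin disk
	 <Plugin disk>
	   Disk "sda"
	   # A string enclosed in slashes is a regular expression.
	   Disk "/^mmcblk[0-9]+$/"
	   IgnoreSelected false
	 </Plugin>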

   Plugin "dns"
       Interface Interface
	   The dns plugin uses libpcap to capture dns traffic and analyzes it.
	   This	option sets the	interface that should be used. If this option
	   is not set, or set to "any",	the plugin will	try to get packets
	   from	all interfaces.	This may not work on certain platforms,	such
	   as Mac OS X.

       IgnoreSource IP-address
	   Ignore packets that originate from this address.

       SelectNumericQueryTypes true|false
	   Enabled by default, collects	unknown	(and thus presented as numeric
	   only) query types.
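
       For illustration, a sketch that captures DNS traffic on a single
       interface and ignores queries originating from one address (the
       interface name and address are examples only):

	 LoadPlugin dns
	 <Plugin dns>
	   Interface "eth0"
	   IgnoreSource "127.0.0.1"
	   SelectNumericQueryTypes true
	 </Plugin>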

   Plugin "dpdkstat"
       The dpdkstat plugin collects information	about DPDK interfaces using
       the extended NIC	stats API in DPDK.

       Synopsis:

	<Plugin	"dpdkstat">
	   Coremask "0x4"
	   MemoryChannels "4"
	   ProcessType "secondary"
	   FilePrefix "rte"
	   EnabledPortMask 0xffff
	   PortName "interface1"
	   PortName "interface2"
	</Plugin>

       Options:

       Coremask	Mask
	   A string containing a hexadecimal bit mask of the cores to run on.
	   Note that core numbering can change between platforms and should be
	   determined beforehand.

       MemoryChannels Channels
	   A string containing a number	of memory channels per processor
	   socket.

       ProcessType type
	   A string containing the type	of DPDK	process	instance.

       FilePrefix File
	   The prefix text used	for hugepage filenames.	The filename will be
	   set to /var/run/.<prefix>_config where prefix is what is passed in
	   by the user.

       SocketMemory MB
	   A string containing the amount of memory (in MB) to allocate from
	   hugepages on specific sockets.

       EnabledPortMask Mask
	   A hexadecimal bit mask of the DPDK ports which should be enabled. A
	   mask of 0x0 means that all ports will be disabled. A bitmask of all
	   Fs means that all ports will be enabled. This is an optional
	   argument - default is all ports enabled.

       PortName	Name
	   A string containing an optional name	for the	enabled	DPDK ports.
	   Each	PortName option	should contain only one	port name; specify as
	   many PortName options as desired. The default naming convention
	   will be used if PortName is blank. If there are fewer PortName
	   options than there are enabled ports, the default naming convention
	   will be used for the additional ports.

   Plugin "email"
       SocketFile Path
	   Sets	the socket-file	which is to be created.

       SocketGroup Group
	   If running as root change the group of the UNIX-socket after	it has
	   been	created. Defaults to collectd.

       SocketPerms Permissions
	   Change the file permissions of the UNIX-socket after	it has been
	   created. The	permissions must be given as a numeric,	octal value as
	   you would pass to chmod(1). Defaults	to 0770.

       MaxConns	Number
	   Sets the maximum number of connections that can be handled in
	   parallel. Since this many threads will be started immediately,
	   setting this to a very high value will waste valuable resources.
	   Defaults to 5 and will be forced to be at most 16384 to prevent
	   typos and dumb mistakes.
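
       A minimal sketch, assuming an example socket path and the defaults
       documented above:

	 LoadPlugin email
	 <Plugin email>
	   # Example path; pick any location writable by the daemon.
	   SocketFile "/var/run/collectd-email"
	   SocketGroup "collectd"
	   SocketPerms 0770
	   MaxConns 5
	 </Plugin>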

   Plugin "ethstat"
       The ethstat plugin collects information about network interface cards
       (NICs) by talking directly with the underlying kernel driver using
       ioctl(2).

       Synopsis:

	<Plugin	"ethstat">
	  Interface "eth0"
	  Map "rx_csum_offload_errors" "if_rx_errors" "checksum_offload"
	  Map "multicast" "if_multicast"
	</Plugin>

       Options:

       Interface Name
	   Collect statistical information about interface Name.

       Map Name	Type [TypeInstance]
	   By default, the plugin will submit values as	type "derive" and type
	   instance set	to Name, the name of the metric	as reported by the
	   driver. If an appropriate Map option	exists,	the given Type and,
	   optionally, TypeInstance will be used.

       MappedOnly true|false
	   When	set to true, only metrics that can be mapped to	a type will be
	   collected, all other	metrics	will be	ignored. Defaults to false.

   Plugin "exec"
       Please make sure	to read	collectd-exec(5) before	using this plugin. It
       contains	valuable information on	when the executable is executed	and
       the output that is expected from	it.

       Exec User[:[Group]] Executable [arg [arg ...]]
       NotificationExec User[:[Group]] Executable [arg [arg ...]]
	   Execute the executable Executable as	user User. If the user name is
	   followed by a colon and a group name, the effective group is	set to
	   that	group.	The real group and saved-set group will	be set to the
	   default group of that user. If no group is given the	effective
	   group ID will be the	same as	the real group ID.

	   Please note that in order to	change the user	and/or group the
	   daemon needs	superuser privileges. If the daemon is run as an
	   unprivileged	user you must specify the same user/group here.	If the
	   daemon is run with superuser	privileges, you	must supply a non-root
	   user	here.

	   The executable may be followed by optional arguments that are
	   passed to the program. Please note that due to the configuration
	   parsing, numbers and boolean values may be changed. If you want to
	   be absolutely sure that something is passed as-is, please enclose
	   it in quotes.

	   The Exec and	NotificationExec statements change the semantics of
	   the programs	executed, i. e.	the data passed	to them	and the
	   response expected from them.	This is	documented in great detail in
	   collectd-exec(5).
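
       For example, a sketch with placeholder user names and script paths
       (see collectd-exec(5) for the output the scripts are expected to
       produce):

	 LoadPlugin exec
	 <Plugin exec>
	   # Run an example script as an unprivileged user.
	   Exec "nobody" "/usr/local/bin/my_stats.sh"
	   # Pass notifications to another example script, with one argument.
	   NotificationExec "nobody:nogroup" "/usr/local/bin/notify.sh" "arg1"
	 </Plugin>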

   Plugin "fhcount"
       The "fhcount" plugin provides statistics	about used, unused and total
       number of file handles on Linux.

       The fhcount plugin provides the following configuration options:

       ValuesAbsolute true|false
	   Enables or disables reporting of file handles usage in absolute
	   numbers, e.g. file handles used. Defaults to	true.

       ValuesPercentage	false|true
	   Enables or disables reporting of file handles usage in percentages,
	   e.g.	 percent of file handles used. Defaults	to false.
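
       A short sketch enabling both reporting modes:

	 LoadPlugin fhcount
	 <Plugin fhcount>
	   ValuesAbsolute true
	   ValuesPercentage true
	 </Plugin>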

   Plugin "filecount"
       The "filecount" plugin counts the number of files in a certain
       directory (and its subdirectories) and their combined size. The
       configuration is very straightforward:

	 <Plugin "filecount">
	   <Directory "/var/qmail/queue/mess">
	     Instance "qmail-message"
	   </Directory>
	   <Directory "/var/qmail/queue/todo">
	     Instance "qmail-todo"
	   </Directory>
	   <Directory "/var/db/php5">
	     Instance "php5-sessions"
	     Name "sess_*"
	   </Directory>
	 </Plugin>

       The example above counts the number of files in QMail's queue
       directories and the number of PHP5 sessions. FYI: The "todo" queue
       holds the messages that QMail has not yet looked at, the "message"
       queue holds the messages that were classified into "local" and
       "remote".

       As you can see, the configuration consists of one or more "Directory"
       blocks, each of which specifies a directory in which to count the
       files. Within those blocks, the following options are recognized:

       Instance	Instance
	   Sets the plugin instance to Instance. That instance name must be
	   unique, but ensuring this is your responsibility; the plugin
	   doesn't check for it. If not given, the instance is set to the
	   directory name with all slashes replaced by underscores and all
	   leading underscores removed.

       Name Pattern
	   Only	count files that match Pattern,	where Pattern is a shell-like
	   wildcard as understood by fnmatch(3). Only the filename is checked
	   against the pattern,	not the	entire path. In	case this makes	it
	   easier for you: This	option has been	named after the	-name
	   parameter to	find(1).

       MTime Age
	   Count only files of a specific age: If Age is greater than zero,
	   only	files that haven't been	touched	in the last Age	seconds	are
	   counted. If Age is a	negative number, this is inversed. For
	   example, if -60 is specified, only files that have been modified in
	   the last minute will	be counted.

	   The number can also be followed by a "multiplier" to easily specify
	   a larger timespan. When given in this notation, the argument must
	   be quoted, i. e. must be passed as a string. So the -60 could also
	   be written as "-1m" (one minute). Valid multipliers are "s"
	   (second), "m" (minute), "h" (hour), "d" (day), "w" (week), and "y"
	   (year). There is no "month" multiplier. You can also specify
	   fractional numbers, e. g. "0.5d" is identical to "12h".

       Size Size
	   Count only files of a specific size.	When Size is a positive
	   number, only	files that are at least	this big are counted. If Size
	   is a	negative number, this is inversed, i. e. only files smaller
	   than	the absolute value of Size are counted.

	   As with the MTime option, a "multiplier" may	be added. For a
	   detailed description	see above. Valid multipliers here are "b"
	   (byte), "k" (kilobyte), "m" (megabyte), "g" (gigabyte), "t"
	   (terabyte), and "p" (petabyte). Please note that there are 1000
	   bytes in a kilobyte,	not 1024.

       Recursive true|false
	   Controls whether or not to recurse into subdirectories. Enabled by
	   default.

       IncludeHidden true|false
	   Controls whether or not to include "hidden" files and directories
	   in the count.  "Hidden" files and directories are those, whose name
	   begins with a dot.  Defaults	to false, i.e. by default hidden files
	   and directories are ignored.

   Plugin "GenericJMX"
       The GenericJMX plugin is	written	in Java	and therefore documented in
       collectd-java(5).

   Plugin "gmond"
       The gmond plugin receives the multicast traffic sent by gmond, the
       statistics collection daemon of Ganglia. Mappings for the standard
       "metrics" are built-in; custom mappings may be added via Metric
       blocks, see below.

       Synopsis:

	<Plugin	"gmond">
	  MCReceiveFrom	"239.2.11.71" "8649"
	  <Metric "swap_total">
	    Type "swap"
	    TypeInstance "total"
	    DataSource "value"
	  </Metric>
	  <Metric "swap_free">
	    Type "swap"
	    TypeInstance "free"
	    DataSource "value"
	  </Metric>
	</Plugin>

       The following metrics are built-in:

       o   load_one, load_five,	load_fifteen

       o   cpu_user, cpu_system, cpu_idle, cpu_nice, cpu_wio

       o   mem_free, mem_shared, mem_buffers, mem_cached, mem_total

       o   bytes_in, bytes_out

       o   pkts_in, pkts_out

       Available configuration options:

       MCReceiveFrom MCGroup [Port]
	   Sets the multicast group and UDP port to which to subscribe.

	   Default: 239.2.11.71	/ 8649

       <Metric Name>
	   These blocks	add a new metric conversion to the internal table.
	   Name, the string argument to	the Metric block, is the metric	name
	   as used by Ganglia.

	   Type	Type
	       Type to map this	metric to. Required.

	   TypeInstance	Instance
	       Type-instance to	use. Optional.

	   DataSource Name
	       Data source to map this metric to. If the configured type has
	       exactly one data	source,	this is	optional. Otherwise the	option
	       is required.

   Plugin "gps"
       The "gps	plugin"	connects to gpsd on the	host machine.  The host, port,
       timeout and pause are configurable.

       This is useful if you run an NTP	server using a GPS for source and you
       want to monitor it.

       Note that your GPS must send $--GSA sentences for the data to be
       reported.

       The following elements are collected:

       satellites
	   Number of satellites	used for fix (type instance "used") and	in
	   view	(type instance "visible"). 0 means no GPS satellites are
	   visible.

       dilution_of_precision
	   Vertical and	horizontal dilution (type instance "horizontal"	or
	   "vertical").	 It should be between 0	and 3.	Look at	the
	   documentation of your GPS to	know more.

       Synopsis:

	LoadPlugin gps
	<Plugin	"gps">
	  # Connect to localhost on gpsd regular port:
	  Host "127.0.0.1"
	  Port "2947"
	  # 15 ms timeout
	  Timeout 0.015
	  # PauseConnect of 5 sec. between connection attempts.
	  PauseConnect 5
	</Plugin>

       Available configuration options:

       Host Host
	   The host on which gpsd daemon runs. Defaults	to localhost.

       Port Port
	   Port	to connect to gpsd on the host machine.	Defaults to 2947.

       Timeout Seconds
	   Timeout in seconds (default 0.015 sec).

	   The GPS data stream is fetched by the plugin from the daemon. It
	   waits for data to be available; if none arrives, it times out and
	   loops for another reading. Keep the value low: gpsd expects a value
	   in the microsecond range (500 us is recommended) since the waiting
	   function is blocking. The value must be between 500 us and 5 sec.;
	   if it is outside that range, the default value is applied.

	   This	only applies from gpsd release-2.95.

       PauseConnect Seconds
	   Pause, in seconds, between connection attempts to gpsd (default 5
	   sec).

   Plugin "grpc"
       The grpc	plugin provides	an RPC interface to submit values to or	query
       values from collectd based on the open source gRPC framework. It
       exposes an end-point for	dispatching values to the daemon.

       The gRPC	homepage can be	found at <https://grpc.io/>.

       Server Host Port
	   The Server statement	sets the address of a server to	which to send
	   metrics via the "DispatchValues" function.

	   The argument	Host may be a hostname,	an IPv4	address, or an IPv6
	   address.

	   Optionally, Server may be specified as a configuration block	which
	   supports the	following options:

	   EnableSSL false|true
	       Whether to require SSL for outgoing connections.	Default:
	       false.

	   SSLCACertificateFile	Filename
	   SSLCertificateFile Filename
	   SSLCertificateKeyFile Filename
	       Filenames specifying SSL	certificate and	key material to	be
	       used with SSL connections.

       Listen Host Port
	   The Listen statement	sets the network address to bind to. When
	   multiple statements are specified, the daemon will bind to all of
	   them. If none are specified,	it defaults to 0.0.0.0:50051.

	   The argument	Host may be a hostname,	an IPv4	address, or an IPv6
	   address.

	   Optionally, Listen may be specified as a configuration block	which
	   supports the	following options:

	   EnableSSL true|false
	       Whether to enable SSL for incoming connections. Default:	false.

	   SSLCACertificateFile	Filename
	   SSLCertificateFile Filename
	   SSLCertificateKeyFile Filename
	       Filenames specifying SSL	certificate and	key material to	be
	       used with SSL connections.
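
       A sketch with one upstream server and one listening socket; the
       addresses, port and certificate paths are examples only:

	 LoadPlugin grpc
	 <Plugin grpc>
	   <Server "collectd.example.org" "50051">
	     EnableSSL true
	     SSLCACertificateFile "/path/to/ca.crt"
	     SSLCertificateFile "/path/to/client.crt"
	     SSLCertificateKeyFile "/path/to/client.key"
	   </Server>
	   <Listen "0.0.0.0" "50051">
	     EnableSSL false
	   </Listen>
	 </Plugin>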

   Plugin "hddtemp"
       To get values from hddtemp, collectd connects to localhost (127.0.0.1),
       port 7634/tcp. The Host and Port options can be used to change these
       default values, see below. "hddtemp" has to be running to work
       correctly. If "hddtemp" is not running, timeouts may appear which may
       interfere with other statistics.

       The hddtemp homepage can	be found at
       <http://www.guzu.net/linux/hddtemp.php>.

       Host Hostname
	   Hostname to connect to. Defaults to 127.0.0.1.

       Port Port
	   TCP-Port to connect to. Defaults to 7634.

   Plugin "hugepages"
       To collect hugepages information, collectd reads	directories
       "/sys/devices/system/node/*/hugepages" and "/sys/kernel/mm/hugepages".
       Reading of these	directories can	be disabled by the following options
       (default	is enabled).

       ReportPerNodeHP true|false
	   If enabled, information will	be collected from the hugepage
	   counters in "/sys/devices/system/node/*/hugepages".	This is	used
	   to check the	per-node hugepage statistics on	a NUMA system.

       ReportRootHP true|false
	   If enabled, information will	be collected from the hugepage
	   counters in "/sys/kernel/mm/hugepages".  This can be	used on	both
	   NUMA	and non-NUMA systems to	check the overall hugepage statistics.

       ValuesPages true|false
	   Whether to report hugepages metrics in number of pages.  Defaults
	   to true.

       ValuesBytes false|true
	   Whether to report hugepages metrics in bytes.  Defaults to false.

       ValuesPercentage	false|true
	   Whether to report hugepages metrics as percentage.  Defaults	to
	   false.
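
       For example, to report only the system-wide counters, both as pages
       and as percentages:

	 LoadPlugin hugepages
	 <Plugin hugepages>
	   ReportPerNodeHP false
	   ReportRootHP true
	   ValuesPages true
	   ValuesPercentage true
	 </Plugin>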

   Plugin "intel_rdt"
       The intel_rdt plugin collects information provided by monitoring
       features	of Intel Resource Director Technology (Intel(R)	RDT) like
       Cache Monitoring	Technology (CMT), Memory Bandwidth Monitoring (MBM).
       These features provide information about	utilization of shared
       resources. CMT monitors last level cache	occupancy (LLC). MBM supports
       two types of events reporting local and remote memory bandwidth.	Local
       memory bandwidth	(MBL) reports the bandwidth of accessing memory
       associated with the local socket. Remote	memory bandwidth (MBR) reports
       the bandwidth of accessing the remote socket. This technology also
       allows monitoring of instructions per clock (IPC). Monitored events
       are hardware dependent. Monitoring capabilities are detected on plugin
       initialization and only supported events are monitored.

       Synopsis:

	 <Plugin "intel_rdt">
	   Cores "0-2" "3,4,6" "8-10,15"
	 </Plugin>

       Options:

       Interval	seconds
	   The interval within which to retrieve statistics on monitored
	   events, in seconds. For milliseconds, divide the time by 1000; for
	   example, if the desired interval is 50ms, set Interval to 0.05.
	   Due to the limited capacity of counters it is not recommended to
	   set the interval higher than 1 sec.

       Cores cores groups
	   All events are reported on a per core basis. Monitoring of the
	   events can be configured for groups of cores (aggregated
	   statistics). This field defines groups of cores on which to monitor
	   supported events. The field is represented as a list of strings
	   with core group values. Each string represents a list of cores in a
	   group. Allowed formats are:
	       0,1,2,3
	       0-10,20-18
	       1,3,5-8,10,0x10-12

	   If an empty string is provided as the value for this field, the
	   default cores configuration is applied - a separate group is
	   created for each core.

       Note: By default the global interval is used to retrieve statistics on
       monitored events. To configure a plugin-specific interval, use the
       Interval option of the intel_rdt <LoadPlugin> block. For milliseconds,
       divide the time by 1000; for example, if the desired interval is 50ms,
       set Interval to 0.05.  Due to the limited capacity of counters it is
       not recommended to set the interval higher than 1 sec.

   Plugin "interface"
       Interface Interface
	   Select this interface. By default these interfaces will then	be
	   collected. For a more detailed description see IgnoreSelected
	   below.

       IgnoreSelected true|false
	   If no configuration is given, the interface-plugin will collect
	   data from all interfaces. This may not be practical, especially for
	   loopback- and similar interfaces. Thus, you can use the
	   Interface-option to pick the interfaces you're interested in.
	   Sometimes, however, it's easier/preferred to	collect	all interfaces
	   except a few	ones. This option enables you to do that: By setting
	   IgnoreSelected to true the effect of	Interface is inverted: All
	   selected interfaces are ignored and all other interfaces are
	   collected.

	   It is possible to use regular expressions to	match interface	names,
	   if the name is surrounded by	/.../ and collectd was compiled	with
	   support for regexps.	This is	useful if there's a need to collect
	   (or ignore) data for	a group	of interfaces that are similarly
	   named, without the need to explicitly list all of them (especially
	   useful if the list is dynamic).  Example:

	    Interface "lo"
	    Interface "/^veth/"
	    Interface "/^tun[0-9]+/"
	    IgnoreSelected "true"

	   This	will ignore the	loopback interface, all	interfaces with	names
	   starting with veth and all interfaces with names starting with tun
	   followed by at least	one digit.

       ReportInactive true|false
	   When set to false, only interfaces with non-zero traffic will be
	   reported. Note that the check is done by looking at whether a
	   packet was sent at any time since boot and the corresponding
	   counter is non-zero. So, if the interface has been sending data in
	   the past since boot, but not during the reported time-interval, it
	   will still be reported.

	   The default value is	true and results in collection of the data
	   from	all interfaces that are	selected by Interface and
	   IgnoreSelected options.

       UniqueName true|false
	   Interface names are not unique on Solaris (KSTAT); an interface
	   name is unique only within a module/instance. The following tuple
	   is considered unique: (ks_module, ks_instance, ks_name). If this
	   option is set to true, the interface name contains the above three
	   fields separated by an underscore. For more info on KSTAT, visit
	   <http://docs.oracle.com/cd/E23824_01/html/821-1468/kstat-3kstat.html#REFMAN3Ekstat-3kstat>

	   This	option is only available on Solaris.

   Plugin "ipmi"
       Sensor Sensor
	   Selects sensors to collect or to ignore, depending on
	   IgnoreSelected.

       IgnoreSelected true|false
	   If no configuration is given, the ipmi plugin will collect data
	   from all sensors found of type "temperature", "voltage", "current"
	   and "fanspeed". Sometimes, however, it's easier/preferred to
	   collect all sensors except a few. This option enables you to do
	   that: By setting IgnoreSelected to true the effect of Sensor is
	   inverted: All selected sensors are ignored and all other sensors
	   are collected.

       NotifySensorAdd true|false
	   If a sensor appears after the initialization time of one minute, a
	   notification is sent.

       NotifySensorRemove true|false
	   If a	sensor disappears a notification is sent.

       NotifySensorNotPresent true|false
	   If you have, for example, a dual power supply and one of them is
	   (un)plugged, a notification is sent.
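
       A sketch that ignores one example sensor and enables the notifications
       described above (the sensor name is a placeholder):

	 LoadPlugin ipmi
	 <Plugin ipmi>
	   Sensor "ambient_temp"
	   IgnoreSelected true
	   NotifySensorAdd true
	   NotifySensorRemove true
	   NotifySensorNotPresent true
	 </Plugin>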

   Plugin "iptables"
       Chain Table Chain [Comment|Number [Name]]
       Chain6 Table Chain [Comment|Number [Name]]
	   Select the iptables/ip6tables filter	rules to count packets and
	   bytes from.

	   If only Table and Chain are given, this plugin will collect the
	   counters of all rules which have a comment-match. The comment is
	   then	used as	type-instance.

	   If Comment or Number	is given, only the rule	with the matching
	   comment or the nth rule will	be collected. Again, the comment (or
	   the number) will be used as the type-instance.

	   If Name is supplied,	it will	be used	as the type-instance instead
	   of the comment or the number.
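
       For example (the chain, comment and name below are placeholders):

	 LoadPlugin iptables
	 <Plugin iptables>
	   # Collect all comment-matched rules of the filter/INPUT chain.
	   Chain "filter" "INPUT"
	   # Collect only the rule commented "ssh" and report it as "ssh-in".
	   Chain "filter" "INPUT" "ssh" "ssh-in"
	   # The same chain for IPv6.
	   Chain6 "filter" "INPUT"
	 </Plugin>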

   Plugin "irq"
       Irq Irq
	   Select this irq. By default these irqs will then be collected. For
	   a more detailed description see IgnoreSelected below.

       IgnoreSelected true|false
	   If no configuration is given, the irq-plugin will collect data from
	   all irqs. This may not be practical, especially if no interrupts
	   happen. Thus, you can use the Irq-option to pick the interrupts
	   you're interested in.  Sometimes, however, it's easier/preferred to
	   collect all interrupts except a few ones. This option enables you
	   to do that: By setting IgnoreSelected to true the effect of Irq is
	   inverted: All selected interrupts are ignored and all other
	   interrupts are collected.

   Plugin "java"
       The Java	plugin makes it	possible to write extensions for collectd in
       Java.  This section only	discusses the syntax and semantic of the
       configuration options. For more in-depth	information on the Java
       plugin, please read collectd-java(5).

       Synopsis:

	<Plugin	"java">
	  JVMArg "-verbose:jni"
	  JVMArg "-Djava.class.path=/opt/collectd/lib/collectd/bindings/java"
	  LoadPlugin "org.collectd.java.Foobar"
	  <Plugin "org.collectd.java.Foobar">
	    # To be parsed by the plugin
	  </Plugin>
	</Plugin>

       Available configuration options:

       JVMArg Argument
	   Argument that is to be passed to the	Java Virtual Machine (JVM).
	   This	works exactly the way the arguments to the java	binary on the
	   command line	work.  Execute "java --help" for details.

	   Please note that all	these options must appear before (i. e.	above)
	   any other options! When another option is found, the	JVM will be
	   started and later options will have to be ignored!

       LoadPlugin JavaClass
	   Instantiates	a new JavaClass	object.	The constructor	of this	object
	   very	likely then registers one or more callback methods with	the
	   server.

	   See collectd-java(5)	for details.

	   When	the first such option is found,	the virtual machine (JVM) is
	   created. This means that all	JVMArg options must appear before
	   (i. e. above) all LoadPlugin	options!

       Plugin Name
	   The entire block is passed to the Java plugin as an
	   org.collectd.api.OConfigItem	object.

	   For this to work, the plugin	has to register	a configuration
	   callback first, see "config callback" in collectd-java(5). This
	   means, that the Plugin block	must appear after the appropriate
	   LoadPlugin block. Also note,	that Name depends on the (Java)	plugin
	   registering the callback and	is completely independent from the
	   JavaClass argument passed to	LoadPlugin.

   Plugin "load"
       The Load	plugin collects	the system load. These numbers give a rough
       overview	over the utilization of	a machine. The system load is defined
       as the number of	runnable tasks in the run-queue	and is provided	by
       many operating systems as a one,	five or	fifteen	minute average.

       The following configuration options are available:

       ReportRelative false|true
	   When	enabled, system	load divided by	number of available CPU	cores
	   is reported for intervals 1 min, 5 min and 15 min. Defaults to
	   false.

   Plugin "logfile"
       LogLevel	debug|info|notice|warning|err
	   Sets	the log-level. If, for example,	set to notice, then all	events
	   with	severity notice, warning, or err will be written to the
	   logfile.

	   Please note that debug is only available if collectd	has been
	   compiled with debugging support.

       File File
	   Sets	the file to write log messages to. The special strings stdout
	   and stderr can be used to write to the standard output and standard
	   error channels, respectively. This, of course, only makes much
	   sense when collectd is running in foreground- or non-daemon-mode.

       Timestamp true|false
	   Prefix all lines printed by the current time. Defaults to true.

       PrintSeverity true|false
	   When	enabled, all lines are prefixed	by the severity	of the log
	   message, for	example	"warning". Defaults to false.

       Note: There is no need to notify	the daemon after moving	or removing
       the log file (e.	g. when	rotating the logs). The	plugin reopens the
       file for	each line it writes.
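
       A minimal sketch, assuming an example log file path:

	 LoadPlugin logfile
	 <Plugin logfile>
	   LogLevel info
	   # Example path; "stdout" or "stderr" would also work.
	   File "/var/log/collectd.log"
	   Timestamp true
	   PrintSeverity false
	 </Plugin>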

   Plugin "log_logstash"
       The log logstash	plugin behaves like the	logfile	plugin but formats
       messages	as JSON	events for logstash to parse and input.

       LogLevel	debug|info|notice|warning|err
	   Sets	the log-level. If, for example,	set to notice, then all	events
	   with	severity notice, warning, or err will be written to the
	   logfile.

	   Please note that debug is only available if collectd	has been
	   compiled with debugging support.

       File File
	   Sets	the file to write log messages to. The special strings stdout
	   and stderr can be used to write to the standard output and standard
	   error channels, respectively. This, of course, only makes much
	   sense when collectd is running in foreground- or non-daemon-mode.

       Note: There is no need to notify	the daemon after moving	or removing
       the log file (e.	g. when	rotating the logs). The	plugin reopens the
       file for	each line it writes.

   Plugin "lpar"
       The LPAR	plugin reads CPU statistics of Logical Partitions, a
       virtualization technique	for IBM	POWER processors. It takes into
       account CPU time	stolen from or donated to a partition, in addition to
       the usual user, system, I/O statistics.

       The following configuration options are available:

       CpuPoolStats false|true
	   When	enabled, statistics about the processor	pool are read, too.
	   The partition needs to have pool authority in order to be able to
	   acquire this	information.  Defaults to false.

       ReportBySerial false|true
	   If enabled, the serial of the physical machine the partition	is
	   currently running on	is reported as hostname	and the	logical
	   hostname of the machine is reported in the plugin instance.
	   Otherwise, the logical hostname will	be used	(just like other
	   plugins) and	the plugin instance will be empty.  Defaults to	false.

   Plugin "lua"
       This plugin embeds a Lua	interpreter into collectd and provides an
       interface to collectd's plugin system. See collectd-lua(5) for its
       documentation.

   Plugin "mbmon"
       The "mbmon plugin" uses mbmon to	retrieve temperature, voltage, etc.

       By default collectd connects to localhost (127.0.0.1), port 411/tcp.
       The Host and Port options can be used to change these values, see
       below.  "mbmon" has to be running to work correctly. If "mbmon" is not
       running, timeouts may appear which may interfere with other
       statistics.

       "mbmon" must be run with	the -r option ("print TAG and Value format");
       Debian's /etc/init.d/mbmon script already does this; other users will
       need to ensure that this is the case.

       Host Hostname
	   Hostname to connect to. Defaults to 127.0.0.1.

       Port Port
	   TCP-Port to connect to. Defaults to 411.
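
       For reference, a configuration that makes the defaults explicit might
       look like this; host and port are only examples:

	<Plugin mbmon>
	  Host "127.0.0.1"
	  Port "411"
	</Plugin>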

   Plugin "md"
       The "md plugin" collects	information from Linux Software-RAID devices
       (md).

       All reported values are of the type "md_disks". Reported	type instances
       are active, failed (present but not operational), spare (hot stand-by)
       and missing (physically absent) disks.

       Device Device
	   Select md devices based on device name. The device name is the
	   basename of the device, i.e.	the name of the	block device without
	   the leading "/dev/".	 See IgnoreSelected for	more details.

       IgnoreSelected true|false
	   Invert device selection: If set to true, all	md devices except
	   those listed	using Device are collected. If false (the default),
	   only	those listed are collected. If no configuration	is given, the
	   md plugin will collect data from all	md devices.
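
       For reference, a configuration limiting collection to a single device
       might look like this; the device name "md0" is an example only:

	<Plugin md>
	  Device "md0"
	  IgnoreSelected false
	</Plugin>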

   Plugin "memcachec"
       The "memcachec plugin" connects to a memcached server, queries one or
       more given pages	and parses the returned	data according to user
       specification.  The matches used	are the	same as	the matches used in
       the "curl" and "tail" plugins.

       In order	to talk	to the memcached server, this plugin uses the
       libmemcached library. Please note that there is another library with a
       very similar name, libmemcache (notice the missing `d'),	which is not
       applicable.

       Synopsis	of the configuration:

	<Plugin	"memcachec">
	  <Page	"plugin_instance">
	    Server "localhost"
	    Key	"page_key"
	    <Match>
	      Regex "(\\d+) bytes sent"
	      DSType CounterAdd
	      Type "ipt_octets"
	      Instance "type_instance"
	    </Match>
	  </Page>
	</Plugin>

       The configuration options are:

       <Page Name>
	   Each	Page block defines one page to be queried from the memcached
	   server.  The	block requires one string argument which is used as
	   plugin instance.

       Server Address
	   Sets	the server address to connect to when querying the page. Must
	   be inside a Page block.

       Key Key
	   When	connected to the memcached server, asks	for the	page Key.

       <Match>
	   Match blocks define which strings to look for and how matched
	   substrings are interpreted. For a description of match blocks,
	   please see "Plugin tail".

   Plugin "memcached"
       The memcached plugin connects to	a memcached server and queries
       statistics about	cache utilization, memory and bandwidth	used.
       <http://memcached.org/>

	<Plugin	"memcached">
	  <Instance "name">
	    #Host "memcache.example.com"
	    Address "127.0.0.1"
	    Port 11211
	  </Instance>
	</Plugin>

       The plugin configuration	consists of one	or more	Instance blocks	which
       specify one memcached connection	each. Within the Instance blocks, the
       following options are allowed:

       Host Hostname
	   Sets	the host field of dispatched values. Defaults to the global
	   hostname setting.  For backwards compatibility, values are also
	   dispatched with the global hostname when Host is set	to 127.0.0.1
	   or localhost	and Address is not set.

       Address Address
	   Hostname or IP to connect to. For backwards compatibility, defaults
	   to the value	of Host	or 127.0.0.1 if	Host is	unset.

       Port Port
	   TCP port to connect to. Defaults to 11211.

       Socket Path
	   Connect to memcached	using the UNIX domain socket at	Path. If this
	   setting is given, the Address and Port settings are ignored.

   Plugin "mic"
       The mic plugin gathers CPU statistics, memory usage and temperatures
       from Intel's Many Integrated Core (MIC) systems.

       Synopsis:

	<Plugin	mic>
	  ShowCPU true
	  ShowCPUCores true
	  ShowMemory true

	  ShowTemperatures true
	  Temperature vddg
	  Temperature vddq
	  IgnoreSelectedTemperature true

	  ShowPower true
	  Power	total0
	  Power	total1
	  IgnoreSelectedPower true
	</Plugin>

       The following options are valid inside the Plugin mic block:

       ShowCPU true|false
	   If enabled (the default) a sum of the CPU usage across all cores is
	   reported.

       ShowCPUCores true|false
	   If enabled (the default) per-core CPU usage is reported.

       ShowMemory true|false
	   If enabled (the default) the	physical memory	usage of the MIC
	   system is reported.

       ShowTemperatures	true|false
	   If enabled (the default) various temperatures of the	MIC system are
	   reported.

       Temperature Name
	   This	option controls	which temperatures are being reported. Whether
	   matching temperatures are being ignored or only matching
	   temperatures	are reported depends on	the IgnoreSelectedTemperature
	   setting below. By default all temperatures are reported.

       IgnoreSelectedTemperature false|true
	   Controls the	behavior of the	Temperature setting above. If set to
	   false (the default) only temperatures matching a Temperature	option
	   are reported	or, if no Temperature option is	specified, all
	   temperatures	are reported. If set to	true, matching temperatures
	   are ignored and all other temperatures are reported.

	   Known temperature names are:

	   die Die of the CPU

	   devmem
	       Device Memory

	   fin Fan In

	   fout
	       Fan Out

	   vccp
	       Voltage ccp

	   vddg
	       Voltage ddg

	   vddq
	       Voltage ddq

       ShowPower true|false
	   If enabled (the default) various power readings of the MIC system
	   are reported.

       Power Name
	   This	option controls	which power readings are being reported.
	   Whether matching power readings are being ignored or	only matching
	   power readings are reported depends on the IgnoreSelectedPower
	   setting below. By default all power readings	are reported.

       IgnoreSelectedPower false|true
	   Controls the	behavior of the	Power setting above. If	set to false
	   (the	default) only power readings matching a	Power option are
	   reported or,	if no Power option is specified, all power readings
	   are reported. If set	to true, matching power	readings are ignored
	   and all other power readings	are reported.

	   Known power names are:

	   total0
	       Total power utilization averaged	over Time Window 0 (uWatts).

	   total1
	       Total power utilization averaged over Time Window 1 (uWatts).

	   inst
	       Instantaneous power (uWatts).

	   imax
	       Max instantaneous power (uWatts).

	   pcie
	       PCI-E connector power (uWatts).

	   c2x3
	       2x3 connector power (uWatts).

	   c2x4
	       2x4 connector power (uWatts).

	   vccp
	       Core rail (uVolts).

	   vddg
	       Uncore rail (uVolts).

	   vddq
	       Memory subsystem	rail (uVolts).

   Plugin "memory"
       The memory plugin provides the following	configuration options:

       ValuesAbsolute true|false
	   Enables or disables reporting of physical memory usage in absolute
	   numbers, i.e. bytes.	Defaults to true.

       ValuesPercentage	false|true
	   Enables or disables reporting of physical memory usage in
	   percentages,	e.g.  percent of physical memory used. Defaults	to
	   false.

	   This	is useful for deploying	collectd in a heterogeneous
	   environment in which	the sizes of physical memory vary.
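
       For reference, a configuration enabling both reporting modes might look
       like this:

	<Plugin memory>
	  ValuesAbsolute true
	  ValuesPercentage true
	</Plugin>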

   Plugin "modbus"
       The modbus plugin connects to a Modbus "slave" via Modbus/TCP or
       Modbus/RTU and reads register values. It	supports reading single
       registers (unsigned 16 bit values), large integer values	(unsigned
       32 bit values) and floating point values	(two registers interpreted as
       IEEE floats in big endian notation).

       Synopsis:

	<Data "voltage-input-1">
	  RegisterBase 0
	  RegisterType float
	  RegisterCmd ReadHolding
	  Type voltage
	  Instance "input-1"
	</Data>

	<Data "voltage-input-2">
	  RegisterBase 2
	  RegisterType float
	  RegisterCmd ReadHolding
	  Type voltage
	  Instance "input-2"
	</Data>

	<Data "supply-temperature-1">
	  RegisterBase 0
	  RegisterType Int16
	  RegisterCmd ReadHolding
	  Type temperature
	  Instance "temp-1"
	</Data>

	<Host "modbus.example.com">
	  Address "192.168.0.42"
	  Port	  "502"
	  Interval 60

	  <Slave 1>
	    Instance "power-supply"
	    Collect  "voltage-input-1"
	    Collect  "voltage-input-2"
	  </Slave>
	</Host>

	<Host "localhost">
	  Device "/dev/ttyUSB0"
	  Baudrate 38400
	  Interval 20

	  <Slave 1>
	    Instance "temperature"
	    Collect  "supply-temperature-1"
	  </Slave>
	</Host>

       <Data Name> blocks
	   Data	blocks define a	mapping	between	register numbers and the
	   "types" used	by collectd.

	   Within <Data	/> blocks, the following options are allowed:

	   RegisterBase	Number
	       Configures the base register to read from the device. If	the
	       option RegisterType has been set	to Uint32 or Float, this and
	       the next	register will be read (the register number is
	       increased by one).

	   RegisterType	Int16|Int32|Uint16|Uint32|Float
	       Specifies what kind of data is returned by the device. If the
	       type is Int32, Uint32 or	Float, two 16 bit registers will be
	       read and	the data is combined into one value. Defaults to
	       Uint16.

	   RegisterCmd ReadHolding|ReadInput
	       Specifies the register type to be collected from the device.
	       Works only with libmodbus 2.9.2 or higher. Defaults to
	       ReadHolding.

	   Type	Type
	       Specifies the "type" (data set) to use when dispatching the
	       value to	collectd. Currently, only data sets with exactly one
	       data source are supported.

	   Instance Instance
	       Sets the	type instance to use when dispatching the value	to
	       collectd. If unset, an empty string (no type instance) is used.

       <Host Name> blocks
	   Host	blocks are used	to specify to which hosts to connect and what
	   data	to read	from their "slaves". The string	argument Name is used
	   as hostname when dispatching	the values to collectd.

	   Within <Host	/> blocks, the following options are allowed:

	   Address Hostname
	       For Modbus/TCP, specifies the node name (the actual network
	       address)	used to	connect	to the host. This may be an IP address
	       or a hostname. Please note that the used	libmodbus library only
	       supports	IPv4 at	the moment.

	   Port	Service
	       For Modbus/TCP, specifies the port used to connect to the host.
	       The port	can either be given as a number	or as a	service	name.
	       Please note that	the Service argument must be a string, even if
	       ports are given in their	numerical form.	Defaults to "502".

	   Device Devicenode
	       For Modbus/RTU, specifies the path to the serial	device being
	       used.

	   Baudrate Baudrate
	       For Modbus/RTU, specifies the baud rate of the serial device.
	       Note, connections currently support only	8/N/1.

	   Interval Interval
	       Sets the	interval (in seconds) in which the values will be
	       collected from this host. By default the	global Interval
	       setting will be used.

	   <Slave ID>
	       Over each connection, multiple Modbus devices may be reached.
	       The slave ID is used to specify which device should be
	       addressed. For each device you want to query, one Slave block
	       must be given.

	       Within <Slave />	blocks,	the following options are allowed:

	       Instance	Instance
		   Specify the plugin instance to use when dispatching the
		   values to collectd.	By default "slave_ID" is used.

	       Collect DataName
		   Specifies which data	to retrieve from the device. DataName
		   must	be the same string as the Name argument	passed to a
		   Data	block. You can specify this option multiple times to
		   collect more	than one value from a slave. At	least one
		   Collect option is mandatory.

   Plugin "mqtt"
       The MQTT	plugin can send	metrics	to MQTT	(Publish blocks) and receive
       values from MQTT	(Subscribe blocks).

       Synopsis:

	<Plugin	mqtt>
	  <Publish "name">
	    Host "mqtt.example.com"
	    Prefix "collectd"
	  </Publish>
	  <Subscribe "name">
	    Host "mqtt.example.com"
	    Topic "collectd/#"
	  </Subscribe>
	</Plugin>

       The plugin's configuration is in	Publish	and/or Subscribe blocks,
       configuring the sending and receiving direction respectively. The
       plugin will register a write callback named "mqtt/name" where name is
       the string argument given to the	Publish	block. Both types of blocks
       share many but not all of the following options.	If an option is	valid
       in only one of the blocks, it will be mentioned explicitly.

       Options:

       Host Hostname
	   Hostname of the MQTT	broker to connect to.

       Port Service
	   Port	number or service name of the MQTT broker to connect to.

       User UserName
	   Username used when authenticating to	the MQTT broker.

       Password	Password
	   Password used when authenticating to	the MQTT broker.

       ClientId	ClientId
	   MQTT	client ID to use. Defaults to the hostname used	by collectd.

       QoS [0-2]
	   Sets	the Quality of Service,	with the values	0, 1 and 2 meaning:

	   0   At most once

	   1   At least	once

	   2   Exactly once

	   In Publish blocks, this option determines the QoS flag set on
	   outgoing messages and defaults to 0.	In Subscribe blocks,
	   determines the maximum QoS setting the client is going to accept
	   and defaults	to 2. If the QoS flag on a message is larger than the
	   maximum accepted QoS	of a subscriber, the message's QoS will	be
	   downgraded.

       Prefix Prefix (Publish only)
	   This plugin will use one topic per value list, which will look like
	   a path. Prefix is used as the first path element and defaults to
	   collectd.

	   An example topic name would be:

	    collectd/cpu-0/cpu-user

       Retain false|true (Publish only)
	   Controls whether the	MQTT broker will retain	(keep a	copy of) the
	   last	message	sent to	each topic and deliver it to new subscribers.
	   Defaults to false.

       StoreRates true|false (Publish only)
	   Controls whether "DERIVE" and "COUNTER" metrics are converted to a
	   rate	before sending.	Defaults to true.

       CleanSession true|false (Subscribe only)
	   Controls whether the MQTT broker "cleans up" the session after the
	   subscriber disconnects, or whether it maintains the subscriber's
	   subscriptions and all messages that arrive while the subscriber is
	   disconnected. Defaults to true.

       Topic TopicName (Subscribe only)
	   Configures the topic(s) to subscribe	to. You	can use	the single
	   level "+" and multi level "#" wildcards. Defaults to	collectd/#,
	   i.e.	all topics beneath the collectd	branch.

       CACert file
	   Path	to the PEM-encoded CA certificate file.	Setting	this option
	   enables TLS communication with the MQTT broker, and as such,	Port
	   should be the TLS-enabled port of the MQTT broker.  A valid TLS
	   configuration requires CACert, CertificateFile and
	   CertificateKeyFile.

       CertificateFile file
	   Path	to the PEM-encoded certificate file to use as client
	   certificate when connecting to the MQTT broker.  A valid TLS
	   configuration requires CACert, CertificateFile and
	   CertificateKeyFile.

       CertificateKeyFile file
	   Path	to the unencrypted PEM-encoded key file	corresponding to
	   CertificateFile.  A valid TLS configuration requires	CACert,
	   CertificateFile and CertificateKeyFile.

       TLSProtocol protocol
	   If configured, this specifies the string protocol version (e.g.
	   "tlsv1", "tlsv1.2") to use for the TLS connection to	the broker. If
	   not set a default version is	used which depends on the version of
	   OpenSSL the Mosquitto library was linked against.

       CipherSuite ciphersuite
	   A string describing the ciphers available for use. See ciphers(1)
	   and the "openssl ciphers" utility for more information. If unset,
	   the default ciphers will be used.

   Plugin "mysql"
       The "mysql plugin" requires mysqlclient to be installed.	It connects to
       one or more databases when started and keeps the	connection up as long
       as possible. When the connection	is interrupted for whatever reason it
       will try	to re-connect. The plugin will complain	loudly in case
       anything	goes wrong.

       This plugin issues the MySQL "SHOW STATUS" / "SHOW GLOBAL STATUS"
       command and collects information	about MySQL network traffic, executed
       statements, requests, the query cache and threads by evaluating the
       "Bytes_{received,sent}",	"Com_*", "Handler_*", "Qcache_*" and
       "Threads_*" return values. Please refer to the MySQL reference manual,
       5.1.6. Server Status Variables for an explanation of these values.

       Optionally, master and slave statistics may be collected	in a MySQL
       replication setup. In that case, information about the synchronization
       state of the nodes is collected by evaluating the "Position" return
       value of	the "SHOW MASTER STATUS" command and the
       "Seconds_Behind_Master",	"Read_Master_Log_Pos" and
       "Exec_Master_Log_Pos" return values of the "SHOW	SLAVE STATUS" command.
       See the MySQL reference manual, 12.5.5.21 SHOW MASTER STATUS Syntax and
       12.5.5.31 SHOW SLAVE STATUS Syntax for details.

       Synopsis:

	 <Plugin mysql>
	   <Database foo>
	     Host "hostname"
	     User "username"
	     Password "password"
	     Port "3306"
	     MasterStats true
	     ConnectTimeout 10
	     SSLKey "/path/to/key.pem"
	     SSLCert "/path/to/cert.pem"
	     SSLCA "/path/to/ca.pem"
	     SSLCAPath "/path/to/cas/"
	     SSLCipher "DHE-RSA-AES256-SHA"
	   </Database>

	   <Database bar>
	     Alias "squeeze"
	     Host "localhost"
	     Socket "/var/run/mysql/mysqld.sock"
	     SlaveStats	true
	     SlaveNotifications	true
	   </Database>

	  <Database galera>
	     Alias "galera"
	     Host "localhost"
	     Socket "/var/run/mysql/mysqld.sock"
	     WsrepStats	true
	  </Database>
	 </Plugin>

       A Database block	defines	one connection to a MySQL database. It accepts
       a single	argument which specifies the name of the database. None	of the
       other options are required. MySQL will use default values as documented
       in the "mysql_real_connect()" and "mysql_ssl_set()" sections in the
       MySQL reference manual.

       Alias Alias
	   Alias to use	as sender instead of hostname when reporting. This may
	   be useful when having cryptic hostnames.

       Host Hostname
	   Hostname of the database server. Defaults to	localhost.

       User Username
	   Username to use when connecting to the database. The user does not
	   have to be granted any privileges (which is synonymous with
	   granting the "USAGE" privilege), unless you want to collect
	   replication statistics (see MasterStats and SlaveStats below). In
	   this case, the user needs the "REPLICATION CLIENT" (or "SUPER")
	   privileges. Otherwise, any existing MySQL user will do.

       Password	Password
	   Password needed to log into the database.

       Database	Database
	   Select this database. Defaults to no	database which is a perfectly
	   reasonable option for what this plugin does.

       Port Port
	   TCP-port to connect to. The port must be specified in its numeric
	   form, but it	must be	passed as a string nonetheless.	For example:

	     Port "3306"

	   If Host is set to localhost (the default), this setting has no
	   effect.  See	the documentation for the "mysql_real_connect"
	   function for	details.

       Socket Socket
	   Specifies the path to the UNIX domain socket	of the MySQL server.
	   This option only has an effect if Host is set to localhost (the
	   default). Otherwise, use the Port option above. See the
	   documentation for the "mysql_real_connect" function for details.

       InnodbStats true|false
	   If enabled, metrics about the InnoDB	storage	engine are collected.
	   Disabled by default.

       MasterStats true|false
       SlaveStats true|false
	   Enable the collection of master / slave statistics in a replication
	   setup. In order to be able to get access to these statistics, the
	   user	needs special privileges. See the User documentation above.
	   Defaults to false.

       SlaveNotifications true|false
	   If enabled, the plugin sends	a notification if the replication
	   slave I/O and / or SQL threads are not running. Defaults to false.

       WsrepStats true|false
	   Enable the collection of wsrep plugin statistics, used in Master-
	   Master replication setups like MySQL Galera/Percona XtraDB Cluster.
	   The user only needs privileges to execute 'SHOW GLOBAL STATUS'.

       ConnectTimeout Seconds
	   Sets	the connect timeout for	the MySQL client.

       SSLKey Path
	   If provided,	the X509 key in	PEM format.

       SSLCert Path
	   If provided,	the X509 cert in PEM format.

       SSLCA Path
	   If provided,	the CA file in PEM format (check OpenSSL docs).

       SSLCAPath Path
	   If provided,	the CA directory (check	OpenSSL	docs).

       SSLCipher String
	   If provided,	the SSL	cipher to use.

   Plugin "netapp"
       The netapp plugin can collect various performance and capacity
       information from	a NetApp filer using the NetApp	API.

       Please note that	NetApp has a wide line of products and a lot of
       different software versions for each of these products. This plugin was
       developed for a NetApp FAS3040 running OnTap 7.2.3P8 and	tested on
       FAS2050 7.3.1.1L1, FAS3140 7.2.5.1 and FAS3020 7.2.4P9. It should work
       for most	combinations of	model and software version but it is very hard
       to test this. If you have used this plugin with other models and/or
       software versions, feel free to send us a mail to tell us about the
       results, even if it's just a short "It works".

       To collect these	data collectd will log in to the NetApp	via HTTP(S)
       and HTTP	basic authentication.

       Do not use a regular user for this! Create a special collectd user with
       just the	minimum	of capabilities	needed.	The user only needs the
       "login-http-admin" capability as	well as	a few more depending on	which
       data will be collected.	Required capabilities are documented below.

       Synopsis

	<Plugin	"netapp">
	  <Host	"netapp1.example.com">
	   Protocol	 "https"
	   Address	 "10.0.0.1"
	   Port		 443
	   User		 "username"
	   Password	 "aef4Aebe"
	   Interval	 30

	   <WAFL>
	     Interval 30
	     GetNameCache   true
	     GetDirCache    true
	     GetBufferCache true
	     GetInodeCache  true
	   </WAFL>

	   <Disks>
	     Interval 30
	     GetBusy true
	   </Disks>

	   <VolumePerf>
	     Interval 30
	     GetIO	"volume0"
	     IgnoreSelectedIO	   false
	     GetOps	"volume0"
	     IgnoreSelectedOps	   false
	     GetLatency	"volume0"
	     IgnoreSelectedLatency false
	   </VolumePerf>

	   <VolumeUsage>
	     Interval 30
	     GetCapacity "vol0"
	     GetCapacity "vol1"
	     IgnoreSelectedCapacity false
	     GetSnapshot "vol1"
	     GetSnapshot "vol3"
	     IgnoreSelectedSnapshot false
	   </VolumeUsage>

	   <Quota>
	     Interval 60
	   </Quota>

	   <Snapvault>
	     Interval 30
	   </Snapvault>

	   <System>
	     Interval 30
	     GetCPULoad	    true
	     GetInterfaces  true
	     GetDiskOps	    true
	     GetDiskIO	    true
	   </System>

	   <VFiler vfilerA>
	     Interval 60

	     SnapVault true
	     # ...
	   </VFiler>
	  </Host>
	</Plugin>

       The netapp plugin accepts the following configuration options:

       Host Name
	   A host block	defines	one NetApp filer. It will appear in collectd
	   with	the name you specify here which	does not have to be its	real
	   name	nor its	hostname (see the Address option below).

       VFiler Name
	   A VFiler block may only be used inside a host block.	It accepts all
	   the same options as the Host	block (except for cascaded VFiler
	   blocks) and will execute all	NetApp API commands in the context of
	   the specified VFiler(R). It will appear in collectd with the	name
	   you specify here which does not have	to be its real name. The
	   VFiler name may be specified	using the VFilerName option. If	this
	   is not specified, it	will default to	the name you specify here.

	   The VFiler block inherits all connection related settings from the
	   surrounding Host block (which appear	before the VFiler block) but
	   they	may be overwritten inside the VFiler block.

	   This	feature	is useful, for example,	when using a VFiler as
	   SnapVault target (supported since OnTap 8.1). In that case, the
	   SnapVault statistics	are not	available in the host filer (vfiler0)
	   but only in the respective VFiler context.

       Protocol https|http
	   The protocol	collectd will use to query this	host.

	   Optional

	   Type: string

	   Default: https

	   Valid options: http,	https

       Address Address
	   The hostname	or IP address of the host.

	   Optional

	   Type: string

	   Default: The	"host" block's name.

       Port Port
	   The TCP port	to connect to on the host.

	   Optional

	   Type: integer

	   Default: 80 for protocol "http", 443	for protocol "https"

       User User
       Password	Password
	   The username	and password to	use to login to	the NetApp.

	   Mandatory

	   Type: string

       VFilerName Name
	   The name of the VFiler in which context to execute API commands. If
	   not specified, the name provided to the VFiler block	will be	used
	   instead.

	   Optional

	   Type: string

	   Default: name of the	VFiler block

	   Note: This option may only be used inside VFiler blocks.

       Interval	Interval
	   Sets the host specific collection interval, in seconds. The
	   Interval options inside the individual blocks (see below) default
	   to this setting.

       The following options decide what kind of data will be collected. You
       can either use them as a block and fine-tune various parameters inside
       this block, use them as a single statement to just accept all default
       values, or omit them to not collect any data.

       The following options are valid inside all blocks:

       Interval	Seconds
	   Collect the respective statistics every Seconds seconds. Defaults
	   to the host specific	setting.

       The System block

       This will collect various performance data about	the whole system.

       Note: To	get this data the collectd user	needs the "api-perf-object-
       get-instances" capability.

       Interval	Seconds
	   Collect disk	statistics every Seconds seconds.

       GetCPULoad true|false
	   If you set this option to true the current CPU usage	will be	read.
	   This	will be	the average usage between all CPUs in your NetApp
	   without any information about individual CPUs.

	   Note: These are the same values that	the NetApp CLI command
	   "sysstat" returns in	the "CPU" field.

	   Optional

	   Type: boolean

	   Default: true

	   Result: Two value lists of type "cpu", and type instances "idle"
	   and "system".

       GetInterfaces true|false
	   If you set this option to true the current traffic of the network
	   interfaces will be read. This will be the total traffic over	all
	   interfaces of your NetApp without any information about individual
	   interfaces.

	   Note: These are the same values that the NetApp CLI command
	   "sysstat" returns in the "Net kB/s" field.

	   Or is it?

	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "if_octets".

       GetDiskIO true|false
	   If you set this option to true the current IO throughput will be
	   read. This will be the total	IO of your NetApp without any
	   information about individual	disks, volumes or aggregates.

	   Note: These are the same values that the NetApp CLI command
	   "sysstat" returns in the "Disk kB/s" field.

	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "disk_octets".

       GetDiskOps true|false
	   If you set this option to true the current number of	HTTP, NFS,
	   CIFS, FCP, iSCSI, etc. operations will be read. This	will be	the
	   total number	of operations on your NetApp without any information
	   about individual volumes or aggregates.

	   Note: These are the same values that	the NetApp CLI command
	   "sysstat" returns in	the "NFS", "CIFS", "HTTP", "FCP" and "iSCSI"
	   fields.

	   Optional

	   Type: boolean

	   Default: true

	   Result: A variable number of	value lists of type
	   "disk_ops_complex". Each type of operation will result in one value
	   list	with the name of the operation as type instance.

       The WAFL	block

       This will collect various performance data about	the WAFL file system.
       At the moment this just means cache performance.

       Note: To	get this data the collectd user	needs the "api-perf-object-
       get-instances" capability.

       Note: The interface to get these	values is classified as	"Diagnostics"
       by NetApp. This means that it is	not guaranteed to be stable even
       between minor releases.

       Interval	Seconds
	   Collect disk	statistics every Seconds seconds.

       GetNameCache true|false
	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "cache_ratio"	and type instance
	   "name_cache_hit".

       GetDirCache true|false
	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "cache_ratio"	and type instance
	   "find_dir_hit".

       GetInodeCache true|false
	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "cache_ratio"	and type instance
	   "inode_cache_hit".

       GetBufferCache true|false
	   Note: This is the same value	that the NetApp	CLI command "sysstat"
	   returns in the "Cache hit" field.

	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "cache_ratio"	and type instance
	   "buf_hash_hit".

       The Disks block

       This will collect performance data about	the individual disks in	the
       NetApp.

       Note: To	get this data the collectd user	needs the "api-perf-object-
       get-instances" capability.

       Interval	Seconds
	   Collect disk	statistics every Seconds seconds.

       GetBusy true|false
	   If you set this option to true the busy time	of all disks will be
	   calculated and the value of the busiest disk	in the system will be
	   written.

	   Note: These are the same values that the NetApp CLI command
	   "sysstat" returns in the "Disk util" field. Probably.

	   Optional

	   Type: boolean

	   Default: true

	   Result: One value list of type "percent" and	type instance
	   "disk_busy".

       The VolumePerf block

       This will collect various performance data about	the individual
       volumes.

       You can select which data to collect about which	volume using the
       following options. They follow the standard ignorelist semantics.

       Note: To	get this data the collectd user	needs the api-perf-object-get-
       instances capability.

       Interval	Seconds
	   Collect volume performance data every Seconds seconds.

       GetIO Volume
       GetOps Volume
       GetLatency Volume
	   Select the given volume for IO, operations or latency statistics
	   collection.	The argument is	the name of the	volume without the
	   "/vol/" prefix.

	   Since the standard ignorelist functionality is used here, you can
	   use a string	starting and ending with a slash to specify regular
	   expression matching:	To match the volumes "vol0", "vol2" and
	   "vol7", you can use this regular expression:

	     GetIO "/^vol[027]$/"

	   If no regular expression is specified, an exact match is required.
	   Both, regular and exact matching are	case sensitive.

	   If no volume	was specified at all for either	of the three options,
	   that	data will be collected for all available volumes.

       IgnoreSelectedIO	true|false
       IgnoreSelectedOps true|false
       IgnoreSelectedLatency true|false
	   When	set to true, the volumes selected for IO, operations or
	   latency statistics collection will be ignored and the data will be
	   collected for all other volumes.

	   When	set to false, data will	only be	collected for the specified
	   volumes and all other volumes will be ignored.

	   If no volumes have been specified with the above Get* options, all
	   volumes will	be collected regardless	of the IgnoreSelected* option.

	   Defaults to false

       The VolumeUsage block

       This will collect capacity data about the individual volumes.

       Note: To	get this data the collectd user	needs the api-volume-list-info
       capability.

       Interval	Seconds
	   Collect volume usage	statistics every Seconds seconds.

       GetCapacity VolumeName
	   The current capacity	of the volume will be collected. This will
	   result in two to four value lists, depending	on the configuration
	   of the volume. All data sources are of type "df_complex" with the
	   name	of the volume as plugin_instance.

	   There will be type_instances	"used" and "free" for the number of
	   used	and available bytes on the volume.  If the volume has some
	   space reserved for snapshots, a type_instance "snap_reserved" will
	   be available.  If the volume	has SIS	enabled, a type_instance
	   "sis_saved" will be available. This is the number of	bytes saved by
	   the SIS feature.

	   Note: The current NetApp API	has a bug that results in this value
	   being reported as a 32 bit number. This plugin tries	to guess the
	   correct number which	works most of the time.	 If you	see strange
	   values here,	bug NetApp support to fix this.

	   Repeat this option to specify multiple volumes.

       IgnoreSelectedCapacity true|false
	   Specify whether to collect only the volumes selected	by the
	   GetCapacity option or to ignore those volumes.
	   IgnoreSelectedCapacity defaults to false. However, if no
	   GetCapacity option is specified at all, all capacities will be
	   selected anyway.

       GetSnapshot VolumeName
	   Select volumes from which to	collect	snapshot information.

	   Usually, the	space used for snapshots is included in	the space
	   reported as "used". If snapshot information is collected as well,
	   the space used for snapshots	is subtracted from the used space.

	   To make things even more interesting, it is possible	to reserve
	   space to be used for	snapshots. If the space	required for snapshots
	   is less than	that reserved space, there is "reserved	free" and
	   "reserved used" space in addition to	"free" and "used". If the
	   space required for snapshots	exceeds	the reserved space, that part
	   allocated in	the normal space is subtracted from the	"used" space
	   again.

	   Repeat this option to specify multiple volumes.

       IgnoreSelectedSnapshot
	   Specify whether to collect only the volumes selected	by the
	   GetSnapshot option or to ignore those volumes.
	   IgnoreSelectedSnapshot defaults to false. However, if no
	   GetSnapshot option is specified at all, all capacities will be
	   selected anyway.

       The Quota block

       This will collect (tree)	quota statistics (used disk space and number
       of used files). This mechanism is useful	to get usage information for
       single qtrees.  In case the quotas are not used for any other purpose,
       an entry	similar	to the following in "/etc/quotas" would	be sufficient:

	 /vol/volA/some_qtree tree - - - - -

       After adding the	entry, issue "quota on -w volA"	on the NetApp filer.

       Interval	Seconds
	   Collect quota statistics every Seconds seconds.

       The SnapVault block

       This will collect statistics about the time and traffic of SnapVault(R)
       transfers.

       Interval	Seconds
	   Collect SnapVault(R)	statistics every Seconds seconds.

   Plugin "netlink"
       The "netlink" plugin uses a netlink socket to query the Linux kernel
       about statistics	of various interface and routing aspects.

       Interface Interface
       VerboseInterface	Interface
	   Instruct the	plugin to collect interface statistics.	This is
	   basically the same as the statistics	provided by the	"interface"
	   plugin (see above) but potentially much more	detailed.

	   When configured with Interface, only the basic statistics will be
	   collected, namely octets, packets, and errors. These statistics are
	   collected by the "interface" plugin, too, so using both at the same
	   time is of no benefit.

	   When configured with VerboseInterface, all counters except the
	   basic ones will be collected, so that no data needs to be collected
	   twice if you use the "interface" plugin. This includes dropped
	   packets, received
	   multicast packets, collisions and a whole zoo of differentiated RX
	   and TX errors. You can try the following command to get an idea of
	   what	awaits you:

	     ip	-s -s link list

	   If Interface	is All,	all interfaces will be selected.

       QDisc Interface [QDisc]
       Class Interface [Class]
       Filter Interface	[Filter]
	   Collect the octets and packets that pass a certain qdisc, class or
	   filter.

	   QDiscs and classes are identified by	their type and handle (or
	   classid).  Filters don't necessarily	have a handle, therefore the
	   parent's handle is used.  The notation used in collectd differs
	   from	that used in tc(1) in that it doesn't skip the major or	minor
	   number if it's zero and doesn't print special ids by	their name.
	   So, for example, a qdisc may	be identified by "pfifo_fast-1:0" even
	   though the minor number of all qdiscs is zero and thus not
	   displayed by	tc(1).

	   If QDisc, Class, or Filter is given without the second argument,
	   i. e. without an identifier, all qdiscs, classes, or filters that
	   are associated with that interface will be collected.

	   Since a filter itself doesn't necessarily have a handle, the
	   parent's handle is used. This may lead to problems when more	than
	   one filter is attached to a qdisc or	class. This isn't nice,	but we
	   don't know how this could be done any better. If you have an idea,
	   please don't	hesitate to tell us.

	   As with the Interface option	you can	specify	All as the interface,
	   meaning all interfaces.

	   Here	are some examples to help you understand the above text	more
	   easily:

	     <Plugin netlink>
	       VerboseInterface	"All"
	       QDisc "eth0" "pfifo_fast-1:0"
	       QDisc "ppp0"
	       Class "ppp0" "htb-1:10"
	       Filter "ppp0" "u32-1:0"
	     </Plugin>

       IgnoreSelected
	   The behavior	is the same as with all	other similar plugins: If
	   nothing is selected at all, everything is collected.	If some	things
	   are selected	using the options described above, only	these
	   statistics are collected. If	you set	IgnoreSelected to true,	this
	   behavior is inverted, i. e. the specified statistics	will not be
	   collected.

   Plugin "network"
       The Network plugin sends	data to	a remote instance of collectd,
       receives	data from a remote instance, or	both at	the same time. Data
       which has been received from the	network	is usually not transmitted
       again, but this can be activated, see the Forward option	below.

       The default IPv6	multicast group	is "ff18::efc0:4a42". The default IPv4
       multicast group is 239.192.74.66. The default UDP port is 25826.

       Both Server and Listen can be used as a single option or as a block.
       When used as a block, the given options are valid for this socket only.
       The following example will export the metrics twice: once to an
       "internal" server (without encryption and signing) and once to an
       external server (with cryptographic signature):

	<Plugin	"network">
	  # Export to an internal server
	  # (demonstrates usage	without	additional options)
	  Server "collectd.internal.tld"

	  # Export to an external server
	  # (demonstrates usage	with signature options)
	  <Server "collectd.external.tld">
	    SecurityLevel "sign"
	    Username "myhostname"
	    Password "ohl0eQue"
	  </Server>
	</Plugin>

       <Server Host [Port]>
	   The Server statement/block sets the server to send datagrams	to.
	   The statement may occur multiple times to send each datagram	to
	   multiple destinations.

	   The argument	Host may be a hostname,	an IPv4	address	or an IPv6
	   address. The	optional second	argument specifies a port number or a
	   service name. If not	given, the default, 25826, is used.

	   The following options are recognized	within Server blocks:

	   SecurityLevel Encrypt|Sign|None
	       Set the security	you require for	network	communication. When
	       the security level has been set to Encrypt, data	sent over the
	       network will be encrypted using AES-256.	The integrity of
	       encrypted packets is ensured using SHA-1. When set to Sign,
	       transmitted data	is signed using	the HMAC-SHA-256 message
	       authentication code. When set to	None, data is sent without any
	       security.

	       This feature is only available if the network plugin was	linked
	       with libgcrypt.

	   Username Username
	       Sets the	username to transmit. This is used by the server to
	       lookup the password. See	AuthFile below.	All security levels
	       except None require this	setting.

	       This feature is only available if the network plugin was	linked
	       with libgcrypt.

	   Password Password
	       Sets a password (shared secret) for this	socket.	All security
	       levels except None require this setting.

	       This feature is only available if the network plugin was	linked
	       with libgcrypt.

	   Interface Interface name
	       Set the outgoing	interface for IP packets. This applies at
	       least to	IPv6 packets and if possible to	IPv4. If this option
	       is not applicable, undefined or a non-existent interface	name
	       is specified, the default behavior is to	let the	kernel choose
	       the appropriate interface. Be warned that the manual selection
	       of an interface for unicast traffic is only necessary in	rare
	       cases.

	   ResolveInterval Seconds
	       Sets the	interval at which to re-resolve	the DNS	for the	Host.
	       This is useful to force a regular DNS lookup to support a high
	       availability setup. If not specified, re-resolves are never
	       attempted.

       <Listen Host [Port]>
	   The Listen statement	sets the interfaces to bind to.	When multiple
	   statements are found	the daemon will	bind to	multiple interfaces.

	   The argument	Host may be a hostname,	an IPv4	address	or an IPv6
	   address. If the argument is a multicast address the daemon will
	   join	that multicast group.  The optional second argument specifies
	   a port number or a service name. If not given, the default, 25826,
	   is used.

	   The following options are recognized	within "<Listen>" blocks:

	   SecurityLevel Encrypt|Sign|None
	       Set the security	you require for	network	communication. When
	       the security level has been set to Encrypt, only	encrypted data
	       will be accepted. The integrity of encrypted packets is ensured
	       using SHA-1. When set to	Sign, only signed and encrypted	data
	       is accepted. When set to	None, all data will be accepted. If an
	       AuthFile	option was given (see below), encrypted	data is
	       decrypted if possible.

	       This feature is only available if the network plugin was	linked
	       with libgcrypt.

	   AuthFile Filename
	       Sets a file in which usernames are mapped to passwords. These
	       passwords are used to verify signatures and to decrypt
	       encrypted network packets. If SecurityLevel is set to None,
	       this is optional. If given, signed data is verified and
	       encrypted packets are decrypted.	Otherwise, signed data is
	       accepted	without	checking the signature and encrypted data
	       cannot be decrypted.  For the other security levels this	option
	       is mandatory.

	       The file	format is very simple: Each line consists of a
	       username	followed by a colon and	any number of spaces followed
	       by the password.	To demonstrate,	an example file	could look
	       like this:

		 user0:	foo
		 user1:	bar

	       Each time a packet is received, the modification	time of	the
	       file is checked using stat(2). If the file has been changed,
	       the contents is re-read.	While the file is being	read, it is
	       locked using fcntl(2).

	   Interface Interface name
	       Set the incoming	interface for IP packets explicitly. This
	       applies at least	to IPv6	packets	and if possible	to IPv4. If
	       this option is not applicable, undefined	or a non-existent
	       interface name is specified, the	default	behavior is, to	let
	       the kernel choose the appropriate interface. Thus incoming
	       traffic gets only accepted, if it arrives on the	given
	       interface.

       TimeToLive 1-255
	   Set the time-to-live	of sent	packets. This applies to all, unicast
	   and multicast, and IPv4 and IPv6 packets. The default is to not
	   change this value.  That means that multicast packets will be sent
	   with	a TTL of 1 (one) on most operating systems.

       MaxPacketSize 1024-65535
	   Set the maximum size	for datagrams received over the	network.
	   Packets larger than this will be truncated. Defaults	to 1452	bytes,
	   which is the	maximum	payload	size that can be transmitted in	one
	   Ethernet frame using	IPv6 / UDP.

	   On the server side, this limit should be set	to the largest value
	   used	on any client. Likewise, the value on the client must not be
	   larger than the value on the	server,	or data	will be	lost.

	   Compatibility: Versions prior to version 4.8	used a fixed sized
	   buffer of 1024 bytes. Versions 4.8, 4.9 and 4.10 used a default
	   value of 1024 bytes to avoid	problems when sending data to an older
	   server.

       Forward true|false
	   If set to true, write packets that were received via	the network
	   plugin to the sending sockets. This should only be activated	when
	   the Listen- and Server-statements differ. Otherwise, packets may be
	   sent multiple times to the same multicast group. While this results
	   in more network traffic than necessary it's not a huge problem
	   since the plugin has	a duplicate detection, so the values will not
	   loop.

       ReportStats true|false
	   The network plugin can not only receive and send statistics, it can
	   also create statistics about itself. Collected data includes the
	   number of received and sent octets and packets, the length of the
	   receive queue and the number	of values handled. When	set to true,
	   the Network plugin will make	these statistics available. Defaults
	   to false.

   Plugin "nginx"
       This plugin collects the	number of connections and requests handled by
       the "nginx daemon" (speak: engine X), a HTTP and	mail server/proxy. It
       queries the page	provided by the	"ngx_http_stub_status_module" module,
       which isn't compiled by default.	Please refer to
       <http://wiki.codemongers.com/NginxStubStatusModule> for more
       information on how to compile and configure nginx and this module.

       The following options are accepted by the "nginx	plugin":

       URL http://host/nginx_status
	   Sets	the URL	of the "ngx_http_stub_status_module" output.

       User Username
	   Optional user name needed for authentication.

       Password	Password
	   Optional password needed for	authentication.

       VerifyPeer true|false
	   Enable or disable peer SSL certificate verification.	See
	   <http://curl.haxx.se/docs/sslcerts.html> for	details. Enabled by
	   default.

       VerifyHost true|false
	   Enable or disable peer host name verification. If enabled, the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity check fails, the connection is
	   aborted. Obviously, this check only works when connecting to an
	   SSL-enabled server. Enabled by default.

       CACert File
	   File	that holds one or more SSL certificates. If you	want to	use
	   HTTPS you will possibly need	this option. What CA certificates come
	   bundled with	"libcurl" and are checked by default depends on	the
	   distribution	you use.

       Timeout Milliseconds
	   The Timeout option sets the overall timeout for HTTP	requests to
	   URL,	in milliseconds. By default, the configured Interval is	used
	   to set the timeout.
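
       For reference, a configuration for a status page protected by basic
       authentication might look like this; the URL and credentials are
       examples only:

	<Plugin nginx>
	  URL "http://localhost/nginx_status"
	  User "stats"
	  Password "secret"
	</Plugin>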

   Plugin "notify_desktop"
       This plugin sends a desktop notification	to a notification daemon, as
       defined in the Desktop Notification Specification. To actually display
       the notifications, notification-daemon is required and collectd has to
       be able to access the X server (i. e., the "DISPLAY" and	"XAUTHORITY"
       environment variables have to be	set correctly) and the D-Bus message
       bus.

       The Desktop Notification	Specification can be found at
       <http://www.galago-project.org/specs/notification/>.

       OkayTimeout timeout
       WarningTimeout timeout
       FailureTimeout timeout
	   Set the timeout, in milliseconds, after which to expire the
	   notification	for "OKAY", "WARNING" and "FAILURE" severities
	   respectively. If zero has been specified, the displayed
	   notification	will not be closed at all - the	user has to do so
	   herself. These options default to 5000. If a	negative number	has
	   been	specified, the default is used as well.
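
       For reference, a configuration might look like this; the timeout values
       are examples only:

	<Plugin notify_desktop>
	  OkayTimeout 1000
	  WarningTimeout 10000
	  FailureTimeout 0
	</Plugin>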

   Plugin "notify_email"
       The notify_email	plugin uses the	ESMTP library to send notifications to
       a configured email address.

       libESMTP	is available from <http://www.stafford.uklinux.net/libesmtp/>.

       Available configuration options:

       From Address
	   Email address from which the	emails should appear to	come from.

	   Default: "root@localhost"

       Recipient Address
	   Configures the email	address(es) to which the notifications should
	   be mailed.  May be repeated to send notifications to	multiple
	   addresses.

	   At least one	Recipient must be present for the plugin to work
	   correctly.

       SMTPServer Hostname
	   Hostname of the SMTP	server to connect to.

	   Default: "localhost"

       SMTPPort	Port
	   TCP port to connect to.

	   Default: 25

       SMTPUser	Username
	   Username for	ASMTP authentication. Optional.

       SMTPPassword Password
	   Password for	ASMTP authentication. Optional.

       Subject Subject
	   Subject-template to use when	sending	emails.	There must be exactly
	   two string-placeholders in the subject, given in the	standard
	   printf(3) syntax, i.	e. %s. The first will be replaced with the
	   severity, the second	with the hostname.

	   Default: "Collectd notify: %s@%s"
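
       For reference, a configuration might look like this; the addresses and
       server settings are examples only:

	<Plugin notify_email>
	  From "collectd@example.com"
	  Recipient "admin@example.com"
	  SMTPServer "localhost"
	  SMTPPort 25
	  Subject "Collectd notify: %s@%s"
	</Plugin>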

   Plugin "notify_nagios"
       The notify_nagios plugin	writes notifications to	Nagios'	command	file
       as a passive service check result.

       Available configuration options:

       CommandFile Path
	   Sets	the command file to write to. Defaults to
	   /usr/local/nagios/var/rw/nagios.cmd.
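
       For reference, a configuration that makes the default explicit might
       look like this:

	<Plugin notify_nagios>
	  CommandFile "/usr/local/nagios/var/rw/nagios.cmd"
	</Plugin>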

   Plugin "ntpd"
       The "ntpd" plugin collects per-peer ntp data such as time offset	and
       time dispersion.

       For talking to ntpd, it mimics what the ntpdc control program does on
       the wire	- using	mode 7 specific	requests. This mode is deprecated with
       newer ntpd releases (4.2.7p230 and later). For the "ntpd" plugin	to
       work correctly with them, the ntp daemon	must be	explicitly configured
       to enable mode 7	(which is disabled by default).	Refer to the
       ntp.conf(5) manual page for details.

       Available configuration options for the "ntpd" plugin:

       Host Hostname
	   Hostname of the host	running	ntpd. Defaults to localhost.

       Port Port
	   UDP-Port to connect to. Defaults to 123.

       ReverseLookups true|false
	   Sets	whether	or not to perform reverse lookups on peers. Since the
	   name	or IP-address may be used in a filename	it is recommended to
	   disable reverse lookups. The	default	is to do reverse lookups to
	   preserve backwards compatibility, though.

       IncludeUnitID true|false
	   When	a peer is a refclock, include the unit ID in the type
	   instance.  Defaults to false	for backward compatibility.

	   If two refclock peers use the same driver and this is false,	the
	   plugin will try to write simultaneous measurements from both	to the
	   same	type instance.	This will result in error messages in the log
	   and only one	set of measurements making it through.
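
       For reference, a configuration querying a local ntpd might look like
       this; the host name is an example only:

	<Plugin ntpd>
	  Host "localhost"
	  ReverseLookups false
	  IncludeUnitID true
	</Plugin>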

   Plugin "nut"
       UPS upsname@hostname[:port]
	   Add a UPS to	collect	data from. The format is identical to the one
	   accepted by upsc(8).
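
       For reference, a configuration might look like this; the UPS name and
       host are examples only:

	<Plugin nut>
	  UPS "myupsname@localhost"
	</Plugin>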

   Plugin "olsrd"
       The olsrd plugin	connects to the	TCP port opened	by the txtinfo plugin
       of the Optimized	Link State Routing daemon and reads information	about
       the current state of the	meshed network.

       The following configuration options are understood:

       Host Host
	   Connect to Host. Defaults to	"localhost".

       Port Port
	   Specifies the port to connect to. This must be a string, even if
	   you give the	port as	a number rather	than a service name. Defaults
	   to "2006".

       CollectLinks No|Summary|Detail
	   Specifies what information to collect about links, i. e. direct
	   connections of the daemon queried. If set to	No, no information is
	   collected. If set to	Summary, the number of links and the average
	   of all link quality (LQ) and	neighbor link quality (NLQ) values is
	   calculated.	If set to Detail LQ and	NLQ are	collected per link.

	   Defaults to Detail.

       CollectRoutes No|Summary|Detail
	   Specifies what information to collect about routes of the daemon
	   queried. If set to No, no information is collected. If set to
	   Summary, the	number of routes and the average metric	and ETX	is
	   calculated. If set to Detail	metric and ETX are collected per
	   route.

	   Defaults to Summary.

       CollectTopology No|Summary|Detail
	   Specifies what information to collect about the global topology. If
	   set to No, no information is	collected. If set to Summary, the
	   number of links in the entire topology and the average link quality
	   (LQ)	is calculated.	If set to Detail LQ and	NLQ are	collected for
	   each	link in	the entire topology.

	   Defaults to Summary.
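
       For reference, a configuration that makes the defaults explicit might
       look like this:

	<Plugin olsrd>
	  Host "localhost"
	  Port "2006"
	  CollectLinks "Detail"
	  CollectRoutes "Summary"
	  CollectTopology "Summary"
	</Plugin>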

   Plugin "onewire"
       EXPERIMENTAL! See notes below.

       The "onewire" plugin uses the owcapi library from the owfs project
       <http://owfs.org/> to read sensors connected via	the onewire bus.

       It can be used in two possible modes - standard or advanced.

       In the standard mode only temperature sensors (sensors with the family
       code 10,	22 and 28 - e.g. DS1820, DS18S20, DS1920) can be read. If you
       have other sensors you would like to have included, please send a
       short request to the mailing list. You can select sensors to be read
       or to be ignored using the IgnoreSelected option. When no list is
       provided the whole bus is walked and all sensors are read.

       Hubs (the DS2409 chips) are working, but read the note below about why
       this plugin is experimental.

       In the advanced mode you	can configure any sensor to be read (only
       numerical value)	using full OWFS	path (e.g.
       "/uncached/10.F10FCA000800/temperature").  In this mode you have	to
       list all	the sensors. Neither default bus walk nor IgnoreSelected are
       used here. Address and type (file) are extracted from the path
       automatically and should produce a structure compatible with the
       "standard" mode (basically the path is expected as for example
       "/uncached/10.F10FCA000800/temperature", where the address part
       "F10FCA000800" is extracted and the rest after the slash is considered
       the type - here "temperature"). There are two advantages to this mode:
       you can access virtually any sensor (not just temperature) and you can
       select whether to use cached or directly read values; it is also
       slightly faster. The downside is a more complex configuration.

       The two modes are distinguished automatically by	the format of the
       address. It is not possible to mix the two modes. Once a full path is
       detected in any Sensor, the whole addressing (all sensors) is
       considered to use this scheme (and since standard addresses will fail
       parsing, they will be ignored).

       Device Device
	   Sets	the device to read the values from. This can either be a
	   "real" hardware device, such	as a serial port or an USB port, or
	   the address of the owserver(1) socket, usually localhost:4304.

	   Though the documentation claims to automatically recognize the
	   given address format, with version 2.7p4 we had to specify the type
	   explicitly. So with that version, the following configuration
	   worked for us:

	     <Plugin onewire>
	       Device "-s localhost:4304"
	     </Plugin>

	   This	directive is required and does not have	a default value.

       Sensor Sensor
	   In the standard mode this option selects sensors to collect or to
	   ignore (depending on IgnoreSelected, see below). Sensors are
	   specified without the family byte at the beginning, i.e. use for
	   example "F10FCA000800" and do not include the leading "10." family
	   byte and dot. When no Sensor is configured the whole onewire bus is
	   walked and all supported sensors (see above) are read.

	   In the advanced mode the Sensor option specifies the full OWFS path,
	   e.g. "/uncached/10.F10FCA000800/temperature" (or, when cached values
	   are OK, "/10.F10FCA000800/temperature"). IgnoreSelected is not used.

	   As there can be multiple devices on the bus you can list multiple
	   sensors (use multiple Sensor elements).

       IgnoreSelected true|false
	   If no configuration is given, the onewire plugin will collect data
	   from	all sensors found. This	may not	be practical, especially if
	   sensors are added and removed regularly. Sometimes, however,	it's
	   easier/preferred to collect only specific sensors or	all sensors
	   except a few	specified ones.	This option enables you	to do that: By
	   setting IgnoreSelected to true the effect of	Sensor is inverted:
	   All selected sensors are ignored and all other sensors are
	   collected.

	   Used	only in	the standard mode - see	above.

       Interval	Seconds
	   Sets	the interval in	which all sensors should be read. If not
	   specified, the global Interval setting is used.

       EXPERIMENTAL! The "onewire" plugin is experimental, because it doesn't
       yet work	with big setups. It works with one sensor being	attached to
       one controller, but as soon as you throw in a couple more sensors and
       maybe a hub or two, reading all values will take	more than ten seconds
       (the default interval). We will probably	add some separate thread for
       reading the sensors and some cache or something like that, but it's not
       done yet. We will try to	maintain backwards compatibility in the
       future, but we can't promise. So	in short: If it	works for you: Great!
       But keep	in mind	that the config	might change, though this is unlikely.
       Oh, and if you want to help improve this plugin, just send a short
       notice to the mailing list. Thanks :)

   Plugin "openldap"
       To use the "openldap" plugin you	first need to configure	the OpenLDAP
       server correctly. The backend database "monitor"	needs to be loaded and
       working.	See slapd-monitor(5) for the details.

       The configuration of the	"openldap" plugin consists of one or more
       Instance	blocks.	Each block requires one	string argument	as the
       instance	name. For example:

	<Plugin	"openldap">
	  <Instance "foo">
	    URL	"ldap://localhost/"
	  </Instance>
	  <Instance "bar">
	    URL	"ldaps://localhost/"
	  </Instance>
	</Plugin>

       The instance name will be used as the plugin instance. To emulate the
       old (version 4) behavior, you can use an	empty string (""). In order
       for the plugin to work correctly, each instance name must be unique.
       This is not enforced by the plugin and it is your responsibility	to
       ensure it is.

       The following options are accepted within each Instance block:

       URL ldap://host/binddn
	   Sets	the URL	to use to connect to the OpenLDAP server. This option
	   is mandatory.

       BindDN BindDN
	   Name	in the form of an LDAP distinguished name intended to be used
	   for authentication. Defaults	to empty string	to establish an
	   anonymous authorization.

       Password	Password
	   Password for	simple bind authentication. If this option is not set,
	   unauthenticated bind	operation is used.

       StartTLS	true|false
	   Defines whether TLS must be used when connecting to the OpenLDAP
	   server.  Disabled by	default.

       VerifyHost true|false
	   Enables or disables peer host name verification. If enabled,	the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity	check fails, the connection is
	   aborted. Enabled by default.

       CACert File
	   File	that holds one or more SSL certificates. If you	want to	use
	   TLS/SSL you may possibly need this option. What CA certificates are
	   checked by default depends on the distribution you use and can be
	   changed with	the usual ldap client configuration mechanisms.	See
	   ldap.conf(5)	for the	details.

       Timeout Seconds
	   Sets	the timeout value for ldap operations, in seconds. By default,
	   the configured Interval is used to set the timeout. Use -1 to
	   disable (infinite timeout).

       Version Version
	   An integer which sets the LDAP protocol version number to use when
	   connecting to the OpenLDAP server. Defaults to 3 for	using LDAPv3.
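
       For example, an instance using TLS and simple bind authentication might
       look like the following sketch (the URL, bind DN, password and
       certificate path are placeholders):

         <Plugin "openldap">
           <Instance "secure">
             URL "ldap://ldap.example.org/"
             StartTLS true
             VerifyHost true
             CACert "/etc/ssl/certs/ca-bundle.crt"
             BindDN "cn=monitor,dc=example,dc=org"
             Password "secret"
             Timeout 10
             Version 3
           </Instance>
         </Plugin>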

   Plugin "openvpn"
       The OpenVPN plugin reads	a status file maintained by OpenVPN and
       gathers traffic statistics about	connected clients.

       To set up OpenVPN to write to the status	file periodically, use the
       --status	option of OpenVPN. Since OpenVPN can write two different
       formats,	you need to set	the required format, too. This is done by
       setting --status-version	to 2.

       So, in a	nutshell you need:

	 openvpn $OTHER_OPTIONS	\
	   --status "/var/run/openvpn-status" 10 \
	   --status-version 2

       Available options:

       StatusFile File
	   Specifies the location of the status	file.

       ImprovedNamingSchema true|false
	   When	enabled, the filename of the status file will be used as
	   plugin instance and the client's "common name" will be used as type
	   instance. This is required when reading multiple status files.
	   Enabling this option	is recommended,	but to maintain	backwards
	   compatibility this option is	disabled by default.

       CollectCompression true|false
	   Sets	whether	or not statistics about	the compression	used by
	   OpenVPN should be collected.	This information is only available in
	   single mode.	Enabled	by default.

       CollectIndividualUsers true|false
	   Sets	whether	or not traffic information is collected	for each
	   connected client individually. If set to false, currently no
	   traffic data	is collected at	all because aggregating	this data in a
	   safe manner is tricky. Defaults to true.

       CollectUserCount	true|false
	   When	enabled, the number of currently connected clients or users is
	   collected.  This is especially interesting when
	   CollectIndividualUsers is disabled, but can be configured
	   independently from that option. Defaults to false.
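
       A matching collectd configuration for the status file written by the
       openvpn command above might then look like this sketch:

         <Plugin openvpn>
           StatusFile "/var/run/openvpn-status"
           ImprovedNamingSchema true
           CollectIndividualUsers true
           CollectUserCount true
         </Plugin>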

   Plugin "oracle"
       The "oracle" plugin uses	the OracleX Call Interface (OCI) to connect to
       an OracleX Database and lets you	execute	SQL statements there. It is
       very similar to the "dbi" plugin, because it was	written	around the
       same time. See the "dbi"	plugin's documentation above for details.

	 <Plugin oracle>
	   <Query "out_of_stock">
	     Statement "SELECT category, COUNT(*) AS value FROM	products WHERE in_stock	= 0 GROUP BY category"
	     <Result>
	       Type "gauge"
	       # InstancePrefix	"foo"
	       InstancesFrom "category"
	       ValuesFrom "value"
	     </Result>
	   </Query>
	   <Database "product_information">
	     ConnectID "db01"
	     Username "oracle"
	     Password "secret"
	     Query "out_of_stock"
	   </Database>
	 </Plugin>

       Query blocks

       The Query blocks	are handled identically	to the Query blocks of the
       "dbi" plugin. Please see	its documentation above	for details on how to
       specify queries.

       Database	blocks

       Database	blocks define a	connection to a	database and which queries
       should be sent to that database.	Each database needs a "name" as	string
       argument	in the starting	tag of the block. This name will be used as
       "PluginInstance"	in the values submitted	to the daemon. Other than
       that, that name is not used.

       ConnectID ID
	   Defines the "database alias"	or "service name" to connect to.
	   Usually, these names	are defined in the file	named
	   "$ORACLE_HOME/network/admin/tnsnames.ora".

       Host Host
	   Hostname to use when	dispatching values for this database. Defaults
	   to using the	global hostname	of the collectd	instance.

       Username	Username
	   Username used for authentication.

       Password	Password
	   Password used for authentication.

       Query QueryName
	   Associates the query	named QueryName	with this database connection.
	   The query needs to be defined before	this statement,	i. e. all
	   query blocks	you want to refer to must be placed above the database
	   block you want to refer to them from.

   Plugin "perl"
       This plugin embeds a Perl-interpreter into collectd and provides	an
       interface to collectd's plugin system. See collectd-perl(5) for its
       documentation.

   Plugin "pinba"
       The Pinba plugin	receives profiling information from Pinba, an
       extension for the PHP interpreter. At the end of	executing a script,
       i.e. after a PHP-based webpage has been delivered, the extension	will
       send a UDP packet containing timing information,	peak memory usage and
       so on. The plugin will wait for such packets, parse them	and account
       the provided information, which is then dispatched to the daemon	once
       per interval.

       Synopsis:

	<Plugin	pinba>
	  Address "::0"
	  Port "30002"
	  # Overall statistics for the website.
	  <View	"www-total">
	    Server "www.example.com"
	  </View>
	  # Statistics for www-a only
	  <View	"www-a">
	    Host "www-a.example.com"
	    Server "www.example.com"
	  </View>
	  # Statistics for www-b only
	  <View	"www-b">
	    Host "www-b.example.com"
	    Server "www.example.com"
	  </View>
	</Plugin>

       The plugin provides the following configuration options:

       Address Node
	   Configures the address used to open a listening socket. By default,
	   the plugin will bind to the "any" address "::0".

       Port Service
	   Configures the port (service) to bind to. By	default	the default
	   Pinba port "30002" will be used. The	option accepts service names
	   in addition to port numbers and thus	requires a string argument.

       <View Name> block
	   The packets sent by the Pinba extension include the hostname	of the
	   server, the server name (the	name of	the virtual host) and the
	   script that was executed.  Using View blocks	it is possible to
	   separate the	data into multiple groups to get more meaningful
	   statistics. Each packet is added to all matching groups, so that a
	   packet may be accounted for more than once.

	   Host	Host
	       Matches the hostname of the system the webserver	/ script is
	       running on. This	will contain the result	of the gethostname(2)
	       system call. If not configured, all hostnames will be accepted.

	   Server Server
	       Matches the name	of the virtual host, i.e. the contents of the
	       $_SERVER["SERVER_NAME"] variable	when within PHP. If not
	       configured, all server names will be accepted.

	   Script Script
	       Matches the name of the script, i.e. the contents of the
	       $_SERVER["SCRIPT_NAME"] variable	when within PHP. If not
	       configured, all script names will be accepted.

   Plugin "ping"
       The Ping	plugin starts a	new thread which sends ICMP "ping" packets to
       the configured hosts periodically and measures the network latency.
       Whenever	the "read" function of the plugin is called, it	submits	the
       average latency,	the standard deviation and the drop rate for each
       host.

       Available configuration options:

       Host IP-address
	   Host	to ping	periodically. This option may be repeated several
	   times to ping multiple hosts.

       Interval	Seconds
	   Sets	the interval in	which to send ICMP echo	packets	to the
	   configured hosts. This is not the interval in which statistics are
	   queried from the plugin but the interval in which the hosts are
	   "pinged". Therefore,	the setting here should	be smaller than	or
	   equal to the	global Interval	setting. Fractional times, such	as
	   "1.24" are allowed.

	   Default: 1.0

       Timeout Seconds
	   Time	to wait	for a response from the	host to	which an ICMP packet
	   had been sent. If a reply was not received after Seconds seconds,
	   the host is assumed to be down or the packet	to be dropped. This
	   setting must	be smaller than	the Interval setting above for the
	   plugin to work correctly. Fractional	arguments are accepted.

	   Default: 0.9

       TTL 0-255
	   Sets	the Time-To-Live of generated ICMP packets.

       Size size
	   Sets the size of the data payload in ICMP packets to the specified
	   size (it will be filled with a regular ASCII pattern). If not set, a
	   default 56 byte long payload is used so that the size of an ICMPv4
	   packet is exactly 64 bytes, similar to the behaviour of the normal
	   ping(1) command.

       SourceAddress host
	   Sets	the source address to use. host	may either be a	numerical
	   network address or a	network	hostname.

       Device name
	   Sets	the outgoing network device to be used.	name has to specify an
	   interface name (e. g. "eth0"). This might not be supported by all
	   operating systems.

       MaxMissed Packets
	   Trigger a DNS resolve after the host	has not	replied	to Packets
	   packets. This enables the use of dynamic DNS	services (like
	   dyndns.org) with the	ping plugin.

	   Default: -1 (disabled)
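
       As a sketch, pinging two gateways once per second with a short timeout
       could be configured like this (the addresses are placeholders):

         <Plugin ping>
           Host "192.0.2.1"
           Host "198.51.100.1"
           Interval 1.0
           Timeout 0.9
           TTL 64
         </Plugin>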

   Plugin "postgresql"
       The "postgresql"	plugin queries statistics from PostgreSQL databases.
       It keeps	a persistent connection	to all configured databases and	tries
       to reconnect if the connection has been interrupted. A database is
       configured by specifying	a Database block as described below. The
       default statistics are collected	from PostgreSQL's statistics collector
       which thus has to be enabled for	this plugin to work correctly. This
       should usually be the case by default. See the section "The Statistics
       Collector" of the PostgreSQL Documentation for details.

       By specifying custom database queries using a Query block as described
       below, you may collect any data that is available from some PostgreSQL
       database. This way, you are able	to access statistics of	external
       daemons which are available in a	PostgreSQL database or use future or
       special statistics provided by PostgreSQL without the need to upgrade
       your collectd installation.

       Starting	with version 5.2, the "postgresql" plugin supports writing
       data to PostgreSQL databases as well. This has been implemented in a
       generic way. You	need to	specify	an SQL statement which will then be
       executed	by collectd in order to	write the data (see below for
       details). The benefit of	that approach is that there is no fixed
       database	layout.	Rather,	the layout may be optimized for	the current
       setup.

       The PostgreSQL Documentation manual can be found	at
       <http://www.postgresql.org/docs/manuals/>.

	 <Plugin postgresql>
	   <Query magic>
	     Statement "SELECT magic FROM wizard WHERE host = $1;"
	     Param hostname
	     <Result>
	       Type gauge
	       InstancePrefix "magic"
	       ValuesFrom magic
	     </Result>
	   </Query>

	   <Query rt36_tickets>
	     Statement "SELECT COUNT(type) AS count, type \
			       FROM (SELECT CASE \
					    WHEN resolved = 'epoch' THEN 'open'	\
					    ELSE 'resolved' END	AS type	\
					    FROM tickets) type \
			       GROUP BY	type;"
	     <Result>
	       Type counter
	       InstancePrefix "rt36_tickets"
	       InstancesFrom "type"
	       ValuesFrom "count"
	     </Result>
	   </Query>

	   <Writer sqlstore>
	     Statement "SELECT collectd_insert($1, $2, $3, $4, $5, $6, $7, $8, $9);"
	     StoreRates	true
	   </Writer>

	   <Database foo>
	     Host "hostname"
	     Port "5432"
	     User "username"
	     Password "secret"
	     SSLMode "prefer"
	     KRBSrvName	"kerberos_service_name"
	     Query magic
	   </Database>

	   <Database bar>
	     Interval 300
	     Service "service_name"
	     Query backend # predefined
	     Query rt36_tickets
	   </Database>

	   <Database qux>
	     # ...
	     Writer sqlstore
	     CommitInterval 10
	   </Database>
	 </Plugin>

       The Query block defines one database query which	may later be used by a
       database	definition. It accepts a single	mandatory argument which
       specifies the name of the query.	The names of all queries have to be
       unique (see the MinVersion and MaxVersion options below for an
       exception to this rule).

       In each Query block, there is one or more Result	blocks.	Multiple
       Result blocks may be used to extract multiple values from a single
       query.

       The following configuration options are available to define the query:

       Statement sql query statement
	   Specify the sql query statement which the plugin should execute.
	   The string may contain the tokens $1, $2, etc. which	are used to
	   reference the first,	second,	etc. parameter.	The value of the
	   parameters is specified by the Param	configuration option - see
	   below for details. To include a literal $ character followed	by a
	   number, surround it with single quotes (').

	   Any SQL command which may return data (such as "SELECT" or "SHOW")
	   is allowed. Note, however, that only	a single command may be	used.
	   Semicolons are allowed as long as only a single non-empty command
	   has been specified.

	   The returned	lines will be handled separately one after another.

       Param hostname|database|instance|username|interval
	   Specify the parameters which	should be passed to the	SQL query. The
	   parameters are referred to in the SQL query as $1, $2, etc. in the
	   same	order as they appear in	the configuration file.	The value of
	   the parameter is determined depending on the	value of the Param
	   option as follows:

	   hostname
	       The configured hostname of the database connection. If a	UNIX
	       domain socket is	used, the parameter expands to "localhost".

	   database
	       The name	of the database	of the current connection.

	   instance
	       The name	of the database	plugin instance. See the Instance
	       option of the database specification below for details.

	   username
	       The username used to connect to the database.

	   interval
	       The interval with which this database is	queried	(as specified
	       by the database specific	or global Interval options).

	   Please note that parameters are only	supported by PostgreSQL's
	   protocol version 3 and above	which was introduced in	version	7.4 of
	   PostgreSQL.

       PluginInstanceFrom column
	   Specify how to create the "PluginInstance" for reporting the results
	   of this query. Only one column is supported. You may concatenate fields
	   and string values in	the query statement to get the required
	   results.

       MinVersion version
       MaxVersion version
	   Specify the minimum or maximum version of PostgreSQL	that this
	   query should	be used	with. Some statistics might only be available
	   with	certain	versions of PostgreSQL.	This allows you	to specify
	   multiple queries with the same name but which apply to different
	   versions, thus allowing you to use the same configuration in	a
	   heterogeneous environment.

	   The version has to be specified as the concatenation	of the major,
	   minor and patch-level versions, each	represented as two-decimal-
	   digit numbers. For example, version 8.2.3 will become 80203.
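
	   As a sketch, two queries sharing one name but targeting different
	   server versions could be placed inside the <Plugin postgresql>
	   block like this (the statements are simplified and assume, for the
	   purpose of the example, that the second form is only available from
	   PostgreSQL 9.2 on):

	     <Query "active_backends">
	       Statement "SELECT COUNT(*) AS count FROM pg_stat_activity WHERE current_query <> '<IDLE>';"
	       MaxVersion 90199
	       <Result>
	         Type gauge
	         InstancePrefix "active_backends"
	         ValuesFrom "count"
	       </Result>
	     </Query>
	     <Query "active_backends">
	       Statement "SELECT COUNT(*) AS count FROM pg_stat_activity WHERE state = 'active';"
	       MinVersion 90200
	       <Result>
	         Type gauge
	         InstancePrefix "active_backends"
	         ValuesFrom "count"
	       </Result>
	     </Query>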

       The Result block	defines	how to handle the values returned from the
       query.  It defines which	column holds which value and how to dispatch
       that value to the daemon.

       Type type
	   The type name to be used when dispatching the values. The type
	   describes how to handle the data and	where to store it. See
	   types.db(5) for more	details	on types and their configuration. The
	   number and type of values (as selected by the ValuesFrom option)
	   has to match	the type of the	given name.

	   This	option is mandatory.

       InstancePrefix prefix
       InstancesFrom column0 [column1 ...]
	   Specify how to create the "TypeInstance" for	each data set (i. e.
	   line).  InstancePrefix defines a static prefix that will be
	   prepended to	all type instances. InstancesFrom defines the column
	   names whose values will be used to create the type instance.
	   Multiple values will	be joined together using the hyphen ("-") as
	   separation character.

	   The plugin itself does not check whether or not all built instances
	   are different. It is	your responsibility to assure that each	is
	   unique.

	   Both	options	are optional. If none is specified, the	type instance
	   will	be empty.

       ValuesFrom column0 [column1 ...]
	   Names the columns whose content is used as the actual data for the
	   data	sets that are dispatched to the	daemon.	How many such columns
	   you need is determined by the Type setting as explained above. If
	   you specify too many	or not enough columns, the plugin will
	   complain about that and no data will	be submitted to	the daemon.

	   The actual data type, as seen by PostgreSQL,	is not that important
	   as long as it represents numbers. The plugin	will automatically
	   cast the values to the right type if it knows how to do that. For
	   that, it uses the strtoll(3)	and strtod(3) functions, so anything
	   supported by	those functions	is supported by	the plugin as well.

	   This	option is required inside a Result block and may be specified
	   multiple times. If multiple ValuesFrom options are specified, the
	   columns are read in the given order.

       The following predefined	queries	are available (the definitions can be
       found in	the postgresql_default.conf file which,	by default, is
       available at "prefix/share/collectd/"):

       backends
	   This	query collects the number of backends, i. e. the number	of
	   connected clients.

       transactions
	   This	query collects the numbers of committed	and rolled-back
	   transactions	of the user tables.

       queries
	   This	query collects the numbers of various table modifications
	   (i. e.  insertions, updates,	deletions) of the user tables.

       query_plans
	   This	query collects the numbers of various table scans and returned
	   tuples of the user tables.

       table_states
	   This	query collects the numbers of live and dead rows in the	user
	   tables.

       disk_io
	   This	query collects disk block access counts	for user tables.

       disk_usage
	   This	query collects the on-disk size	of the database	in bytes.

       In addition, the	following detailed queries are available by default.
       Please note that	each of	those queries collects information by table,
       thus, potentially producing a lot of data. For details see the
       description of the non-by_table queries above.

       queries_by_table
       query_plans_by_table
       table_states_by_table
       disk_io_by_table

       The Writer block	defines	a PostgreSQL writer backend. It	accepts	a
       single mandatory	argument specifying the	name of	the writer. This will
       then be used in the Database specification in order to activate the
       writer instance.	The names of all writers have to be unique. The
       following options may be	specified:

       Statement sql statement
	   This	mandatory option specifies the SQL statement that will be
	   executed for each submitted value. Only a single SQL statement is
	   allowed. Anything after the first semicolon will be ignored.

	   Nine	parameters will	be passed to the statement and should be
	   specified as	tokens $1, $2, through $9 in the statement string. The
	   following values are	made available through those parameters:

	   $1  The timestamp of	the queried value as an	RFC 3339-formatted
	       local time.

	   $2  The hostname of the queried value.

	   $3  The plugin name of the queried value.

	   $4  The plugin instance of the queried value. This value may	be
	       NULL if there is	no plugin instance.

	   $5  The type	of the queried value (cf. types.db(5)).

	   $6  The type	instance of the	queried	value. This value may be NULL
	       if there	is no type instance.

	   $7  An array	of names for the submitted values (i. e., the name of
	       the data	sources	of the submitted value-list).

	   $8  An array	of types for the submitted values (i. e., the type of
	       the data	sources	of the submitted value-list; "counter",
	       "gauge",	...). Note, that if StoreRates is enabled (which is
	       the default, see	below),	all types will be "gauge".

	   $9  An array	of the submitted values. The dimensions	of the value
	       name and	value arrays match.

	   In general, it is advisable to create and call a custom function in
	   the PostgreSQL database for this purpose. Any procedural language
	   supported by	PostgreSQL will	do (see	chapter	"Server	Programming"
	   in the PostgreSQL manual for	details).

       StoreRates false|true
	   If set to true (the default), convert counter values	to rates. If
	   set to false, counter values are stored as is, i. e. as an
	   increasing integer number.

       The Database block defines one PostgreSQL database for which to collect
       statistics. It accepts a	single mandatory argument which	specifies the
       database	name. None of the other	options	are required. PostgreSQL will
       use default values as documented	in the section "CONNECTING TO A
       DATABASE" in the	psql(1)	manpage. However, be aware that	those defaults
       may be influenced by the	user collectd is run as	and special
       environment variables. See the manpage for details.

       Interval	seconds
	   Specify the interval	with which the database	should be queried. The
	   default is to use the global	Interval setting.

       CommitInterval seconds
	   This	option may be used for database	connections which have
	   "writers" assigned (see above). If specified, it causes a writer to
	   put several updates into a single transaction. This transaction
	   will	last for the specified amount of time. By default, each	update
	   will	be executed in a separate transaction. Each transaction
	   generates a fair amount of overhead which can, thus,	be reduced by
	   activating this option. The drawback is that data covering the
	   specified amount of time will be lost, for example, if a single
	   statement within the	transaction fails or if	the database server
	   crashes.

       Instance	name
	   Specify the plugin instance name that should	be used	instead	of the
	   database name (which	is the default,	if this	option has not been
	   specified). This allows one to query	multiple databases of the same
	   name	on the same host (e.g.	when running multiple database server
	   versions in parallel).  The plugin instance name can	also be	set
	   from	the query result using the PluginInstanceFrom option in	Query
	   block.

       Host hostname
	   Specify the hostname	or IP of the PostgreSQL	server to connect to.
	   If the value	begins with a slash, it	is interpreted as the
	   directory name in which to look for the UNIX	domain socket.

	   This	option is also used to determine the hostname that is
	   associated with a collected data set. If it has been omitted, begins
	   with a slash, or equals localhost, it will be
	   replaced with the global hostname definition	of collectd. Any other
	   value will be passed	literally to collectd when dispatching values.
	   Also	see the	global Hostname	and FQDNLookup options.

       Port port
	   Specify the TCP port	or the local UNIX domain socket	file extension
	   of the server.

       User username
	   Specify the username	to be used when	connecting to the server.

       Password	password
	   Specify the password	to be used when	connecting to the server.

       ExpireDelay delay
	   Skip	expired	values in query	output.

       SSLMode disable|allow|prefer|require
	   Specify whether to use an SSL connection when contacting the
	   server. The following modes are supported:

	   disable
	       Do not use SSL at all.

	   allow
	       First, try to connect without using SSL.	If that	fails, try
	       using SSL.

	   prefer (default)
	       First, try to connect using SSL.	If that	fails, try without
	       using SSL.

	   require
	       Use SSL only.

       KRBSrvName kerberos_service_name
	   Specify the Kerberos	service	name to	use when authenticating	with
	   Kerberos 5 or GSSAPI. See the sections "Kerberos authentication"
	   and "GSSAPI"	of the PostgreSQL Documentation	for details.

       Service service_name
	   Specify the PostgreSQL service name to use for additional
	   parameters. That service has	to be defined in pg_service.conf and
	   holds additional connection parameters. See the section "The
	   Connection Service File" in the PostgreSQL Documentation for
	   details.

       Query query
	   Specifies a query which should be executed in the context of	the
	   database connection.	This may be any	of the predefined or user-
	   defined queries. If no such option is given,	it defaults to
	   "backends", "transactions", "queries", "query_plans",
	   "table_states", "disk_io" and "disk_usage" (unless a	Writer has
	   been specified). Otherwise, only the specified queries are used.

       Writer writer
	   Assigns the specified writer	backend	to the database	connection.
	   This causes all collected data to be sent to the database using the
	   settings defined in the writer configuration	(see the section
	   "FILTER CONFIGURATION" below	for details on how to selectively send
	   data	to certain plugins).

	   Each	writer will register a flush callback which may	be used	when
	   having long transactions enabled (see the CommitInterval option
	   above). When	issuing	the FLUSH command (see collectd-unixsock(5)
	   for details)	the current transaction	will be	committed right	away.
	   Two different kinds of flush	callbacks are available	with the
	   "postgresql"	plugin:

	   postgresql
	       Flush all writer	backends.

	   postgresql-database
	       Flush all writers of the	specified database only.

   Plugin "powerdns"
       The "powerdns" plugin queries statistics	from an	authoritative PowerDNS
       nameserver and/or a PowerDNS recursor. Since both offer a wide variety
       of values, many of which	are probably meaningless to most users,	but
       may be useful for some. So you may chose	which values to	collect, but
       if you don't, some reasonable defaults will be collected.

	 <Plugin "powerdns">
	   <Server "server_name">
	     Collect "latency"
	     Collect "udp-answers" "udp-queries"
	     Socket "/var/run/pdns.controlsocket"
	   </Server>
	   <Recursor "recursor_name">
	     Collect "questions"
	     Collect "cache-hits" "cache-misses"
	     Socket "/var/run/pdns_recursor.controlsocket"
	   </Recursor>
	   LocalSocket "/opt/collectd/var/run/collectd-powerdns"
	 </Plugin>

       Server and Recursor block
	   The Server block defines one authoritative server to query, the
	   Recursor block does the same for a recursing server. The possible
	   options in both blocks are the same, though. The argument defines a
	   name for the server / recursor and is required.

	   Collect Field
	       Using the Collect statement you can select which	values to
	       collect.	Here, you specify the name of the values as used by
	       the PowerDNS servers, e.	g.  "dlg-only-drops", "answers10-100".

	       The method of getting the values	differs	for Server and
	       Recursor	blocks:	When querying the server a "SHOW *" command is
	       issued in any case, because that's the only way of getting
	       multiple	values out of the server at once.  collectd then picks
	       out the values you have selected. When querying the recursor, a
	       command is generated to query exactly these values. So if you
	       specify invalid fields when querying the	recursor, a syntax
	       error may be returned by	the daemon and collectd	may not
	       collect any values at all.

	       If no Collect statement is given, the following Server values
	       will be collected:

	       latency
	       packetcache-hit
	       packetcache-miss
	       packetcache-size
	       query-cache-hit
	       query-cache-miss
	       recursing-answers
	       recursing-questions
	       tcp-answers
	       tcp-queries
	       udp-answers
	       udp-queries

	       The following Recursor values will be collected by default:

	       noerror-answers
	       nxdomain-answers
	       servfail-answers
	       sys-msec
	       user-msec
	       qa-latency
	       cache-entries
	       cache-hits
	       cache-misses
	       questions

	       Please note that collectd does not know in advance which values
	       are available on the server, so values added on the server side
	       do not require a change of this mechanism. However, the values
	       must be mapped to collectd's naming scheme, which is done using
	       a lookup table that lists all known values. If values are added
	       in the future and collectd does not know about them, you will
	       get an error much like this:

		 powerdns plugin: submit: Not found in lookup table: foobar = 42

	       In this case please file	a bug report with the collectd team.

	   Socket Path
	       Configures the path to the UNIX domain socket to	be used	when
	       connecting to the daemon. By default
	       "${localstatedir}/run/pdns.controlsocket" will be used for an
	       authoritative server and
	       "${localstatedir}/run/pdns_recursor.controlsocket" will be used
	       for the recursor.

       LocalSocket Path
	   Querying the	recursor is done using UDP. When using UDP over	UNIX
	   domain sockets, the client socket needs a name in the file system,
	   too.	You can	set this local name to Path using the LocalSocket
	   option. The default is "prefix/var/run/collectd-powerdns".

   Plugin "processes"
       Process Name
	   Select more detailed statistics of processes matching this name.
	   The statistics collected for these selected processes are the
	   resident segment size (RSS), user- and system-time used, the number
	   of processes and threads, I/O data (where available) and minor and
	   major pagefaults.

	   Some	platforms have a limit on the length of	process	names. Name
	   must	stay below this	limit.

       ProcessMatch name regex
	   Similar to the Process option this allows one to select more
	   detailed statistics of processes matching the specified regex (see
	   regex(7) for	details). The statistics of all	matching processes are
	   summed up and dispatched to the daemon using	the specified name as
	   an identifier. This allows one to "group" several processes
	   together. name must not contain slashes.

       CollectContextSwitch Boolean
	   Collect the number of context switches of the process.
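
       For example, to watch the collectd daemon itself and to group all
       webserver processes under one name, a configuration could look like
       this sketch (the process name and regular expression are placeholders):

         <Plugin processes>
           Process "collectd"
           ProcessMatch "webserver" "httpd.*"
           CollectContextSwitch true
         </Plugin>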

   Plugin "protocols"
       Collects	a lot of information about various network protocols, such as
       IP, TCP,	UDP, etc.

       Available configuration options:

       Value Selector
	   Selects whether or not to select a specific value. The string being
	   matched is of the form "Protocol:ValueName",	where Protocol will be
	   used	as the plugin instance and ValueName will be used as type
	   instance. An	example	of the string being used would be
	   "Tcp:RetransSegs".

	   You can use regular expressions to match a large number of values
	   with	just one configuration option. To select all "extended"	TCP
	   values, you could use the following statement:

	     Value "/^TcpExt:/"

	   Whether only	matched	values are selected or all matched values are
	   ignored depends on the IgnoreSelected option. By default, only matched
	   values are selected.	 If no value is	configured at all, all values
	   will	be selected.

       IgnoreSelected true|false
	   If set to true, inverts the selection made by Value,	i. e. all
	   matching values will	be ignored.
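
       For example, the following sketch collects the TCP retransmission
       counter and all "extended" TCP values mentioned above:

         <Plugin protocols>
           Value "Tcp:RetransSegs"
           Value "/^TcpExt:/"
           IgnoreSelected false
         </Plugin>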

   Plugin "python"
       This plugin embeds a Python-interpreter into collectd and provides an
       interface to collectd's plugin system. See collectd-python(5) for its
       documentation.

   Plugin "routeros"
       The "routeros" plugin connects to a device running RouterOS, the	Linux-
       based operating system for routers by MikroTik. The plugin uses
       librouteros to connect and reads	information about the interfaces and
       wireless	connections of the device. The configuration supports querying
       multiple	routers:

	 <Plugin "routeros">
	   <Router>
	     Host "router0.example.com"
	     User "collectd"
	     Password "secr3t"
	     CollectInterface true
	     CollectCPULoad true
	     CollectMemory true
	   </Router>
	   <Router>
	     Host "router1.example.com"
	     User "collectd"
	     Password "5ecret"
	     CollectInterface true
	     CollectRegistrationTable true
	     CollectDF true
	     CollectDisk true
	   </Router>
	 </Plugin>

       As you can see above, the configuration of the routeros plugin consists
       of one or more <Router> blocks. Within each block, the following
       options are understood:

       Host Host
	   Hostname or IP-address of the router	to connect to.

       Port Port
	   Port	name or	port number used when connecting. If left unspecified,
	   the default will be chosen by librouteros, currently	"8728".	This
	   option expects a string argument, even when a numeric port number
	   is given.

       User User
	   Use the user	name User to authenticate. Defaults to "admin".

       Password	Password
	   Set the password used to authenticate.

       CollectInterface	true|false
	   When	set to true, interface statistics will be collected for	all
	   interfaces present on the device. Defaults to false.

       CollectRegistrationTable	true|false
	   When	set to true, information about wireless	LAN connections	will
	   be collected. Defaults to false.

       CollectCPULoad true|false
	   When	set to true, information about the CPU usage will be
	   collected. The number is a dimensionless value where	zero indicates
	   no CPU usage	at all.	 Defaults to false.

       CollectMemory true|false
	   When	enabled, the amount of used and	free memory will be collected.
	   How used memory is calculated is unknown, for example whether or
	   not caches are counted as used space.  Defaults to false.

       CollectDF true|false
	   When	enabled, the amount of used and	free disk space	will be
	   collected.  Defaults	to false.

       CollectDisk true|false
	   When	enabled, the number of sectors written and bad blocks will be
	   collected.  Defaults	to false.

   Plugin "redis"
       The Redis plugin	connects to one	or more	Redis servers and gathers
       information about each server's state. For each server there is a Node
       block which configures the connection parameters	for this node.

	 <Plugin redis>
	   <Node "example">
	       Host "localhost"
	       Port "6379"
	       Timeout 2000
	       <Query "LLEN myqueue">
		 Type "queue_length"
		 Instance "myqueue"
	       </Query>
	   </Node>
	 </Plugin>

       The information shown in	the synopsis above is the default
       configuration which is used by the plugin if no configuration is
       present.

       Node Nodename
	   The Node block identifies a new Redis node, that is a new Redis
	   instance running on a specified host and port. The name for the node
	   is a canonical identifier which is used as plugin instance. It is
	   limited to 64 characters in length.

       Host Hostname
	   The Host option is the hostname or IP-address of the host the Redis
	   instance is running on.

       Port Port
	   The Port option is the TCP port on which the	Redis instance accepts
	   connections. Either a service name or a port number may be given.
	   Please note that numerical port numbers must	be given as a string,
	   too.

       Password	Password
	   Use Password	to authenticate	when connecting	to Redis.

       Timeout Milliseconds
	   The Timeout option sets the socket timeout for node response. Since
	   the Redis read function is blocking,	you should keep	this value as
	   low as possible. Keep in mind that the sum of all Timeout values
	   for all Nodes should	be lower than Interval defined globally.

       Query Querystring
	   The Query block identifies a	query to execute against the redis
	   server.  There may be an arbitrary number of	queries	to execute.

       Type Collectd type
	   Within a query definition, a valid collectd type to use when
	   submitting the result of the query. When not supplied, it will default
	   to gauge.

       Instance	Type instance
	   Within a query definition, an optional type instance	to use when
	   submitting the result of the query. When not supplied, it will default
	   to the escaped command, up to 64 chars.

   Plugin "rrdcached"
       The "rrdcached" plugin uses the RRDtool accelerator daemon,
       rrdcached(1), to	store values to	RRD files in an	efficient manner. The
       combination of the "rrdcached" plugin and the "rrdcached" daemon	is
       very similar to the way the "rrdtool" plugin works (see below). The
       added abstraction layer provides	a number of benefits, though: Because
       the cache is not	within "collectd" anymore, it does not need to be
       flushed when "collectd" is to be	restarted. This	results	in much
       shorter (if any)	gaps in	graphs,	especially under heavy load. Also, the
       "rrdtool" command line utility is aware of the daemon so	that it	can
       flush values to disk automatically when needed. This allows one to
       integrate automated flushing of values into graphing solutions much
       more easily.

       There are disadvantages,	though:	The daemon may reside on a different
       host, so	it may not be possible for "collectd" to create	the
       appropriate RRD files anymore. And even if "rrdcached" runs on the same
       host, it	may run	in a different base directory, so relative paths may
       do weird	stuff if you're	not careful.

       So the recommended configuration	is to let "collectd" and "rrdcached"
       run on the same host, communicating via a UNIX domain socket. The
       DataDir setting should be set to	an absolute path, so that a changed
       base directory does not result in RRD files being created / expected in
       the wrong place.
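
       A configuration following this recommendation might look like the
       following sketch (the socket path and data directory are placeholders):

         <Plugin rrdcached>
           DaemonAddress "unix:/var/run/rrdcached.sock"
           DataDir "/var/db/collectd/rrd"
           CreateFiles true
           CollectStatistics true
         </Plugin>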

       DaemonAddress Address
	   Address of the daemon as understood by the "rrdc_connect" function
	   of the RRD library. See rrdcached(1)	for details. Example:

	     <Plugin "rrdcached">
	       DaemonAddress "unix:/var/run/rrdcached.sock"
	     </Plugin>

       DataDir Directory
	   Set the base	directory in which the RRD files reside. If this is a
	   relative path, it is	relative to the	working	base directory of the
	   "rrdcached" daemon!	Use of an absolute path	is recommended.

       CreateFiles true|false
	   Enables or disables the creation of RRD files. If the daemon	is not
	   running locally, or DataDir is set to a relative path, this will
	   not work as expected. Default is true.

       CreateFilesAsync	false|true
	   When enabled, new RRD files are created asynchronously, using a
	   separate thread that runs in the background. This prevents writes
	   from blocking, which is a problem especially when many hundreds of
	   files need to be created at once. However, since the purpose of
	   creating the files asynchronously is not to block until the file is
	   available, values submitted before the file is available will be
	   discarded. When disabled (the default) files are created
	   synchronously, blocking for a short while, while the file is being
	   written.

       StepSize	Seconds
	   Force the stepsize of newly created RRD-files. Ideally (and per
	   default) this setting is unset and the stepsize is set to the
	   interval in which the data is collected. Do not use this option
	   unless you absolutely have to for some reason. Setting this option
	   may cause problems with the "snmp plugin", the "exec	plugin"	or
	   when	the daemon is set up to	receive	data from other	hosts.

       HeartBeat Seconds
	   Force the heartbeat of newly	created	RRD-files. This	setting	should
	   be unset in which case the heartbeat	is set to twice	the StepSize
	   which should	equal the interval in which data is collected. Do not
	   set this option unless you have a very good reason to do so.

       RRARows NumRows
	   The "rrdtool	plugin"	calculates the number of PDPs per CDP based on
	   the StepSize, this setting and a timespan. This plugin creates RRD-
	   files with three times five RRAs, i.	e. five	RRAs with the CFs MIN,
	   AVERAGE, and	MAX. The five RRAs are optimized for graphs covering
	   one hour, one day, one week,	one month, and one year.

	   So for each timespan, it calculates how many	PDPs need to be
	   consolidated	into one CDP by	calculating:
	     number of PDPs = timespan / (stepsize * rrarows)

	   Bottom line is, set this no smaller than the width of your graphs in
	   pixels. The default is 1200.

       RRATimespan Seconds
	   Adds	an RRA-timespan, given in seconds. Use this option multiple
	   times to have more than one RRA. If this option is never used, the
	   built-in default of (3600, 86400, 604800, 2678400, 31622400)	is
	   used.

	   For more information	on how RRA-sizes are calculated	see RRARows
	   above.

       XFF Factor
	   Set the "XFiles Factor". The	default	is 0.1.	If unsure, don't set
	   this	option.	 Factor	must be	in the range "[0.0-1.0)", i.e. between
	   zero	(inclusive) and	one (exclusive).

       CollectStatistics false|true
	   When	set to true, various statistics	about the rrdcached daemon
	   will	be collected, with "rrdcached" as the plugin name. Defaults to
	   false.

	   Statistics are read via rrdcached's socket using the STATS command.
	   See rrdcached(1) for	details.

   Plugin "rrdtool"
       You can use the settings	StepSize, HeartBeat, RRARows, and XFF to fine-
       tune your RRD-files. Please read	rrdcreate(1) if	you encounter problems
       using these settings. If	you don't want to dive into the	depths of
       RRDtool,	you can	safely ignore these settings.

       DataDir Directory
	   Set the directory to	store RRD files	under. By default RRD files
	   are generated beneath the daemon's working directory, i.e. the
	   BaseDir.

       CreateFilesAsync	false|true
	   When enabled, new RRD files are created asynchronously, using a
	   separate thread that runs in the background. This prevents writes
	   from blocking, which is a problem especially when many hundreds of
	   files need to be created at once. However, since the purpose of
	   creating the files asynchronously is not to block until the file is
	   available, values submitted before the file is available will be
	   discarded. When disabled (the default) files are created
	   synchronously, blocking for a short while, while the file is being
	   written.

       StepSize	Seconds
	   Force the stepsize of newly created RRD-files. Ideally (and per
	   default) this setting is unset and the stepsize is set to the
	   interval in which the data is collected. Do not use this option
	   unless you absolutely have to for some reason. Setting this option
	   may cause problems with the "snmp plugin", the "exec	plugin"	or
	   when	the daemon is set up to	receive	data from other	hosts.

       HeartBeat Seconds
	   Force the heartbeat of newly	created	RRD-files. This	setting	should
	   be unset in which case the heartbeat	is set to twice	the StepSize
	   which should	equal the interval in which data is collected. Do not
	   set this option unless you have a very good reason to do so.

       RRARows NumRows
	   The "rrdtool	plugin"	calculates the number of PDPs per CDP based on
	   the StepSize, this setting and a timespan. This plugin creates RRD-
	   files with three times five RRAs, i.e. five RRAs with the CFs MIN,
	   AVERAGE, and	MAX. The five RRAs are optimized for graphs covering
	   one hour, one day, one week,	one month, and one year.

	   So for each timespan, it calculates how many	PDPs need to be
	   consolidated	into one CDP by	calculating:
	     number of PDPs = timespan / (stepsize * rrarows)

	   Bottom line is, set this no smaller than the width of your graphs in
	   pixels. The default is 1200.

       RRATimespan Seconds
	   Adds	an RRA-timespan, given in seconds. Use this option multiple
	   times to have more than one RRA. If this option is never used, the
	   built-in default of (3600, 86400, 604800, 2678400, 31622400)	is
	   used.

	   For more information	on how RRA-sizes are calculated	see RRARows
	   above.

       XFF Factor
	   Set the "XFiles Factor". The	default	is 0.1.	If unsure, don't set
	   this	option.	 Factor	must be	in the range "[0.0-1.0)", i.e. between
	   zero	(inclusive) and	one (exclusive).

       CacheFlush Seconds
	   When	the "rrdtool" plugin uses a cache (by setting CacheTimeout,
	   see below) it writes	all values for a certain RRD-file if the
	   oldest value	is older than (or equal	to) the	number of seconds
	   specified. If some RRD-file is not updated anymore for some reason
	   (the	computer was shut down,	the network is broken, etc.) some
	   values may still be in the cache. If	CacheFlush is set, then	the
	   entire cache	is searched for	entries	older than CacheTimeout
	   seconds and written to disk every Seconds seconds. Since this is
	   kind	of expensive and does nothing under normal circumstances, this
	   value should	not be too small.  900 seconds might be	a good value,
	   though setting this to 7200 seconds doesn't normally	do much	harm
	   either.

       CacheTimeout Seconds
	   If this option is set to a value greater than zero, the "rrdtool
	   plugin" will	save values in a cache,	as described above. Writing
	   multiple values at once reduces IO-operations and thus lessens the
	   load	produced by updating the files.	 The trade off is that the
	   graphs kind of "drag	behind"	and that more memory is	used.

       WritesPerSecond Updates
	   When	collecting many	statistics with	collectd and the "rrdtool"
	   plugin, you will run into serious performance problems. The CacheFlush
	   setting and the internal update queue assert	that collectd
	   continues to	work just fine even under heavy	load, but the system
	   may become very unresponsive	and slow. This is a problem especially
	   if you create graphs	from the RRD files on the same machine,	for
	   example using the "graph.cgi" script	included in the
	   "contrib/collection3/" directory.

	   This	setting	is designed for	very large setups. Setting this	option
	   to a	value between 25 and 80	updates	per second, depending on your
	   hardware, will leave	the server responsive enough to	draw graphs
	   even	while all the cached values are	written	to disk. Flushed
	   values, i. e. values	that are forced	to disk	by the FLUSH command,
	   are not affected by this limit. They are still written as fast as
	   possible, so	that web frontends have	up to date data	when
	   generating graphs.

	   For example:	If you have 100,000 RRD	files and set WritesPerSecond
	   to 30 updates per second, writing all values	to disk	will take
	   approximately 56 minutes. Together with the flushing	ability	that's
	   integrated into "collection3" you'll	end up with a responsive and
	   fast	system,	up to date graphs and basically	a "backup" of your
	   values every	hour.

       RandomTimeout Seconds
	   When	set, the actual	timeout	for each value is chosen randomly
	   between CacheTimeout-RandomTimeout and CacheTimeout+RandomTimeout.
	   The intention is to avoid high load situations that appear when
	   many	values timeout at the same time. This is especially a problem
	   shortly after the daemon starts, because all	values were added to
	   the internal	cache at roughly the same time.
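
       Putting the caching options together, a setup for a larger installation
       might look like this sketch (the values are illustrative and should be
       tuned to the local hardware):

         <Plugin rrdtool>
           DataDir "/var/db/collectd/rrd"
           CacheTimeout 120
           CacheFlush 900
           WritesPerSecond 50
           RandomTimeout 10
         </Plugin>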

   Plugin "sensors"
       The Sensors plugin uses lm_sensors to retrieve sensor-values. This
       means that all the needed modules have to be loaded and lm_sensors has
       to be configured (most likely by editing /etc/sensors.conf). Read
       sensors.conf(5) for details.

       The lm_sensors homepage can be found at
       <http://secure.netroedge.com/~lm78/>.

       SensorConfigFile	File
	   Read	the lm_sensors configuration from File.	When unset
	   (recommended), the library's	default	will be	used.

       Sensor chip-bus-address/type-feature
	   Selects the name of the sensor which	you want to collect or ignore,
	   depending on	the IgnoreSelected below. For example, the option
	   "Sensor it8712-isa-0290/voltage-in1"	will cause collectd to gather
	   data	for the	voltage	sensor in1 of the it8712 on the	isa bus	at the
	   address 0290.

       IgnoreSelected true|false
	   If no configuration is given, the sensors plugin will collect data
	   from all sensors. This may not be practical, especially for
	   uninteresting sensors. Thus, you can use the Sensor option to pick
	   the sensors you're interested in. Sometimes, however, it's
	   easier/preferred to collect all sensors except a few. This
	   option enables you to do that: By setting IgnoreSelected to true
	   the effect of Sensor	is inverted: All selected sensors are ignored
	   and all other sensors are collected.

       UseLabels true|false
	   Configures how sensor readings are reported.	When set to true,
	   sensor readings are reported	using their descriptive	label (e.g.
	   "VCore"). When set to false (the default) the sensor	name is	used
	   ("in0").

   Plugin "sigrok"
       The sigrok plugin uses libsigrok	to retrieve measurements from any
       device supported	by the sigrok <http://sigrok.org/> project.

       Synopsis

	<Plugin	sigrok>
	  LogLevel 3
	  <Device "AC Voltage">
	     Driver "fluke-dmm"
	     MinimumInterval 10
	     Conn "/dev/ttyUSB2"
	  </Device>
	  <Device "Sound Level">
	     Driver "cem-dt-885x"
	     Conn "/dev/ttyUSB1"
	  </Device>
	</Plugin>

       LogLevel	0-5
	   The sigrok logging level to pass on to the collectd log, as a
	   number between 0 and	5 (inclusive). These levels correspond to
	   "None", "Errors", "Warnings", "Informational", "Debug "and "Spew",
	   respectively.  The default is 2 ("Warnings"). The sigrok log
	   messages, regardless	of their level,	are always submitted to
	   collectd at its INFO	log level.

       <Device Name>
	   A sigrok-supported device, uniquely identified by this section's
	   options. The	Name is	passed to collectd as the plugin instance.

       Driver DriverName
	   The sigrok driver to	use for	this device.

       Conn ConnectionSpec
	   If the device cannot	be auto-discovered, or more than one might be
	   discovered by the driver, ConnectionSpec specifies the connection
	   string to the device.  It can be of the form	of a device path
	   (e.g. "/dev/ttyUSB2"), or, in case of a non-serial USB-connected
	   device, the USB VendorID.ProductID separated	by a period
	   (e.g. 0403.6001). A USB device can also be specified	as Bus.Address
	   (e.g. 1.41).

       SerialComm SerialSpec
	   For serial devices with non-standard	port settings, this option can
	   be used to specify them in a	form understood	by sigrok,
	   e.g.	"9600/8n1".  This should not be	necessary; drivers know	how to
	   communicate with devices they support.

       MinimumInterval Seconds
	   Specifies the minimum time between measurement dispatches to
	   collectd, in	seconds. Since some sigrok supported devices can
	   acquire measurements	many times per second, it may be necessary to
	   throttle these. For example,	the RRD	plugin cannot process writes
	   more	than once per second.

	   The default MinimumInterval is 0, meaning measurements received
	   from	the device are always dispatched to collectd. When throttled,
	   unused measurements are discarded.

   Plugin "smart"
       The "smart" plugin collects SMART information from physical disks.
       Values collected include temperature, power cycle count, power-on time
       and bad sectors.	Also, all SMART	attributes are collected along with
       the normalized current value, the worst value, the threshold and	a
       human readable value.

       Using the following two options you can ignore some disks or configure
       the collection only of specific disks.

       Disk Name
	   Select the disk Name. Whether it is collected or ignored depends on
	   the IgnoreSelected setting, see below. As with other	plugins	that
	   use the daemon's ignorelist functionality, a	string that starts and
	   ends	with a slash is	interpreted as a regular expression. Examples:

	     Disk "sdd"
	     Disk "/hda[34]/"

       IgnoreSelected true|false
           Sets whether selected disks, i. e. the ones matched by any of the
           Disk statements, are ignored or if all other disks are ignored. The
           behavior (hopefully) is intuitive: If no Disk option is configured,
           all disks are collected. If at least one Disk option is given and
           IgnoreSelected is not given or set to false, only matching disks
           will be collected. If IgnoreSelected is set to true, all disks are
           collected except the ones matched.

       IgnoreSleepMode true|false
	   Normally, the "smart" plugin	will ignore disks that are reported to
	   be asleep.  This option disables the	sleep mode check and allows
	   the plugin to collect data from these disks anyway. This is useful
	   in cases where libatasmart mistakenly reports disks as asleep
	   because it has not been updated to incorporate support for newer
	   idle	states in the ATA spec.

       UseSerial true|false
	   A disk's kernel name	(e.g., sda) can	change from one	boot to	the
	   next. If this option	is enabled, the	"smart"	plugin will use	the
	   disk's serial number	(e.g., HGST_HUH728080ALE600_2EJ8VH8X) instead
	   of the kernel name as the key for storing data. This	ensures	that
	   the data for	a given	disk will be kept together even	if the kernel
	   name	changes.
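
       For example, a configuration for the "smart" plugin using the options
       described above might look like this (the disk names are the examples
       given above, not recommendations):

        <Plugin smart>
          Disk "sdd"
          Disk "/hda[34]/"
          IgnoreSelected false
          IgnoreSleepMode false
          UseSerial true
        </Plugin>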

   Plugin "snmp"
       Since the configuration of the "snmp plugin" is a little more
       complicated than that of other plugins, its documentation has been
       moved to its own manual page, collectd-snmp(5). Please see there for
       details.

   Plugin "statsd"
       The statsd plugin listens to a UDP socket, reads	"events" in the	statsd
       protocol	and dispatches rates or	other aggregates of these numbers
       periodically.

       The plugin implements the Counter, Timer, Gauge and Set types which are
       dispatched as the collectd types	"derive", "latency", "gauge" and
       "objects" respectively.

       The following configuration options are valid:

       Host Host
	   Bind	to the hostname	/ address Host.	By default, the	plugin will
	   bind	to the "any" address, i.e. accept packets sent to any of the
	   hosts addresses.

       Port Port
	   UDP port to listen to. This can be either a service name or a port
	   number.  Defaults to	8125.

       DeleteCounters false|true
       DeleteTimers false|true
       DeleteGauges false|true
       DeleteSets false|true
	   These options control what happens if metrics are not updated in an
	   interval.  If set to	False, the default, metrics are	dispatched
	   unchanged, i.e. the rate of counters	and size of sets will be zero,
	   timers report "NaN" and gauges are unchanged. If set	to True, the
	   such	metrics	are not	dispatched and removed from the	internal
	   cache.

       CounterSum false|true
	   When	enabled, creates a "count" metric which	reports	the change
	   since the last read.	This option primarily exists for compatibility
	   with	the statsd implementation by Etsy.

       TimerPercentile Percent
	   Calculate and dispatch the configured percentile, i.e. compute the
	   latency, so that Percent of all reported timers are smaller than or
	   equal to the	computed latency. This is useful for cutting off the
	   long	tail latency, as it's often done in Service Level Agreements
	   (SLAs).

	   Different percentiles can be	calculated by setting this option
	   several times.  If none are specified, no percentiles are
	   calculated /	dispatched.

       TimerLower false|true
       TimerUpper false|true
       TimerSum	false|true
       TimerCount false|true
	   Calculate and dispatch various values out of	Timer metrics received
	   during an interval. If set to False,	the default, these values
	   aren't calculated / dispatched.
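
       A short example configuration using the options above; the percentile
       shown is illustrative, the port is the documented default:

        <Plugin statsd>
          Port "8125"
          DeleteSets true
          TimerPercentile 90.0
          TimerSum true
        </Plugin>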

   Plugin "swap"
       The Swap	plugin collects	information about used and available swap
       space. On Linux and Solaris, the	following options are available:

       ReportByDevice false|true
	   Configures how to report physical swap devices. If set to false
	   (the	default), the summary over all swap devices is reported	only,
	   i.e.	the globally used and available	space over all devices.	If
	   true	is configured, the used	and available space of each device
	   will	be reported separately.

	   This	option is only available if the	Swap plugin can	read
	   "/proc/swaps" (under	Linux) or use the swapctl(2) mechanism (under
	   Solaris).

       ReportBytes false|true
	   When	enabled, the swap I/O is reported in bytes. When disabled, the
	   default, swap I/O is	reported in pages. This	option is available
	   under Linux only.

       ValuesAbsolute true|false
	   Enables or disables reporting of absolute swap metrics, i.e.	number
	   of bytes available and used.	Defaults to true.

       ValuesPercentage	false|true
	   Enables or disables reporting of relative swap metrics, i.e.
	   percent available and free. Defaults	to false.

	   This	is useful for deploying	collectd in a heterogeneous
	   environment,	where swap sizes differ	and you	want to	specify
	   generic thresholds or similar.
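
       An example configuration using the options above (all values shown are
       the documented defaults except ReportBytes):

        <Plugin swap>
          ReportByDevice false
          ReportBytes true
          ValuesAbsolute true
          ValuesPercentage false
        </Plugin>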

   Plugin "syslog"
       LogLevel	debug|info|notice|warning|err
	   Sets	the log-level. If, for example,	set to notice, then all	events
	   with	severity notice, warning, or err will be submitted to the
	   syslog-daemon.

	   Please note that debug is only available if collectd	has been
	   compiled with debugging support.

       NotifyLevel OKAY|WARNING|FAILURE
	   Controls which notifications	should be sent to syslog. The default
	   behaviour is	not to send any. Less severe notifications always
	   imply logging more severe notifications: Setting this to OKAY means
	   all notifications will be sent to syslog, setting this to WARNING
	   will	send WARNING and FAILURE notifications but will	dismiss	OKAY
	   notifications. Setting this option to FAILURE will only send
	   failures to syslog.
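
       A small example configuration; the levels shown are one possible
       choice, not defaults:

        <Plugin syslog>
          LogLevel info
          NotifyLevel WARNING
        </Plugin>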

   Plugin "table"
       The "table plugin" provides generic means to parse tabular data and
       dispatch	user specified values. Values are selected based on column
       numbers.	For example, this plugin may be	used to	get values from	the
       Linux proc(5) filesystem	or CSV (comma separated	values)	files.

	 <Plugin table>
	   <Table "/proc/slabinfo">
	     Instance "slabinfo"
	     Separator " "
	     <Result>
	       Type gauge
	       InstancePrefix "active_objs"
	       InstancesFrom 0
	       ValuesFrom 1
	     </Result>
	     <Result>
	       Type gauge
	       InstancePrefix "objperslab"
	       InstancesFrom 0
	       ValuesFrom 4
	     </Result>
	   </Table>
	 </Plugin>

       The configuration consists of one or more Table blocks, each of which
       configures one file to parse. Within each Table block, there are	one or
       more Result blocks, which configure which data to select	and how	to
       interpret it.

       The following options are available inside a Table block:

       Instance	instance
	   If specified, instance is used as the plugin	instance. So, in the
	   above example, the plugin name "table-slabinfo" would be used. If
	   omitted, the	filename of the	table is used instead, with all
	   special characters replaced with an underscore ("_").

       Separator string
	   Any character of string is interpreted as a delimiter between the
	   different columns of	the table. A sequence of two or	more
	   contiguous delimiters in the	table is considered to be a single
	   delimiter, i. e. there cannot be any	empty columns. The plugin uses
	   the strtok_r(3) function to parse the lines of a table - see	its
	   documentation for more details. This	option is mandatory.

	   A horizontal	tab, newline and carriage return may be	specified by
	   "\\t", "\\n"	and "\\r" respectively.	Please note that the double
	   backslashes are required because of collectd's config parsing.

       The following options are available inside a Result block:

       Type type
	   Sets	the type used to dispatch the values to	the daemon. Detailed
	   information about types and their configuration can be found	in
	   types.db(5).	This option is mandatory.

       InstancePrefix prefix
	   If specified, prepend prefix	to the type instance. If omitted, only
	   the InstancesFrom option is considered for the type instance.

       InstancesFrom column0 [column1 ...]
	   If specified, the content of	the given columns (identified by the
	   column number starting at zero) will	be used	to create the type
	   instance for	each row. Multiple values (and the instance prefix)
	   will	be joined together with	dashes (-) as separation character. If
	   omitted, only the InstancePrefix option is considered for the type
	   instance.

	   The plugin itself does not check whether or not all built instances
           are different. It's your responsibility to ensure that each is
	   unique. This	is especially true, if you do not specify
	   InstancesFrom: You have to make sure	that the table only contains
	   one row.

	   If neither InstancePrefix nor InstancesFrom is given, the type
	   instance will be empty.

       ValuesFrom column0 [column1 ...]
	   Specifies the columns (identified by	the column numbers starting at
	   zero) whose content is used as the actual data for the data sets
	   that	are dispatched to the daemon. How many such columns you	need
	   is determined by the	Type setting above. If you specify too many or
	   not enough columns, the plugin will complain	about that and no data
	   will	be submitted to	the daemon. The	plugin uses strtoll(3) and
	   strtod(3) to	parse counter and gauge	values respectively, so
	   anything supported by those functions is supported by the plugin as
	   well. This option is	mandatory.

   Plugin "tail"
       The "tail plugin" follows logfiles, just	like tail(1) does, parses each
       line and	dispatches found values. What is matched can be	configured by
       the user	using (extended) regular expressions, as described in
       regex(7).

	 <Plugin "tail">
	   <File "/var/log/exim4/mainlog">
	     Instance "exim"
	     Interval 60
	     <Match>
	       Regex "S=([1-9][0-9]*)"
	       DSType "CounterAdd"
	       Type "ipt_bytes"
	       Instance	"total"
	     </Match>
	     <Match>
	       Regex "\\<R=local_user\\>"
	       ExcludeRegex "\\<R=local_user\\>.*mail_spool defer"
	       DSType "CounterInc"
	       Type "counter"
	       Instance	"local_user"
	     </Match>
	     <Match>
	       Regex "l=([0-9]*\\.[0-9]*)"
	       <DSType "Distribution">
		 Percentile 99
		 Bucket	0 100
	       </DSType>
	       Type "latency"
	       Instance	"foo"
	     </Match>
	   </File>
	 </Plugin>

       The config consists of one or more File blocks, each of which
       configures one logfile to parse.	Within each File block,	there are one
       or more Match blocks, which configure a regular expression to search
       for.

       The Instance option in the File block may be used to set	the plugin
       instance. So in the above example the plugin name "tail-exim" would be
       used.  This plugin instance is for all Match blocks that	follow it,
       until the next Instance option. This way	you can	extract	several	plugin
       instances from one logfile, handy when parsing syslog and the like.

       The Interval option allows you to define	the length of time between
       reads. If this is not set, the default Interval will be used.

       Each Match block	has the	following options to describe how the match
       should be performed:

       Regex regex
	   Sets	the regular expression to use for matching against a line. The
	   first subexpression has to match something that can be turned into
           a number by strtoll(3) or strtod(3), depending on the value of the
           DSType option, see below. Because extended regular expressions are
	   used, you do	not need to use	backslashes for	subexpressions!	If in
	   doubt, please consult regex(7). Due to collectd's config parsing
	   you need to escape backslashes, though. So if you want to match
	   literal parentheses you need	to do the following:

	     Regex "SPAM \\(Score: (-?[0-9]+\\.[0-9]+)\\)"

       ExcludeRegex regex
	   Sets	an optional regular expression to use for excluding lines from
	   the match.  An example which	excludes all connections from
	   localhost from the match:

	     ExcludeRegex "127\\.0\\.0\\.1"

       DSType Type
	   Sets	how the	values are cumulated. Type is one of:

	   GaugeAverage
	       Calculate the average.

	   GaugeMin
	       Use the smallest	number only.

	   GaugeMax
	       Use the greatest	number only.

	   GaugeLast
	       Use the last number found.

	   GaugePersist
	       Use the last number found. The number is	not reset at the end
               of an interval.  It is continuously reported until another
	       number is matched. This is intended for cases in	which only
	       state changes are reported, for example a thermometer that only
	       reports the temperature when it changes.

	   CounterSet
	   DeriveSet
	   AbsoluteSet
	       The matched number is a counter.	Simply sets the	internal
	       counter to this value. Variants exist for "COUNTER", "DERIVE",
	       and "ABSOLUTE" data sources.

	   GaugeAdd
	   CounterAdd
	   DeriveAdd
	       Add the matched value to	the internal counter. In case of
	       DeriveAdd, the matched number may be negative, which will
	       effectively subtract from the internal counter.

	   GaugeInc
	   CounterInc
	   DeriveInc
	       Increase	the internal counter by	one. These DSType are the only
	       ones that do not	use the	matched	subexpression, but simply
	       count the number	of matched lines. Thus,	you may	use a regular
	       expression without submatch in this case.

	   Distribution
	       Type to do calculations based on	the distribution of values,
	       primarily calculating percentiles. This is primarily geared
	       towards latency,	but can	be used	for other metrics as well. The
	       range of	values tracked with this setting must be in the	range
               (0, 2^34) and can be fractional. Please note that neither zero
	       nor 2^34	are inclusive bounds, i.e. zero	cannot be handled by a
	       distribution.

	       This option must	be used	together with the Percentile and/or
	       Bucket options.

	       Synopsis:

		 <DSType "Distribution">
		   Percentile 99
		   Bucket 0 100
		 </DSType>

	       Percentile Percent
		   Calculate and dispatch the configured percentile, i.e.
		   compute the value, so that Percent of all matched values
		   are smaller than or equal to	the computed latency.

		   Metrics are reported	with the type Type (the	value of the
		   above option) and the type instance
		   "[<Instance>-]<Percent>".

		   This	option may be repeated to calculate more than one
		   percentile.

	       Bucket lower_bound upper_bound
		   Export the number of	values (a "DERIVE") falling within the
		   given range.	Both, lower_bound and upper_bound may be a
		   fractional number, such as 0.5.  Each Bucket	option
		   specifies an	interval "(lower_bound,	upper_bound]", i.e.
		   the range excludes the lower	bound and includes the upper
		   bound. lower_bound and upper_bound may be zero, meaning no
		   lower/upper bound.

                   To export the entire (0, inf) range without overlap, use the
		   upper bound of the previous range as	the lower bound	of the
		   following range. In other words, use	the following schema:

		     Bucket   0	  1
		     Bucket   1	  2
		     Bucket   2	  5
		     Bucket   5	 10
		     Bucket  10	 20
		     Bucket  20	 50
		     Bucket  50	  0

		   Metrics are reported	with the type "bucket" and the type
		   instance "<Type>[-<Instance>]-<lower_bound>_<upper_bound>".

		   This	option may be repeated to calculate more than one
		   rate.

	   The Gauge* and Distribution types interpret the submatch as a
	   floating point number, using	strtod(3). The Counter*	and
	   AbsoluteSet types interpret the submatch as an unsigned integer
	   using strtoull(3). The Derive* types	interpret the submatch as a
	   signed integer using	strtoll(3). CounterInc and DeriveInc do	not
	   use the submatch at all and it may be omitted in this case.

       Type Type
	   Sets	the type used to dispatch this value. Detailed information
	   about types and their configuration can be found in types.db(5).

       Instance	TypeInstance
	   This	optional setting sets the type instance	to use.

   Plugin "tail_csv"
       The tail_csv plugin reads files in the CSV format, e.g. the statistics
       file written by Snort.

       Synopsis:

	<Plugin	"tail_csv">
	  <Metric "snort-dropped">
	      Type "percent"
	      Instance "dropped"
              ValueFrom 1
	  </Metric>
	  <File	"/var/log/snort/snort.stats">
	      Instance "snort-eth0"
	      Interval 600
	      Collect "snort-dropped"
	  </File>
	</Plugin>

       The configuration consists of one or more Metric	blocks that define an
       index into the line of the CSV file and how this	value is mapped	to
       collectd's internal representation. These are followed by one or more
       File blocks which configure which file to read, in which interval and
       which metrics to extract.

       <Metric Name>
	   The Metric block configures a new metric to be extracted from the
	   statistics file and how it is mapped	on collectd's data model. The
           string Name is only used inside the File blocks to refer to
	   this	block, so you can use one Metric block for multiple CSV	files.

	   Type	Type
	       Configures which	Type to	use when dispatching this metric.
	       Types are defined in the	types.db(5) file, see the appropriate
	       manual page for more information	on specifying types. Only
	       types with a single data	source are supported by	the tail_csv
               plugin. Whether the value is an absolute value (i.e. a "GAUGE")
               or a rate (i.e. a "DERIVE") is taken from the Type's
               definition.

	   Instance TypeInstance
	       If set, TypeInstance is used to populate	the type instance
	       field of	the created value lists. Otherwise, no type instance
	       is used.

	   ValueFrom Index
               Configures the plugin to read the value from the field with the
               zero-based index Index. Whether the value is parsed as a signed
               integer, an unsigned integer or a double depends on the Type
               setting, see above.

       <File Path>
	   Each	File block represents one CSV file to read. There must be at
	   least one File block	but there can be multiple if you have multiple
	   CSV files.

	   Instance PluginInstance
	       Sets the	plugin instance	used when dispatching the values.

	   Collect Metric
	       Specifies which Metric to collect. This option must be
	       specified at least once,	and you	can use	this option multiple
	       times to	specify	more than one metric to	be extracted from this
	       statistic file.

	   Interval Seconds
	       Configures the interval in which	to read	values from this
	       instance	/ file.	 Defaults to the plugin's default interval.

	   TimeFrom Index
	       Rather than using the local time	when dispatching a value, read
	       the timestamp from the field with the zero-based	index Index.
	       The value is interpreted	as seconds since epoch.	The value is
               parsed as a double and may be fractional.

   Plugin "teamspeak2"
       The "teamspeak2 plugin" connects	to the query port of a teamspeak2
       server and polls	interesting global and virtual server data. The	plugin
       can query only one physical server but unlimited	virtual	servers. You
       can use the following options to	configure it:

       Host hostname/ip
           The hostname or IP address which identifies the physical server.
           Default: 127.0.0.1

       Port port
	   The query port of the physical server. This needs to	be a string.
	   Default: "51234"

       Server port
	   This	option has to be added once for	every virtual server the
	   plugin should query.	If you want to query the virtual server	on
	   port	8767 this is what the option would look	like:

	     Server "8767"

	   This	option,	although numeric, needs	to be a	string,	i. e. you must
	   use quotes around it! If no such statement is given only global
	   information will be collected.
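
       A configuration querying one virtual server might look like this; the
       host and query port are the documented defaults and the virtual server
       port is the example used above:

        <Plugin teamspeak2>
          Host "127.0.0.1"
          Port "51234"
          Server "8767"
        </Plugin>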

   Plugin "ted"
       The TED plugin connects to a device of "The Energy Detective", a	device
       to measure power	consumption. These devices are usually connected to a
       serial (RS232) or USB port. The plugin opens a configured device	and
       tries to	read the current energy	readings. For more information on TED,
       visit <http://www.theenergydetective.com/>.

       Available configuration options:

       Device Path
	   Path	to the device on which TED is connected. collectd will need
	   read	and write permissions on that file.

	   Default: /dev/ttyUSB0

       Retries Num
           Apparently reading from TED is not that reliable. You can therefore
           configure a number of retries here. Note that this configures only
           the retries: if you specify zero, one reading will be performed
           (but no retries if that fails); if you specify three, a maximum of
           four readings are performed. Negative values are illegal.

	   Default: 0
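
       An example configuration using the documented defaults:

        <Plugin ted>
          Device "/dev/ttyUSB0"
          Retries 0
        </Plugin>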

   Plugin "tcpconns"
       The "tcpconns plugin" counts the	number of currently established	TCP
       connections based on the	local port and/or the remote port. Since there
       may be a lot of connections, the default is to count all connections
       with a local port, for which a listening	socket is opened. You can use
       the following options to	fine-tune the ports you	are interested in:

       ListeningPorts true|false
	   If this option is set to true, statistics for all local ports for
	   which a listening socket exists are collected. The default depends
	   on LocalPort	and RemotePort (see below): If no port at all is
	   specifically	selected, the default is to collect listening ports.
	   If specific ports (no matter	if local or remote ports) are
	   selected, this option defaults to false, i. e. only the selected
	   ports will be collected unless this option is set to	true
	   specifically.

       LocalPort Port
	   Count the connections to a specific local port. This	can be used to
	   see how many	connections are	handled	by a specific daemon, e. g.
	   the mailserver.  You	have to	specify	the port in numeric form, so
	   for the mailserver example you'd need to set	25.

       RemotePort Port
	   Count the connections to a specific remote port. This is useful to
	   see how much	a remote service is used. This is most useful if you
	   want	to know	how many connections a local service has opened	to
	   remote services, e. g. how many connections a mail server or	news
	   server has to other mail or news servers, or	how many connections a
	   web proxy holds to web servers. You have to give the	port in
	   numeric form.

       AllPortsSummary true|false
           If this option is set to true, a summary of statistics from all
           connections is collected. This option defaults to false.
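
       For example, to count only connections to and from the mailserver port
       described above (the port values and quoting shown are illustrative):

        <Plugin tcpconns>
          ListeningPorts false
          LocalPort "25"
          RemotePort "25"
          AllPortsSummary false
        </Plugin>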

   Plugin "thermal"
       ForceUseProcfs true|false
	   By default, the Thermal plugin tries	to read	the statistics from
	   the Linux "sysfs" interface.	If that	is not available, the plugin
	   falls back to the "procfs" interface. By setting this option	to
	   true, you can force the plugin to use the latter. This option
	   defaults to false.

       Device Device
	   Selects the name of the thermal device that you want	to collect or
	   ignore, depending on	the value of the IgnoreSelected	option.	This
	   option may be used multiple times to	specify	a list of devices.

       IgnoreSelected true|false
	   Invert the selection: If set	to true, all devices except the	ones
	   that	match the device names specified by the	Device option are
	   collected. By default only selected devices are collected if	a
	   selection is	made. If no selection is configured at all, all
	   devices are selected.
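
       A short example configuration; the device name shown is only an
       illustration:

        <Plugin thermal>
          ForceUseProcfs false
          Device "thermal_zone0"
          IgnoreSelected false
        </Plugin>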

   Plugin "threshold"
       The Threshold plugin checks values collected or received	by collectd
       against a configurable threshold	and issues notifications if values are
       out of bounds.

       Documentation for this plugin is	available in the collectd-threshold(5)
       manual page.

   Plugin "tokyotyrant"
       The TokyoTyrant plugin connects to a TokyoTyrant server and collects a
       couple of metrics: the number of records and the database size on
       disk.

       Host Hostname/IP
	   The hostname	or IP which identifies the server.  Default: 127.0.0.1

       Port Service/Port
	   The query port of the server. This needs to be a string, even if
	   the port is given in	its numeric form.  Default: 1978
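
       An example configuration using the documented defaults:

        <Plugin tokyotyrant>
          Host "127.0.0.1"
          Port "1978"
        </Plugin>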

   Plugin "turbostat"
       The Turbostat plugin reads CPU frequency	and C-state residency on
       modern Intel processors by using	Model Specific Registers.

       CoreCstates Bitmask(Integer)
	   Bit mask of the list	of core	C-states supported by the processor.
	   This	option should only be used if the automated detection fails.
	   Default value extracted from	the CPU	model and family.

	   Currently supported C-states	(by this plugin): 3, 6,	7

	   Example:

	     All states	(3, 6 and 7):
	     (1<<3) + (1<<6) + (1<<7) =	392

       PackageCstates Bitmask(Integer)
           Bit mask of the list of package C-states supported by the
	   processor. This option should only be used if the automated
	   detection fails. Default value extracted from the CPU model and
	   family.

	   Currently supported C-states	(by this plugin): 2, 3,	6, 7, 8, 9, 10

	   Example:

	     States 2, 3, 6 and	7:
	     (1<<2) + (1<<3) + (1<<6) +	(1<<7) = 396

       SystemManagementInterrupt true|false
	   Boolean enabling the	collection of the I/O System-Management
	   Interrupt counter.  This option should only be used if the
	   automated detection fails or	if you want to disable this feature.

       DigitalTemperatureSensor	true|false
	   Boolean enabling the	collection of the temperature of each core.
	   This	option should only be used if the automated detection fails or
	   if you want to disable this feature.

       TCCActivationTemp Temperature
	   Thermal Control Circuit Activation Temperature of the installed
	   CPU.	This temperature is used when collecting the temperature of
	   cores or packages. This option should only be used if the automated
	   detection fails. Default value extracted from
	   MSR_IA32_TEMPERATURE_TARGET.

       RunningAveragePowerLimit	Bitmask(Integer)
	   Bit mask of the list	of elements to be thermally monitored. This
	   option should only be used if the automated detection fails or if
	   you want to disable some collections. The different bits of this
	   bit mask accepted by	this plugin are:

	   0 ('1'): Package
	   1 ('2'): DRAM
	   2 ('4'): Cores
           3 ('8'): Embedded graphic device

       LogicalCoreNames	true|false
	   Boolean enabling the	use of logical core numbering for per core
	   statistics.	When enabled, "cpu<n>" is used as plugin instance,
	   where n is a	sequential number assigned by the kernel. Otherwise,
	   "core<n>" is	used where n is	the n-th core of the socket, causing
	   name	conflicts when there is	more than one socket.
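
       An example configuration overriding the automated detection; the
       values are purely illustrative. The C-state bit masks are the ones
       computed in the examples above, and the RunningAveragePowerLimit value
       7 selects Package, DRAM and Cores ((1<<0) + (1<<1) + (1<<2) = 7):

        <Plugin turbostat>
          CoreCstates 392
          PackageCstates 396
          SystemManagementInterrupt true
          DigitalTemperatureSensor true
          TCCActivationTemp 100
          RunningAveragePowerLimit 7
          LogicalCoreNames true
        </Plugin>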

   Plugin "unixsock"
       SocketFile Path
	   Sets	the socket-file	which is to be created.

       SocketGroup Group
	   If running as root change the group of the UNIX-socket after	it has
	   been	created. Defaults to collectd.

       SocketPerms Permissions
	   Change the file permissions of the UNIX-socket after	it has been
	   created. The	permissions must be given as a numeric,	octal value as
	   you would pass to chmod(1). Defaults	to 0770.

       DeleteSocket false|true
	   If set to true, delete the socket file before calling bind(2), if a
	   file	with the given name already exists. If collectd	crashes	a
	   socket file may be left over, preventing the	daemon from opening a
	   new socket when restarted.  Since this is potentially dangerous,
	   this	defaults to false.
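
       An example configuration; the socket path is only an illustration and
       the quoting of the permission value may vary:

        <Plugin unixsock>
          SocketFile "/var/run/collectd-unixsock"
          SocketGroup "collectd"
          SocketPerms "0770"
          DeleteSocket false
        </Plugin>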

   Plugin "uuid"
       This plugin, if loaded, causes the Hostname to be taken from the
       machine's UUID. The UUID	is a universally unique	designation for	the
       machine,	usually	taken from the machine's BIOS. This is most useful if
       the machine is running in a virtual environment such as Xen, in which
       case the	UUID is	preserved across shutdowns and migration.

       The following methods are used to find the machine's UUID, in order:

       o   Check /etc/uuid (or UUIDFile).

       o   Check for UUID from HAL
	   (<http://www.freedesktop.org/wiki/Software/hal>) if present.

       o   Check for UUID from "dmidecode" / SMBIOS.

       o   Check for UUID from Xen hypervisor.

       If no UUID can be found then the	hostname is not	modified.

       UUIDFile	Path
	   Take	the UUID from the given	file (default /etc/uuid).
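
       An example configuration using the documented default file:

        <Plugin uuid>
          UUIDFile "/etc/uuid"
        </Plugin>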

   Plugin "varnish"
       The varnish plugin collects information about Varnish, an HTTP
       accelerator.  It	collects a subset of the values	displayed by
       varnishstat(1), and organizes them in categories	which can be enabled
       or disabled. Currently only metrics shown in varnishstat(1)'s MAIN
       section are collected. The exact	meaning	of each	metric can be found in
       varnish-counters(7).

       Synopsis:

	<Plugin	"varnish">
	  <Instance "example">
	    CollectBackend     true
	    CollectBan	       false
	    CollectCache       true
	    CollectConnections true
	    CollectDirectorDNS false
	    CollectESI	       false
	    CollectFetch       false
	    CollectHCB	       false
	    CollectObjects     false
	    CollectPurge       false
	    CollectSession     false
	    CollectSHM	       true
	    CollectSMA	       false
	    CollectSMS	       false
	    CollectSM	       false
	    CollectStruct      false
	    CollectTotals      false
	    CollectUptime      false
	    CollectVCL	       false
	    CollectVSM	       false
	    CollectWorkers     false
	  </Instance>
	</Plugin>

       The configuration consists of one or more <Instance Name> blocks. Name
       is the parameter	passed to "varnishd -n". If left empty,	it will
       collect statistics from the default "varnishd" instance (this should
       work fine in most cases).

       Inside each <Instance> block, the following options are recognized:

       CollectBackend true|false
	   Back-end connection statistics, such	as successful, reused, and
	   closed connections. True by default.

       CollectBan true|false
	   Statistics about ban	operations, such as number of bans added,
	   retired, and	number of objects tested against ban operations. Only
	   available with Varnish 3.x and above. False by default.

       CollectCache true|false
	   Cache hits and misses. True by default.

       CollectConnections true|false
	   Number of client connections	received, accepted and dropped.	True
	   by default.

       CollectDirectorDNS true|false
	   DNS director	lookup cache statistics. Only available	with Varnish
	   3.x.	False by default.

       CollectESI true|false
	   Edge	Side Includes (ESI) parse statistics. False by default.

       CollectFetch true|false
	   Statistics about fetches (HTTP requests sent	to the backend). False
	   by default.

       CollectHCB true|false
	   Inserts and look-ups	in the crit bit	tree based hash. Look-ups are
	   divided into	locked and unlocked look-ups. False by default.

       CollectObjects true|false
	   Statistics on cached	objects: number	of objects expired, nuked
	   (prematurely	expired), saved, moved,	etc. False by default.

       CollectPurge true|false
	   Statistics about purge operations, such as number of	purges added,
	   retired, and	number of objects tested against purge operations.
	   Only	available with Varnish 2.x. False by default.

       CollectSession true|false
	   Client session statistics. Number of	past and current sessions,
	   session herd	and linger counters, etc. False	by default. Note that
	   if using Varnish 4.x, some metrics found in the Connections and
	   Threads sections with previous versions of Varnish have been	moved
	   here.

       CollectSHM true|false
	   Statistics about the	shared memory log, a memory region to store
	   log messages	which is flushed to disk when full. True by default.

       CollectSMA true|false
	   malloc or umem (umem_alloc(3MALLOC) based) storage statistics. The
	   umem	storage	component is Solaris specific. Only available with
	   Varnish 2.x.	False by default.

       CollectSMS true|false
	   synth (synthetic content) storage statistics. This storage
	   component is	used internally	only. False by default.

       CollectSM true|false
	   file	(memory	mapped file) storage statistics. Only available	with
	   Varnish 2.x.	 False by default.

       CollectStruct true|false
	   Current varnish internal state statistics. Number of	current
	   sessions, objects in	cache store, open connections to backends
	   (with Varnish 2.x), etc. False by default.

       CollectTotals true|false
	   Collects overview counters, such as the number of sessions created,
	   the number of requests and bytes transferred. False by default.

       CollectUptime true|false
	   Varnish uptime. Only	available with Varnish 3.x and above. False by
	   default.

       CollectVCL true|false
	   Number of total (available +	discarded) VCL (config files). False
	   by default.

       CollectVSM true|false
	   Collect statistics about Varnish's shared memory usage (used	by the
	   logging and statistics subsystems). Only available with Varnish
	   4.x.	False by default.

       CollectWorkers true|false
	   Collect statistics about worker threads. False by default.

   Plugin "virt"
       This plugin allows CPU, disk and	network	load to	be collected for
       virtualized guests on the machine. This means that these	metrics	can be
       collected for guest systems without installing any software on them -
       collectd	only runs on the host system. The statistics are collected
       through libvirt (<http://libvirt.org/>).

       Only Connection is required.

       Connection uri
	   Connect to the hypervisor given by uri. For example if using	Xen
	   use:

	    Connection "xen:///"

           Details on which URIs are allowed are given at
	   <http://libvirt.org/uri.html>.

       RefreshInterval seconds
	   Refresh the list of domains and devices every seconds. The default
	   is 60 seconds. Setting this to be the same or smaller than the
	   Interval will cause the list	of domains and devices to be refreshed
	   on every iteration.

	   Refreshing the devices in particular	is quite a costly operation,
	   so if your virtualization setup is static you might consider
	   increasing this. If this option is set to 0,	refreshing is disabled
	   completely.

       Domain name
       BlockDevice name:dev
       InterfaceDevice name:dev
       IgnoreSelected true|false
	   Select which	domains	and devices are	collected.

	   If IgnoreSelected is	not given or false then	only the listed
	   domains and disk/network devices are	collected.

	   If IgnoreSelected is	true then the test is reversed and the listed
	   domains and disk/network devices are	ignored, while the rest	are
	   collected.

	   The domain name and device names may	use a regular expression, if
	   the name is surrounded by /.../ and collectd	was compiled with
	   support for regexps.

	   The default is to collect statistics	for all	domains	and all	their
	   devices.

	   Example:

	    BlockDevice	"/:hdb/"
	    IgnoreSelected "true"

           This ignores all hdb devices on any domain; other block devices
           (e.g. hda) will still be collected.

       BlockDeviceFormat target|source
	   If BlockDeviceFormat	is set to target, the default, then the	device
	   name	seen by	the guest will be used for reporting metrics.  This
	   corresponds to the "<target>" node in the XML definition of the
	   domain.

	   If BlockDeviceFormat	is set to source, then metrics will be
	   reported using the path of the source, e.g. an image	file.  This
	   corresponds to the "<source>" node in the XML definition of the
	   domain.

	   Example:

           If the domain XML has the following device defined:

	     <disk type='block'	device='disk'>
	       <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
	       <source dev='/var/db/libvirt/images/image1.qcow2'/>
	       <target dev='sda' bus='scsi'/>
	       <boot order='2'/>
	       <address	type='drive' controller='0' bus='0' target='0' unit='0'/>
	     </disk>

	   Setting "BlockDeviceFormat target" will cause the type instance to
	   be set to "sda".  Setting "BlockDeviceFormat	source"	will cause the
           type instance to be set to "var_db_libvirt_images_image1.qcow2".

       BlockDeviceFormatBasename false|true
	   The BlockDeviceFormatBasename controls whether the full path	or the
	   basename(1) of the source is	being used as the type instance	when
	   BlockDeviceFormat is	set to source. Defaults	to false.

	   Example:

	   Assume the device path (source tag) is
	   "/var/db/libvirt/images/image1.qcow2".  Setting
	   "BlockDeviceFormatBasename false" will cause	the type instance to
	   be set to "var_lib_libvirt_images_image1.qcow2".  Setting
	   "BlockDeviceFormatBasename true" will cause the type	instance to be
	   set to "image1.qcow2".

       HostnameFormat name|uuid|hostname|...
	   When	the virt plugin	logs data, it sets the hostname	of the
	   collected data according to this setting. The default is to use the
	   guest name as provided by the hypervisor, which is equal to setting
	   name.

	   uuid	means use the guest's UUID. This is useful if you want to
	   track the same guest	across migrations.

	   hostname means to use the global Hostname setting, which is
	   probably not	useful on its own because all guests will appear to
	   have	the same name.

	   You can also	specify	combinations of	these fields. For example name
	   uuid	means to concatenate the guest name and	UUID (with a literal
	   colon character between, thus "foo:1234-1234-1234-1234").

           At the time of writing (collectd 5.5), the hostname string is
           limited to 62 characters. If the combination of fields exceeds 62
           characters, the hostname will be truncated without a warning.

       InterfaceFormat name|address
	   When	the virt plugin	logs interface data, it	sets the name of the
	   collected data according to this setting. The default is to use the
	   path	as provided by the hypervisor (the "dev" property of the
	   target node), which is equal	to setting name.

           address means use the interface's MAC address. This is useful since
	   the interface path might change between reboots of a	guest or
	   across migrations.

       PluginInstanceFormat name|uuid|none
	   When	the virt plugin	logs data, it sets the plugin_instance of the
	   collected data according to this setting. The default is to not set
	   the plugin_instance.

	   name	means use the guest's name as provided by the hypervisor.
	   uuid	means use the guest's UUID.

	   You can also	specify	combinations of	the name and uuid fields.  For
	   example name	uuid means to concatenate the guest name and UUID
	   (with a literal colon character between, thus
	   "foo:1234-1234-1234-1234").

   Plugin "vmem"
       The "vmem" plugin collects information about the	usage of virtual
       memory.	Since the statistics provided by the Linux kernel are very
       detailed, they are collected very detailed. However, to get all the
       details,	you have to switch them	on manually. Most people just want an
       overview	over, such as the number of pages read from swap space.

       Verbose true|false
	   Enables verbose collection of information. This will	start
	   collecting page "actions", e. g. page allocations, (de)activations,
	   steals and so on.  Part of these statistics are collected on	a "per
	   zone" basis.

   Plugin "vserver"
       This plugin doesn't have	any options. VServer support is	only available
       for Linux. It cannot yet	be found in a vanilla kernel, though. To make
       use of this plugin you need a kernel that has VServer support built in,
       i. e. you need to apply the patches and compile your own	kernel,	which
       will then provide the /proc/virtual filesystem that is required by this
       plugin.

       The VServer homepage can	be found at <http://linux-vserver.org/>.

       Note: The traffic collected by this plugin accounts for the amount of
       traffic passing a socket	which might be a lot less than the actual on-
       wire traffic (e.	g. due to headers and retransmission). If you want to
       collect on-wire traffic you could, for example, use the logging
       facilities of iptables to feed data for the guest IPs into the iptables
       plugin.

   Plugin "write_graphite"
       The "write_graphite" plugin writes data to Graphite, an open-source
       metrics storage and graphing project. The plugin	connects to Carbon,
       the data	layer of Graphite, via TCP or UDP and sends data via the "line
       based" protocol (per default using port 2003). The data will be sent in
       blocks of at most 1428 bytes to minimize	the number of network packets.

       Synopsis:

	<Plugin	write_graphite>
	  <Node	"example">
	    Host "localhost"
	    Port "2003"
	    Protocol "tcp"
	    LogSendErrors true
	    Prefix "collectd"
	  </Node>
	</Plugin>

       The configuration consists of one or more <Node Name> blocks. Inside
       the Node	blocks,	the following options are recognized:

       Host Address
	   Hostname or address to connect to. Defaults to "localhost".

       Port Service
	   Service name	or port	number to connect to. Defaults to 2003.

       Protocol	String
	   Protocol to use when	connecting to Graphite.	Defaults to "tcp".

       ReconnectInterval Seconds
	   When	set to non-zero, forces	the connection to the Graphite backend
           to be closed and re-opened periodically. This behavior is desirable
	   in environments where the connection	to the Graphite	backend	is
	   done	through	load balancers,	for example. When set to zero, the
           default, the connection is kept open for as long as possible.

       LogSendErrors false|true
	   If set to true (the default), logs errors when sending data to
	   Graphite.  If set to	false, it will not log the errors. This	is
	   especially useful when using	Protocol UDP since many	times we want
	   to use the "fire-and-forget"	approach and logging errors fills
	   syslog with unneeded	messages.

       Prefix String
	   When	set, String is added in	front of the host name.	Dots and
	   whitespace are not escaped in this string (see EscapeCharacter
	   below).

       Postfix String
	   When	set, String is appended	to the host name. Dots and whitespace
	   are not escaped in this string (see EscapeCharacter below).

       EscapeCharacter Char
	   Carbon uses the dot (".") as	escape character and doesn't allow
	   whitespace in the identifier. The EscapeCharacter option determines
	   which character dots, whitespace and	control	characters are
	   replaced with. Defaults to underscore ("_").

       StoreRates false|true
	   If set to true (the default), convert counter values	to rates. If
	   set to false	counter	values are stored as is, i. e. as an
	   increasing integer number.

       SeparateInstances false|true
	   If set to true, the plugin instance and type	instance will be in
	   their own path component, for example "host.cpu.0.cpu.idle".	If set
	   to false (the default), the plugin and plugin instance (and
	   likewise the	type and type instance)	are put	into one component,
	   for example "host.cpu-0.cpu-idle".

       AlwaysAppendDS false|true
	   If set to true, append the name of the Data Source (DS) to the
	   "metric" identifier.	If set to false	(the default), this is only
	   done	when there is more than	one DS.

       PreserveSeparator false|true
	   If set to false (the	default) the "." (dot) character is replaced
	   with	EscapeCharacter. Otherwise, if set to true, the	"." (dot)
	   character is	preserved, i.e.	passed through.

       DropDuplicateFields false|true
	   If set to true, detect and remove duplicate components in Graphite
	   metric names. For example, the metric name
	   "host.load.load.shortterm" will be shortened	to
	   "host.load.shortterm".

   Plugin "write_log"
       The "write_log" plugin writes metrics as	INFO log messages.

       This plugin supports two	output formats:	Graphite and JSON.

       Synopsis:

	<Plugin	write_log>
	  Format Graphite
	</Plugin>

       Format Format
	   The output format to	use. Can be one	of "Graphite" or "JSON".

   Plugin "write_tsdb"
       The "write_tsdb"	plugin writes data to OpenTSDB,	a scalable open-source
       time series database. The plugin connects to a TSD, a masterless
       daemon with no shared state that ingests metrics and stores them in
       HBase. The
       plugin uses TCP over the	"line based" protocol with a default port
       4242. The data will be sent in blocks of	at most	1428 bytes to minimize
       the number of network packets.

       Synopsis:

	<Plugin	write_tsdb>
	  <Node	"example">
	    Host "tsd-1.my.domain"
	    Port "4242"
	    HostTags "status=production"
	  </Node>
	</Plugin>

       The configuration consists of one or more <Node Name> blocks. Inside
       the Node	blocks,	the following options are recognized:

       Host Address
	   Hostname or address to connect to. Defaults to "localhost".

       Port Service
	   Service name	or port	number to connect to. Defaults to 4242.

       HostTags	String
	   When	set, HostTags is added to the end of the metric. It is
	   intended to be used for name=value pairs that the TSD will tag the
	   metric with.	Dots and whitespace are	not escaped in this string.

       StoreRates false|true
	   If set to true, convert counter values to rates. If set to false
	   (the	default) counter values	are stored as is, as an	increasing
	   integer number.

       AlwaysAppendDS false|true
           If set to true, append the name of the Data Source (DS) to the
	   "metric" identifier.	If set to false	(the default), this is only
	   done	when there is more than	one DS.

   Plugin "write_mongodb"
       The write_mongodb plugin	will send values to MongoDB, a schema-less
       NoSQL database.

       Synopsis:

	<Plugin	"write_mongodb">
	  <Node	"default">
	    Host "localhost"
	    Port "27017"
	    Timeout 1000
	    StoreRates true
	  </Node>
	</Plugin>

       The plugin can send values to multiple instances	of MongoDB by
       specifying one Node block for each instance. Within the Node blocks,
       the following options are available:

       Host Address
	   Hostname or address to connect to. Defaults to "localhost".

       Port Service
	   Service name	or port	number to connect to. Defaults to 27017.

       Timeout Milliseconds
	   Set the timeout for each operation on MongoDB to Timeout
	   milliseconds.  Setting this option to zero means no timeout,	which
	   is the default.

       StoreRates false|true
	   If set to true (the default), convert counter values	to rates. If
	   set to false	counter	values are stored as is, i.e. as an increasing
	   integer number.

       Database	Database
       User User
       Password	Password
	   Sets	the information	used when authenticating to a MongoDB
	   database. The fields	are optional (in which case no authentication
	   is attempted), but if you want to use authentication	all three
	   fields must be set.

   Plugin "write_prometheus"
       The write_prometheus plugin implements a	tiny webserver that can	be
       scraped using Prometheus.

       Options:

       Port Port
	   Port	the embedded webserver should listen on. Defaults to 9103.

       StalenessDelta Seconds
	   Time	in seconds after which Prometheus considers a metric "stale"
	   if it hasn't	seen any update	for it.	This value must	match the
	   setting in Prometheus.  It defaults to 300 seconds (5 minutes),
	   same	as Prometheus.

	   Background:

	   Prometheus has a global setting, "StalenessDelta", which controls
	   after which time a metric without updates is	considered "stale".
	   This	setting	effectively puts an upper limit	on the interval	in
	   which metrics are reported.

	   When	the write_prometheus plugin encounters a metric	with an
	   interval exceeding this limit, it will inform you, the user,	and
	   provide the metric to Prometheus without a timestamp. That causes
	   Prometheus to consider the metric "fresh" each time it is scraped,
	   with	the time of the	scrape being considered	the time of the
           update. The result is that more data points appear in Prometheus
           than were actually created, but at least the metric
	   doesn't disappear periodically.
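
       An example configuration using the documented defaults:

        <Plugin write_prometheus>
          Port "9103"
          StalenessDelta 300
        </Plugin>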

   Plugin "write_http"
       This output plugin submits values to an HTTP server using POST requests
       and encoding metrics with JSON or using the "PUTVAL" command described
       in collectd-unixsock(5).

       Synopsis:

	<Plugin	"write_http">
	  <Node	"example">
	    URL	"http://example.com/post-collectd"
	    User "collectd"
	    Password "weCh3ik0"
	    Format JSON
	  </Node>
	</Plugin>

       The plugin can send values to multiple HTTP servers by specifying one
       <Node Name> block for each server. Within each Node block, the
       following options are available:

       URL URL
           URL to which the values are submitted. Mandatory.

       User Username
	   Optional user name needed for authentication.

       Password	Password
	   Optional password needed for	authentication.

       VerifyPeer true|false
	   Enable or disable peer SSL certificate verification.	See
	   <http://curl.haxx.se/docs/sslcerts.html> for	details. Enabled by
	   default.

       VerifyHost true|false
	   Enable or disable peer host name verification. If enabled, the
	   plugin checks if the	"Common	Name" or a "Subject Alternate Name"
	   field of the	SSL certificate	matches	the host name provided by the
	   URL option. If this identity	check fails, the connection is
           aborted. Obviously, this only works when connecting to an SSL enabled
	   server. Enabled by default.

       CACert File
	   File	that holds one or more SSL certificates. If you	want to	use
	   HTTPS you will possibly need	this option. What CA certificates come
	   bundled with	"libcurl" and are checked by default depends on	the
	   distribution	you use.

       CAPath Directory
	   Directory holding one or more CA certificate	files. You can use
	   this	if for some reason all the needed CA certificates aren't in
	   the same file and can't be pointed to using the CACert option.
	   Requires "libcurl" to be built against OpenSSL.

       ClientKey File
	   File	that holds the private key in PEM format to be used for
	   certificate-based authentication.

       ClientCert File
	   File	that holds the SSL certificate to be used for certificate-
	   based authentication.

       ClientKeyPass Password
	   Password required to	load the private key in	ClientKey.

       Header Header
           An HTTP header to add to the request.  Multiple headers are added if
	   this	option is specified more than once.  Example:

	     Header "X-Custom-Header: custom_value"

       SSLVersion SSLv2|SSLv3|TLSv1|TLSv1_0|TLSv1_1|TLSv1_2
	   Define which	SSL protocol version must be used. By default
	   "libcurl" will attempt to figure out	the remote SSL protocol
	   version. See	curl_easy_setopt(3) for	more details.

       Format Command|JSON|KAIROSDB
	   Format of the output	to generate. If	set to Command,	will create
	   output that is understood by	the Exec and UnixSock plugins. When
	   set to JSON,	will create output in the JavaScript Object Notation
           (JSON). When set to KAIROSDB, will create output in the KairosDB
	   format.

	   Defaults to Command.

       Metrics true|false
	   Controls whether metrics are	POSTed to this location. Defaults to
	   true.

       Notifications false|true
	   Controls whether notifications are POSTed to	this location.
	   Defaults to false.

       StoreRates true|false
	   If set to true, convert counter values to rates. If set to false
	   (the	default) counter values	are stored as is, i.e. as an
	   increasing integer number.

       BufferSize Bytes
	   Sets	the send buffer	size to	Bytes. By increasing this buffer, less
	   HTTP	requests will be generated, but	more metrics will be batched /
	   metrics are cached for longer before	being sent, introducing
	   additional delay until they are available on	the server side. Bytes
	   must	be at least 1024 and cannot exceed the size of an "int", i.e.
	   2 GByte.  Defaults to 4096.

       LowSpeedLimit Bytes per Second
	   Sets	the minimal transfer rate in Bytes per Second below which the
	   connection with the HTTP server will	be considered too slow and
	   aborted. All	the data submitted over	this connection	will probably
	   be lost. Defaults to	0, which means no minimum transfer rate	is
	   enforced.

       Timeout Timeout
	   Sets	the maximum time in milliseconds given for HTTP	POST
	   operations to complete. When	this limit is reached, the POST
	   operation will be aborted, and all the data in the current send
	   buffer will probably	be lost. Defaults to 0,	which means the
	   connection never times out.

       LogHttpError false|true
	   Enables printing of HTTP error code to log. Turned off by default.

	   The "write_http" plugin regularly submits the collected values to
	   the HTTP server. How	frequently this	happens	depends	on how much
	   data	you are	collecting and the size	of BufferSize. The optimal
	   value to set	Timeout	to is slightly below this interval, which you
	   can estimate	by monitoring the network traffic between collectd and
	   the HTTP server.

   Plugin "write_kafka"
       The write_kafka plugin will send	values to a Kafka topic, a distributed
       queue.  Synopsis:

	<Plugin	"write_kafka">
	  Property "metadata.broker.list" "broker1:9092,broker2:9092"
	  <Topic "collectd">
	    Format JSON
	  </Topic>
	</Plugin>

       The following options are understood by the write_kafka plugin:

       <Topic Name>
	   The plugin's	configuration consists of one or more Topic blocks.
	   Each	block is given a unique	Name and specifies one kafka producer.
	   Inside the Topic block, the following per-topic options are
	   understood:

	   Property String String
	       Configure the named property for	the current topic. Properties
	       are forwarded to	the kafka producer library librdkafka.

	   Key String
	       Use the specified string	as a partitioning key for the topic.
               Kafka breaks topics into partitions and guarantees that for a
	       given topology, the same	consumer will be used for a specific
	       key. The	special	(case insensitive) string Random can be	used
	       to specify that an arbitrary partition should be	used.

	   Format Command|JSON|Graphite
	       Selects the format in which messages are	sent to	the broker. If
	       set to Command (the default), values are	sent as	"PUTVAL"
	       commands	which are identical to the syntax used by the Exec and
	       UnixSock	plugins.

	       If set to JSON, the values are encoded in the JavaScript	Object
               Notation, an easy and straightforward exchange format.

	       If set to Graphite, values are encoded in the Graphite format,
	       which is	"<metric> <value> <timestamp>\n".

	   StoreRates true|false
	       Determines whether or not "COUNTER", "DERIVE" and "ABSOLUTE"
	       data sources are	converted to a rate (i.e. a "GAUGE" value). If
	       set to false (the default), no conversion is performed.
	       Otherwise the conversion	is performed using the internal	value
	       cache.

	       Please note that	currently this option is only used if the
	       Format option has been set to JSON.

	   GraphitePrefix (Format=Graphite only)
	       A prefix	can be added in	the metric name	when outputting	in the
	       Graphite	format.	It's added before the Host name.  Metric name
	       will be "<prefix><host><postfix><plugin><type><name>"

	   GraphitePostfix (Format=Graphite only)
	       A postfix can be	added in the metric name when outputting in
	       the Graphite format. It's added after the Host name.  Metric
	       name will be "<prefix><host><postfix><plugin><type><name>"

	   GraphiteEscapeChar (Format=Graphite only)
	       Specify a character to replace the dots (.) in the host part
	       of the metric name. In Graphite metric names, dots are used as
	       separators between the different metric parts (host, plugin,
	       type). Defaults to "_" (underscore).

	   GraphiteSeparateInstances false|true
	       If set to true, the plugin instance and type instance will be
	       in their	own path component, for	example	"host.cpu.0.cpu.idle".
	       If set to false (the default), the plugin and plugin instance
	       (and likewise the type and type instance) are put into one
	       component, for example "host.cpu-0.cpu-idle".

	   GraphiteAlwaysAppendDS true|false
	       If set to true, append the name of the Data Source (DS) to the
	       "metric"	identifier. If set to false (the default), this	is
	       only done when there is more than one DS.

	   GraphitePreserveSeparator false|true
	       If set to false (the default) the "." (dot) character is
	       replaced	with GraphiteEscapeChar. Otherwise, if set to true,
	       the "." (dot) character is preserved, i.e. passed through.

       Property	String String
	   Configure the Kafka producer through properties. You will almost
	   always want to set metadata.broker.list to your Kafka broker list.
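
       A slightly fuller sketch than the synopsis above, sending
       Graphite-formatted values (the broker address, topic name and prefix
       are placeholders, not defaults):

	<Plugin "write_kafka">
	  Property "metadata.broker.list" "kafka.example.com:9092"
	  <Topic "collectd-graphite">
	    Format Graphite
	    GraphitePrefix "collectd."
	    GraphiteSeparateInstances true
	    Key "Random"
	  </Topic>
	</Plugin>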

   Plugin "write_redis"
       The write_redis plugin submits values to	Redis, a data structure
       server.

       Synopsis:

	 <Plugin "write_redis">
	   <Node "example">
	       Host "localhost"
	       Port "6379"
	       Timeout 1000
	       Prefix "collectd/"
	       Database	1
	       MaxSetSize -1
	       StoreRates true
	   </Node>
	 </Plugin>

       Values are submitted to Sorted Sets, using the metric name as the key,
       and the timestamp as the	score. Retrieving a date range can then	be
       done using the "ZRANGEBYSCORE" Redis command. Additionally, all the
       identifiers of these Sorted Sets	are kept in a Set called
       "collectd/values" (or "${prefix}/values"	if the Prefix option was
       specified) and can be retrieved using the "SMEMBERS" Redis command. You
       can specify the database	to use with the	Database parameter (default is
       0). See <http://redis.io/commands#sorted_set> and
       <http://redis.io/commands#set> for details.

       The information shown in	the synopsis above is the default
       configuration which is used by the plugin if no configuration is
       present.

       The plugin can send values to multiple instances	of Redis by specifying
       one Node	block for each instance. Within	the Node blocks, the following
       options are available:

       Node Nodename
	   The Node block identifies a new Redis node, that is, a new Redis
	   instance running on a specified host and port. The node name is a
	   canonical identifier which is used as the plugin instance. It is
	   limited to 51 characters in length.

       Host Hostname
	   The Host option is the hostname or IP address on which the Redis
	   instance is running.

       Port Port
	   The Port option is the TCP port on which the	Redis instance accepts
	   connections. Either a service name or a port number may be given.
	   Please note that numerical port numbers must	be given as a string,
	   too.

       Timeout Milliseconds
	   The Timeout option sets the socket connection timeout, in
	   milliseconds.

       Prefix Prefix
	   Prefix used when constructing the name of the Sorted	Sets and the
	   Set containing all metrics. Defaults	to "collectd/",	so metrics
	   will	have names like	"collectd/cpu-0/cpu-user". When	setting	this
	   to something	different, it is recommended but not required to
	   include a trailing slash in Prefix.

       Database	Index
	   This index selects the Redis database to use for writing
	   operations. Defaults	to 0.

       MaxSetSize Items
	   The MaxSetSize option limits	the number of items that the Sorted
	   Sets can hold. A negative value for Items sets no limit, which is
	   the default behavior.

       StoreRates true|false
	   If set to true (the default), convert counter values	to rates. If
	   set to false	counter	values are stored as is, i.e. as an increasing
	   integer number.
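
       For example, a sketch of a node with a custom prefix and a capped
       Sorted Set size (the hostname and the limit of 8640 items, roughly one
       day of values at a 10 second interval, are placeholders):

	 <Plugin "write_redis">
	   <Node "capped">
	       Host "redis.example.com"
	       Port "6379"
	       Prefix "metrics/"
	       MaxSetSize 8640
	       StoreRates true
	   </Node>
	 </Plugin>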

   Plugin "write_riemann"
       The write_riemann plugin	will send values to Riemann, a powerful	stream
       aggregation and monitoring system. The plugin sends Protobuf-encoded
       data to Riemann over UDP, TCP or TLS (see the Protocol option below).

       Synopsis:

	<Plugin	"write_riemann">
	  <Node	"example">
	    Host "localhost"
	    Port "5555"
	    Protocol UDP
	    StoreRates true
	    AlwaysAppendDS false
	    TTLFactor 2.0
	  </Node>
	  Tag "foobar"
	  Attribute "foo" "bar"
	</Plugin>

       The following options are understood by the write_riemann plugin:

       <Node Name>
	   The plugin's	configuration consists of one or more Node blocks.
	   Each	block is given a unique	Name and specifies one connection to
	   an instance of Riemann. Inside the Node block, the following per-
	   connection options are understood:

	   Host	Address
	       Hostname	or address to connect to. Defaults to "localhost".

	   Port	Service
	       Service name or port number to connect to. Defaults to 5555.

	   Protocol UDP|TCP|TLS
	       Specify the protocol to use when	communicating with Riemann.
	       Defaults	to TCP.

	   TLSCertFile Path
	       When using the TLS protocol, path to a PEM certificate to
	       present to remote host.

	   TLSCAFile Path
	       When using the TLS protocol, path to a PEM CA certificate to
	       use to validate the remote host's identity.

	   TLSKeyFile Path
	       When using the TLS protocol, path to a PEM private key
	       associated with the certificate defined by TLSCertFile.
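
	       For example, a sketch of a Node using TLS (the hostname, port
	       and file paths are placeholders):

	        <Node "secure">
	          Host "riemann.example.com"
	          Port "5554"
	          Protocol TLS
	          TLSCertFile "/etc/collectd/riemann-client.pem"
	          TLSKeyFile "/etc/collectd/riemann-client.key"
	          TLSCAFile "/etc/collectd/riemann-ca.pem"
	        </Node>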

	   Batch true|false
	       If set to true and Protocol is set to TCP, events will be
	       batched in memory and flushed at	regular	intervals or when
	       BatchMaxSize is exceeded.

	       Notifications are not batched and sent as soon as possible.

	       When enabled, events may be processed by the Riemann server
	       close to or after their expiration time. If this is an issue,
	       tune the TTLFactor and BatchMaxSize settings according to the
	       number of values collected.

	       Defaults to true.

	   BatchMaxSize	size
	       Maximum payload size for a Riemann packet. Defaults to 8192.

	   BatchFlushTimeout seconds
	       Maximum number of seconds to wait between two batch flushes.
	       No timeout by default.
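
	       For example, a sketch of a TCP Node with batching that flushes
	       at least every 10 seconds (the hostname and values are
	       placeholders):

	        <Node "batched">
	          Host "riemann.example.com"
	          Protocol TCP
	          Batch true
	          BatchMaxSize 8192
	          BatchFlushTimeout 10
	        </Node>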

	   StoreRates true|false
	       If set to true (the default), convert counter values to rates.
	       If set to false counter values are stored as is,	i.e. as	an
	       increasing integer number.

	       This will be reflected in the "ds_type" tag: If StoreRates is
	       enabled,	converted values will have "rate" appended to the data
	       source type, e.g.  "ds_type:derive:rate".

	   AlwaysAppendDS false|true
	       If set to true, append the name of the Data Source (DS) to the
	       "service", i.e. the field that, together	with the "host"	field,
	       uniquely	identifies a metric in Riemann.	If set to false	(the
	       default), this is only done when	there is more than one DS.

	   TTLFactor Factor
	       Riemann events have a Time to Live (TTL)	which specifies	how
	       long each event is considered active. collectd populates	this
	       field based on the metrics interval setting. This setting
	       controls	the factor with	which the interval is multiplied to
	       set the TTL. The	default	value is 2.0. Unless you know exactly
	       what you're doing, you should only increase this	setting	from
	       its default value.

	   Notifications false|true
	       If set to true, create Riemann events for notifications. This
	       is true by default. When	processing thresholds from
	       write_riemann, it might prove useful to avoid getting
	       notification events.

	   CheckThresholds false|true
	       If set to true, attach state to events based on thresholds
	       defined in the Threshold	plugin.	Defaults to false.

	   EventServicePrefix String
	       Add the given string as a prefix	to the event service name.  If
	       EventServicePrefix is not set or set to an empty string (""), no
	       prefix will be used.

       Tag String
	   Add the given string	as an additional tag to	the metric being sent
	   to Riemann.

       Attribute String	String
	   Consider the	two given strings to be	the key	and value of an
	   additional attribute	for each metric	being sent out to Riemann.

   Plugin "write_sensu"
       The write_sensu plugin will send	values to Sensu, a powerful stream
       aggregation and monitoring system. The plugin sends JSON	encoded	data
       to a local Sensu	client using a TCP socket.

       At the moment, the write_sensu plugin does not send over	a
       collectd_host parameter so it is	not possible to	use one	collectd
       instance	as a gateway for others. Each collectd host must pair with one
       Sensu client.

       Synopsis:

	<Plugin	"write_sensu">
	  <Node	"example">
	    Host "localhost"
	    Port "3030"
	    StoreRates true
	    AlwaysAppendDS false
	    MetricHandler "influx"
	    MetricHandler "default"
	    NotificationHandler	"flapjack"
	    NotificationHandler	"howling_monkey"
	    Notifications true
	  </Node>
	  Tag "foobar"
	  Attribute "foo" "bar"
	</Plugin>

       The following options are understood by the write_sensu plugin:

       <Node Name>
	   The plugin's	configuration consists of one or more Node blocks.
	   Each	block is given a unique	Name and specifies one connection to
	   an instance of Sensu. Inside	the Node block,	the following per-
	   connection options are understood:

	   Host	Address
	       Hostname	or address to connect to. Defaults to "localhost".

	   Port	Service
	       Service name or port number to connect to. Defaults to 3030.

	   StoreRates true|false
	       If set to true (the default), convert counter values to rates.
	       If set to false counter values are stored as is,	i.e. as	an
	       increasing integer number.

	       This will be reflected in the "collectd_data_source_type" tag:
	       If StoreRates is	enabled, converted values will have "rate"
	       appended	to the data source type, e.g.
	       "collectd_data_source_type:derive:rate".

	   AlwaysAppendDS false|true
	       If set to true, append the name of the Data Source (DS) to the
	       "service", i.e. the field that, together	with the "host"	field,
	       uniquely	identifies a metric in Sensu. If set to	false (the
	       default), this is only done when	there is more than one DS.

	   Notifications false|true
	       If set to true, create Sensu events for notifications. This is
	       false by	default. At least one of Notifications or Metrics
	       should be enabled.

	   Metrics false|true
	       If set to true, create Sensu events for metrics.	This is	false
	       by default. At least one	of Notifications or Metrics should be
	       enabled.

	   Separator String
	       Sets the separator used when constructing Sensu metric and
	       check names. Defaults to "/".

	   MetricHandler String
	       Add a handler that will be set when metrics are sent to Sensu.
	       You can add several of them, one	per line. Defaults to no
	       handler.

	   NotificationHandler String
	       Add a handler that will be set when notifications are sent to
	       Sensu. You can add several of them, one per line. Defaults to
	       no handler.

	   EventServicePrefix String
	       Add the given string as a prefix	to the event service name.  If
	       EventServicePrefix is not set or set to an empty string (""), no
	       prefix will be used.

       Tag String
	   Add the given string	as an additional tag to	the metric being sent
	   to Sensu.

       Attribute String	String
	   Consider the	two given strings to be	the key	and value of an
	   additional attribute	for each metric	being sent out to Sensu.

   Plugin "xencpu"
       This plugin collects hardware CPU load metrics for machines running
       the Xen hypervisor. Load is calculated from the 'idle time' value
       provided by Xen. The result is reported using the "percent" type, for
       each CPU (core).

       This plugin doesn't have	any options (yet).

   Plugin "zookeeper"
       The zookeeper plugin will collect statistics from a Zookeeper server
       using the mntr command.	It requires Zookeeper 3.4.0+ and access	to the
       client port.

       Synopsis:

	<Plugin	"zookeeper">
	  Host "127.0.0.1"
	  Port "2181"
	</Plugin>

       Host Address
	   Hostname or address to connect to. Defaults to "localhost".

       Port Service
	   Service name	or port	number to connect to. Defaults to 2181.

THRESHOLD CONFIGURATION
       Starting	with version 4.3.0 collectd has	support	for monitoring.	By
       that we mean that the values are	not only stored	or sent	somewhere, but
       that they are judged and, if a problem is recognized, acted upon. The
       only action collectd takes itself is to generate	and dispatch a
       "notification". Plugins can register to receive notifications and
       perform appropriate further actions.

       Since systems and what you expect them to do differ a lot, you can
       configure thresholds for	your values freely. This gives you a lot of
       flexibility but also a lot of responsibility.

       Every time a value is out of range a notification is dispatched.	This
       means that the idle percentage of your CPU needs to be less than the
       configured threshold only once for a notification to be generated.
       There's no such thing as	a moving average or similar - at least not
       now.

       Also, all values	that match a threshold are considered to be relevant
       or "interesting". As a consequence collectd will	issue a	notification
       if they are not received	for Timeout iterations.	The Timeout
       configuration option is explained in section "GLOBAL OPTIONS". If, for
       example, Timeout is set to "2" (the default) and a host sends its
       CPU statistics to the server every 60 seconds, a	notification will be
       dispatched after	about 120 seconds. It may take a little	longer because
       the timeout is checked only once	each Interval on the server.

       When a value comes within range again or	is received after it was
       missing,	an "OKAY-notification" is dispatched.

       Here is a configuration example to get you started. Read	below for more
       information.

	<Plugin	threshold>
	  <Type	"foo">
	    WarningMin	  0.00
	    WarningMax 1000.00
	    FailureMin	  0.00
	    FailureMax 1200.00
	    Invert false
	    Instance "bar"
	  </Type>

	  <Plugin "interface">
	    Instance "eth0"
	    <Type "if_octets">
	      FailureMax 10000000
	      DataSource "rx"
	    </Type>
	  </Plugin>

	  <Host	"hostname">
	    <Type "cpu">
	      Instance "idle"
	      FailureMin 10
	    </Type>

	    <Plugin "memory">
	      <Type "memory">
		Instance "cached"
		WarningMin 100000000
	      </Type>
	    </Plugin>
	  </Host>
	</Plugin>

       There are basically two types of configuration statements: The "Host",
       "Plugin", and "Type" blocks select the value for which a threshold
       should be configured. The "Plugin" and "Type" blocks may be specified
       further using the "Instance" option. You can combine these blocks by
       nesting them, though they must be nested in the above order, i.e.
       "Host" may contain "Plugin" and "Type" blocks, "Plugin" may only
       contain "Type" blocks, and "Type" may not contain other blocks. If
       multiple blocks apply to the same value, the most specific block is
       used.

       The other statements specify the	threshold to configure.	They must be
       included	in a "Type" block. Currently the following statements are
       recognized:

       FailureMax Value
       WarningMax Value
	   Sets	the upper bound	of acceptable values. If unset defaults	to
	   positive infinity. If a value is greater than FailureMax a FAILURE
	   notification	will be	created. If the	value is greater than
	   WarningMax but less than (or	equal to) FailureMax a WARNING
	   notification	will be	created.

       FailureMin Value
       WarningMin Value
	   Sets	the lower bound	of acceptable values. If unset defaults	to
	   negative infinity. If a value is less than FailureMin a FAILURE
	   notification	will be	created. If the	value is less than WarningMin
	   but greater than (or	equal to) FailureMin a WARNING notification
	   will	be created.

       DataSource DSName
	   Some	data sets have more than one "data source". Interesting
	   examples are	the "if_octets"	data set, which	has received ("rx")
	   and sent ("tx") bytes and the "disk_ops" data set, which holds
	   "read" and "write" operations. The system load data set, "load",
	   even	has three data sources:	"shortterm", "midterm",	and
	   "longterm".

	   Normally, all data sources are checked against a configured
	   threshold. If this is undesirable, or if you	want to	specify
	   different limits for	each data source, you can use the DataSource
	   option to have a threshold apply only to one	data source.

       Invert true|false
	   If set to true the range of acceptable values is inverted, i. e.
	   values between FailureMin and FailureMax (WarningMin	and
	   WarningMax) are not okay. Defaults to false.
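
	   For example, a sketch that warns when a value falls inside the
	   band from 50 to 100 (the type, instance and limits are
	   placeholders):

	     <Type "foo">
	       Instance "bar"
	       WarningMin  50.0
	       WarningMax 100.0
	       Invert true
	     </Type>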

       Persist true|false
	   Sets	how often notifications	are generated. If set to true one
	   notification	will be	generated for each value that is out of	the
	   acceptable range. If	set to false (the default) then	a notification
	   is only generated if	a value	is out of range	but the	previous value
	   was okay.

	   This	applies	to missing values, too:	If set to true a notification
	   about a missing value is generated once every Interval seconds. If
	   set to false	only one such notification is generated	until the
	   value appears again.

       Percentage true|false
	   If set to true, the minimum and maximum values given	are
	   interpreted as a percentage, relative to the other data
	   sources. This is helpful for	example	for the	"df" type, where you
	   may want to issue a warning when less than 5	% of the total space
	   is available. Defaults to false.
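
	   For example, a sketch (assuming the legacy "df" type with its
	   "used" and "free" data sources) that warns when less than 5 % of
	   the total space is free:

	     <Type "df">
	       DataSource "free"
	       WarningMin 5
	       Percentage true
	     </Type>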

       Hits Number
	   Delay creating the notification until the threshold has been	passed
	   Number times. When a	notification has been generated, or when a
	   subsequent value is inside the threshold, the counter is reset. If,
	   for example,	a value	is collected once every	10 seconds and Hits is
	   set to 3, a notification will be dispatched at most once every
	   30 seconds.

	   This	is useful when short bursts are	not a problem. If, for
	   example, 100% CPU usage for up to a minute is normal	(and data is
	   collected every 10 seconds),	you could set Hits to 6	to account for
	   this.
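
	   For example, a sketch using the "load" data set (the limit of 8.0
	   is a placeholder) that only notifies after six consecutive
	   out-of-range values, i.e. after roughly one minute at a 10 second
	   interval:

	     <Type "load">
	       DataSource "shortterm"
	       WarningMax 8.0
	       Hits 6
	     </Type>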

       Hysteresis Number
	   When	set to non-zero, a hysteresis value is applied when checking
	   minimum and maximum bounds. This is useful for values that increase
	   slowly and fluctuate	a bit while doing so. When these values	come
	   close to the	threshold, they	may "flap", i.e. switch	between
	   failure / warning case and okay case	repeatedly.

	   If, for example, the threshold is configured as

	     WarningMax	100.0
	     Hysteresis	1.0

	   then	a Warning notification is created when the value exceeds 101
	   and the corresponding Okay notification is only created once	the
	   value falls below 99, thus avoiding the "flapping".

FILTER CONFIGURATION
       Starting	with collectd 4.6 there	is a powerful filtering	infrastructure
       implemented in the daemon. The concept has mostly been copied from
       ip_tables, the packet filter infrastructure for Linux. We'll use	a
       similar terminology, so that users that are familiar with iptables feel
       right at	home.

   Terminology
       The following are the terms used	in the remainder of the	filter
       configuration documentation. For	an ASCII-art schema of the mechanism,
       see "General structure" below.

       Match
	   A match is a criterion used to select specific values. Examples
	   are, of course, the name of the value or its current value.

	   Matches are implemented in plugins which you	have to	load prior to
	   using the match. The	name of	such plugins starts with the "match_"
	   prefix.

       Target
	   A target is some action that	is to be performed with	data. Such
	   actions could, for example, be to change part of the	value's
	   identifier or to ignore the value completely.

	   Some	of these targets are built into	the daemon, see	"Built-in
	   targets" below. Other targets are implemented in plugins which you
	   have	to load	prior to using the target. The name of such plugins
	   starts with the "target_" prefix.

       Rule
	   The combination of any number of matches and	at least one target is
	   called a rule. The target actions will be performed for all values
	   for which all matches apply.	If the rule does not have any matches
	   associated with it, the target action will be performed for all
	   values.

       Chain
	   A chain is a	list of	rules and possibly default targets. The	rules
	   are tried in	order and if one matches, the associated target	will
	   be called. If a value is handled by a rule, it depends on the
	   target whether or not any subsequent	rules are considered or	if
	   traversal of	the chain is aborted, see "Flow	control" below.	After
	   all rules have been checked,	the default targets will be executed.

   General structure
       The following shows the resulting structure:

	+---------+
	! Chain	  !
	+---------+
	     !
	     V
	+---------+  +---------+  +---------+  +---------+
	! Rule	  !->! Match   !->! Match   !->! Target	 !
	+---------+  +---------+  +---------+  +---------+
	     !
	     V
	+---------+  +---------+  +---------+
	! Rule	  !->! Target  !->! Target  !
	+---------+  +---------+  +---------+
	     !
	     V
	     :
	     :
	     !
	     V
	+---------+  +---------+  +---------+
	! Rule	  !->! Match   !->! Target  !
	+---------+  +---------+  +---------+
	     !
	     V
	+---------+
	! Default !
	! Target  !
	+---------+

   Flow	control
       There are four ways to control which way	a value	takes through the
       filter mechanism:

       jump
	   The built-in	jump target can	be used	to "call" another chain, i. e.
	   process the value with another chain. When the called chain
	   finishes, usually the next target or	rule after the jump is
	   executed.

       stop
	   The stop condition, signaled	for example by the built-in target
	   stop, causes	all processing of the value to be stopped immediately.

       return
	   Causes processing in	the current chain to be	aborted, but
	   processing of the value generally will continue. This means that if
	   the chain was called	via Jump, the next target or rule after	the
	   jump	will be	executed. If the chain was not called by another
	   chain, control will be returned to the daemon and it	may pass the
	   value to another chain.

       continue
	   Most	targets	will signal the	continue condition, meaning that
	   processing should continue normally.	There is no special built-in
	   target for this condition.

   Synopsis
       The configuration reflects this structure directly:

	PostCacheChain "PostCache"
	<Chain "PostCache">
	  <Rule	"ignore_mysql_show">
	    <Match "regex">
	      Plugin "^mysql$"
	      Type "^mysql_command$"
	      TypeInstance "^show_"
	    </Match>
	    <Target "stop">
	    </Target>
	  </Rule>
	  <Target "write">
	    Plugin "rrdtool"
	  </Target>
	</Chain>

       The above configuration example will ignore all values where the	plugin
       field is	"mysql", the type is "mysql_command" and the type instance
       begins with "show_". All	other values will be sent to the "rrdtool"
       write plugin via	the default target of the chain. Since this chain is
       run after the value has been added to the cache,	the MySQL "show_*"
       command statistics will be available via	the "unixsock" plugin.

   List	of configuration options
       PreCacheChain ChainName
       PostCacheChain ChainName
	   Configure the name of the "pre-cache	chain" and the "post-cache
	   chain". The argument	is the name of a chain that should be executed
	   before and/or after the values have been added to the cache.

	   To understand the implications, it's	important you know what	is
	   going on inside collectd. The following diagram shows how values
	   are passed from the read-plugins to the write-plugins:

	      +---------------+
	      !	 Read-Plugin  !
	      +-------+-------+
		      !
	    + -	- - - V	- - - -	+
	    : +---------------+	:
	    : !	  Pre-Cache   !	:
	    : !	    Chain     !	:
	    : +-------+-------+	:
	    :	      !		:
	    :	      V		:
	    : +-------+-------+	:  +---------------+
	    : !	    Cache     !--->!  Value Cache  !
	    : !	    insert    !	:  +---+---+-------+
	    : +-------+-------+	:      !   !
	    :	      !	  ,------------'   !
	    :	      V	  V	:	   V
	    : +-------+---+---+	:  +-------+-------+
	    : !	 Post-Cache   +--->! Write-Plugins !
	    : !	    Chain     !	:  +---------------+
	    : +---------------+	:
	    :			:
	    :  dispatch	values	:
	    + -	- - - -	- - - -	+

	   After the values are	passed from the	"read" plugins to the dispatch
	   functions, the pre-cache chain is run first.	The values are added
	   to the internal cache afterwards. The post-cache chain is run after
	   the values have been	added to the cache. So why is it such a	huge
	   deal	if chains are run before or after the values have been added
	   to this cache?

	   Targets that	change the identifier of a value list should be
	   executed before the values are added	to the cache, so that the name
	   in the cache	matches	the name that is used in the "write" plugins.
	   The "unixsock" plugin, too, uses this cache to receive a list of
	   all available values. If you	change the identifier after the	value
	   list	has been added to the cache, this may easily lead to
	   confusion, but it's not forbidden of	course.

	   The cache is	also used to convert counter values to rates. These
	   rates are, for example, used	by the "value" match (see below). If
	   you use the rate stored in the cache	before the new value is	added,
	   you will use	the old, previous rate.	Write plugins may use this
	   rate, too, see the "csv" plugin, for	example.  The "unixsock"
	   plugin uses these rates too,	to implement the "GETVAL" command.

	   Last but not least, the stop target makes a difference: If the pre-
	   cache chain returns the stop	condition, the value will not be added
	   to the cache	and the	post-cache chain will not be run.
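
	   For example, a sketch of a pre-cache chain that renames a host
	   before the value is added to the cache (the hostnames are
	   placeholders):

	    PreCacheChain "PreCache"
	    <Chain "PreCache">
	      <Rule "rename_host">
	        <Match "regex">
	          Host "^old-name$"
	        </Match>
	        <Target "set">
	          Host "new-name.example.com"
	        </Target>
	      </Rule>
	    </Chain>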

       Chain Name
	   Adds	a new chain with a certain name. This name can be used to
	   refer to a specific chain, for example to jump to it.

	   Within the Chain block, there can be	Rule blocks and	Target blocks.

       Rule [Name]
	   Adds	a new rule to the current chain. The name of the rule is
	   optional and	currently has no meaning for the daemon.

	   Within the Rule block, there	may be any number of Match blocks and
	   there must be at least one Target block.

       Match Name
	   Adds	a match	to a Rule block. The name specifies what kind of match
	   should be performed.	Available matches depend on the	plugins	that
	   have	been loaded.

	   The arguments inside	the Match block	are passed to the plugin
	   implementing	the match, so which arguments are valid	here depends
	   on the plugin being used. If you do not need to pass any
	   arguments to	a match, you can use the shorter syntax:

	    Match "foobar"

	   Which is equivalent to:

	    <Match "foobar">
	    </Match>

       Target Name
	   Add a target	to a rule or a default target to a chain. The name
	   specifies what kind of target is to be added. Which targets are
	   available depends on	the plugins being loaded.

	   The arguments inside	the Target block are passed to the plugin
	   implementing	the target, so which arguments are valid here depends
	   on the plugin being used. If you do not need to pass any
	   arguments to	a target, you can use the shorter syntax:

	    Target "stop"

	   This	is the same as writing:

	    <Target "stop">
	    </Target>

   Built-in targets
       The following targets are built into the	core daemon and	therefore need
       no plugins to be	loaded:

       return
	   Signals the "return"	condition, see the "Flow control" section
	   above. This causes the current chain	to stop	processing the value
	   and returns control to the calling chain. The calling chain will
	   continue processing targets and rules just after the	jump target
	   (see	below).	This is	very similar to	the RETURN target of iptables,
	   see iptables(8).

	   This	target does not	have any options.

	   Example:

	    Target "return"

       stop
	   Signals the "stop" condition, see the "Flow control"	section	above.
	   This	causes processing of the value to be aborted immediately. This
	   is similar to the DROP target of iptables, see iptables(8).

	   This	target does not	have any options.

	   Example:

	    Target "stop"

       write
	   Sends the value to "write" plugins.

	   Available options:

	   Plugin Name
	       Name of the write plugin	to which the data should be sent. This
	       option may be given multiple times to send the data to more
	       than one	write plugin. If the plugin supports multiple
	       instances, the plugin's instance(s) must	also be	specified.

	   If no plugin	is explicitly specified, the values will be sent to
	   all available write plugins.

	   Single-instance plugin example:

	    <Target "write">
	      Plugin "rrdtool"
	    </Target>

	   Multi-instance plugin example:

	    <Plugin "write_graphite">
	      <Node "foo">
	      ...
	      </Node>
	      <Node "bar">
	      ...
	      </Node>
	    </Plugin>
	     ...
	    <Target "write">
	      Plugin "write_graphite/foo"
	    </Target>

       jump
	   Starts processing the rules of another chain, see "Flow control"
	   above. If the end of	that chain is reached, or a stop condition is
	   encountered,	processing will	continue right after the jump target,
	   i. e. with the next target or the next rule.	This is	similar	to the
	   -j command line option of iptables, see iptables(8).

	   Available options:

	   Chain Name
	       Jumps to	the chain Name.	This argument is required and may
	       appear only once.

	   Example:

	    <Target "jump">
	      Chain "foobar"
	    </Target>

   Available matches
       regex
	   Matches a value using regular expressions.

	   Available options:

	   Host	Regex
	   Plugin Regex
	   PluginInstance Regex
	   Type	Regex
	   TypeInstance	Regex
	   MetaData String Regex
	       Match values where the given regular expressions	match the
	       various fields of the identifier	of a value. If multiple
	       regular expressions are given, all regexen must match for a
	       value to	match.

	   Invert false|true
	       When set	to true, the result of the match is inverted, i.e. all
	       value lists where all regular expressions apply are not
	       matched,	all other value	lists are matched. Defaults to false.

	   Example:

	    <Match "regex">
	      Host "customer[0-9]+"
	      Plugin "^foobar$"
	    </Match>

       timediff
	   Matches values that have a time which differs from the time on the
	   server.

	   This	match is mainly	intended for servers that receive values over
	   the "network" plugin	and write them to disk using the "rrdtool"
	   plugin. RRDtool is very sensitive to	the timestamp used when
	   updating the	RRD files. In particular, the time must	be ever
	   increasing. If a misbehaving	client sends one packet	with a
	   timestamp far in the	future,	all further packets with a correct
	   time	will be	ignored	because	of that	one packet. What's worse, such
	   corrupted RRD files are hard	to fix.

	   This	match lets one match all values	outside	a specified time range
	   (relative to	the server's time), so you can use the stop target
	   (see	below) to ignore the value, for	example.

	   Available options:

	   Future Seconds
	       Matches all values that are ahead of the	server's time by
	       Seconds or more seconds.	Set to zero for	no limit. Either
	       Future or Past must be non-zero.

	   Past	Seconds
	       Matches all values that are behind the server's time by
	       Seconds or more seconds.	Set to zero for	no limit. Either
	       Future or Past must be non-zero.

	   Example:

	    <Match "timediff">
	      Future  300
	      Past   3600
	    </Match>

	   This example matches all values that are five minutes or more
	   ahead of the server's time, or lagging one hour (or more) behind.

       value
	   Matches the actual value of data sources against given minimum /
	   maximum values. If a	data-set consists of more than one data-
	   source, all data-sources must match the specified ranges for	a
	   positive match.

	   Available options:

	   Min Value
	       Sets the	smallest value which still results in a	match. If
	       unset, behaves like negative infinity.

	   Max Value
	       Sets the	largest	value which still results in a match. If
	       unset, behaves like positive infinity.

	   Invert true|false
	       Inverts the selection. If the Min and Max settings result in a
	       match, no-match is returned and vice versa. Please note that
	       the Invert setting only affects how Min and Max are applied to
	       a specific value. In particular, the DataSource and Satisfy
	       settings	(see below) are	not inverted.

	   DataSource DSName [DSName ...]
	       Select one or more of the data sources. If no data source is
	       configured, all data sources will be checked. If	the type
	       handled by the match does not have a data source	of the
	       specified name(s), this will always result in no	match
	       (independent of the Invert setting).

	   Satisfy Any|All
	       Specifies how checking with several data	sources	is performed.
	       If set to Any, the match	succeeds if one	of the data sources is
	       in the configured range.	If set to All the match	only succeeds
	       if all data sources are within the configured range. Default is
	       All.

	       Usually All is used for positive	matches, Any is	used for
	       negative	matches. This means that with All you usually check
	       that all	values are in a	"good" range, while with Any you check
	       if any value is within a	"bad" range (or	outside	the "good"
	       range).

	   Either Min or Max, but not both, may	be unset.

	   Example:

	    # Match all	values smaller than or equal to	100. Matches only if all data
	    # sources are below	100.
	    <Match "value">
	      Max 100
	      Satisfy "All"
	    </Match>

	    # Match if the value of any	data source is outside the range of 0 -	100.
	    <Match "value">
	      Min   0
	      Max 100
	      Invert true
	      Satisfy "Any"
	    </Match>

       empty_counter
	   Matches all values with one or more data sources of type COUNTER
	   and where all counter values are zero. Such counters usually have
	   never increased since they were created (and are therefore
	   uninteresting), were reset recently, or overflowed and you had
	   really, really bad luck.

	   Please keep in mind that ignoring such counters can result in
	   confusing behavior: Counters	which hardly ever increase will	be
	   zero	for long periods of time. If the counter is reset for some
	   reason (machine or service restarted, usually), the graph will be
	   empty (NAN) for a long time.	People may not understand why.
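
	   For example, a sketch of a pre-cache rule that drops such values
	   before they reach the cache (the chain must be activated with
	   PreCacheChain, see above):

	    <Chain "PreCache">
	      <Rule "ignore_empty_counters">
	        Match "empty_counter"
	        Target "stop"
	      </Rule>
	    </Chain>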

       hashed
	   Calculates a	hash value of the host name and	matches	values
	   according to	that hash value. This makes it possible	to divide all
	   hosts into groups and match only values that	are in a specific
	   group. The intended use is in load balancing, where you want	to
	   handle only part of all data	and leave the rest for other servers.

	   The hashing function	used tries to distribute the hosts evenly.
	   First, it calculates	a 32 bit hash value using the characters of
	   the hostname:

	     hash_value	= 0;
	     for (i = 0; host[i] != 0; i++)
	       hash_value = (hash_value	* 251) + host[i];

	   The constant	251 is a prime number which is supposed	to make	this
	   hash	value more random. The code then checks	the group for this
	   host	according to the Total and Match arguments:

	     if	((hash_value % Total) == Match)
	       matches;
	     else
	       does not	match;

	   Please note that when you set Total to two (i. e. you have only two
	   groups), then the least significant bit of the hash value will be
	   the XOR of all least	significant bits in the	host name. One
	   consequence is that when you	have two hosts,	"server0.example.com"
	   and "server1.example.com", where the	host name differs in one digit
	   only	and the	digits differ by one, those hosts will never end up in
	   the same group.

	   Available options:

	   Match Match Total
	       Divide the data into Total groups and match all hosts in	group
	       Match as	described above. The groups are	numbered from zero,
	       i. e. Match must	be smaller than	Total. Total must be at	least
	       one, although only values greater than one really do make any
	       sense.

	       You can repeat this option to match multiple groups, for
	       example:

		 Match 3 7
		 Match 5 7

	       The above config	will divide the	data into seven	groups and
	       match groups three and five. One	use would be to	keep every
	       value on	two hosts so that if one fails the missing data	can
	       later be	reconstructed from the second host.

	   Example:

	    # Operate on the pre-cache chain, so that ignored values are not even in the
	    # global cache.
	    <Chain "PreCache">
	      <Rule>
		<Match "hashed">
		  # Divide all received	hosts in seven groups and accept all hosts in
		  # group three.
		  Match	3 7
		</Match>
		# If matched: Return and continue.
		Target "return"
	      </Rule>
	      #	If not matched:	Return and stop.
	      Target "stop"
	    </Chain>

   Available targets
       notification
	   Creates and dispatches a notification.

	   Available options:

	   Message String
	       This required option sets the message of	the notification. The
	       following placeholders will be replaced by an appropriate
	       value:

	       %{host}
	       %{plugin}
	       %{plugin_instance}
	       %{type}
	       %{type_instance}
		   These placeholders are replaced by the identifier field of
		   the same name.

	       %{ds:name}
		   These placeholders are replaced by a	(hopefully) human
		   readable representation of the current rate of this data
		   source. If you changed the instance name (using the set or
		   replace targets, see	below),	it may not be possible to
		   convert counter values to rates.

	       Please note that	these placeholders are case sensitive!

	   Severity "FAILURE"|"WARNING"|"OKAY"
	       Sets the	severity of the	message. If omitted, the severity
	       "WARNING" is used.

	   Example:

	     <Target "notification">
	       Message "Oops, the %{type_instance} temperature is currently %{ds:value}!"
	       Severity	"WARNING"
	     </Target>

       replace
	   Replaces parts of the identifier using regular expressions.

	   Available options:

	   Host	Regex Replacement
	   Plugin Regex	Replacement
	   PluginInstance Regex	Replacement
	   TypeInstance	Regex Replacement
	   MetaData String Regex Replacement
	   DeleteMetaData String Regex
	       Match the appropriate field with	the given regular expression
	       Regex. If the regular expression matches, the part that
	       matches is replaced with	Replacement. If	multiple places	of the
	       input buffer match a given regular expression, only the first
	       occurrence will be replaced.

	       You can specify each option multiple times to use multiple
	       regular expressions one after another.

	   Example:

	    <Target "replace">
	      #	Replace	"example.net" with "example.com"
	      Host "\\<example.net\\>" "example.com"

	      #	Strip "www." from hostnames
	      Host "\\<www\\." ""
	    </Target>

       set Sets	part of	the identifier of a value to a given string.

	   Available options:

	   Host	String
	   Plugin String
	   PluginInstance String
	   TypeInstance	String
	   MetaData String String
	       Set the appropriate field to the	given string. The strings for
	       plugin instance, type instance, and meta data may be empty;
	       the strings for host and plugin may not be. It is currently
	       not possible to set the type of a value this way.

	       The following placeholders will be replaced by an appropriate
	       value:

	       %{host}
	       %{plugin}
	       %{plugin_instance}
	       %{type}
	       %{type_instance}
		   These placeholders are replaced by the identifier field of
		   the same name.

	       %{meta:name}
		   These placeholders are replaced by the meta data value with
		   the given name.

	       Please note that	these placeholders are case sensitive!

	   DeleteMetaData String
	       Delete the named	meta data field.

	   Example:

	    <Target "set">
	      PluginInstance "coretemp"
	      TypeInstance "core3"
	    </Target>

   Backwards compatibility
       If you use collectd with	an old configuration, i. e. one	without	a
       Chain block, it will behave as it used to. This is equivalent to	the
       following configuration:

	<Chain "PostCache">
	  Target "write"
	</Chain>

       If you specify a	PostCacheChain,	the write target will not be added
       anywhere	and you	will have to make sure that it is called where
       appropriate. We suggest adding the above snippet as the default target
       of your "PostCache" chain.

   Examples
       Ignore all values where the hostname does not contain a dot, i.e.
       cannot be an FQDN:

	<Chain "PreCache">
	  <Rule	"no_fqdn">
	    <Match "regex">
	      Host "^[^\.]*$"
	    </Match>
	    Target "stop"
	  </Rule>
	  Target "write"
	</Chain>

SEE ALSO
       collectd(1), collectd-exec(5), collectd-perl(5),	collectd-unixsock(5),
       types.db(5), hddtemp(8),	iptables(8), kstat(3KSTAT), mbmon(1), psql(1),
       regex(7), rrdtool(1), sensors(1)

AUTHOR
       Florian Forster <octo@collectd.org>

5.7.1				  2017-01-23		      COLLECTD.CONF(5)
