VARNISHD(1)							   VARNISHD(1)

NAME
       varnishd	- HTTP accelerator daemon

SYNOPSIS
       varnishd [-a [name=][address][:port][,PROTO]
                    [,user=<user>][,group=<group>][,mode=<mode>]]
                [-b [host[:port]|path]] [-C] [-d] [-F] [-f config]
                [-h type[,options]] [-I clifile] [-i identity]
                [-j jail[,jailoptions]] [-l vsl] [-M address:port] [-n name]
                [-P file] [-p param=value] [-r param[,param...]]
                [-S secret-file] [-s [name=]kind[,options]]
                [-T address[:port]] [-t TTL] [-V] [-W waiter]

       varnishd	[-x parameter|vsl|cli|builtin|optstring]

       varnishd	[-?]

DESCRIPTION
       The  varnishd daemon accepts HTTP requests from clients,	passes them on
       to a backend server and caches the returned documents to	better satisfy
       future requests for the same document.

OPTIONS
   Basic options
       -a <[name=][address][:port][,PROTO][,user=<user>][,group=<group>][,mode=<mode>]>
	  Listen for client requests on	the specified address  and  port.  The
	  address  can	be  a  host  name  ("localhost"),  an IPv4 dotted-quad
	  ("127.0.0.1"),  an  IPv6  address  enclosed	in   square   brackets
	  ("[::1]"),  or  a path beginning with	a '/' for a Unix domain	socket
	  ("/path/to/listen.sock"). If address is not specified, varnishd will
	  listen  on  all  available  IPv4 and IPv6 interfaces.	If port	is not
	  specified, port 80 (http) is used. At	least one of address  or  port
	  is required.

	  If a Unix domain socket is specified as the listen address, then the
	  user,	group and mode sub-arguments may be used to specify  the  per-
	  missions  of	the socket file	-- use names for user and group, and a
	  3-digit octal	value for mode.	These sub-arguments are	not  permitted
	  if an	IP address is specified. When Unix domain socket listeners are
	  in use, all VCL configurations must have version >= 4.1.

	  Name is referenced in	logs. If name is not  specified,  "a0",	 "a1",
	  etc. is used.	An additional protocol type can	be set for the listen-
	  ing socket with PROTO. Valid protocol	types are: HTTP	(default), and
	  PROXY.

	  Multiple  listening addresses	can be specified by using different -a
	  arguments.
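
	  As an illustration (the addresses, socket path, user/group names
	  and mode below are examples only), one invocation could listen for
	  plain HTTP, for PROXY protocol traffic and on a Unix domain socket:

		 varnishd -a http=:80 \
			  -a proxy=127.0.0.1:8443,PROXY \
			  -a uds=/var/run/varnish.sock,user=varnish,group=varnish,mode=660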

       -b _[host[:port]|path]_
	      Use the specified	host as	backend	server.	If port	is not	speci-
	      fied, the	default	is 8080.

	      If the value of -b begins	with /,	it is interpreted as the abso-
	      lute path	of a Unix domain socket	to which Varnish connects.  In
	      that  case, the value of -b must satisfy the conditions required
	      for the .path field of a backend declaration, see	vcl(7).	 Back-
	      ends  with  Unix socket addresses	may only be used with VCL ver-
	      sions >= 4.1.

	      -b can be used only once, and not together with -f.
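
	      For example (the host name and socket path are illustrative),
	      either of the following starts varnishd with a single backend:

		 varnishd -b localhost:8080
		 varnishd -b /var/run/backend.sock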

       -f config
	      Use the specified	VCL configuration file instead of the  builtin
	      default.	See vcl(7) for details on VCL syntax.

	      If a single -f option is used, then the VCL instance loaded from
	      the file is named	"boot" and immediately becomes active. If more
	      than one -f option is used, the VCL instances are	named "boot0",
	      "boot1" and so forth, in the order corresponding to the -f argu-
	      ments, and the last one is named "boot", which becomes active.

	      Either  -b  or one or more -f options must be specified, but not
	      both, and	they cannot both be left out, unless  -d  is  used  to
	      start  varnishd in debugging mode. If the	empty string is	speci-
	      fied as the sole -f option, then varnishd	starts without	start-
	      ing  the	worker process,	and the	management process will	accept
	      CLI commands.  You can also combine an empty -f option  with  an
	      initialization  script (-I option) and the child process will be
	      started if there is an active VCL	at the end of the  initializa-
	      tion.

	      When  used  with a relative file name, config is searched	in the
	      vcl_path.	It is possible to set this path	prior to using -f  op-
	      tions  with  a  -p option. During	startup, varnishd doesn't com-
	      plain about unsafe VCL paths:  unlike  the  varnish-cli(7)  that
	      could later be accessed remotely,	starting varnishd requires lo-
	      cal privileges.
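
	      As a sketch (the file names are illustrative), loading two VCL
	      files so that the last one becomes the active instance:

		 varnishd -f /etc/varnish/siteA.vcl -f /etc/varnish/main.vcl

	      Here the first file is loaded as "boot0" and the second as
	      "boot", which becomes active.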

       -n name
	      Specify the name for this	instance.  This	name is	used  to  con-
	      struct  the name of the directory	in which varnishd keeps	tempo-
	      rary files and persistent	state. If the  specified  name	begins
	      with  a forward slash, it	is interpreted as the absolute path to
	      the directory.

   Documentation options
       For these options, varnishd prints information to standard  output  and
       exits. When a -x	option is used,	it must	be the only option (it outputs
       documentation in	reStructuredText, aka RST).

       -?
	  Print	the usage message.

       -x parameter
	      Print documentation of the runtime parameters (-p	options),  see
	      List of Parameters.

       -x vsl Print  documentation of the tags used in the Varnish shared mem-
	      ory log, see vsl(7).

       -x cli Print documentation of the command line interface, see
	      varnish-cli(7).

       -x builtin
	      Print the	contents of the	default	VCL program builtin.vcl.

       -x optstring
	      Print the	optstring parameter to getopt(3) to help writing wrap-
	      per scripts.

   Operations options
       -F     Do not fork, run in the foreground. Only one of -F or -d can  be
	      specified, and -F	cannot be used together	with -C.

       -T _address[:port]_
	      Offer  a management interface on the specified address and port.
	      See varnish-cli(7) for documentation of the management commands.
	      To disable the management	interface use none.
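
	      For instance (the address, port and secret file path shown are
	      illustrative):

		 varnishd ... -T 127.0.0.1:6082

	      The interface can then be reached with varnishadm(1), e.g.
	      "varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret status".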

       -M _address:port_
	      Connect  to  this	 port  and  offer  the command line interface.
	      Think of it as a reverse shell. When running with	-M  and	 there
	      is  no  backend  defined	the child process (the cache) will not
	      start initially.

       -P file
	      Write the	PID of the process to the specified file.

       -i identity
	      Specify the identity of the Varnish server. This can be accessed
	      using  server.identity  from VCL and with	VSM_Name() from	utili-
	      ties.  If	not specified the output of gethostname(3) is used.

       -I clifile
	      Execute the management commands in the file given as clifile
	      before the worker process starts, see CLI Command File.

   Tuning options
       -t TTL Specifies	 the  default  time  to	live (TTL) for cached objects.
	      This is a	shortcut for specifying	the default_ttl	 run-time  pa-
	      rameter.

       -p _param=value_
	      Set the parameter	specified by param to the specified value, see
	      List of Parameters for details. This option can be used multiple
	      times to specify multiple	parameters.
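
	      For example (the values are illustrative only):

		 varnishd ... -p default_ttl=300 -p thread_pool_min=200

	      Parameters not marked read-only can also be changed at runtime
	      with the param.set CLI command, see varnish-cli(7).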

       -s _[name=]type[,options]_
	      Use the specified	storage	backend. See Storage Backend section.

	      This option can be used multiple times to	specify	multiple stor-
	      age files. Name is referenced in logs, VCL, statistics, etc.  If
	      name is not specified, "s0", "s1"	and so forth is	used.

       -l _vsl_
	      Specifies	 size  of the space for	the VSL	records, shorthand for
	      -p vsl_space=<vsl>. Scaling suffixes like	'K'  and  'M'  can  be
	      used up to (G)igabytes. See vsl_space for	more information.

   Security options
       -r _param[,param...]_
	      Make  the	listed parameters read only. This gives	the system ad-
	      ministrator a way	to limit what the Varnish CLI  can  do.	  Con-
	      sider  making  parameters	such as	cc_command, vcc_allow_inline_c
	      and vmod_path read only as these can potentially be used to  es-
	      calate privileges	from the CLI.
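
	      For example, to lock down the parameters mentioned above:

		 varnishd ... -r cc_command,vcc_allow_inline_c,vmod_path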

       -S secret-file
	      Path  to	a file containing a secret used	for authorizing	access
	      to the management	port. To disable authentication	use none.

	      If this argument is not provided, a secret drawn from the
	      system PRNG will be written to a file called _.secret in the
	      working directory (see the -n option) with default ownership
	      and permissions of the user having started varnish.

	      Thus, users wishing to delegate control over varnish will	proba-
	      bly want to create a custom secret file with appropriate permis-
	      sions (ie. readable by a unix group to delegate control to).
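
	      A rough sketch of creating such a file (the path and the
	      "varnish" group are illustrative):

		 dd if=/dev/urandom of=/etc/varnish/secret count=1
		 chown root:varnish /etc/varnish/secret
		 chmod 640 /etc/varnish/secret
		 varnishd ... -S /etc/varnish/secret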

       -j _jail[,jailoptions]_
	      Specify the jailing mechanism to use. See	Jail section.

   Advanced, development and debugging options
       -d     Enables  debugging  mode:	 The  parent process runs in the fore-
	      ground with a CLI	connection  on	stdin/stdout,  and  the	 child
	      process must be started explicitly with a	CLI command. Terminat-
	      ing the parent process will also terminate the child.

	      Only one of -d or	-F can be specified, and -d cannot be used to-
	      gether with -C.

       -C     Print  VCL code compiled to C language and exit. Specify the VCL
	      file to compile with the -f option. Either -f or -b must be used
	      with -C, and -C cannot be	used with -F or	-d.

       -V     Display  the  version number and exit. This must be the only op-
	      tion.

       -h _type[,options]_
	      Specifies	the hash algorithm. See	Hash Algorithm section	for  a
	      list of supported	algorithms.

       -W waiter
	      Specifies	the waiter type	to use.

   Hash	Algorithm
       The following hash algorithms are available:

       -h critbit
	      self-scaling  tree structure. The	default	hash algorithm in Var-
	      nish Cache 2.1 and onwards. In comparison	to a more  traditional
	      B	 tree  the  critbit tree is almost completely lockless.	Do not
	      change this unless you are certain what you're doing.

       -h simple_list
	      A	simple doubly-linked list.   Not  recommended  for  production
	      use.

       -h _classic[,buckets]_
	      A	standard hash table. The hash key is the CRC32 of the object's
	      URL modulo the size of the hash table.  Each table entry	points
	      to a list	of elements which share	the same hash key. The buckets
	      parameter	specifies the number of	entries	 in  the  hash	table.
	      The default is 16383.

   Storage Backend
       The argument format to define storage backends is:

       -s _[name=]kind[,options]_
	      If name is omitted, Varnish will name storages sN, starting with
	      s0 and incrementing N for	every new storage.

	      For kind and options see details below.

       Storages	can be used in vcl as storage.name, so,	for example if myStor-
       age was defined by -s myStorage=malloc,5G, it could be used in VCL like
       so:

	  set beresp.storage = storage.myStorage;

       A special name is Transient, which is the default storage for
       uncacheable objects, such as those resulting from a pass, hit-for-miss
       or hit-for-pass.

       If no -s	options	are given, the default is:

	  -s malloc=100m

       If no Transient storage is defined, the default is  an  unbound	malloc
       storage as if defined as:

	  -s Transient=malloc
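
       As an illustration (the sizes are examples only), a bounded Transient
       storage can be combined with a larger storage for cacheable objects:

	  -s malloc,1G -s Transient=malloc,512m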

       The following storage types and options are available:

       -s _default[,size]_
	      The  default  storage  type resolves to umem where available and
	      malloc otherwise.

       -s _malloc[,size]_
	      malloc is	a memory based backend.

       -s _umem[,size]_
	      umem is a	storage	backend	which is more efficient	than malloc on
	      platforms	where it is available.

	      See  the section on umem in chapter Storage backends of The Var-
	      nish Users Guide for details.

       -s _file,path[,size[,granularity[,advice]]]_
	      The file backend stores data in a	file on	disk. The file will be
	      accessed using mmap. Note that this storage provides no cache
	      persistence.

	      The path is mandatory. If	path points to a directory,  a	tempo-
	      rary  file will be created in that directory and immediately un-
	      linked. If path points to	a non-existing file, the file will  be
	      created.

	      If  size	is omitted, and	path points to an existing file	with a
	      size greater than	zero, the size of that file will be  used.  If
	      not, an error is reported.

	      Granularity sets the allocation block size. Defaults to the sys-
	      tem page size or the filesystem block size, whichever is larger.

	      Advice tells the kernel how varnishd expects to use this	mapped
	      region  so that the kernel can choose the	appropriate read-ahead
	      and caching techniques. Possible values are normal,  random  and
	      sequential, corresponding to the MADV_NORMAL, MADV_RANDOM and
	      MADV_SEQUENTIAL madvise() advice arguments, respectively.
	      Defaults to random.
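
	      For example (the path and size are illustrative):

		 varnishd ... -s file,/var/lib/varnish/storage.bin,10G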

       -s _persistent,path,size_
	      Persistent  storage.  Varnish  will store	objects	in a file in a
	      manner that will secure the survival of most of the  objects  in
	      the  event  of  a	 planned or unplanned shutdown of Varnish. The
	      persistent storage backend has multiple issues and will likely
	      be removed from a future version of Varnish.

   Jail
       Varnish jails are a generalization over various platform-specific
       methods to reduce the privileges of varnish processes. They may have
       specific options. Available jails are:

       -j _solaris[,worker=`privspec`]_
	      Reduce privileges(5) for varnishd and its sub-processes to the
	      minimally required set. Only available on platforms which have
	      the setppriv(2) call.

	      The  optional  worker  argument  can  be	used  to pass a	privi-
	      lege-specification (see ppriv(1))	by which to extend the	effec-
	      tive  set	 of  the varnish worker	process. While extended	privi-
	      leges may be required by custom vmods, it is always more secure
	      not to use the worker option.

	      Example to grant basic privileges	to the worker process:

		 -j solaris,worker=basic

       -j _unix[,user=`user`][,ccgroup=`group`][,workuser=`user`]_
	      Default  on all other platforms when varnishd is started with an
	      effective	uid of 0 ("as root").

	      With the unix jail mechanism activated, varnish will  switch  to
	      an  alternative  user  for subprocesses and change the effective
	      uid of the master	process	whenever possible.

	      The optional user	argument specifies which alternative  user  to
	      use. It defaults to varnish.

	      The  optional  ccgroup argument specifies	a group	to add to var-
	      nish subprocesses	requiring access to a c-compiler. There	is  no
	      default.

	      The  optional workuser argument specifies	an alternative user to
	      use for the worker process. It defaults to vcache.
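
	      A sketch with all three sub-arguments spelled out (the user and
	      workuser shown are the defaults, the ccgroup is hypothetical):

		 varnishd ... -j unix,user=varnish,ccgroup=varnish,workuser=vcache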

       -j none
	      Last-resort jail choice: with the jail mechanism none, varnish
	      will run all processes with the privileges it was started with.

   Management Interface
       If the -T option	was specified, varnishd	will offer a command-line man-
       agement interface on the	specified address and port.   The  recommended
       way  of	connecting to the command-line management interface is through
       varnishadm(1).

       The commands available are documented in	varnish-cli(7).

   CLI Command File
       The -I option makes it possible to run  arbitrary  management  commands
       when  varnishd  is  launched,  before the worker	process	is started. In
       particular, this	is the way to load  configurations,  apply  labels  to
       them, and make a	VCL instance active that uses those labels on startup:

	  vcl.load panic /etc/varnish_panic.vcl
	  vcl.load siteA0 /etc/varnish_siteA.vcl
	  vcl.load siteB0 /etc/varnish_siteB.vcl
	  vcl.load siteC0 /etc/varnish_siteC.vcl
	  vcl.label siteA siteA0
	  vcl.label siteB siteB0
	  vcl.label siteC siteC0
	  vcl.load main	/etc/varnish_main.vcl
	  vcl.use main

       Every  line in the file,	including the last line, must be terminated by
       a newline or carriage return.

       If a command in the file	is prefixed with '-', failure will  not	 abort
       the startup.
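
       Such a file is typically combined with an empty -f option, for example
       (the paths are illustrative):

	  varnishd -a :80 -f '' -I /etc/varnish/start.cli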

RUN TIME PARAMETERS
   Run Time Parameter Flags
       Runtime parameters are marked with shorthand flags to avoid repeating
       the same text over and over in the table below. The meaning of the
       flags is:

       o experimental

	 We have no solid information about good/bad/optimal values for this
	 parameter. Feedback with experience and observations is most welcome.

       o delayed

	 This  parameter  can  be changed on the fly, but will not take	effect
	 immediately.

       o restart

	 The worker process must be stopped and	restarted, before this parame-
	 ter takes effect.

       o reload

	 The VCL programs must be reloaded for this parameter to take effect.

       o wizard

	 Do not	touch unless you really	know what you're doing.

       o only_root

	 Only works if varnishd	is running as root.

   Default Value Exceptions on 32 bit Systems
       Be  aware that on 32 bit	systems, certain default or maximum values are
       reduced relative	to the values listed below, in order  to  conserve  VM
       space:

       o workspace_client: 24k

       o workspace_backend: 20k

       o http_resp_size: 8k

       o http_req_size:	12k

       o gzip_buffer: 4k

       o vsl_space: 1G (maximum)

       o thread_pool_stack: 52k

   List	of Parameters
       This  text  is  produced	from the same text you will find in the	CLI if
       you use the param.show command:

   accept_filter
	  o Units: bool

	  o Default: on	(if your platform supports accept filters)

	  o Flags: must_restart

       Enable kernel accept-filters. This may require a	kernel	module	to  be
       loaded to have an effect	when enabled.

       Enabling accept_filter may prevent some requests from reaching Varnish
       in the first place. Malformed requests may go unnoticed and not
       increase the client_req_400 counter. GET or HEAD requests with a body
       may be blocked altogether.

   acceptor_sleep_decay
	  o Default: 0.9

	  o Minimum: 0

	  o Maximum: 1

	  o Flags: experimental

       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       (multiplicatively) reduces the sleep duration for each successful
       accept (e.g. 0.9 = reduce by 10%).

   acceptor_sleep_incr
	  o Units: seconds

	  o Default: 0.000

	  o Minimum: 0.000

	  o Maximum: 1.000

	  o Flags: experimental

       If we run out of resources, such as file descriptors or worker
       threads, the acceptor will sleep between accepts. This parameter
       controls how much longer we sleep, each time we fail to accept a new
       connection.

   acceptor_sleep_max
	  o Units: seconds

	  o Default: 0.050

	  o Minimum: 0.000

	  o Maximum: 10.000

	  o Flags: experimental

       If we run out of	resources, such	as file	descriptors or worker threads,
       the acceptor will sleep between accepts.	  This	parameter  limits  how
       long it can sleep between attempts to accept new	connections.

   auto_restart
	  o Units: bool

	  o Default: on

       Automatically restart the child/worker process if it dies.

   backend_idle_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 1.000

       Timeout before we close unused backend connections.

   backend_local_error_holddown
	  o Units: seconds

	  o Default: 10.000

	  o Minimum: 0.000

	  o Flags: experimental

       When connecting to backends, certain error codes (EADDRNOTAVAIL,
       EACCES, EPERM) signal a local resource shortage or configuration issue
       for which retrying connection attempts may worsen the situation due to
       the complexity of the operations involved in the kernel. This
       parameter prevents repeated connection attempts for the configured
       duration.

   backend_remote_error_holddown
	  o Units: seconds

	  o Default: 0.250

	  o Minimum: 0.000

	  o Flags: experimental

       When connecting to backends, certain error codes (ECONNREFUSED,
       ENETUNREACH) signal fundamental connection issues such as the backend
       not accepting connections or routing problems, for which repeated
       connection attempts are considered useless. This parameter prevents
       repeated connection attempts for the configured duration.

   ban_cutoff
	  o Units: bans

	  o Default: 0

	  o Minimum: 0

	  o Flags: experimental

       Expurge long tail content from the cache	to keep	the number of bans be-
       low this	value. 0 disables.

       When this parameter is set to a non-zero	value, the ban lurker  contin-
       ues  to	work  the ban list as usual top	to bottom, but when it reaches
       the ban_cutoff-th ban, it treats	all objects as if they matched	a  ban
       and  expurges  them  from  cache.  As  actively used objects get	tested
       against the ban list at request time and	thus are likely	to be  associ-
       ated with bans near the top of the ban list, with ban_cutoff, least re-
       cently accessed objects (the "long tail") are removed.

       This parameter is a safety net to avoid bad response times due to  bans
       being  tested at	lookup time. Setting a cutoff trades response time for
       cache  efficiency.   The	  recommended	value	is   proportional   to
       rate(bans_lurker_tests_tested)  /  n_objects  while  the	 ban lurker is
       working,	which is the number of bans the	system can sustain. The	 addi-
       tional latency due to request ban testing is in the order of ban_cutoff
       /      rate(bans_lurker_tests_tested).	   For	    example,	   for
       rate(bans_lurker_tests_tested) =	2M/s and a tolerable latency of	100ms,
       a good value for	ban_cutoff may be 200K.

   ban_dups
	  o Units: bool

	  o Default: on

       Eliminate older identical bans when a new ban is	added.	This saves CPU
       cycles  by not comparing	objects	to identical bans.  This is a waste of
       time if you have	many bans which	are never identical.

   ban_lurker_age
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

       The ban lurker will ignore bans until they are this old.	 When a	ban is
       added,  the  active traffic will	be tested against it as	part of	object
       lookup.	Because	many applications issue	bans in	bursts,	this parameter
       holds the ban-lurker off	until the rush is over.	 This should be	set to
       the approximate time which a ban-burst takes.

   ban_lurker_batch
	  o Default: 1000

	  o Minimum: 1

       The ban lurker sleeps ${ban_lurker_sleep} after examining this many ob-
       jects.  Use this	to pace	the ban-lurker if it eats too many resources.

   ban_lurker_holdoff
	  o Units: seconds

	  o Default: 0.010

	  o Minimum: 0.000

	  o Flags: experimental

       How  long  the  ban lurker sleeps when giving way to lookup due to lock
       contention.

   ban_lurker_sleep
	  o Units: seconds

	  o Default: 0.010

	  o Minimum: 0.000

       How long	the ban	lurker sleeps after examining ${ban_lurker_batch}  ob-
       jects.	Use this to pace the ban-lurker	if it eats too many resources.
       A value of zero will disable the	ban lurker entirely.

   between_bytes_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

       We only wait for	this many seconds  between  bytes  received  from  the
       backend	before	giving	up  the	fetch.	VCL values, per	backend	or per
       backend request take precedence.	 This  parameter  does	not  apply  to
       pipe'ed requests.

   cc_command
	  o Default: defined when Varnish is built

	  o Flags: must_reload

       Command	used  for  compiling the C source code to a dlopen(3) loadable
       object.	Any occurrence of %s in	the string will	be replaced  with  the
       source file name, and %o	will be	replaced with the output file name.

   cli_limit
	  o Units: bytes

	  o Default: 48k

	  o Minimum: 128b

	  o Maximum: 99999999b

       Maximum	size of	CLI response.  If the response exceeds this limit, the
       response	code will be 201 instead of 200	and the	last line  will	 indi-
       cate the	truncation.

   cli_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

       Timeout for the child's replies to CLI requests from the mgt_param.

   clock_skew
	  o Units: seconds

	  o Default: 10

	  o Minimum: 0

       How much clock skew we are willing to accept between the backend and
       our own clock.

   clock_step
	  o Units: seconds

	  o Default: 1.000

	  o Minimum: 0.000

       How much	observed clock step we are willing to accept before we panic.

   connect_timeout
	  o Units: seconds

	  o Default: 3.500

	  o Minimum: 0.000

       Default connection timeout for backend connections. We only try to con-
       nect  to	 the  backend  for this	many seconds before giving up. VCL can
       override	this default value for each backend and	backend	request.

   critbit_cooloff
	  o Units: seconds

	  o Default: 180.000

	  o Minimum: 60.000

	  o Maximum: 254.000

	  o Flags: wizard

       How long	the critbit hasher keeps deleted objheads on the cooloff list.

   debug
	  o Default: none

       Enable/Disable various kinds of debugging.

	  none	 Disable all debugging

       Use +/- prefix to set/reset individual bits:

	  req_state
		 VSL Request state engine

	  workspace
		 VSL Workspace operations

	  waitinglist
		 VSL Waitinglist events

	  syncvsl
		 Make VSL synchronous

	  hashedge
		 Edge cases in Hash

	  vclrel Rapid VCL release

	  lurker VSL Ban lurker

	  esi_chop
		 Chop ESI fetch	to bits

	  flush_head
		 Flush after http1 head

	  vtc_mode
		 Varnishtest Mode

	  witness
		 Emit WITNESS lock records

	  vsm_keep
		 Keep the VSM file on restart

	  drop_pools
		 Drop thread pools (testing)

	  slow_acceptor
		 Slow down Acceptor

	  h2_nocheck
		 Disable various H2 checks

	  vmod_so_keep
		 Keep copied VMOD libraries

	  processors
		 Fetch/Deliver processors

	  protocol
		 Protocol debugging

	  vcl_keep
		 Keep VCL C and	so files

	  lck	 Additional lock statistics

   default_grace
	  o Units: seconds

	  o Default: 10.000

	  o Minimum: 0.000

	  o Flags: obj_sticky

       Default grace period.  We will deliver an object	this long after	it has
       expired,	provided another thread	is attempting to get a new copy.

   default_keep
	  o Units: seconds

	  o Default: 0.000

	  o Minimum: 0.000

	  o Flags: obj_sticky

       Default	keep  period.  We will keep a useless object around this long,
       making it available for conditional backend fetches.  That  means  that
       the object will be removed from the cache at the	end of ttl+grace+keep.

   default_ttl
	  o Units: seconds

	  o Default: 120.000

	  o Minimum: 0.000

	  o Flags: obj_sticky

       The TTL assigned	to objects if neither the backend nor the VCL code as-
       signs one.

   feature
	  o Default: none

       Enable/Disable various minor features.

	  none	 Disable all features.

       Use a +/- prefix to enable/disable individual features (see the
       example after this list):

	  http2	 Enable	HTTP/2 protocol	support.

	  short_panic
		 Short panic message.

	  no_coredump
		 No coredumps.	Must be	set before child process starts.

	  https_scheme
		 Extract host from full	URI in the HTTP/1 request line,	if the
		 scheme	is https.

	  http_date_postel
		 Tolerate non compliant	timestamp headers like Date, Last-Mod-
		 ified,	Expires	etc.

	  esi_ignore_https
		 Convert <esi:include src="https://..." to "http://..."

	  esi_disable_xml_check
		 Allow ESI processing on non-XML ESI bodies

	  esi_ignore_other_elements
		 Ignore	XML syntax errors in ESI bodies.

	  esi_remove_bom
		 Ignore	UTF-8 BOM in ESI bodies.

	  wait_silo
		 Wait for persistent silos to completely load  before  serving
		 requests.
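
       For example (a sketch only), HTTP/2 support and tolerant timestamp
       parsing could be enabled at startup with:

	  varnishd ... -p feature=+http2,+http_date_postel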

   fetch_chunksize
	  o Units: bytes

	  o Default: 16k

	  o Minimum: 4k

	  o Flags: experimental

       The default chunksize used by fetcher. This should be bigger than the
       majority of objects with short TTLs. Internal limits in the
       storage_file module make increases above 128kb a dubious idea.

   fetch_maxchunksize
	  o Units: bytes

	  o Default: 0.25G

	  o Minimum: 64k

	  o Flags: experimental

       The  maximum chunksize we attempt to allocate from storage. Making this
       too large may cause delays and storage fragmentation.

   first_byte_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

       Default timeout for receiving first byte	from backend. We only wait for
       this  many  seconds for the first byte before giving up.	 VCL can over-
       ride this default value for each	backend	and backend request.  This pa-
       rameter does not	apply to pipe'ed requests.

   gzip_buffer
	  o Units: bytes

	  o Default: 32k

	  o Minimum: 2k

	  o Flags: experimental

       Size of malloc buffer used for gzip processing. These buffers are used
       for in-transit data, for instance gunzip'ed data being sent to a
       client. Making this space too small results in more overhead (writes
       to sockets etc.); making it too big is probably just a waste of
       memory.

   gzip_level
	  o Default: 6

	  o Minimum: 0

	  o Maximum: 9

       Gzip compression	level: 0=debug,	1=fast,	9=best

   gzip_memlevel
	  o Default: 8

	  o Minimum: 1

	  o Maximum: 9

       Gzip memory level 1=slow/least, 9=fast/most compression.	 Memory	impact
       is 1=1k,	2=2k, ... 9=256k.

   h2_header_table_size
	  o Units: bytes

	  o Default: 4k

	  o Minimum: 0b

       HTTP2  header  table  size.  This is the	size that will be used for the
       HPACK dynamic decoding table.

   h2_initial_window_size
	  o Units: bytes

	  o Default: 65535b

	  o Minimum: 0b

	  o Maximum: 2147483647b

       HTTP2 initial flow control window size.

   h2_max_concurrent_streams
	  o Units: streams

	  o Default: 100

	  o Minimum: 0

       HTTP2 Maximum number of concurrent streams.  This is the	number of  re-
       quests  that  can be active at the same time for	a single HTTP2 connec-
       tion.

   h2_max_frame_size
	  o Units: bytes

	  o Default: 16k

	  o Minimum: 16k

	  o Maximum: 16777215b

       HTTP2 maximum per frame payload size we are willing to accept.

   h2_max_header_list_size
	  o Units: bytes

	  o Default: 2147483647b

	  o Minimum: 0b

       HTTP2 maximum size of an	uncompressed header list.

   h2_rx_window_increment
	  o Units: bytes

	  o Default: 1M

	  o Minimum: 1M

	  o Maximum: 1G

	  o Flags: wizard

       HTTP2 Receive Window Increments. How big the credits we send in
       WINDOW_UPDATE frames are. Only affects incoming request bodies (i.e.
       POST, PUT etc.).

   h2_rx_window_low_water
	  o Units: bytes

	  o Default: 10M

	  o Minimum: 65535b

	  o Maximum: 1G

	  o Flags: wizard

       HTTP2 Receive Window low water mark. We try to keep the window at
       least this big. Only affects incoming request bodies (i.e. POST, PUT
       etc.).

   http1_iovs
	  o Units: struct iovec	(=16 bytes)

	  o Default: 64

	  o Minimum: 5

	  o Maximum: 1024

	  o Flags: wizard

       Number of io vectors to allocate for HTTP1 protocol transmission. An
       HTTP1 header needs 7 iovecs, plus 2 per HTTP header field. Allocated
       from workspace_thread.

   http_gzip_support
	  o Units: bool

	  o Default: on

       Enable gzip support. When enabled, Varnish requests compressed objects
       from the backend and stores them compressed. If a client does not
       support gzip encoding, Varnish will uncompress compressed objects on
       demand. Varnish will also rewrite the Accept-Encoding header of
       clients indicating support for gzip to:
	      Accept-Encoding: gzip

       Clients that do not support gzip	will have their	Accept-Encoding	header
       removed.	For more information on	how gzip is implemented	please see the
       chapter on gzip in the Varnish reference.

       When   gzip  support  is	 disabled  the	variables  beresp.do_gzip  and
       beresp.do_gunzip	have no	effect in VCL.

   http_max_hdr
	  o Units: header lines

	  o Default: 64

	  o Minimum: 32

	  o Maximum: 65535

       Maximum	  number    of	  HTTP	  header    lines    we	   allow    in
       {req|resp|bereq|beresp}.http (obj.http is autosized to the exact	number
       of headers).  Cheap, ~20	bytes, in terms	 of  workspace	memory.	  Note
       that the	first line occupies five header	lines.

   http_range_support
	  o Units: bool

	  o Default: on

       Enable support for HTTP Range headers.

   http_req_hdr_len
	  o Units: bytes

	  o Default: 8k

	  o Minimum: 40b

       Maximum length of any HTTP client request header we will allow. The
       limit includes any continuation lines.

   http_req_size
	  o Units: bytes

	  o Default: 32k

	  o Minimum: 0.25k

       Maximum number of bytes of HTTP client request we will deal with.  This
       is a limit on all bytes up to the double	blank line which ends the HTTP
       request.	 The memory for	the  request  is  allocated  from  the	client
       workspace  (param: workspace_client) and	this parameter limits how much
       of that the request is allowed to take up.

   http_resp_hdr_len
	  o Units: bytes

	  o Default: 8k

	  o Minimum: 40b

       Maximum length of any HTTP backend response header we will allow. The
       limit includes any continuation lines.

   http_resp_size
	  o Units: bytes

	  o Default: 32k

	  o Minimum: 0.25k

       Maximum	number	of  bytes  of HTTP backend response we will deal with.
       This is a limit on all bytes up to the double blank line	which ends the
       HTTP response.  The memory for the response is allocated	from the back-
       end workspace (param: workspace_backend)	and this parameter limits  how
       much of that the	response is allowed to take up.

   idle_send_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

	  o Flags: delayed

       Send  timeout  for individual pieces of data on client connections. May
       get extended if 'send_timeout' applies.

       When this timeout is hit, the session is	closed.

       See the man page	for setsockopt(2) or socket(7) under  SO_SNDTIMEO  for
       more information.

   listen_depth
	  o Units: connections

	  o Default: 1024

	  o Minimum: 0

	  o Flags: must_restart

       Listen queue depth.

   lru_interval
	  o Units: seconds

	  o Default: 2.000

	  o Minimum: 0.000

	  o Flags: experimental

       Grace  period  before object moves on LRU list.	Objects	are only moved
       to the front of the LRU list if they have not been moved	there  already
       inside this timeout period.  This reduces the amount of lock operations
       necessary for LRU list access.

   max_esi_depth
	  o Units: levels

	  o Default: 5

	  o Minimum: 0

       Maximum depth of	esi:include processing.

   max_restarts
	  o Units: restarts

	  o Default: 4

	  o Minimum: 0

       Upper limit on how many times a request can restart.

   max_retries
	  o Units: retries

	  o Default: 4

	  o Minimum: 0

       Upper limit on how many times a backend fetch can retry.

   max_vcl
	  o Default: 100

	  o Minimum: 0

       Threshold of loaded VCL programs.  (VCL labels are not  counted.)   Pa-
       rameter max_vcl_handling	determines behaviour.

   max_vcl_handling
	  o Default: 1

	  o Minimum: 0

	  o Maximum: 2

       Behaviour when attempting to exceed max_vcl loaded VCL.

       o 0 - Ignore max_vcl parameter.

       o 1 - Issue warning.

       o 2 - Refuse loading VCLs.

   nuke_limit
	  o Units: allocations

	  o Default: 50

	  o Minimum: 0

	  o Flags: experimental

       Maximum number of objects we attempt to nuke in order to make space
       for an object body.

   pcre_match_limit
	  o Default: 10000

	  o Minimum: 1

       The limit for the number	of calls to the	internal match()  function  in
       pcre_exec().

       (See: PCRE_EXTRA_MATCH_LIMIT in pcre docs.)

       This parameter limits how much CPU time regular expression matching can
       soak up.

   pcre_match_limit_recursion
	  o Default: 20

	  o Minimum: 1

       The recursion depth-limit for the internal match() function in
       pcre_exec().

       (See: PCRE_EXTRA_MATCH_LIMIT_RECURSION in pcre docs.)

       This  puts  an upper limit on the amount	of stack used by PCRE for cer-
       tain classes of regular expressions.

       We have set the default value low in order to prevent crashes,  at  the
       cost of possible	regexp matching	failures.

       Matching	 failures  will	 show up in the	log as VCL_Error messages with
       regexp errors -27 or -21.

       Testcase	r01576 can be useful when tuning this parameter.

   ping_interval
	  o Units: seconds

	  o Default: 3

	  o Minimum: 0

	  o Flags: must_restart

       Interval	between	pings from parent to child.  Zero will disable pinging
       entirely, which makes it	possible to attach a debugger to the child.

   pipe_sess_max
	  o Units: connections

	  o Default: 0

	  o Minimum: 0

       Maximum number of sessions dedicated to pipe transactions.

   pipe_timeout
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.000

       Idle timeout for PIPE sessions. If nothing has been received in either
       direction for this many seconds, the session is closed.

   pool_req
	  o Default: 10,100,10

       Parameters for per worker pool request memory pool.

       The three numbers are:

	  min_pool
		 minimum size of free pool.

	  max_pool
		 maximum size of free pool.

	  max_age
		 max age of free element.

   pool_sess
	  o Default: 10,100,10

       Parameters for per worker pool session memory pool.

       The three numbers are:

	  min_pool
		 minimum size of free pool.

	  max_pool
		 maximum size of free pool.

	  max_age
		 max age of free element.

   pool_vbo
	  o Default: 10,100,10

       Parameters for backend object fetch memory pool.

       The three numbers are:

	  min_pool
		 minimum size of free pool.

	  max_pool
		 maximum size of free pool.

	  max_age
		 max age of free element.

   prefer_ipv6
	  o Units: bool

	  o Default: off

       Prefer IPv6 address when	connecting to backends which  have  both  IPv4
       and IPv6	addresses.

   rush_exponent
	  o Units: requests per	request

	  o Default: 3

	  o Minimum: 2

	  o Flags: experimental

       How many parked requests we start for each completed request on the
       object. NB: Even with the implicit delay of delivery, this parameter
       controls an exponential increase in the number of worker threads.

   send_timeout
	  o Units: seconds

	  o Default: 600.000

	  o Minimum: 0.000

	  o Flags: delayed

       Total  timeout for ordinary HTTP1 responses. Does not apply to some in-
       ternally	generated errors and pipe mode.

       When 'idle_send_timeout'	is hit while sending an	 HTTP1	response,  the
       timeout is extended unless the total time already taken for sending the
       response	in its entirety	exceeds	this many seconds.

       When this timeout is hit, the session is closed.

   shortlived
	  o Units: seconds

	  o Default: 10.000

	  o Minimum: 0.000

       Objects created with (ttl+grace+keep) shorter than this are always  put
       in transient storage.

   sigsegv_handler
	  o Units: bool

	  o Default: on

	  o Flags: must_restart

       Install	a signal handler which tries to	dump debug information on seg-
       mentation faults, bus errors and	abort signals.

   syslog_cli_traffic
	  o Units: bool

	  o Default: on

       Log all CLI traffic to syslog(LOG_INFO).

   tcp_fastopen
       NB: This	parameter depends on a feature which is	not available  on  all
       platforms.

	  o Units: bool

	  o Default: off

       Enable TCP Fast Open extension.

   tcp_keepalive_intvl
	  o Units: seconds

	  o Default: platform dependent

	  o Minimum: 1.000

	  o Maximum: 100.000

	  o Flags: experimental

       The  number  of seconds between TCP keep-alive probes. Ignored for Unix
       domain sockets.

   tcp_keepalive_probes
	  o Units: probes

	  o Default: platform dependent

	  o Minimum: 1

	  o Maximum: 100

	  o Flags: experimental

       The maximum number of TCP keep-alive probes to send  before  giving  up
       and  killing  the  connection if	no response is obtained	from the other
       end. Ignored for	Unix domain sockets.

   tcp_keepalive_time
	  o Units: seconds

	  o Default: platform dependent

	  o Minimum: 1.000

	  o Maximum: 7200.000

	  o Flags: experimental

       The number of seconds a connection needs	to be idle before  TCP	begins
       sending out keep-alive probes. Ignored for Unix domain sockets.

   thread_pool_add_delay
	  o Units: seconds

	  o Default: 0.000

	  o Minimum: 0.000

	  o Flags: experimental

       Wait at least this long after creating a	thread.

       Some (buggy) systems may	need a short (sub-second) delay	between	creat-
       ing  threads.   Set  this  to  a	 few  milliseconds  if	you  see   the
       'threads_failed'	counter	grow too much.

       Setting this too	high results in	insufficient worker threads.

   thread_pool_destroy_delay
	  o Units: seconds

	  o Default: 1.000

	  o Minimum: 0.010

	  o Flags: delayed, experimental

       Wait this long after destroying a thread.

       This controls the decay of thread pools when idle(-ish).

   thread_pool_fail_delay
	  o Units: seconds

	  o Default: 0.200

	  o Minimum: 0.010

	  o Flags: experimental

       Wait at least this long after a failed thread creation before trying to
       create another thread.

       Failure to create a worker thread is often a sign that the end is
       near, because the process is running out of some resource. This delay
       tries not to rush the end needlessly.

       If thread creation failures are a problem, check	 that  thread_pool_max
       is not too high.

       It  may	also help to increase thread_pool_timeout and thread_pool_min,
       to reduce the rate at which threads are destroyed and later recreated.

   thread_pool_max
	  o Units: threads

	  o Default: 5000

	  o Minimum: thread_pool_min

	  o Flags: delayed

       The maximum number of worker threads in each pool.

       Do not set this higher than you have to,	since  excess  worker  threads
       soak  up	 RAM and CPU and generally just	get in the way of getting work
       done.

   thread_pool_min
	  o Units: threads

	  o Default: 100

	  o Minimum: 5

	  o Maximum: thread_pool_max

	  o Flags: delayed

       The minimum number of worker threads in each pool.

       Increasing this may help	ramp up	faster from  low  load	situations  or
       when threads have expired.

       The technical minimum is 5 threads, but it is strongly recommended to
       set this parameter to at least 10.

   thread_pool_reserve
	  o Units: threads

	  o Default: 0

	  o Maximum: 95% of thread_pool_min

	  o Flags: delayed

       The number of worker threads reserved for vital tasks in	each pool.

       Tasks may require other tasks to	complete (for example, client requests
       may require backend requests, http2 sessions require streams, which re-
       quire requests).	This reserve is	to ensure that lower priority tasks do
       not prevent higher priority tasks from running even under high load.

       The  effective  value  is  at  least 5 (the number of internal priority
       classes), irrespective of this parameter.  Default is  0	 to  auto-tune
       (5%  of	thread_pool_min).   Minimum  is	1 otherwise, maximum is	95% of
       thread_pool_min.

   thread_pool_stack
	  o Units: bytes

	  o Default: sysconf(_SC_THREAD_STACK_MIN)

	  o Minimum: 2k

	  o Flags: delayed

       Worker thread stack size.  This will likely be rounded up to a multiple
       of 4k (or whatever the page_size	might be) by the kernel.

       The  required  stack  size  is  primarily  driven  by  the depth	of the
       call-tree. The most common relevant determining factors in varnish core
       code  are  GZIP	(un)compression, ESI processing	and regular expression
       matches.	VMODs may  also	 require  significant  amounts	of  additional
       stack.  The nesting depth of VCL	subs is	another	factor,	although typi-
       cally not predominant.

       The stack size is per thread, so	the maximum total memory required  for
       worker  thread  stacks  is  in  the  order  of  size  =	thread_pools x
       thread_pool_max x thread_pool_stack.
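
       As a rough illustration with assumed values (2 pools, a
       thread_pool_max of 5000 and a 64k stack):

	  2 x 5000 x 64k = roughly 625 MB of worker thread stacks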

       Thus, in	particular for setups with many	 threads,  keeping  the	 stack
       size  at	 a  minimum helps reduce the amount of memory required by Var-
       nish.

       On the other hand, thread_pool_stack must be  large  enough  under  all
       circumstances,  otherwise  varnish  will	crash due to a stack overflow.
       Usually,	a stack	overflow manifests itself as a segmentation fault (aka
       segfault	 /  SIGSEGV)  with  the	 faulting address being	near the stack
       pointer (sp).

       Unless stack usage can be reduced, thread_pool_stack must be  increased
       when  a	stack  overflow	 occurs. Setting it in 150%-200% increments is
       recommended until stack overflows cease to occur.

   thread_pool_timeout
	  o Units: seconds

	  o Default: 300.000

	  o Minimum: 10.000

	  o Flags: delayed, experimental

       Thread idle threshold.

       Threads in excess of thread_pool_min, which have	been idle for at least
       this long, will be destroyed.

   thread_pool_watchdog
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 0.100

	  o Flags: experimental

       Thread queue stuck watchdog.

       If no queued work has been released for this long, the worker process
       panics itself.

   thread_pools
	  o Units: pools

	  o Default: 2

	  o Minimum: 1

	  o Maximum: defined when Varnish is built

	  o Flags: delayed, experimental

       Number of worker	thread pools.

       Increasing the number of	worker pools decreases lock  contention.  Each
       worker  pool  also  has a thread	accepting new connections, so for very
       high rates of incoming new connections on systems with many cores,  in-
       creasing	the worker pools may be	required.

       Too  many pools waste CPU and RAM resources, and	more than one pool for
       each CPU	is most	likely detrimental to performance.

       Can be increased	on the fly, but	decreases require a  restart  to  take
       effect, unless the drop_pools experimental debug	flag is	set.

   thread_queue_limit
	  o Default: 20

	  o Minimum: 0

	  o Flags: experimental

       Permitted request queue length per thread-pool.

       This  sets  the number of requests we will queue, waiting for an	avail-
       able thread.  Above this	limit sessions	will  be  dropped  instead  of
       queued.

   thread_stats_rate
	  o Units: requests

	  o Default: 10

	  o Minimum: 0

	  o Flags: experimental

       Worker threads accumulate statistics, and dump these into the global
       stats counters if the lock is free when they finish a job
       (request/fetch etc.). This parameter defines the maximum number of
       jobs a worker thread may handle before it is forced to dump its
       accumulated stats into the global counters.

   timeout_idle
	  o Units: seconds

	  o Default: 5.000

	  o Minimum: 0.000

       Idle timeout for	client connections.

       A connection is considered idle until we	have received the full request
       headers.

       This parameter is particularly relevant for  HTTP1  keepalive   connec-
       tions  which are	closed unless the next request is received before this
       timeout is reached.

   timeout_linger
	  o Units: seconds

	  o Default: 0.050

	  o Minimum: 0.000

	  o Flags: experimental

       How long	the worker thread lingers on an	idle session before handing it
       over  to	 the waiter.  When sessions are	reused,	as much	as half	of all
       reuses happen within the	first 100 msec of the  previous	 request  com-
       pleting.	  Setting  this	 too  high results in worker threads not doing
       anything	for their keep,	setting	it too low just	means that  more  ses-
       sions take a detour around the waiter.

   vcc_acl_pedantic
	  o Units: bool

	  o Default: off

       Insist that network numbers used in ACLs have an all-zero host part,
       e.g. make 1.2.3.4/24 an error. With this option set to off (the
       default), the host part of network numbers is fixed to all-zeroes
       (e.g. the above is changed to 1.2.3.0/24), a warning is output during
       VCL compilation and any ACL entry hits are logged with the fixed
       address as "fixed: ..." after the original VCL entry. With this option
       set to on, any ACL entries with non-zero host parts cause VCL
       compilation to fail.

   vcc_allow_inline_c
	  o Units: bool

	  o Default: off

       Allow inline C code in VCL.

   vcc_err_unref
	  o Units: bool

	  o Default: on

       Unreferenced VCL	objects	result in error.

   vcc_unsafe_path
	  o Units: bool

	  o Default: on

       Allow '/' in vmod & include paths.  Allow 'import ... from ...'.

   vcl_cooldown
	  o Units: seconds

	  o Default: 600.000

	  o Minimum: 1.000

       How  long  a  VCL  is  kept warm	after being replaced as	the active VCL
       (granularity approximately 30 seconds).

   vcl_path
	  o Default: /usr/local/etc/varnish:/usr/local/share/varnish/vcl

       Directory (or colon separated list of directories) from which  relative
       VCL  filenames (vcl.load	and include) are to be found.  By default Var-
       nish searches VCL files in both the  system  configuration  and	shared
       data  directories  to allow packages to drop their VCL files in a stan-
       dard location where relative includes would work.

   vmod_path
	  o Default: /usr/local/lib/varnish/vmods

       Directory (or colon separated list of directories) where	VMODs  are  to
       be found.

   vsl_buffer
	  o Units: bytes

	  o Default: 4k

	  o Minimum: vsl_reclen	+ 12 bytes

       Bytes  of  (req-/backend-)workspace dedicated to	buffering VSL records.
       When this parameter  is	adjusted,  most	 likely	 workspace_client  and
       workspace_backend will have to be adjusted by the same amount.

       Setting	this too high costs memory, setting it too low will cause more
       VSL flushes and likely increase lock-contention on the VSL mutex.

   vsl_mask
	  o Default:	       -Debug,-ObjProtocol,-ObjStatus,-ObjReason,-Obj-
	    Header,-VCL_trace,-WorkThread,-Hash,-VfpAcct,-H2RxHdr,-H2Rx-
	    Body,-H2TxHdr,-H2TxBody

       Mask individual VSL messages from being logged.

	  default
		 Set default value

       Use +/- prefix in front of VSL tag name to unmask/mask  individual  VSL
       messages.
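
       For example (a sketch only), Hash records could be unmasked and Debug
       records kept masked with:

	  varnishd ... -p vsl_mask=+Hash,-Debug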

   vsl_reclen
	  o Units: bytes

	  o Default: 255b

	  o Minimum: 16b

	  o Maximum: vsl_buffer	- 12 bytes

       Maximum number of bytes in SHM log record.

   vsl_space
	  o Units: bytes

	  o Default: 80M

	  o Minimum: 1M

	  o Maximum: 4G

	  o Flags: must_restart

       The amount of space to allocate for the VSL fifo	buffer in the VSM mem-
       ory segment.  If	you make this too small,  varnish{ncsa|log}  etc  will
       not  be	able  to  keep	up.  Making it too large just costs memory re-
       sources.

   vsm_free_cooldown
	  o Units: seconds

	  o Default: 60.000

	  o Minimum: 10.000

	  o Maximum: 600.000

       How long	VSM memory is kept warm	after a	deallocation (granularity  ap-
       proximately 2 seconds).

   vsm_space
	  o Units: bytes

	  o Default: 1M

	  o Minimum: 1M

	  o Maximum: 1G

       DEPRECATED: This parameter is ignored. There is no longer a global
       limit on the amount of shared memory.

   workspace_backend
	  o Units: bytes

	  o Default: 64k

	  o Minimum: 1k

	  o Flags: delayed

       Bytes of	HTTP protocol workspace	for backend HTTP req/resp.  If	larger
       than 4k,	use a multiple of 4k for VM efficiency.

   workspace_client
	  o Units: bytes

	  o Default: 64k

	  o Minimum: 9k

	  o Flags: delayed

       Bytes of HTTP protocol workspace for client HTTP req/resp. Use a
       multiple of 4k for VM efficiency. For HTTP/2 compliance this must be
       at least 20k, in order to receive full-size (=16k) frames from the
       client. That usually happens only in POST/PUT bodies. For other
       traffic patterns smaller values work just fine.

   workspace_session
	  o Units: bytes

	  o Default: 0.75k

	  o Minimum: 0.25k

	  o Flags: delayed

       Allocation  size	 for session structure and workspace.	 The workspace
       is primarily used for TCP connection addresses.	If larger than 4k, use
       a multiple of 4k	for VM efficiency.

   workspace_thread
	  o Units: bytes

	  o Default: 2k

	  o Minimum: 0.25k

	  o Maximum: 8k

	  o Flags: delayed

       Bytes  of  auxiliary  workspace per thread.  This workspace is used for
       certain temporary data structures during	 the  operation	 of  a	worker
       thread.	 One  use  is for the IO-vectors used during delivery. Setting
       this parameter too low may increase the number  of  writev()  syscalls,
       setting	 it  too  high	just  wastes  space.   ~0.1k  +	 UIO_MAXIOV  *
       sizeof(struct iovec) (typically = ~16k for  64bit)  is  considered  the
       maximum	sensible value under any known circumstances (excluding	exotic
       vmod use).

EXIT CODES
       Varnish and bundled tools will, in most cases, exit with one of the
       following codes:

       o 0 OK

       o 1 Some	error which could be system-dependent and/or transient

       o 2  Serious  configuration  / parameter	error -	retrying with the same
	 configuration / parameters is most likely useless

       The varnishd master process may also OR its exit code (a decoding
       sketch follows this list):

       o with 0x20 when	the varnishd child process died,

       o with 0x40 when	the varnishd child process was terminated by a	signal
	 and

       o with 0x80 when	a core was dumped.
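
       As an illustrative sketch, a wrapper script running varnishd in the
       foreground could inspect these bits like this:

	  varnishd -F ...
	  status=$?
	  if [ $((status & 0x40)) -ne 0 ]; then
		  echo "varnishd child was terminated by a signal"
	  fi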

SEE ALSO
       o varnishlog(1)

       o varnishhist(1)

       o varnishncsa(1)

       o varnishstat(1)

       o varnishtop(1)

       o varnish-cli(7)

       o vcl(7)

HISTORY
       The  varnishd  daemon was developed by Poul-Henning Kamp	in cooperation
       with Verdens Gang AS and	Varnish	Software.

       This manual page was written by Dag-Erling Smørgrav with updates by
       Stig Sandbeck Mathisen <ssm@debian.org>, Nils Goroll and others.

COPYRIGHT
       This document is	licensed under the same	licence	as Varnish itself. See
       LICENCE for details.

       o Copyright (c) 2007-2015 Varnish Software AS

								   VARNISHD(1)
