FreeBSD Manual Pages
flowgrind(1)		       Flowgrind Manual			  flowgrind(1)

NAME
       flowgrind - advanced TCP traffic generator for Linux, FreeBSD, and Mac
       OS X

SYNOPSIS
       flowgrind [OPTION]...

DESCRIPTION
       flowgrind is an advanced TCP traffic generator for testing and
       benchmarking Linux, FreeBSD, and Mac OS X TCP/IP stacks. In contrast
       to other performance measurement tools it features a distributed
       architecture, where throughput and other metrics are measured between
       arbitrary flowgrind server processes (the flowgrind daemon,
       flowgrindd(1)).

       Besides goodput (throughput), flowgrind measures the application layer
       interarrival time (IAT), round-trip time (RTT), block count, and
       network transactions/s. Unlike most cross-platform testing tools,
       flowgrind collects and reports the TCP metrics returned by the TCP_INFO
       socket option, which are	usually	internal to the	TCP/IP stack. On Linux
       and  FreeBSD  this includes among others	the kernel's estimation	of the
       end-to-end RTT, the size	of the TCP congestion window (CWND)  and  slow
       start threshold (SSTHRESH).

       Flowgrind  has  a distributed architecture. It is split into two	compo-
       nents: the flowgrind daemon,  flowgrindd(1),  and  the  flowgrind  con-
       troller.	  Using	 the controller, flows between any two systems running
       the flowgrind daemon can	be setup (third	party tests). At  regular  in-
       tervals	during	the test the controller	collects and displays the mea-
       sured results from the daemons. It can run multiple flows at once  with
       the  same  or  different	 settings and individually schedule every one.
       Test and control connections can optionally be diverted to different
       interfaces.

       The traffic generation itself is	either bulk transfer, rate-limited, or
       sophisticated request/response tests. Flowgrind uses libpcap  to	 auto-
       matically dump traffic for qualitative analysis.

       There are two important groups of options: controller options and flow
       options.	 Like the name suggests, controller options apply globally and
       potentially affect all flows, while flow-specific options only apply to
       the subset of flows selected using the -F option.

OPTIONS
       Mandatory arguments to long options are mandatory for short options
       too.

   General options
       -h, --help[=WHAT]
	      display help and exit. Optional WHAT can either be 'socket' for
	      help on socket options or 'traffic' for help on traffic
	      generation

       -v, --version
	      print version information	and exit

   Controller options
       -c, --show-colon=TYPE[,TYPE]...
	      display intermediate interval report column TYPE in output.
	      Allowed values for TYPE are: 'interval', 'through', 'transac',
	      'iat', 'kernel' (all shown by default), and 'blocks', 'rtt',
	      'delay' (optional)

       -d, --debug
	      increase	debugging  verbosity. Add option multiple times	to in-
	      crease the verbosity

       -e, --dump-prefix=PRE
	      prepend prefix PRE to dump filename (default: "flowgrind-")

       -i, --report-interval=#.#
	      reporting	interval, in seconds (default: 0.05s)

       -l, --log-file[=FILE]
	      write output to logfile FILE (default: flowgrind-'time-
	      stamp'.log)
       -m     report throughput	in 2**20 bytes/s (default: 10**6 bit/s)
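       The difference between the two units can be sketched in a few lines of
       Python (an illustration only, not part of flowgrind; the example rate
       is arbitrary):

```python
# Illustration: convert a measured goodput in bits per second into
# the two units flowgrind can report.
def to_mbit_per_s(bits_per_s: float) -> float:
    """Default unit: 10**6 bit/s (Mbit/s)."""
    return bits_per_s / 10**6

def to_mib_per_s(bits_per_s: float) -> float:
    """Unit selected by -m: 2**20 bytes/s (MiB/s)."""
    return bits_per_s / 8 / 2**20

rate = 94_371_840.0          # example: ~94.4 Mbit/s on a 100 Mbit/s link
print(to_mbit_per_s(rate))   # 94.37184
print(to_mib_per_s(rate))    # 11.25
```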

       -n, --flows=#
	      number of	test flows (default: 1)

       -o     overwrite	existing log files (default: don't)

       -p     print numbers instead of symbolic values (like INT_MAX)

       -q, --quiet
	      be quiet,	do not log to screen (default: off)

       -s, --tcp-stack=TYPE
	      don't  determine	unit of	source TCP stacks automatically. Force
	      unit to TYPE, where TYPE is 'segment' or 'byte'

       -w     write output to logfile (same as --log-file)

   Flow	options
       All flows have two endpoints, a source and a destination. The  distinc-
       tion  between  source and destination endpoints only affects connection
       establishment.  When starting a flow the	destination  endpoint  listens
       on a socket and the source endpoint connects to it. For the actual test
       this makes no difference, both endpoints	have exactly the same capabil-
       ities.  Data  can  be sent in either direction and many settings	can be
       configured individually for each	endpoint.

       Some of these options take the flow endpoint as	argument,  denoted  by
       'x'  in the option syntax. 'x' needs to be replaced with	either 's' for
       the source endpoint, 'd'	for the	destination endpoint or	'b'  for  both
       endpoints. To specify different values for each endpoint, separate
       them by comma. For instance -W s=8192,d=4096 sets the advertised	window
       to 8192 at the source and 4096 at the destination.
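       The endpoint syntax can be illustrated with a small parser sketch
       (hypothetical helper, not flowgrind source code), which expands 'b'
       into both endpoints:

```python
# Hypothetical sketch: parse an endpoint-qualified option value like
# "s=8192,d=4096" or "b=16384" into per-endpoint settings.
def parse_endpoint_value(arg: str) -> dict:
    settings = {}
    for part in arg.split(","):
        endpoint, _, value = part.partition("=")
        if endpoint == "b":               # 'b' applies to both endpoints
            settings["s"] = settings["d"] = value
        elif endpoint in ("s", "d"):
            settings[endpoint] = value
        else:
            raise ValueError(f"unknown endpoint {endpoint!r}")
    return settings

print(parse_endpoint_value("s=8192,d=4096"))  # {'s': '8192', 'd': '4096'}
```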

       -A x   use minimal response size	needed for RTT calculation
	      (same as -G s=p:C:40)

       -B x=# set requested sending buffer, in bytes

       -C x   stop flow	if it is experiencing local congestion

       -D x=DSCP
	      DSCP value for type-of-service (TOS) IP header byte

       -E     enumerate	bytes in payload instead of sending zeros

       -F #[,#]...
	      Flow  options following this option apply	only to	the given flow
	      IDs. Useful in combination with -n to set	specific  options  for
	      certain  flows.  Numbering  starts with 0, so -F 1 refers	to the
	      second flow. With -1 all flows can be referred to

       -G x=(q|p|g):(C|U|E|N|L|P|W):#1:[#2]
	      activate stochastic traffic generation and  set  parameters  ac-
	      cording to the used distribution.	For additional information see
	      section 'Traffic Generation Option'

       -H x=HOST[/CONTROL[:PORT]]
	      test from/to HOST. Optional argument is the address and port for
	      the  CONTROL connection to the same host.	An endpoint that isn't
	      specified	is assumed to be localhost

       -J #   use random seed #	(default: read /dev/urandom)

       -I     enable one-way delay calculation (no clock synchronization)

       -L     call connect() on	test socket  immediately  before  starting  to
	      send  data  (late	connect). If not specified the test connection
	      is established in	the preparation	phase before the test starts

       -M x   dump traffic using libpcap. flowgrindd(1)	must be	run as root

       -N     shutdown() each socket direction after test flow

       -O x=OPT
	      set socket option	OPT on test socket. For	additional information
	      see section 'Socket Options'

       -P x   do  not  iterate	through	 select()  to continue sending in case
	      block size did not suffice to fill sending queue (pushy)

       -Q     summarize only, no intermediate interval reports are computed

       -R x=#.#(z|k|M|G)(b|B)
	      send at specified	rate per second, where:	z = 2**0, k = 2**10, M
	      =	2**20, G = 2**30, and b	= bits/s (default), B =	bytes/s
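       The multiplier and unit arithmetic behind -R can be sketched as
       follows (a hypothetical parser for illustration, not flowgrind
       source):

```python
# Hypothetical sketch: resolve a -R rate specification such as "2.5M"
# or "500kB" into bits per second, using the documented multipliers.
MULT = {"z": 2**0, "k": 2**10, "M": 2**20, "G": 2**30}

def rate_to_bits_per_s(spec: str) -> float:
    unit_factor = 1                      # 'b' (bits per second) is the default
    if spec.endswith(("b", "B")):
        unit_factor = 8 if spec.endswith("B") else 1
        spec = spec[:-1]
    mult = 1
    if spec and spec[-1] in MULT:
        mult = MULT[spec[-1]]
        spec = spec[:-1]
    return float(spec) * mult * unit_factor

print(rate_to_bits_per_s("2.5M"))    # 2.5 * 2**20      = 2621440.0 bit/s
print(rate_to_bits_per_s("500kB"))   # 500 * 2**10 * 8  = 4096000.0 bit/s
```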

       -S x=# set block	(message) size,	in bytes (same as -G s=q:C:#)

       -T x=#.#
	      set flow duration, in seconds (default: s=10,d=0)

       -U x=# set application buffer size, in bytes (default: 8192)  truncates
	      values if	used with stochastic traffic generation

       -W x=# set requested receiver buffer (advertised	window), in bytes

       -Y x=#.#
	      set initial delay	before the host	starts to send,	in seconds

TRAFFIC GENERATION OPTION
       Via option -G flowgrind supports stochastic traffic generation, which
       allows conducting, besides normal bulk transfers, advanced
       rate-limited and request-response data transfers.

       The  stochastic traffic generation option -G takes the flow endpoint as
       argument, denoted by 'x'	in the option syntax. 'x' needs	to be replaced
       with  either  's' for the source	endpoint, 'd' for the destination end-
       point or	'b' for	both endpoints.	However,  please  note	that  bidirec-
       tional  traffic	generation  can	lead to	unexpected results. To specify
       different values for each endpoint, separate them by comma.

       -G x=(q|p|g):(C|U|E|N|L|P|W):#1:[#2]

	      Flow parameter:

		   q	  request size (in bytes)

		   p	  response size	(in bytes)

		   g	  request interpacket gap (in seconds)

	      Distribution:

		   C	  constant (#1:	value, #2: not used)

		   U	  uniform (#1: min, #2:	max)

		   E	  exponential (#1: lambda - lifetime, #2: not used)

		   N	  normal (#1: mu - mean value, #2: sigma_square -
			  variance)

		   L	  lognormal (#1: zeta -	mean, #2: sigma	- std dev)

		   P	  pareto (#1: k	- shape, #2: x_min - scale)

		   W	  weibull (#1: lambda -	scale, #2: k - shape)

	      Advanced	distributions like weibull are only available if flow-
	      grind is compiled	with libgsl support.
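       The distribution codes map naturally onto Python's random module. The
       sketch below assumes the parameterizations listed above (e.g. that N
       takes the variance, not the standard deviation); it is an
       illustration, not flowgrind's implementation:

```python
import math
import random

# Sketch (assumed parameterizations): map -G distribution codes onto
# Python's random module.
def draw(dist: str, p1: float, p2: float = 0.0) -> float:
    if dist == "C":                           # constant value
        return p1
    if dist == "U":                           # uniform(min, max)
        return random.uniform(p1, p2)
    if dist == "E":                           # exponential with rate lambda
        return random.expovariate(p1)
    if dist == "N":                           # normal(mu, variance)
        return random.normalvariate(p1, math.sqrt(p2))
    if dist == "L":                           # lognormal(zeta, sigma)
        return random.lognormvariate(p1, p2)
    if dist == "P":                           # pareto(shape k, scale x_min)
        return p2 * random.paretovariate(p1)
    if dist == "W":                           # weibull(scale, shape)
        return random.weibullvariate(p1, p2)
    raise ValueError(dist)

random.seed(42)                               # cf. option -J
print(draw("U", 40, 10000))                   # e.g. a request size for the
                                              # interactive session scenario
```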

       -U #   specify a cap for the calculated values for request and response
	      sizes, needed because the values drawn from the advanced
	      distributions are unbounded, but we need to know the buffer
	      size (it's not needed for constant values or the uniform
	      distribution). Values outside the bounds are recalculated until
	      a valid result occurs, but at most 10 times (then the bound
	      value is used)
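       The documented cap behaviour (redraw at most 10 times, then fall back
       to the bound) can be sketched like this; the lower bound of 0 is an
       assumption, since only the upper cap is documented:

```python
import random

# Sketch of the -U cap: redraw out-of-bounds values up to 10 times,
# then give up and use the bound value itself.
def capped_draw(draw, cap: float, tries: int = 10) -> float:
    for _ in range(tries):
        value = draw()
        if 0 <= value <= cap:          # assumed lower bound of 0
            return value
    return cap                         # fall back to the bound value

random.seed(1)
block = capped_draw(lambda: random.normalvariate(2000, 50), cap=32000)
print(block)                           # a block size guaranteed <= 32000
```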

SOCKET OPTIONS
       Flowgrind allows setting the following standard and non-standard
       socket options via option -O.

       All socket options take the flow	endpoint as argument, denoted  by  'x'
       in  the option syntax. 'x' needs	to be replaced with either 's' for the
       source endpoint,	'd' for	the destination	endpoint or 'b'	for both  end-
       points. To specify different values for each endpoint, separate them
       by comma. Moreover, it is possible to repeatedly	pass the same endpoint
       in order	to specify multiple socket options.

   Standard socket options
       -O x=TCP_CONGESTION=ALG
	      set congestion control algorithm ALG on test socket

       -O x=TCP_CORK
	      set TCP_CORK on test socket

       -O x=TCP_NODELAY
	      disable Nagle algorithm on test socket

       -O x=SO_DEBUG
	      set SO_DEBUG on test socket

       -O x=IP_MTU_DISCOVER
	      set IP_MTU_DISCOVER on test socket if not already enabled by
	      system default

       -O x=ROUTE_RECORD
	      set ROUTE_RECORD on test socket

   Non-standard	socket options
       -O x=TCP_MTCP
	      set TCP_MTCP (15)	on test	socket

       -O x=TCP_ELCN
	      set TCP_ELCN (20)	on test	socket

       -O x=TCP_LCD
	      set TCP_LCD (21) on test socket

EXAMPLES
       flowgrind
	      testing localhost IPv4 TCP performance with default settings,
	      same as flowgrind -H b=127.0.0.1 -T s=10,d=0. The flowgrind
	      daemon needs to be run on localhost

       flowgrind -H b=::1/
	      same as above, but testing localhost IPv6	TCP  performance  with
	      default settings

       flowgrind -H s=host1,d=host2
	      bulk TCP transfer	between	host1 and host2. Host1 acts as source,
	      host2 as destination endpoint. Both endpoints need to run the
	      flowgrind	daemon.	The default flow options are used, with	a flow
	      duration of 10 seconds and a data	stream from host1 to host2

       flowgrind -H s=host1,d=host2 -T s=0,d=10
	      same as the above	but instead with a flow	sending	 data  for  10
	      seconds from host2 to host1

       flowgrind -n 2 -F 0 -H s=,d= -F 1 -H
	      setup two parallel flows between two pairs of hosts. The -H
	      option following -F 0 applies to the first flow, the -H option
	      following -F 1 to the second flow

       flowgrind -p -H s=,d=
       -A s
	      setup one flow between the two given hosts and use 192.168.1.x
	      IP addresses for control traffic. Activate minimal response for
	      RTT calculation

       flowgrind -i 0.001 -T s=1 | egrep ^S | gnuplot -persist	-e  'plot  "-"
       using 3:5 with lines title "Throughput" '
	      setup  one  flow	over  loopback device and plot the data	of the
	      sender with the help of gnuplot

       flowgrind -G s=q:C:400 -G s=p:N:2000:50 -G s=g:U:0.005:0.01 -U 32000
	      -G s=q:C:400 : use constant request size of 400 bytes
	      -G s=p:N:2000:50 : use normal  distributed  response  size  with
	      mean 2000	bytes and variance 50
	      -G s=g:U:0.005:0.01 : use uniform distributed interpacket gap
	      with min 0.005s and max 0.01s
	      -U 32000: truncate block sizes at 32 kbytes (needed for the
	      normal distribution, whose values are unbounded)

TRAFFIC SCENARIOS
       The following examples demonstrate how flowgrind's traffic generation
       capability can be used. These have been incorporated in different
       tests for flowgrind and have proven meaningful. However, as Internet
       traffic is diverse, there is no guarantee that these are appropriate
       in every situation.

   Request Response Style (HTTP)
       This  scenario  is  based  on  the  work	 in

       flowgrind -J 42 -M s -G s=q:C:350 -G s=p:L:9055:115.17 -U 100000
	      -J 42: use random seed 42 to make measurements reproducible
	      -M s: dump traffic on sender side
	      -G s=q:C:350 : use constant request size of 350 bytes
	      -G s=p:L:9055:115.17 : use lognormal distribution with mean
	      9055 and variance 115.17 for the response size
	      -U 100000: truncate responses at 100 kbytes

       For this scenario we recommend focusing on RTT (lower values are
       better) and network transactions/s (higher values are better) as
       metrics.

   Interactive Session (Telnet)
       This scenario emulates a	telnet session.

       flowgrind -G s=q:U:40:10000 -G s=p:U:40:10000 -O b=TCP_NODELAY
	      -G s=q:U:40:10000 -G s=p:U:40:10000 : use uniform distributed
	      request and response sizes between 40 B and 10 kB
	      -O b=TCP_NODELAY: set socket option TCP_NODELAY as used by
	      telnet applications

       For this	scenario RTT (lower is better) and Network Transactions/s  are
       useful metrics (higher is better).

   Rate	Limited	(Streaming Media)
       This scenario emulates a video stream transfer with a bitrate of 800
       kbit/s.
       flowgrind -G s=q:C:800 -G s=g:N:0.008:0.001
	      Use normal distributed interpacket gap with  mean	 0.008	and  a
	      small  variance  (0.001).	 In conjunction	with a request size of
	      800 bytes	an average bitrate of approx 800 kbit/s	 is  achieved.
	      The variance is added to emulate a variable bitrate as used in
	      today's video codecs.

       For this	scenario the IAT (lower	 is  better)  and  minimal  throughput
       (higher is better) are interesting metrics.
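       The arithmetic behind this scenario can be checked in a couple of
       lines (an illustration, not part of flowgrind):

```python
# 800-byte requests sent every 0.008 s on average yield about 800 kbit/s.
request_size = 800          # bytes, from -G s=q:C:800
mean_gap = 0.008            # seconds, from -G s=g:N:0.008:0.001
bitrate = request_size * 8 / mean_gap   # bits per second
print(bitrate)              # 800000.0, i.e. 800 kbit/s
```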

OUTPUT COLUMNS
   Flow/endpoint identifiers
       #      flow endpoint, either 'S'	for source or 'D' for destination

       ID     numerical	flow identifier

       begin and end
	      boundaries  of  the  measurement	interval  in seconds. The time
	      shown is the elapsed time	since receiving	 the  RPC  message  to
	      start the test from the daemon's point of view

   Application layer metrics
       through
	      transmitting goodput of the flow endpoint during this
	      measurement interval, measured in Mbit/s (default) or MB/s (-m)

       transac
	      number of successfully received response blocks per second (we
	      call it network transactions/s)

       blocks
	      number of request and response blocks sent during this
	      measurement interval (column disabled by default)

       IAT    block inter-arrival time (IAT). Together with  the  minimum  and
	      maximum the arithmetic mean for that specific measurement	inter-
	      val is displayed.	If no block is received	during	report	inter-
	      val, 'inf' is displayed.

       DLY and RTT
	      1-way and 2-way block delay, i.e., the block latency and the
	      block round-trip time (RTT). For both delays the minimum and
	      maximum encountered values in that measurement interval are
	      displayed together with the arithmetic mean. If no block, or
	      respectively no block acknowledgment, arrives during that
	      report interval, 'inf' is displayed. Both the 1-way and 2-way
	      block delays are disabled by default (see options -I and -A).

   Kernel metrics (TCP_INFO)
       All of the following TCP-specific metrics are obtained from the kernel
       through the TCP_INFO socket option at the end of every report
       interval. The sampling rate can be changed via option -i.
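       As a sketch of how such tools read these metrics: on Linux, struct
       tcp_info (linux/tcp.h) begins with eight 8-bit fields (state,
       ca_state, retransmits, probes, backoff, options, and two window-scale
       and flag bytes) followed by 32-bit fields starting with tcpi_rto and
       tcpi_ato. The exact field order below is an assumption based on that
       layout, and the buffer is synthetic, standing in for the result of
       getsockopt(IPPROTO_TCP, TCP_INFO, ...):

```python
import struct

HEAD = "8B"        # the eight leading 8-bit fields
U32S = "10I"       # rto, ato, snd_mss, rcv_mss, unacked, sacked,
                   # lost, retrans, fackets, last_data_sent (assumed order)

def parse_tcp_info_prefix(buf: bytes) -> dict:
    fields = struct.unpack_from("=" + HEAD + U32S, buf)
    names = ["state", "ca_state", "retransmits", "probes", "backoff",
             "options", "wscales", "flags",
             "rto", "ato", "snd_mss", "rcv_mss", "unacked", "sacked",
             "lost", "retrans", "fackets", "last_data_sent"]
    return dict(zip(names, fields))

# Synthetic buffer standing in for a real getsockopt() result:
buf = struct.pack("=8B10I", 1, 0, 0, 0, 0, 0, 0, 0,
                  204000, 40000, 1448, 536, 3, 0, 0, 0, 0, 0)
info = parse_tcp_info_prefix(buf)
print(info["rto"], info["unacked"])     # 204000 3
```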

       cwnd (tcpi_cwnd)
	      size  of	TCP  congestion	 window	 (CWND)	 in number of segments
	      (Linux) or bytes (FreeBSD)

       ssth (tcpi_snd_ssthresh)
	      size of the slow-start threshold in number of  segments  (Linux)
	      or bytes (FreeBSD)

       uack (tcpi_unacked)
	      number  of  currently  unacknowledged  segments, i.e., number of
	      segments in flight (FlightSize) (Linux only)

       sack (tcpi_sacked)
	      number of	selectively acknowledged segments (Linux only)

       lost (tcpi_lost)
	      number of	segments assumed lost (Linux only)

       retr (tcpi_retrans)
	      number of	unacknowledged retransmitted segments (Linux only)

       tret (tcpi_retransmits)
	      number of	retransmissions	triggered by a retransmission  timeout
	      (RTO) (Linux only)

       fack (tcpi_fackets)
	      number  of  segments between SND.UNA and the highest selectively
	      acknowledged sequence number (SND.FACK) (Linux only)

       reor (tcpi_reordering)
	      segment reordering metric. The Linux kernel can detect and  cope
	      with  reordering	without	significant loss of performance	if the
	      distance a segment gets displaced	does not exceed	the reordering
	      metric (Linux only)

       rtt (tcpi_rtt) and rttvar (tcpi_rttvar)
	      TCP round-trip time and its variance given in ms

       rto (tcpi_rto)
	      the retransmission timeout given in ms

       bkof (tcpi_backoff)
	      number of	RTO backoffs (Linux only)

       ca state	(tcpi_ca_state)
	      internal	state  of  the TCP congestion control state machine as
	      implemented in the Linux kernel. Can be one of  open,  disorder,
	      cwr, recovery or loss (Linux only)

	      Open   is the normal state. It indicates that no duplicate
		     acknowledgment (ACK) has been received and no segment is
		     considered lost

	      Disorder
		     is entered upon the reception of the first consecutive
		     duplicate ACK or selective acknowledgment (SACK)

	      CWR    is	entered	when a notification from  Explicit  Congestion
		     Notification (ECN)	is received

	      Recovery
		     is entered when three duplicate ACKs or an equivalent
		     number of SACKs are received. In this state congestion
		     control and loss recovery procedures like Fast
		     Retransmit and Fast Recovery (RFC 5681) are executed

	      Loss   is	entered	if the RTO expires. Again  congestion  control
		     and loss recovery procedures are executed

       smss and	pmtu
	      sender  maximum  segment size and	path maximum transmission unit
	      in bytes

   Internal flowgrind state (only enabled in debug builds)
       status state of the flow	inside flowgrind for diagnostic	 purposes.  It
	      is  a  tuple of two values, the first for	sending	and the	second
	      for receiving. Ideally the states	of both	the source and	desti-
	      nation  endpoints	of a flow should be symmetrical	but since they
	      are not synchronized they	may not	change at the same  time.  The
	      possible values are:

	      c	     Direction completed sending/receiving

	      d	     Waiting for initial delay

	      f	     Fault state

	      l	     Active state, nothing yet transmitted or received

	      n	     Normal activity, some data	got transmitted	or received

	      o	     Flow  has zero duration in	that direction,	no data	is go-
		     ing to be exchanged

AUTHORS
       Flowgrind was originally started by Daniel Schaffrath. The distributed
       measurement architecture and advanced traffic generation were later
       added by Tim Kosse and Christian Samsel. Currently, flowgrind is
       developed and maintained by Arnd Hannemann and Alexander Zimmermann.

       The development and maintenance of flowgrind is primarily done via
       GitHub <>. Please report bugs via the issue webpage <>.

       Output of flowgrind is gnuplot compatible, so you can easily plot
       flowgrind's output (aka flowlogs).

SEE ALSO
       flowgrindd(1), flowgrind-stop(1), gnuplot(1)

				 January 2021			  flowgrind(1)

