FreeBSD Manual Pages

squeue(1)			Slurm Commands			     squeue(1)

NAME
       squeue  -  view	information about jobs located in the Slurm scheduling
       queue.

SYNOPSIS
       squeue [OPTIONS...]

DESCRIPTION
       squeue is used to view job and job step information for jobs managed by
       Slurm.

OPTIONS
       -A <account_list>, --account=<account_list>
	      Specify  the accounts of the jobs	to view. Accepts a comma sepa-
	      rated list of account names. This	has no effect when listing job
	      steps.

       -a, --all
	      Display information about jobs and job steps in all partitions.
	      This causes information to be displayed about partitions that
	      are configured as hidden and partitions that are unavailable to
	      the user's group.

       -r, --array
	      Display one job array element per	line.	Without	 this  option,
	      the  display  will be optimized for use with job arrays (pending
	      job array	elements will be combined on one line of  output  with
	      the array	index values printed using a regular expression).

       --array-unique
	      Display  one  unique pending job array element per line. Without
	      this option, the pending job array elements will be grouped into
	      the  master array	job to optimize	the display.  This can also be
	      set with the environment variable	SQUEUE_ARRAY_UNIQUE.
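
	      As an illustrative sketch (the job ID and index range here are
	      hypothetical), a pending array that the default display combines
	      into one line such as "1234_[1-4]" corresponds to the
	      per-element lines that -r would print:

```shell
# Hypothetical combined display of a pending job array, as printed by default:
combined="1234_[1-4]"

# Expand it into the one-element-per-line form that "squeue -r" would show.
base=${combined%%_*}                                       # "1234"
range=$(printf '%s\n' "$combined" | sed 's/.*\[\(.*\)\]/\1/')  # "1-4"
lo=${range%-*}
hi=${range#*-}
for i in $(seq "$lo" "$hi"); do
    printf '%s_%s\n' "$base" "$i"    # 1234_1 ... 1234_4
done
```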

       --federation
	      Show jobs	from the federation if a member	of one.

       -h, --noheader
	      Do not print a header on the output.

       --help Print a help message describing all squeue options.

       --hide Do not display information about jobs and job steps in all
	      partitions.  By default, information about partitions that are
	      configured as hidden or are unavailable to the user's group is
	      not displayed; this option restores that default behavior.

       -i <seconds>, --iterate=<seconds>
	      Repeatedly  gather  and  report the requested information	at the
	      interval specified (in seconds).	 By  default,  prints  a  time
	      stamp with the header.

       -j <job_id_list>, --jobs=<job_id_list>
	      Requests a comma separated list of job IDs to display.  Defaults
	      to all jobs.  The	--jobs=<job_id_list> option  may  be  used  in
	      conjunction  with	 the  --steps option to	print step information
	      about specific jobs.  Note: If a list of job  IDs	 is  provided,
	      the  jobs	 are  displayed	even if	they are on hidden partitions.
	      Since this option's argument is optional,	for proper parsing the
	      single letter option must	be followed immediately	with the value
	      and not include a	space between them. For	example	 "-j1008"  and
	      not  "-j 1008".  The job ID format is "job_id[_array_id]".  Per-
	      formance of the command can be measurably	improved  for  systems
	      with  large  numbers  of jobs when a single job ID is specified.
	      By default, this field size will be limited to  64  bytes.   Use
	      the  environment	variable  SLURM_BITSTR_LEN  to	specify	larger
	      field sizes.
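
	      The job ID form above can be illustrated with shell parameter
	      expansion (the IDs here are hypothetical):

```shell
# Hypothetical invocations; the short option takes its value with no
# intervening space:
#   squeue -j1008
#   SLURM_BITSTR_LEN=128 squeue -j1008
#
# Splitting an ID of the form "job_id[_array_id]":
jobid="1008_17"
base=${jobid%%_*}    # base job ID: "1008"
index=${jobid#*_}    # array index: "17"
printf '%s %s\n' "$base" "$index"
```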

       --local
	      Show only	jobs local to this cluster. Ignore other  clusters  in
	      this federation (if any).	Overrides --federation.

       -l, --long
	      Report  more  of the available information for the selected jobs
	      or job steps, subject to any constraints specified.

       -L, --licenses=<license_list>
	      Request jobs requesting or using one or more of  the  named  li-
	      censes.	The license list consists of a comma separated list of
	      license names.

       --me   Equivalent to --user=<my username>.

       -M, --clusters=<string>
	      Clusters to issue commands to.  Multiple cluster names may be
	      comma separated.  A value of 'all' will query all clusters.
	      This option implicitly sets the --local option.

       -n, --name=<name_list>
	      Request jobs or job steps	having one  of	the  specified	names.
	      The list consists	of a comma separated list of job names.

       --noconvert
	      Don't  convert  units from their original	type (e.g. 2048M won't
	      be converted to 2G).
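
	      As a sketch of the conversion that --noconvert suppresses, a
	      value such as 2048M is normally reduced to 2G:

```shell
# Minimal sketch of the default unit reduction (suppressed by --noconvert).
mb=2048
if [ $((mb % 1024)) -eq 0 ]; then
    display="$((mb / 1024))G"    # 2048M collapses to 2G
else
    display="${mb}M"             # non-multiples stay in MB
fi
echo "$display"
```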

       -o <output_format>, --format=<output_format>
	      Specify the information to be displayed, its size and position
	      (right or left justified).  Also see the -O <output_format>,
	      --Format=<output_format> option described below (which supports
	      less flexibility in formatting, but supports access to all
	      fields).  If the command is executed in a federated cluster
	      environment and information about more than one cluster is to
	      be displayed and the -h, --noheader option is used, then the
	      cluster name will be displayed before the default output formats
	      shown below.  The default formats with various options are

	      default	     "%.18i %.9P %.8j %.8u %.2t	%.10M %.6D %R"

	      -l, --long     "%.18i %.9P %.8j %.8u %.8T	%.10M %.9l %.6D	%R"

	      -s, --steps    "%.15i %.8j %.9P %.8u %.9M	%N"

	      The format of each field is "%[[.]size]type".

	      size    is the minimum field size.  If  no  size	is  specified,
		      whatever	is  needed  to	print  the information will be
		      used.

	       .      indicates	the output should be right justified and  size
		      must  be	specified.   By	default, output	is left	justi-
		      fied.

	      Note that	many of	these type specifications are valid  only  for
	      jobs  while  others  are	valid  only for	job steps.  Valid type
	      specifications include:

	      %all  Print all fields available for this	data type with a  ver-
		    tical bar separating each field.

	      %a    Account associated with the	job.  (Valid for jobs only)

	      %A    Number  of	tasks created by a job step.  This reports the
		    value of the srun --ntasks option.	(Valid for  job	 steps
		    only)

	      %A    Job	id.  This will have a unique value for each element of
		    job	arrays.	 (Valid	for jobs only)

	      %B    Executing (batch) host. For	an allocated session, this  is
		    the	 host on which the session is executing	(i.e. the node
		    from which the srun	or the salloc command  was  executed).
		    For	 a  batch  job,	 this  is the node executing the batch
		    script. In the case	of a typical Linux cluster, this would
		    be the compute node	zero of	the allocation.	In the case of
		    a Cray ALPS	system,	this would be the front-end host whose
		    slurmd daemon executes the job script.

	      %c    Minimum  number of CPUs (processors) per node requested by
		    the	job.  This reports the value of	the srun --mincpus op-
		    tion with a	default	value of zero.	(Valid for jobs	only)

	      %C    Number  of CPUs (processors) requested by the job or allo-
		    cated to it	if already running.  As	a  job	is  completing
		    this  number will reflect the current number of CPUs allo-
		    cated.  (Valid for jobs only)

	      %d    Minimum size of temporary disk space (in MB) requested  by
		    the	job.  (Valid for jobs only)

	      %D    Number of nodes allocated to the job or the	minimum	number
		    of nodes required by a pending job.	The actual  number  of
		    nodes allocated to a pending job may exceed	this number if
		    the	job specified a	node range count  (e.g.	  minimum  and
		    maximum  node  counts)  or	the  job specifies a processor
		    count instead of a node count. As a	job is completing this
		    number will	reflect	the current number of nodes allocated.
		    (Valid for jobs only)

	      %e    Time at which the job ended	or is expected to  end	(based
		    upon its time limit).  (Valid for jobs only)

	      %E    Job	dependencies remaining.	This job will not begin	execu-
		    tion until these dependent jobs complete. In the case of a
		    job	 that  can not run due to job dependencies never being
		    satisfied, the full	original job dependency	 specification
		    will  be reported. A value of NULL implies this job	has no
		    dependencies.  (Valid for jobs only)

	      %f    Features required by the job.  (Valid for jobs only)

	      %F    Job	array's	job ID.	This is	the base job ID.  For  non-ar-
		    ray	jobs, this is the job ID.  (Valid for jobs only)

	      %g    Group name of the job.  (Valid for jobs only)

	      %G    Group ID of	the job.  (Valid for jobs only)

	      %h    Whether the compute resources allocated to the job can be
		    oversubscribed by other jobs.  The resources to be
		    oversubscribed can be nodes, sockets, cores, or
		    hyperthreads, depending upon configuration.  The value
		    will be "YES" if the job was submitted with the
		    oversubscribe option or the partition is configured with
		    OverSubscribe=Force, "NO" if the job requires exclusive
		    node access, "USER" if the allocated compute nodes are
		    dedicated to a single user, "MCS" if the allocated compute
		    nodes are dedicated to a single security class (see
		    MCSPlugin and MCSParameters configuration parameters for
		    more information), and "OK" otherwise (typically with
		    dedicated CPUs allocated).  (Valid for jobs only)

	      %H    Number of sockets per node requested by the	job.  This re-
		    ports the value of	the  srun  --sockets-per-node  option.
		    When  --sockets-per-node  has  not	been  set, "*" is dis-
		    played.  (Valid for	jobs only)

	      %i    Job	or job step id.	 In the	case of	job arrays, the	job ID
		    format  will  be  of the form "<base_job_id>_<index>".  By
		    default, the job array index field size will be limited to
		    64	bytes.	 Use the environment variable SLURM_BITSTR_LEN
		    to specify larger field sizes.  (Valid for	jobs  and  job
		    steps)  In	the case of heterogeneous job allocations, the
		    job	ID format will be of the form "#+#"  where  the	 first
		    number  is	the  "heterogeneous job	leader"	and the	second
		    number the zero origin offset for each  component  of  the
		    job.

	      %I    Number of cores per	socket requested by the	job.  This re-
		    ports the value of	the  srun  --cores-per-socket  option.
		    When  --cores-per-socket  has  not	been  set, "*" is dis-
		    played.  (Valid for	jobs only)

	      %j    Job	or job step name.  (Valid for jobs and job steps)

	      %J    Number of threads per core requested by the	job.  This re-
		    ports  the	value  of  the srun --threads-per-core option.
		    When --threads-per-core has	not  been  set,	 "*"  is  dis-
		    played.  (Valid for	jobs only)

	      %k    Comment associated with the	job.  (Valid for jobs only)

	      %K    Job	array index.  By default, this field size will be lim-
		    ited to 64 bytes.  Use the environment variable SLURM_BIT-
		    STR_LEN  to	 specify  larger field sizes.  (Valid for jobs
		    only)

	      %l    Time limit of the  job  or	job  step  in  days-hours:min-
		    utes:seconds.   The	 value may be "NOT_SET"	if not yet es-
		    tablished or "UNLIMITED" for no limit.   (Valid  for  jobs
		    and	job steps)

	      %L    Time  left	for  the  job  to  execute  in days-hours:min-
		    utes:seconds.  This	value is calculated by subtracting the
		    job's  time	 used  from  its time limit.  The value	may be
		    "NOT_SET" if not yet established  or  "UNLIMITED"  for  no
		    limit.  (Valid for jobs only)

	      %m    Minimum  size  of  memory  (in  MB)	 requested by the job.
		    (Valid for jobs only)

	      %M    Time used by  the  job  or	job  step  in  days-hours:min-
		    utes:seconds.   The	 days  and  hours  are printed only as
		    needed.  For job steps this	field shows the	 elapsed  time
		    since  execution began and thus will be inaccurate for job
		    steps which	have been suspended.  Clock skew between nodes
		    in	the  cluster will cause	the time to be inaccurate.  If
		    the	time is	obviously wrong	(e.g. negative),  it  displays
		    as "INVALID".  (Valid for jobs and job steps)

	      %n    List  of  node  names  explicitly  requested  by  the job.
		    (Valid for jobs only)

	      %N    List of nodes allocated to the job or  job	step.  In  the
		    case  of a COMPLETING job, the list	of nodes will comprise
		    only those nodes that have not yet been returned  to  ser-
		    vice.  (Valid for jobs and job steps)

	      %o    The	command	to be executed.

	      %O    Whether contiguous nodes are requested by the job.
		    (Valid for jobs only)

	      %p    Priority of	the job	(converted to a	floating point	number
		    between 0.0	and 1.0).  Also	see %Q.	 (Valid	for jobs only)

	      %P    Partition of the job or job	step.  (Valid for jobs and job
		    steps)

	      %q    Quality of service associated with the  job.   (Valid  for
		    jobs only)

	      %Q    Priority of	the job	(generally a very large	unsigned inte-
		    ger).  Also	see %p.	 (Valid	for jobs only)

	      %r    The	reason a job is	in its current	state.	 See  the  JOB
		    REASON  CODES  section below for more information.	(Valid
		    for	jobs only)

	      %R    For pending jobs: the reason a job is waiting for
		    execution is printed within parentheses.  For terminated
		    jobs with failure: an explanation as to why the job failed
		    is printed within parentheses.  For all other job states:
		    the list of allocated nodes.  See the JOB REASON CODES
		    section below for more information.  (Valid for jobs only)

	      %s    Node  selection  plugin  specific data for a job. Possible
		    data includes: Geometry requirement	of resource allocation
		    (X,Y,Z  dimensions),  Connection type (TORUS, MESH,	or NAV
		    == torus else mesh), Permit	rotation of geometry  (yes  or
		    no),  Node	use (VIRTUAL or	COPROCESSOR), etc.  (Valid for
		    jobs only)

	      %S    Actual or expected start time of  the  job	or  job	 step.
		    (Valid for jobs and	job steps)

	      %t    Job	 state	in compact form.  See the JOB STATE CODES sec-
		    tion below for a list of possible states.  (Valid for jobs
		    only)

	      %T    Job	 state in extended form.  See the JOB STATE CODES sec-
		    tion below for a list of possible states.  (Valid for jobs
		    only)

	      %u    User  name for a job or job	step.  (Valid for jobs and job
		    steps)

	      %U    User ID for	a job or job step.  (Valid for	jobs  and  job
		    steps)

	      %v    Reservation	for the	job.  (Valid for jobs only)

	      %V    The	job's submission time.

	      %w    Workload  Characterization	Key  (wckey).  (Valid for jobs
		    only)

	      %W    Licenses reserved for the job.  (Valid for jobs only)

	      %x    List of node names explicitly excluded by the job.	(Valid
		    for	jobs only)

	      %X    Count  of cores reserved on	each node for system use (core
		    specialization).  (Valid for jobs only)

	      %y    Nice value (adjustment to a	 job's	scheduling  priority).
		    (Valid for jobs only)

	      %Y    For	 pending jobs, a list of the nodes expected to be used
		    when the job is started.

	      %z    Number of requested	sockets, cores,	 and  threads  (S:C:T)
		    per	 node for the job.  When (S:C:T) has not been set, "*"
		    is displayed.  (Valid for jobs only)

	      %Z    The	job's working directory.
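
	      The size and justification rules behave like fixed-width printf
	      fields: a leading '.' right-justifies within the given width,
	      and the default is left justification.  A minimal sketch of how
	      "%.18i %.9P" lays out two hypothetical field values:

```shell
# ".size" (right justified) behaves like printf "%18s";
# plain "size" (left justified) behaves like printf "%-18s".
jobid="1008"; partition="debug"
right=$(printf '%18s %9s' "$jobid" "$partition")    # like "%.18i %.9P"
left=$(printf '%-18s %-9s' "$jobid" "$partition")   # like "%18i %9P"
printf '[%s]\n[%s]\n' "$right" "$left"
```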

       -O <output_format>, --Format=<output_format>
	      Specify the information to be displayed.  Also see the
	      -o <output_format>, --format=<output_format> option described
	      above (which supports greater flexibility in formatting, but
	      does not support access to all fields because we ran out of
	      letters).  Requests a comma separated list of job information
	      to be displayed.

	      The format of each field is "type[:[.]size]"

	      size    is  the minimum field size.  If no size is specified, 20
		      characters will be allocated to print the	information.

	       .      indicates	the output should be right justified and  size
		      must  be	specified.   By	default, output	is left	justi-
		      fied.

	      Note that	many of	these type specifications are valid  only  for
	      jobs  while  others  are	valid  only for	job steps.  Valid type
	      specifications include:

	      account
		    Print the account associated with  the  job.   (Valid  for
		    jobs only)

	      accruetime
		    Print the accrue time associated with the job.  (Valid for
		    jobs only)

	      admin_comment
		    Administrator comment associated with the job.  (Valid for
		    jobs only)

	      allocnodes
		    Print  the	nodes  allocated  to the job.  (Valid for jobs
		    only)

	      allocsid
		    Print the session ID used to submit	the job.   (Valid  for
		    jobs only)

	      arrayjobid
		    Prints  the	 job ID	of the job array.  (Valid for jobs and
		    job	steps)

	      arraytaskid
		    Prints the task ID of the job array.  (Valid for jobs  and
		    job	steps)

	      associd
		    Prints the ID of the job association.  (Valid for jobs
		    only)

	      batchflag
		    Prints whether the batch flag has been  set.   (Valid  for
		    jobs only)

	      batchhost
		    Executing  (batch) host. For an allocated session, this is
		    the	host on	which the session is executing (i.e. the  node
		    from  which	 the srun or the salloc	command	was executed).
		    For	a batch	job, this is  the  node	 executing  the	 batch
		    script. In the case	of a typical Linux cluster, this would
		    be the compute node	zero of	the allocation.	In the case of
		    a Cray ALPS	system,	this would be the front-end host whose
		    slurmd daemon executes the job script.   (Valid  for  jobs
		    only)

	      boardspernode
		    Prints the number of boards	per node allocated to the job.
		    (Valid for jobs only)

	      burstbuffer
		    Burst Buffer specification (Valid for jobs only)

	      burstbufferstate
		    Burst Buffer state (Valid for jobs only)

	      cluster
		    Name of the	cluster	that is	running	the job	or job step.

	      clusterfeature
		    Cluster features required by the  job.   (Valid  for  jobs
		    only)

	      command
		    The	command	to be executed.	 (Valid	for jobs only)

	      comment
		    Comment associated with the	job.  (Valid for jobs only)

	      contiguous
		    Whether contiguous nodes are requested by the job.
		    (Valid for jobs only)

	      cores Number of cores per	socket requested by the	job.  This re-
		    ports  the	value  of  the srun --cores-per-socket option.
		    When --cores-per-socket has	not  been  set,	 "*"  is  dis-
		    played.  (Valid for	jobs only)

	      corespec
		    Count  of cores reserved on	each node for system use (core
		    specialization).  (Valid for jobs only)

	      cpufreq
		    Prints the frequency of the	allocated  CPUs.   (Valid  for
		    job	steps only)

	      cpus-per-task
		    Prints the number of CPUs per task allocated to the job.
		    (Valid for jobs only)

	      cpus-per-tres
		    Print the CPUs required per trackable resource allocated
		    to the job or job step.

	      deadline
		    Prints the deadline assigned to the job.  (Valid for jobs
		    only)

	      dependency
		    Job	dependencies remaining.	This job will not begin	execu-
		    tion until these dependent jobs complete. In the case of a
		    job	that can not run due to	job dependencies  never	 being
		    satisfied,	the full original job dependency specification
		    will be reported. A	value of NULL implies this job has  no
		    dependencies.  (Valid for jobs only)

	      delayboot
		    Delay boot time.  (Valid for jobs only)

	      derivedec
		    Derived  exit  code	for the	job, which is the highest exit
		    code of any	job step.  (Valid for jobs only)

	      eligibletime
		    Time the job is eligible for  running.   (Valid  for  jobs
		    only)

	      endtime
		    The	 time  of job termination, actual or expected.	(Valid
		    for	jobs only)

	      exit_code
		    The	exit code for the job.	(Valid for jobs	only)

	      feature
		    Features required by the job.  (Valid for jobs only)

	      groupid
		    Group ID of	the job.  (Valid for jobs only)

	      groupname
		    Group name of the job.  (Valid for jobs only)

	      jobarrayid
		    Job	array's	job ID.	This is	the base job ID.  For  non-ar-
		    ray	jobs, this is the job ID.  (Valid for jobs only)

	      jobid Job	id.  This will have a unique value for each element of
		    job	arrays	and  each  component  of  heterogeneous	 jobs.
		    (Valid for jobs only)

	      lastschedeval
		    Prints the last time the job was evaluated for scheduling.
		    (Valid for jobs only)

	      licenses
		    Licenses reserved for the job.  (Valid for jobs only)

	      maxcpus
		    Prints the max  number  of	CPUs  allocated	 to  the  job.
		    (Valid for jobs only)

	      maxnodes
		    Prints  the	 max  number  of  nodes	 allocated to the job.
		    (Valid for jobs only)

	      mcslabel
		    Prints the MCS_label of the	job.  (Valid for jobs only)

	      mem-per-tres
		    Print the memory (in MB) required per trackable  resources
		    allocated to the job or job	step.

	      minmemory
		    Minimum size of memory (in MB) requested by the job.
		    (Valid for jobs only)

	      mintime
		    Minimum time limit of the job (Valid for jobs only)

	      mintmpdisk
		    Minimum size of temporary disk space (in MB) requested  by
		    the	job.  (Valid for jobs only)

	      mincpus
		    Minimum  number of CPUs (processors) per node requested by
		    the	job.  This reports the value of	the srun --mincpus op-
		    tion with a	default	value of zero.	(Valid for jobs	only)

	      name  Job	or job step name.  (Valid for jobs and job steps)

	      network
		    The	 network  that the job is running on.  (Valid for jobs
		    and	job steps)

	      nice  Nice value (adjustment to a	 job's	scheduling  priority).
		    (Valid for jobs only)

	      nodes List of nodes allocated to the job or job step.  In the
		    case of a COMPLETING job, the list of nodes will comprise
		    only those nodes that have not yet been returned to
		    service.  (Valid for job steps only)

	      nodelist
		    List of nodes allocated to the job or  job	step.  In  the
		    case  of a COMPLETING job, the list	of nodes will comprise
		    only those nodes that have not yet been returned  to  ser-
		    vice.  (Valid for jobs only)

	      ntperboard
		    The	 number	 of  tasks  per	 board	allocated  to the job.
		    (Valid for jobs only)

	      ntpercore
		    The	number of tasks	per core allocated to the job.	(Valid
		    for	jobs only)

	      ntpernode
		    The number of tasks per node allocated to the job.
		    (Valid for jobs only)

	      ntpersocket
		    The	number of tasks	 per  socket  allocated	 to  the  job.
		    (Valid for jobs only)

	      numcpus
		    Number  of CPUs (processors) requested by the job or allo-
		    cated to it	if already running.  As	a job  is  completing,
		    this  number will reflect the current number of CPUs allo-
		    cated.  (Valid for jobs and	job steps)

	      numnodes
		    Number of nodes allocated to the job or the	minimum	number
		    of	nodes  required	by a pending job. The actual number of
		    nodes allocated to a pending job may exceed	this number if
		    the	 job  specified	 a node	range count (e.g.  minimum and
		    maximum node counts) or  the  job  specifies  a  processor
		    count instead of a node count. As a	job is completing this
		    number will	reflect	the current number of nodes allocated.
		    (Valid for jobs only)

	      numtasks
		    Number  of tasks requested by a job	or job step.  This re-
		    ports the value of the --ntasks option.  (Valid  for  jobs
		    and	job steps)

	      origin
		    Cluster  name where	federated job originated from.	(Valid
		    for	federated jobs only)

	      originraw
		    Cluster ID where federated job  originated	from.	(Valid
		    for	federated jobs only)

	      oversubscribe
		    Whether the compute resources allocated to the job can be
		    oversubscribed by other jobs.  The resources to be
		    oversubscribed can be nodes, sockets, cores, or
		    hyperthreads, depending upon configuration.  The value
		    will be "YES" if the job was submitted with the
		    oversubscribe option or the partition is configured with
		    OverSubscribe=Force, "NO" if the job requires exclusive
		    node access, "USER" if the allocated compute nodes are
		    dedicated to a single user, "MCS" if the allocated compute
		    nodes are dedicated to a single security class (see
		    MCSPlugin and MCSParameters configuration parameters for
		    more information), and "OK" otherwise (typically with
		    dedicated CPUs allocated).  (Valid for jobs only)

	      hetjobid
		    Job	ID of the heterogeneous	job leader.

	      hetjoboffset
		    Zero  origin  offset  within a collection of heterogeneous
		    job	components.

	      hetjobidset
		    Expression identifying all component job IDs within a
		    heterogeneous job.

	      partition
		    Partition of the job or job	step.  (Valid for jobs and job
		    steps)

	      priority
		    Priority of	the job	(converted to a	floating point	number
		    between  0.0 and 1.0).  Also see prioritylong.  (Valid for
		    jobs only)

	      prioritylong
		    Priority of	the job	(generally a very large	unsigned inte-
		    ger).  Also	see priority.  (Valid for jobs only)

	      profile
		    Profile of the job.	 (Valid	for jobs only)

	      preempttime
		    The	preempt	time for the job.  (Valid for jobs only)

	      qos   Quality  of	 service  associated with the job.  (Valid for
		    jobs only)

	      reason
		    The	reason a job is	in its current	state.	 See  the  JOB
		    REASON  CODES  section below for more information.	(Valid
		    for	jobs only)

	      reasonlist
		    For pending jobs: the reason a job is waiting for
		    execution is printed within parentheses.  For terminated
		    jobs with failure: an explanation as to why the job failed
		    is printed within parentheses.  For all other job states:
		    the list of allocated nodes.  See the JOB REASON CODES
		    section below for more information.  (Valid for jobs only)

	      reboot
		    Indicates if the allocated nodes should be rebooted before
		    starting the job.  (Valid for jobs only)

	      reqnodes
		    List of  node  names  explicitly  requested	 by  the  job.
		    (Valid for jobs only)

	      reqswitch
		    The maximum number of switches requested by the job.
		    (Valid for jobs only)

	      requeue
		    Prints whether  the	 job  will  be	requeued  on  failure.
		    (Valid for jobs only)

	      reservation
		    Reservation	for the	job.  (Valid for jobs only)

	      resizetime
		    The time of the job's most recent size change.  (Valid
		    for jobs only)

	      restartcnt
		    The	number of restarts for the job.	 (Valid	for jobs only)

	      resvport
		    Reserved ports of the job.	(Valid for job steps only)

	      schednodes
		    For	pending	jobs, a	list of	the nodes expected to be  used
		    when the job is started.  (Valid for jobs only)

	      sct   Number  of	requested  sockets, cores, and threads (S:C:T)
		    per	node for the job.  When	(S:C:T)	has not	been set,  "*"
		    is displayed.  (Valid for jobs only)

	      selectjobinfo
		    Node  selection  plugin  specific data for a job. Possible
		    data includes: Geometry requirement	of resource allocation
		    (X,Y,Z  dimensions),  Connection type (TORUS, MESH,	or NAV
		    == torus else mesh), Permit	rotation of geometry  (yes  or
		    no),  Node	use (VIRTUAL or	COPROCESSOR), etc.  (Valid for
		    jobs only)

	      siblingsactive
		    Cluster names  of  where  federated	 sibling  jobs	exist.
		    (Valid for federated jobs only)

	      siblingsactiveraw
		    Cluster IDs	of where federated sibling jobs	exist.	(Valid
		    for	federated jobs only)

	      siblingsviable
		    Cluster names of where federated sibling jobs  are	viable
		    to run.  (Valid for	federated jobs only)

	      siblingsviableraw
		    Cluster IDs	of where federated sibling jobs	viable to run.
		    (Valid for federated jobs only)

	      sockets
		    Number of sockets per node requested by the	job.  This re-
		    ports  the	value  of  the srun --sockets-per-node option.
		    When --sockets-per-node has	not  been  set,	 "*"  is  dis-
		    played.  (Valid for	jobs only)

	      sperboard
		    Number  of sockets per board allocated to the job.	(Valid
		    for	jobs only)

	      starttime
		    Actual or expected start time of  the  job	or  job	 step.
		    (Valid for jobs and	job steps)

	      state Job	 state in extended form.  See the JOB STATE CODES sec-
		    tion below for a list of possible states.  (Valid for jobs
		    only)

	      statecompact
		    Job	 state	in compact form.  See the JOB STATE CODES sec-
		    tion below for a list of possible states.  (Valid for jobs
		    only)

	      stderr
		    The path to which the job directs its standard error.
		    (Valid for jobs only)

	      stdin The path from which the job reads its standard input.
		    (Valid for jobs only)

	      stdout
		    The path to which the job directs its standard output.
		    (Valid for jobs only)

	      stepid
		    Job or job step ID.  In the case of job arrays, the job ID
		    format will be of the form "<base_job_id>_<index>".
		    (Valid for job steps only)

	      stepname
		    Job step name.  (Valid for job steps only)

	      stepstate
		    The	state of the job step.	(Valid for job steps only)

	      submittime
		    The time at which the job was submitted.  (Valid for jobs
		    only)

	      system_comment
		    System comment associated with the job.  (Valid  for  jobs
		    only)

	      threads
		    Number of threads per core requested by the	job.  This re-
		    ports the value of	the  srun  --threads-per-core  option.
		    When  --threads-per-core  has  not	been  set, "*" is dis-
		    played.  (Valid for	jobs only)

	      timeleft
		    Time left  for  the	 job  to  execute  in  days-hours:min-
		    utes:seconds.  This	value is calculated by subtracting the
		    job's time used from its time limit.   The	value  may  be
		    "NOT_SET"  if  not	yet  established or "UNLIMITED"	for no
		    limit.  (Valid for jobs only)

	      timelimit
		    Time limit for the job or job step.  (Valid for jobs and
		    job steps)

	      timeused
		    Time  used	by  the	 job  or  job  step in days-hours:min-
		    utes:seconds.  The days and	 hours	are  printed  only  as
		    needed.   For  job steps this field	shows the elapsed time
		    since execution began and thus will	be inaccurate for  job
		    steps which	have been suspended.  Clock skew between nodes
		    in the cluster will	cause the time to be  inaccurate.   If
		    the	 time  is obviously wrong (e.g.	negative), it displays
		    as "INVALID".  (Valid for jobs and job steps)

	      tres-alloc
		    Print the trackable	resources allocated to the job if run-
		    ning.   If not running, then print the trackable resources
		    requested by the job.

	      tres-bind
		    Print the trackable	resources task	binding	 requested  by
		    the	job or job step.

	      tres-freq
		    Print the trackable	resources frequencies requested	by the
		    job	or job step.

	      tres-per-job
		    Print the trackable	resources requested by the job.

	      tres-per-node
		    Print the trackable	resources per node  requested  by  the
		    job	or job step.

	      tres-per-socket
		    Print  the trackable resources per socket requested	by the
		    job	or job step.

	      tres-per-step
		    Print the trackable	resources requested by the job step.

	      tres-per-task
		    Print the trackable	resources per task  requested  by  the
		    job	or job step.

	      userid
		    User  ID  for  a job or job	step.  (Valid for jobs and job
		    steps)

	      username
		    User name for a job	or job step.  (Valid for jobs and  job
		    steps)

	      wait4switch
		    The	 amount	 of  time  to  wait  for the desired number of
		    switches.  (Valid for jobs only)

	      wckey Workload Characterization Key (wckey).   (Valid  for  jobs
		    only)

	      workdir
		    The	job's working directory.  (Valid for jobs only)

       -p <part_list>, --partition=<part_list>
	      Specify  the  partitions of the jobs or steps to view. Accepts a
	      comma separated list of partition	names.

       -P, --priority
	      For pending jobs submitted to multiple partitions, list the  job
	      once per partition. In addition, if jobs are sorted by priority,
	      consider both the	partition and job priority. This option	can be
	      used to produce a	list of	pending	jobs in	the same order consid-
	      ered for scheduling by Slurm with	appropriate additional options
	      (e.g. "--sort=-p,i --states=PD").

       -q <qos_list>, --qos=<qos_list>
	      Specify the QOS values of the jobs or steps to view. Accepts a
	      comma separated list of QOS names.

       -R, --reservation=reservation_name
	      Specify the reservation of the jobs to view.

       -s, --steps
	      Specify the job steps to view.  This flag	indicates that a comma
	      separated	 list  of  job	steps to view follows without an equal
	      sign (see	 examples).   The  job	step  format  is  "job_id[_ar-
	      ray_id].step_id".	Defaults to all	job steps. Since this option's
	      argument is optional, for	proper parsing the single  letter  op-
	      tion must	be followed immediately	with the value and not include
	      a	space  between	them.  For  example  "-s1008.0"	 and  not  "-s
	      1008.0".

       --sibling
	      Show  all	sibling	jobs on	a federated cluster. Implies --federa-
	      tion.

       -S <sort_list>, --sort=<sort_list>
	      Specification of the order in which records should be  reported.
	      This  uses  the same field specification as the <output_format>.
	      The long format option "cluster" can also	be used	to  sort  jobs
	      or  job  steps  by cluster name (e.g. federated jobs).  Multiple
	      sorts may	be performed by	listing	multiple sort fields separated
	      by  commas.   The	field specifications may be preceded by	"+" or
	      "-" for ascending	(default) and descending  order	 respectively.
	      For example, a sort value	of "P,U" will sort the records by par-
	      tition name then by user id.  The	default	value of sort for jobs
	      is  "P,t,-p" (increasing partition name then within a given par-
	      tition by	increasing job state and  then	decreasing  priority).
	      The  default  value  of  sort for	job steps is "P,i" (increasing
	      partition	name then within a given partition by increasing  step
	      id).
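	      For example, to sort jobs by user name in ascending order and,
	      within each user, by remaining time in descending order (an
	      illustrative invocation; "u" and "L" are the user name and
	      time left fields of the output format specification):

	      # squeue --sort=+u,-L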

       --start
	      Report the expected start	time and resources to be allocated for
	      pending jobs in order of increasing start	time.  This is equiva-
	      lent  to	the  following options:	--format="%.18i	%.9P %.8j %.8u
	      %.2t  %.19S %.6D %20Y %R", --sort=S and  --states=PENDING.   Any
	      of these options may be explicitly changed as desired by combin-
	      ing the --start option with other	option values (e.g. to	use  a
	      different	 output	 format).   The	expected start time of pending
	      jobs is only available if Slurm is configured to use the
	      backfill scheduling plugin.
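	      For example, to report expected start times for the pending
	      jobs of a single (hypothetical) user:

	      # squeue --start -u alice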

       -t <state_list>,	--states=<state_list>
	      Specify  the  states of jobs to view.  Accepts a comma separated
	      list of state names or "all". If "all" is	specified then jobs of
	      all states will be reported. If no state is specified then pend-
	      ing, running, and	completing jobs	 are  reported.	 See  the  JOB
	      STATE  CODES  section below for a	list of	valid states. Both ex-
	      tended and compact forms are valid.  Note	the <state_list>  sup-
	      plied is case insensitive	("pd" and "PD" are equivalent).
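	      For example, to report only pending and running jobs using the
	      compact state forms:

	      # squeue --states=PD,R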

       -u <user_list>, --user=<user_list>
	      Request  jobs or job steps from a	comma separated	list of	users.
	      The list can consist of user names or user id numbers.   Perfor-
	      mance of the command can be measurably improved for systems with
	      large numbers of jobs when a single user is specified.

       --usage
	      Print a brief help message listing the squeue options.

       -v, --verbose
	      Report details of squeue's actions.

       -V , --version
	      Print version information	and exit.

       -w <hostlist>, --nodelist=<hostlist>
	      Report only on jobs allocated to the specified node or  list  of
	      nodes.   This  may either	be the NodeName	or NodeHostname	as de-
	      fined in	slurm.conf(5)  in  the	event  that  they  differ.   A
	      node_name	of localhost is	mapped to the current host name.
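	      For example, to report only the jobs allocated to a
	      (hypothetical) range of nodes:

	      # squeue -w node[01-04]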

JOB REASON CODES
       These codes identify the	reason that a job is waiting for execution.  A
       job may be waiting for more than	one reason, in which case only one  of
       those reasons is	displayed.

       AssociationJobLimit   The job's association has reached its maximum job
			     count.

       AssociationResourceLimit
			     The job's association has reached	some  resource
			     limit.

       AssociationTimeLimit  The job's association has reached its time	limit.

       BadConstraints	     The job's constraints cannot be satisfied.

       BeginTime	     The  job's	 earliest  start time has not yet been
			     reached.

       Cleaning		     The job is	being requeued and still  cleaning  up
			     from its previous execution.

       Dependency	     This  job	is waiting for a dependent job to com-
			     plete.

       FrontEndDown	     No	front end node is available  to	 execute  this
			     job.

       InactiveLimit	     The job reached the system	InactiveLimit.

       InvalidAccount	     The job's account is invalid.

       InvalidQOS	     The job's QOS is invalid.

       JobHeldAdmin	     The job is	held by	a system administrator.

       JobHeldUser	     The job is	held by	the user.

       JobLaunchFailure	     The  job  could not be launched.  This may	be due
			     to	a file system problem, invalid	program	 name,
			     etc.

       Licenses		     The job is	waiting	for a license.

       NodeDown		     A node required by	the job	is down.

       NonZeroExitCode	     The job terminated	with a non-zero	exit code.

       PartitionDown	     The  partition  required by this job is in	a DOWN
			     state.

       PartitionInactive     The partition required by this job	is in an Inac-
			     tive state	and not	able to	start jobs.

       PartitionNodeLimit    The  number of nodes required by this job is out-
			     side of its partition's current limits.  Can also
			     indicate that required nodes are DOWN or DRAINED.

       PartitionTimeLimit    The job's time limit exceeds its partition's cur-
			     rent time limit.

       Priority		     One or more higher	priority jobs exist  for  this
			     partition or advanced reservation.

       Prolog		     The job's PrologSlurmctld program is still run-
			     ning.

       QOSJobLimit	     The job's QOS has reached its maximum job count.

       QOSResourceLimit	     The job's QOS has reached some resource limit.

       QOSTimeLimit	     The job's QOS has reached its time	limit.

       ReqNodeNotAvail	     Some node specifically required by	the job	is not
			     currently available.  The node may	 currently  be
			     in	 use, reserved for another job,	in an advanced
			     reservation, DOWN,	DRAINED,  or  not  responding.
			     Nodes  which are DOWN, DRAINED, or	not responding
			     will be identified	as part	of the job's  "reason"
			     field as "UnavailableNodes". Such nodes will typ-
			     ically require the	intervention of	a  system  ad-
			     ministrator to make available.

       Reservation	     The job is waiting for its advanced reservation
			     to become available.

       Resources	     The job is	waiting	for resources to become	avail-
			     able.

       SystemFailure	     Failure  of  the Slurm system, a file system, the
			     network, etc.

       TimeLimit	     The job exhausted its time	limit.

       QOSUsageThreshold     Required QOS threshold has	been breached.

       WaitingForScheduling  No	reason has been	set for	this job yet.  Waiting
			     for  the  scheduler  to determine the appropriate
			     reason.

JOB STATE CODES
       Jobs typically pass through several states in the course	of their  exe-
       cution.	 The  typical states are PENDING, RUNNING, SUSPENDED, COMPLET-
       ING, and	COMPLETED.  An explanation of each state follows.

       BF  BOOT_FAIL	   Job terminated due to launch failure, typically due
			   to a hardware failure (e.g. unable to boot the node
			   or block and the job cannot be requeued).

       CA  CANCELLED	   Job was explicitly cancelled	by the user or	system
			   administrator.   The	 job  may or may not have been
			   initiated.

       CD  COMPLETED	   Job has terminated all processes on all nodes  with
			   an exit code	of zero.

       CF  CONFIGURING	   Job has been allocated resources, but is waiting
			   for them to become ready for use (e.g. booting).

       CG  COMPLETING	   Job is in the process of completing.	Some processes
			   on some nodes may still be active.

       DL  DEADLINE	   Job terminated on deadline.

       F   FAILED	   Job	terminated  with  non-zero  exit code or other
			   failure condition.

       NF  NODE_FAIL	   Job terminated due to failure of one	or more	 allo-
			   cated nodes.

       OOM OUT_OF_MEMORY   Job experienced an out-of-memory error.

       PD  PENDING	   Job is awaiting resource allocation.

       PR  PREEMPTED	   Job terminated due to preemption.

       R   RUNNING	   Job currently has an	allocation.

       RD  RESV_DEL_HOLD   Job is held.

       RF  REQUEUE_FED	   Job is being	requeued by a federation.

       RH  REQUEUE_HOLD	   Held	job is being requeued.

       RQ  REQUEUED	   Completing job is being requeued.

       RS  RESIZING	   Job is about	to change size.

       RV  REVOKED	   Sibling was removed from cluster due	to other clus-
			   ter starting	the job.

       SI  SIGNALING	   Job is being	signaled.

       SE  SPECIAL_EXIT	   The job was requeued	in a special state. This state
			   can	be set by users, typically in EpilogSlurmctld,
			   if the job has terminated with  a  particular  exit
			   value.

       SO  STAGE_OUT	   Job is staging out files.

       ST  STOPPED	   Job	has  an	 allocation,  but  execution  has been
			   stopped with	SIGSTOP	signal.	 CPUS  have  been  re-
			   tained by this job.

       S   SUSPENDED	   Job	has an allocation, but execution has been sus-
			   pended and CPUs have	been released for other	jobs.

       TO  TIMEOUT	   Job terminated upon reaching	its time limit.

PERFORMANCE
       Executing squeue	sends a	remote procedure call to slurmctld. If	enough
       calls  from squeue or other Slurm client	commands that send remote pro-
       cedure calls to the slurmctld daemon come in at once, it	can result  in
       a  degradation of performance of	the slurmctld daemon, possibly result-
       ing in a	denial of service.

       Do not run squeue or other Slurm	client commands	that send remote  pro-
       cedure  calls  to  slurmctld  from loops	in shell scripts or other pro-
       grams. Ensure that programs limit calls to squeue to the	minimum	neces-
       sary for	the information	you are	trying to gather.
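       For example, rather than polling from a shell loop, use the --iterate
       option described above so that a single squeue process reports at a
       fixed interval:

       # squeue -i 60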

ENVIRONMENT VARIABLES
       Some squeue options may be set via environment variables. These envi-
       ronment variables, along with their corresponding options, are listed
       below. (Note: Command-line options will always override these set-
       tings.)

       SLURM_BITSTR_LEN	   Specifies  the string length	to be used for holding
			   a job array's  task	ID  expression.	  The  default
			   value  is  64  bytes.   A value of 0	will print the
			   full	expression with	any length  required.	Larger
			   values may adversely	impact the application perfor-
			   mance.
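			   For example, to print full task ID expressions for
			   job arrays regardless of length:

			   # SLURM_BITSTR_LEN=0 squeue -r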

       SLURM_CLUSTERS	   Same	as --clusters

       SLURM_CONF	   The location	of the Slurm configuration file.

       SLURM_TIME_FORMAT   Specify the format used to report time stamps.  A
			   value of standard, the default, generates output
			   in the form "year-month-dateThour:minute:second".
			   A value of relative returns only
			   "hour:minute:second" for times on the current day.
			   For other dates in the current year it prints the
			   "hour:minute" preceded by "Tomorr" (tomorrow),
			   "Ystday" (yesterday), the name of the day for the
			   coming week (e.g. "Mon", "Tue", etc.), or other-
			   wise the date (e.g. "25 Apr").  For other years it
			   returns a date, month and year without a time
			   (e.g. "6 Jun 2012").  All of the time stamps use a
			   24 hour format.

			   A valid strftime() format can  also	be  specified.
			   For example,	a value	of "%a %T" will	report the day
			   of the week and a time stamp	(e.g. "Mon 12:34:56").
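			   For example, to report time stamps as the day of
			   the week plus a time:

			   # SLURM_TIME_FORMAT="%a %T" squeue --start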

       SQUEUE_ACCOUNT	   -A <account_list>, --account=<account_list>

       SQUEUE_ALL	   -a, --all

       SQUEUE_ARRAY	   -r, --array

       SQUEUE_NAMES	   --name=<name_list>

       SQUEUE_FEDERATION   --federation

       SQUEUE_FORMAT	   -o <output_format>, --format=<output_format>

       SQUEUE_FORMAT2	   -O <output_format>, --Format=<output_format>

       SQUEUE_LICENSES	   -L <license_list>, --licenses=<license_list>

       SQUEUE_LOCAL	   --local

       SQUEUE_PARTITION	   -p <part_list>, --partition=<part_list>

       SQUEUE_PRIORITY	   -P, --priority

       SQUEUE_QOS	   -q <qos_list>, --qos=<qos_list>

       SQUEUE_SIBLING	   --sibling

       SQUEUE_SORT	   -S <sort_list>, --sort=<sort_list>

       SQUEUE_STATES	   -t <state_list>, --states=<state_list>

       SQUEUE_USERS	   -u <user_list>, --user=<user_list>

EXAMPLES
       Print the jobs scheduled in the debug partition and in the COMPLETED
       state, formatting the job id as six right-justified digits followed by
       the priority with an arbitrary field size:
       # squeue -p debug -t COMPLETED -o "%.6i %p"
	JOBID PRIORITY
	65543 99993
	65544 99992
	65545 99991

       Print the job steps in the debug	partition sorted by user:
       # squeue	-s -p debug -S u
	 STEPID	       NAME PARTITION	  USER	    TIME NODELIST
	65552.1	      test1	debug	 alice	    0:23 dev[1-4]
	65562.2	    big_run	debug	   bob	    0:18 dev22
	65550.1	     param1	debug  candice	 1:43:21 dev[6-12]

       Print information only about jobs 12345, 12346, and 12348:
       # squeue	--jobs 12345,12346,12348
	JOBID PARTITION	NAME USER ST  TIME  NODES NODELIST(REASON)
	12345	  debug	job1 dave  R   0:21	4 dev[9-12]
	12346	  debug	job2 dave PD   0:00	8 (Resources)
	12348	  debug	job3 ed	  PD   0:00	4 (Priority)

       Print information only about job	step 65552.1:
       # squeue	--steps	65552.1
	 STEPID	    NAME PARTITION    USER    TIME  NODELIST
	65552.1	   test2     debug   alice   12:49  dev[1-4]

COPYING
       Copyright (C) 2002-2007 The Regents of the  University  of  California.
       Produced at Lawrence Livermore National Laboratory (cf. DISCLAIMER).
       Copyright (C) 2008-2010 Lawrence	Livermore National Security.
       Copyright (C) 2010-2016 SchedMD LLC.

       This  file  is  part  of	Slurm, a resource management program.  For de-
       tails, see <https://slurm.schedmd.com/>.

       Slurm is	free software; you can redistribute it and/or modify it	 under
       the  terms  of  the GNU General Public License as published by the Free
       Software	Foundation; either version 2 of	the License, or	(at  your  op-
       tion) any later version.

       Slurm  is  distributed  in the hope that	it will	be useful, but WITHOUT
       ANY WARRANTY; without even the implied warranty of  MERCHANTABILITY  or
       FITNESS	FOR  A PARTICULAR PURPOSE.  See	the GNU	General	Public License
       for more	details.

SEE ALSO
       scancel(1), scontrol(1),	sinfo(1),  srun(1),  slurm_load_ctl_conf  (3),
       slurm_load_jobs (3), slurm_load_node (3), slurm_load_partitions (3)

March 2020			Slurm Commands			     squeue(1)
