CEPH-DEPLOY(8)			     Ceph			CEPH-DEPLOY(8)

NAME
       ceph-deploy - Ceph deployment tool

SYNOPSIS
       ceph-deploy new [initial-monitor-node(s)]

       ceph-deploy install [ceph-node] [ceph-node...]

       ceph-deploy mon create-initial

       ceph-deploy osd create --data device ceph-node

       ceph-deploy admin [admin-node][ceph-node...]

       ceph-deploy purgedata [ceph-node][ceph-node...]

       ceph-deploy forgetkeys

DESCRIPTION
       ceph-deploy is a tool which allows quick and easy deployment of a
       Ceph cluster without involving complex and detailed manual
       configuration. It uses ssh to gain access to other Ceph nodes from
       the admin node and sudo for administrator privileges on them, and
       its underlying Python scripts automate the manual process of Ceph
       installation on each node from the admin node itself. It can easily
       be run on a workstation and doesn't require servers, databases or
       any other automated tools. With ceph-deploy, it is really easy to
       set up and take down a cluster. However, it is not a generic
       deployment tool. It is a specific tool designed for those who want
       to get Ceph up and running quickly with only the unavoidable
       initial configuration settings and without the overhead of
       installing other tools like Chef, Puppet or Juju. Those who want to
       customize security settings, partitions or directory locations, and
       want to set up a cluster following detailed manual steps, should
       use other tools such as Chef, Puppet, Juju or Crowbar.

       With ceph-deploy, you can install Ceph packages on remote nodes,	create
       a cluster, add monitors,	gather/forget  keys,  add  OSDs	 and  metadata
       servers,	configure admin	hosts or take down the cluster.

COMMANDS
   new
       Start deploying a new cluster and write a configuration file and
       keyring for it. It tries to copy ssh keys from the admin node to
       gain passwordless ssh access to the monitor node(s), validates the
       host IP, and creates a cluster with a new initial monitor node or
       nodes for monitor quorum, a ceph configuration file, a monitor
       secret keyring and a log file for the new cluster. It populates the
       newly created Ceph configuration file with the fsid of the cluster
       and the hostnames and IP addresses of the initial monitor members
       under the [global] section.

       Usage:

          ceph-deploy new [MON] [MON...]

       Here, [MON] is the initial monitor hostname (the short hostname,
       i.e., the output of hostname -s).

       Other  options  like  --no-ssh-copykey,	--fsid,	 --cluster-network and
       --public-network	can also be used with this command.

       If more than one network interface is used, the public network
       setting has to be added under the [global] section of the Ceph
       configuration file. If the public subnet is given, the new command
       will choose the IP from the remote host that exists within the
       subnet range. The public network can also be added at runtime using
       the --public-network option with the command as mentioned above.
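
       For example, to bootstrap a cluster whose initial monitor is a
       (hypothetical) host named mon1 on a 192.168.1.0/24 public network:

          ceph-deploy new --public-network 192.168.1.0/24 mon1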

   install
       Install Ceph packages on remote hosts. As a first step it installs
       yum-plugin-priorities on the admin and other nodes using
       passwordless ssh and sudo so that Ceph packages from the upstream
       repository get higher priority. It then detects the platform and
       distribution of the hosts and installs Ceph normally by downloading
       distro-compatible packages, provided an adequate repo for Ceph has
       already been added. The --release flag is used to get the latest
       release for installation. During detection of platform and
       distribution before installation, if it finds the distro.init to be
       sysvinit (Fedora, CentOS/RHEL etc.), it doesn't allow installation
       with a custom cluster name and uses the default name ceph for the
       cluster.

       If the user explicitly specifies a custom repo url with --repo-url
       for installation, anything detected from the configuration will be
       overridden and the custom repository location will be used for
       installation of Ceph packages. If required, valid custom
       repositories are also detected and installed. In the case of
       installation from a custom repo, a boolean is used to determine the
       logic needed to proceed with the custom repo installation. A custom
       repo install helper is used that goes through config checks to
       retrieve repos (and any extra repos defined) and installs them.
       cd_conf is the object built from argparse that holds the flags and
       information needed to determine what metadata from the
       configuration is to be used.

       A user can also opt to install only the repository  without  installing
       Ceph and	its dependencies by using --repo option.

       Usage:

          ceph-deploy install [HOST] [HOST...]

       Here, [HOST] is/are the host node(s) where Ceph is to be	installed.

       The --release option is used to install a release known by its
       CODENAME (default: firefly).

       Other options like --testing, --dev, --adjust-repos, --no-adjust-repos,
       --repo,	--local-mirror,	--repo-url and --gpg-url can also be used with
       this command.
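
       For example, to install the luminous release on two (hypothetical)
       hosts named node1 and node2:

          ceph-deploy install --release luminous node1 node2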

   mds
       Deploy Ceph mds on remote hosts. A metadata server is needed to use
       CephFS, and the mds command is used to create one on the desired
       host node. It uses the subcommand create to do so. create first
       gets the hostname and distro information of the desired mds host.
       It then tries to read the bootstrap-mds key for the cluster and
       deploy it to the desired host. The key generally has a format of
       {cluster}.bootstrap-mds.keyring. If it doesn't find a keyring, it
       runs gatherkeys to get the keyring. It then creates an mds on the
       desired host under the path /var/lib/ceph/mds/ in
       /var/lib/ceph/mds/{cluster}-{name} format and a bootstrap keyring
       under /var/lib/ceph/bootstrap-mds/ in
       /var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs
       the appropriate commands based on distro.init to start the mds.

       Usage:

	  ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

       The [DAEMON-NAME] is optional.
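
       For example, to create an mds on a (hypothetical) host node1 with
       an optional daemon name fs01:

          ceph-deploy mds create node1:fs01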

   mon
       Deploy  Ceph  monitor on	remote hosts. mon makes	use of certain subcom-
       mands to	deploy Ceph monitors on	other nodes.

       Subcommand create-initial deploys monitors for the hosts defined in
       mon initial members under the [global] section of the Ceph
       configuration file, waits until they form a quorum and then runs
       gatherkeys, reporting the monitor status along the way. If the
       monitors don't form a quorum the command will eventually time out.

       Usage:

	  ceph-deploy mon create-initial
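
       create-initial relies on the monitor entries that the new command
       wrote to the configuration file. A minimal [global] section, with
       hypothetical fsid, hostnames and addresses, looks like:

          [global]
          fsid = a7f64266-0894-4f1e-a635-d0aeacd0e993
          mon initial members = mon1, mon2, mon3
          mon host = 192.168.1.10,192.168.1.11,192.168.1.12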

       Subcommand create is used to deploy Ceph monitors by explicitly
       specifying the hosts which are desired to be made monitors. If no
       hosts are specified it will default to the mon initial members
       defined under the [global] section of the Ceph configuration file.
       create first detects the platform and distro of the desired hosts
       and checks if the hostname is compatible for deployment. It then
       uses the monitor keyring initially created by the new command and
       deploys the monitor on the desired host. If multiple hosts were
       specified during the new command, i.e. if there are multiple hosts
       in mon initial members and multiple keyrings were created, then a
       concatenated keyring is used for the deployment of monitors. In
       this process a keyring parser is used which looks for [entity]
       sections in monitor keyrings and returns a list of those sections.
       A helper is then used to collect all keyrings into a single blob
       that will be injected into the monitors with --mkfs on remote
       nodes. All keyring files are concatenated into a single keyring
       ending with .keyring. During this process the helper uses the list
       of sections returned by the keyring parser to check if an entity is
       already present in a keyring and, if not, adds it. The concatenated
       keyring is used for the deployment of monitors to the desired
       multiple hosts.

       Usage:

	  ceph-deploy mon create [HOST]	[HOST...]

       Here, [HOST] is the hostname of the desired monitor host(s).

       Subcommand add is used to add a monitor to an existing cluster. It
       first detects the platform and distro of the desired host and
       checks if the hostname is compatible for deployment. It then uses
       the monitor keyring, ensures configuration for the new monitor host
       and adds the monitor to the cluster. If the section for the monitor
       exists and defines a monitor address, that address will be used;
       otherwise it will fall back to resolving the hostname to an IP. If
       --address is used it will override all other options. After adding
       the monitor to the cluster, it gives it some time to start. It then
       looks for any monitor errors and checks the monitor status. Monitor
       errors arise if the monitor is not added to mon initial members, if
       it doesn't exist in the monmap, or if neither public_addr nor
       public_network keys were defined for monitors. Under such
       conditions, monitors may not be able to form a quorum. Monitor
       status tells whether the monitor is up and running normally. The
       status is checked by running ceph daemon mon.hostname mon_status on
       the remote end, which provides the output and returns a boolean
       status. False means a monitor that is not fine even though it is up
       and running, while True means the monitor is up and running
       correctly.

       Usage:

	  ceph-deploy mon add [HOST]

	  ceph-deploy mon add [HOST] --address [IP]

       Here,  [HOST] is	the hostname and [IP] is the IP	address	of the desired
       monitor node. Please note, unlike other mon subcommands,	only one  node
       can be specified	at a time.
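
       For example, to add a (hypothetical) monitor host mon4 and force a
       specific IP address instead of resolving the hostname:

          ceph-deploy mon add mon4 --address 192.168.1.14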

       Subcommand destroy is used to completely remove monitors on remote
       hosts. It takes hostnames as arguments. It stops the monitor,
       verifies that the ceph-mon daemon really stopped, creates an
       archive directory mon-remove under /var/lib/ceph/, archives the old
       monitor directory in {cluster}-{hostname}-{stamp} format in it, and
       removes the monitor from the cluster by running the ceph remove...
       command.

       Usage:

	  ceph-deploy mon destroy [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor that is to be removed.

   gatherkeys
       Gather authentication keys for provisioning new nodes. It takes
       hostnames as arguments. It checks for and fetches the client.admin
       keyring, the monitor keyring and the bootstrap-mds/bootstrap-osd
       keyrings from the monitor host. These authentication keys are used
       when new monitors/OSDs/MDSes are added to the cluster.

       Usage:

	  ceph-deploy gatherkeys [HOST]	[HOST...]

       Here, [HOST] is the hostname of the monitor from which the keys are
       to be pulled.
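
       For example, to fetch the keyrings from a (hypothetical) monitor
       host mon1:

          ceph-deploy gatherkeys mon1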

   disk
       Manage  disks  on  a  remote host. It actually triggers the ceph-volume
       utility and its subcommands to manage disks.

       Subcommand list lists disk partitions and Ceph OSDs.

       Usage:

	  ceph-deploy disk list	HOST

       Subcommand zap zaps/erases/destroys a device's partition table and
       contents. It actually uses ceph-volume lvm zap remotely,
       alternatively allowing one to remove the Ceph metadata from a
       logical volume.
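
       Typical usage, with a hypothetical host node1 and device /dev/sdb:

          ceph-deploy disk zap node1 /dev/sdb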

   osd
       Manage OSDs by preparing	data disk on remote host.  osd	makes  use  of
       certain subcommands for managing	OSDs.

       Subcommand create prepares a device for a Ceph OSD. It first checks
       whether multiple OSDs are being created and warns if more than the
       recommended number are, since that would cause issues with the
       maximum allowed PIDs in a system. It then reads the bootstrap-osd
       key for the cluster, or writes the bootstrap key if it is not
       found. It then uses the ceph-volume utility's lvm create subcommand
       to prepare the disk (and journal, if using filestore) and deploy
       the OSD on the desired host. Once prepared, it gives the OSD some
       time to start, checks for any possible errors and, if found,
       reports them to the user.

       Bluestore Usage:

	  ceph-deploy osd create --data	DISK HOST

       Filestore Usage:

	  ceph-deploy osd create --data	DISK --journal JOURNAL HOST

       NOTE:
	  For  other  flags  available,	 please	see the	man page or the	--help
	  menu on ceph-deploy osd create
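
       For example, to create a bluestore OSD backed by the (hypothetical)
       device /dev/sdb on host node1:

          ceph-deploy osd create --data /dev/sdb node1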

       Subcommand list lists the devices associated with Ceph as part of
       an OSD. It uses the ceph-volume lvm list output, which is rich in
       information, mapping OSDs to devices and providing other
       interesting information about the OSD setup.

       Usage:

	  ceph-deploy osd list HOST

   admin
       Push the configuration and client.admin key to a remote host. It
       takes the {cluster}.client.admin.keyring from the admin node and
       writes it under the /etc/ceph directory of the desired node.

       Usage:

	  ceph-deploy admin [HOST] [HOST...]

       Here, [HOST] is the desired host to be configured for Ceph
       administration.

   config
       Push/pull a configuration file to/from a remote host. The push
       subcommand takes the configuration file from the admin host and
       writes it to the remote host under the /etc/ceph directory. The
       pull subcommand does the opposite, i.e. it pulls the configuration
       file under the /etc/ceph directory of the remote host to the admin
       node.

       Usage:

	  ceph-deploy config push [HOST] [HOST...]

	  ceph-deploy config pull [HOST] [HOST...]

       Here, [HOST] is the hostname of the node	 where	config	file  will  be
       pushed to or pulled from.
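
       For example, to push the local configuration file to a
       (hypothetical) node and later pull it back:

          ceph-deploy config push node1

          ceph-deploy config pull node1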

   uninstall
       Remove Ceph packages from remote hosts. It detects the platform and
       distro of the selected host and uninstalls the Ceph packages from
       it. However, some dependencies like librbd1 and librados2 will not
       be removed because removing them can cause issues with qemu-kvm.

       Usage:

	  ceph-deploy uninstall	[HOST] [HOST...]

       Here, [HOST] is the hostname of the node from which Ceph will be
       uninstalled.

   purge
       Remove Ceph packages from remote hosts and purge all data. It
       detects the platform and distro of the selected host, uninstalls
       the Ceph packages and purges all data. However, some dependencies
       like librbd1 and librados2 will not be removed because removing
       them can cause issues with qemu-kvm.

       Usage:

	  ceph-deploy purge [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from which Ceph will be
       purged.

   purgedata
       Purge (delete, destroy, discard, shred) any Ceph data from
       /var/lib/ceph. Once it detects the platform and distro of the
       desired host, it first checks if Ceph is still installed on the
       selected host and, if installed, it won't purge data from it. If
       Ceph is already uninstalled from the host, it tries to remove the
       contents of /var/lib/ceph. If that fails, then probably OSDs are
       still mounted and need to be unmounted to continue. It unmounts the
       OSDs, tries to remove the contents of /var/lib/ceph again and
       checks for errors. It also removes the contents of /etc/ceph. Once
       all steps are successfully completed, all the Ceph data from the
       selected host has been removed.

       Usage:

	  ceph-deploy purgedata	[HOST] [HOST...]

       Here, [HOST] is the hostname of the node from which Ceph data will
       be purged.

   forgetkeys
       Remove authentication keys from the local directory. It removes all
       the authentication keys, i.e. the monitor keyring, the client.admin
       keyring, and the bootstrap-osd and bootstrap-mds keyrings, from the
       node.

       Usage:

	  ceph-deploy forgetkeys

   pkg
       Manage packages on remote hosts. It is used for installing packages
       on or removing packages from remote hosts. The package names for
       installation or removal are to be specified after the command. Two
       options, --install and --remove, are used for this purpose.

       Usage:

	  ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

	  ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

       Here, [PKGs] is a comma-separated list of package names and [HOST]
       is the hostname of the remote node where the packages are to be
       installed or removed from.
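
       For example, to install and later remove two (hypothetical)
       packages on a remote node:

          ceph-deploy pkg --install wget,screen node1

          ceph-deploy pkg --remove wget,screen node1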

OPTIONS
       --address
	      IP address of the	host node to be	added to the cluster.

       --adjust-repos
	      Install packages modifying source	repos.

       --ceph-conf
	      Use (or reuse) a given ceph.conf file.

       --cluster
	      Name of the cluster.

       --dev  Install a bleeding-edge build from a Git branch or tag
	      (default: master).

       --cluster-network
	      Specify the (internal) cluster network.

       --dmcrypt
	      Encrypt [data-path] and/or journal devices with dm-crypt.

       --dmcrypt-key-dir
	      Directory	where dm-crypt keys are	stored.

       --install
	      Comma-separated package(s) to install on remote hosts.

       --fs-type
	      Filesystem  to  use  to  format disk (xfs, btrfs or ext4).  Note
	      that support for btrfs and ext4 is no longer  tested  or	recom-
	      mended; please use xfs.

       --fsid Provide an alternate FSID	for ceph.conf generation.

       --gpg-url
	      Specify  a GPG key url to	be used	with custom repos (defaults to
	      ceph.com).

       --keyrings
	      Concatenate multiple keyrings to be seeded on new	monitors.

       --local-mirror
	      Fetch packages and push them to hosts for	a local	repo mirror.

       --mkfs Inject keys to MONs on remote nodes.

       --no-adjust-repos
	      Install packages without modifying source	repos.

       --no-ssh-copykey
	      Do not attempt to	copy ssh keys.

       --overwrite-conf
	      Overwrite	an existing conf file on remote	host (if present).

       --public-network
	      Specify the public network for a cluster.

       --remove
	      Comma-separated package(s) to remove from	remote hosts.

       --repo Install repo files only (skips package installation).

       --repo-url
	      Specify a	repo url that mirrors/contains Ceph packages.

       --testing
	      Install the latest development release.

       --username
	      The username to connect to the remote host.

       --version
	      Show the currently installed version of ceph-deploy.

       --zap-disk
	      Destroy the partition table and content of a disk.

AVAILABILITY
       ceph-deploy is part of Ceph, a massively	 scalable,  open-source,  dis-
       tributed	  storage   system.  Please  refer  to	the  documentation  at
       https://ceph.com/ceph-deploy/docs for more information.

SEE ALSO
       ceph-mon(8), ceph-osd(8), ceph-volume(8), ceph-mds(8)

COPYRIGHT
       2010-2014, Inktank Storage, Inc.	and contributors. Licensed under  Cre-
       ative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev				 Aug 28, 2020			CEPH-DEPLOY(8)
