CEPH-DEPLOY(8)			     Ceph			CEPH-DEPLOY(8)

NAME
       ceph-deploy - Ceph deployment tool

SYNOPSIS
       ceph-deploy new [initial-monitor-node(s)]

       ceph-deploy install [ceph-node] [ceph-node...]

       ceph-deploy mon create-initial

       ceph-deploy osd create --data device ceph-node

       ceph-deploy admin [admin-node][ceph-node...]

       ceph-deploy purgedata [ceph-node][ceph-node...]

       ceph-deploy forgetkeys

DESCRIPTION
       ceph-deploy is a tool which allows easy and quick deployment of a Ceph
       cluster without involving complex and detailed manual configuration.
       It uses ssh to gain access to other Ceph nodes from the admin node and
       sudo for administrator privileges on them, and its underlying Python
       scripts automate the manual process of Ceph installation on each node
       from the admin node itself. It can easily be run on a workstation and
       doesn't require servers, databases or any other automated tools. With
       ceph-deploy, it is really easy to set up and take down a cluster.
       However, it is not a generic deployment tool. It is a specific tool
       designed for those who want to get Ceph up and running quickly with
       only the unavoidable initial configuration settings and without the
       overhead of installing other tools like Chef, Puppet or Juju. Those
       who want to customize security settings, partitions or directory
       locations, or want to set up a cluster following detailed manual
       steps, should use other tools, i.e., Chef, Puppet, Juju or Crowbar.

       With ceph-deploy, you can install Ceph packages on remote nodes,	create
       a cluster, add monitors,	gather/forget  keys,  add  OSDs	 and  metadata
       servers,	configure admin	hosts or take down the cluster.
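
       As a rough illustration, a minimal first deployment from the admin
       node might look like the sequence below, using only commands from the
       synopsis above (the hostnames mon1, osd1, osd2 and the device
       /dev/sdb are placeholders for your own nodes and disks):

          ceph-deploy new mon1
          ceph-deploy install mon1 osd1 osd2
          ceph-deploy mon create-initial
          ceph-deploy osd create --data /dev/sdb osd1
          ceph-deploy admin mon1 osd1 osd2

       Each of these commands is described in detail below.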

COMMANDS
   new
       Start deploying a new cluster, and write a configuration file and
       keyring for it. It tries to copy ssh keys from the admin node to gain
       passwordless ssh to the monitor node(s), validates the host IP, and
       creates a cluster with a new initial monitor node or nodes for monitor
       quorum, a Ceph configuration file, a monitor secret keyring and a log
       file for the new cluster. It populates the newly created Ceph
       configuration file with the fsid of the cluster and the hostnames and
       IP addresses of the initial monitor members under the [global]
       section.


	  ceph-deploy new [MON][MON...]

       Here, [MON] is the initial monitor hostname (short hostname, i.e.,
       hostname -s).
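
       For illustration, a run such as ceph-deploy new mon1 typically leaves
       a ceph.conf in the working directory similar to the sketch below; the
       fsid and the IP address are made-up placeholders:

          [global]
          fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
          mon initial members = mon1
          mon host = 192.168.1.10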

       Other  options  like  --no-ssh-copykey,	--fsid,	 --cluster-network and
       --public-network	can also be used with this command.

       If more than one network interface is used, the public network setting
       has to be added under the [global] section of the Ceph configuration
       file. If the public subnet is given, the new command will choose the
       one IP from the remote host that exists within the subnet range. The
       public network can also be added at runtime using the --public-network
       option with the command as mentioned above.
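
       For example, to pin the public network at cluster creation time (the
       hostname and subnet below are placeholders):

          ceph-deploy new --public-network 192.168.1.0/24 mon1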

   install
       Install Ceph packages on remote hosts. As a first step it installs
       yum-plugin-priorities on the admin and other nodes using passwordless
       ssh and sudo so that Ceph packages from the upstream repository get
       higher priority. It then detects the platform and distribution of the
       hosts and installs Ceph normally by downloading distro-compatible
       packages, provided an adequate repo for Ceph has already been added.
       The --release flag is used to get the latest release for installation.
       During detection of platform and distribution before installation, if
       it finds the distro.init to be sysvinit (Fedora, CentOS/RHEL etc.), it
       doesn't allow installation with a custom cluster name and uses the
       default name ceph for the cluster.

       If the user explicitly specifies a custom repo url with --repo-url for
       installation, anything detected from the configuration will be
       overridden and the custom repository location will be used for
       installation of Ceph packages. If required, valid custom repositories
       are also detected and installed. In case of installation from a custom
       repo, a boolean is used to determine the logic needed to proceed with
       a custom repo installation. A custom repo install helper is used that
       goes through config checks to retrieve repos (and any extra repos
       defined) and installs them. cd_conf is the object built from argparse
       that holds the flags and information needed to determine what metadata
       from the configuration is to be used.

       A user can also opt to install only the repository, without installing
       Ceph and its dependencies, by using the --repo option.


	  ceph-deploy install [HOST][HOST...]

       Here, [HOST] is/are the host node(s) where Ceph is to be	installed.

       The --release option is used to install a release known as CODENAME
       (default: firefly).

       Other options like --testing, --dev, --adjust-repos, --no-adjust-repos,
       --repo,	--local-mirror,	--repo-url and --gpg-url can also be used with
       this command.
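
       For example, to install the default release on two placeholder hosts,
       or to request a specific codename explicitly:

          ceph-deploy install node1 node2
          ceph-deploy install --release firefly node1 node2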

   mds
       Deploy Ceph mds on remote hosts. A metadata server is needed to use
       CephFS and the mds command is used to create one on the desired host
       node. It uses the subcommand create to do so. create first gets the
       hostname and distro information of the desired mds host. It then tries
       to read the bootstrap-mds key for the cluster and deploy it on the
       desired host. The key generally has a format of
       {cluster}.bootstrap-mds.keyring. If it doesn't find a keyring, it runs
       gatherkeys to get the keyring. It then creates an mds on the desired
       host under the path /var/lib/ceph/mds/ in
       /var/lib/ceph/mds/{cluster}-{name} format and a bootstrap keyring
       under /var/lib/ceph/bootstrap-mds/ in
       /var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs
       appropriate commands based on distro.init to start the mds.


	  ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

       The [DAEMON-NAME] is optional.
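
       For example, with placeholder hostnames, the first command below lets
       the daemon name default to the hostname, while the second names the
       daemon explicitly:

          ceph-deploy mds create node1
          ceph-deploy mds create node2:mds-a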

   mon
       Deploy Ceph monitors on remote hosts. mon makes use of certain
       subcommands to deploy Ceph monitors on other nodes.

       Subcommand create-initial deploys monitors defined in mon initial
       members under the [global] section of the Ceph configuration file,
       waits until they form quorum and then gathers keys, reporting the
       monitor status along the way. If the monitors don't form quorum the
       command will eventually time out.


	  ceph-deploy mon create-initial

       Subcommand create is used to deploy Ceph monitors by explicitly
       specifying the hosts which are desired to be made monitors. If no
       hosts are specified it will default to the mon initial members defined
       under the [global] section of the Ceph configuration file. create
       first detects the platform and distro of the desired hosts and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring initially created using the new command and deploys
       the monitor on the desired host. If multiple hosts were specified
       during the new command, i.e., if there are multiple hosts in mon
       initial members and multiple keyrings were created, then a
       concatenated keyring is used for deployment of monitors. In this
       process a keyring parser is used which looks for [entity] sections in
       monitor keyrings and returns a list of those sections (see the example
       keyring sketch below). A helper is then used to collect all keyrings
       into a single blob that will be injected into the monitors with --mkfs
       on the remote nodes. All keyring files are concatenated in a path
       ending with .keyring. During this process the helper uses the list of
       sections returned by the keyring parser to check whether an entity is
       already present in a keyring and, if not, adds it. The concatenated
       keyring is used for deployment of monitors to the desired multiple
       hosts.


	  ceph-deploy mon create [HOST]	[HOST...]

       Here, [HOST] is the hostname of the desired monitor host(s).
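
       As a sketch of what the keyring parser operates on: a monitor keyring
       is an INI-style file whose section headers are the [entity] names it
       collects. The key value below is a placeholder, not a real secret:

          [mon.]
          key = <generated-base64-secret>
          caps mon = "allow *"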

       Subcommand add is used to add a monitor to an existing cluster. It
       first detects the platform and distro of the desired host and checks
       whether the hostname is compatible for deployment. It then uses the
       monitor keyring, ensures configuration for the new monitor host and
       adds the monitor to the cluster. If the section for the monitor exists
       and defines a monitor address, that address will be used; otherwise it
       will fall back to resolving the hostname to an IP. If --address is
       used it will override all other options. After adding the monitor to
       the cluster, it gives it some time to start. It then looks for any
       monitor errors and checks the monitor status. Monitor errors arise if
       the monitor is not added to mon initial members, if it doesn't exist
       in the monmap, or if neither public_addr nor public_network keys were
       defined for monitors. Under such conditions, monitors may not be able
       to form quorum. Monitor status tells whether the monitor is up and
       running normally. The status is checked by running ceph daemon
       mon.hostname mon_status on the remote end, which provides the output
       and returns a boolean status of what is going on. False means a
       monitor that is not fine even if it is up and running, while True
       means the monitor is up and running correctly.


	  ceph-deploy mon add [HOST]

	  ceph-deploy mon add [HOST] --address [IP]

       Here,  [HOST] is	the hostname and [IP] is the IP	address	of the desired
       monitor node. Please note, unlike other mon subcommands,	only one  node
       can be specified	at a time.
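
       For example, with a placeholder hostname and address:

          ceph-deploy mon add node4 --address 192.168.1.14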

       Subcommand destroy is used to completely remove monitors on remote
       hosts. It takes hostnames as arguments. It stops the monitor, verifies
       that the ceph-mon daemon has really stopped, creates an archive
       directory mon-remove under /var/lib/ceph/, archives the old monitor
       directory inside it in {cluster}-{hostname}-{stamp} format and removes
       the monitor from the cluster by running the ceph remove... command.


	  ceph-deploy mon destroy [HOST] [HOST...]

       Here, [HOST] is the hostname of the monitor that is to be removed.

   gatherkeys
       Gather authentication keys for provisioning new nodes. It takes
       hostnames as arguments. It checks for and fetches the client.admin
       keyring, monitor keyring and bootstrap-mds/bootstrap-osd keyring from
       the monitor host. These authentication keys are used when new
       monitors/OSDs/MDS are added to the cluster.


	  ceph-deploy gatherkeys [HOST]	[HOST...]

       Here, [HOST] is the hostname of the monitor from where keys are to be
       gathered.
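
       For example, running the command against a placeholder monitor host

          ceph-deploy gatherkeys mon1

       typically leaves keyring files such as ceph.client.admin.keyring,
       ceph.mon.keyring, ceph.bootstrap-osd.keyring and
       ceph.bootstrap-mds.keyring in the current working directory (assuming
       the default cluster name ceph).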

   disk
       Manage disks on a remote host. It actually triggers the ceph-volume
       utility and its subcommands to manage disks.

       Subcommand list lists disk partitions and Ceph OSDs.


	  ceph-deploy disk list	HOST

       Subcommand zap zaps/erases/destroys a device's partition	table and con-
       tents.	It  actually  uses ceph-volume lvm zap remotely, alternatively
       allowing	someone	to remove the Ceph metadata from the logical volume.
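
       A zap invocation might look like the following (assuming the
       host-then-device argument order used by ceph-volume based releases;
       the hostname and device are placeholders):

          ceph-deploy disk zap node1 /dev/sdb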

   osd
       Manage OSDs by preparing a data disk on the remote host. osd makes use
       of certain subcommands for managing OSDs.

       Subcommand create prepares a device for a Ceph OSD. It first checks
       whether multiple OSDs are being created at once and warns when more
       than the recommended number is requested, which would cause issues
       with the maximum allowed PIDs in a system. It then reads the
       bootstrap-osd key for the cluster, or writes the bootstrap key if it
       is not found. It then uses the ceph-volume utility's lvm create
       subcommand to prepare the disk (and journal if using filestore) and
       deploy the OSD on the desired host. Once prepared, it gives the OSD
       some time to start and checks for any possible errors; if found, they
       are reported to the user.

       Bluestore Usage:

	  ceph-deploy osd create --data	DISK HOST

       Filestore Usage:

	  ceph-deploy osd create --data	DISK --journal JOURNAL HOST

	  For  other  flags  available,	 please	see the	man page or the	--help
	  menu on ceph-deploy osd create
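
       A concrete bluestore instance of the usage above, with a placeholder
       device and hostname:

          ceph-deploy osd create --data /dev/sdb node1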

       Subcommand list lists devices associated with Ceph as part of an OSD.
       It uses the ceph-volume lvm list output, which is rich, mapping OSDs
       to devices and other interesting information about the OSD setup.


	  ceph-deploy osd list HOST

   admin
       Push configuration and the client.admin key to a remote host. It takes
       the {cluster}.client.admin.keyring from the admin node and writes it
       under the /etc/ceph directory of the desired node.


	  ceph-deploy admin [HOST] [HOST...]

       Here, [HOST] is the desired host to be configured for Ceph
       administration.

   config
       Push/pull a configuration file to/from a remote host. The push
       subcommand takes the configuration file from the admin host and writes
       it to the remote host under the /etc/ceph directory. The pull
       subcommand does the opposite, i.e., pulls the configuration file under
       the /etc/ceph directory of the remote host to the admin node.


	  ceph-deploy config push [HOST] [HOST...]

	  ceph-deploy config pull [HOST] [HOST...]

       Here, [HOST] is the hostname of the node where the config file will be
       pushed to or pulled from.

   uninstall
       Remove Ceph packages from remote hosts. It detects the platform and
       distro of the selected host and uninstalls Ceph packages from it.
       However, some dependencies like librbd1 and librados2 will not be
       removed because they can cause issues with qemu-kvm.


	  ceph-deploy uninstall	[HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       uninstalled.

   purge
       Remove Ceph packages from remote hosts and purge all data. It detects
       the platform and distro of the selected host, uninstalls Ceph packages
       and purges all data. However, some dependencies like librbd1 and
       librados2 will not be removed because they can cause issues with
       qemu-kvm.


	  ceph-deploy purge [HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph will be
       purged.

   purgedata
       Purge (delete, destroy, discard, shred) any Ceph data from
       /var/lib/ceph. Once it detects the platform and distro of the desired
       host, it first checks whether Ceph is still installed on the selected
       host, and if so, it won't purge data from it. If Ceph is already
       uninstalled from the host, it tries to remove the contents of
       /var/lib/ceph. If that fails, then probably the OSDs are still mounted
       and need to be unmounted to continue. It unmounts the OSDs, tries to
       remove the contents of /var/lib/ceph again and checks for errors. It
       also removes the contents of /etc/ceph. Once all steps are
       successfully completed, all Ceph data from the selected host has been
       removed.


	  ceph-deploy purgedata	[HOST] [HOST...]

       Here, [HOST] is the hostname of the node from where Ceph data will be
       purged.

   forgetkeys
       Remove authentication keys from the local directory. It removes all
       the authentication keys, i.e., the monitor keyring, client.admin
       keyring, bootstrap-osd and bootstrap-mds keyrings, from the node.


	  ceph-deploy forgetkeys
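
       Taken together with purge and purgedata, forgetkeys completes a full
       teardown of a test cluster; with placeholder hostnames:

          ceph-deploy purge node1 node2
          ceph-deploy purgedata node1 node2
          ceph-deploy forgetkeys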

   pkg
       Manage packages on remote hosts. It is used for installing or removing
       packages from remote hosts. The package names for installation or
       removal are to be specified after the command. Two options, --install
       and --remove, are used for this purpose.


	  ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

	  ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

       Here, [PKGs] is a comma-separated list of package names and [HOST] is
       the hostname of the remote node where packages are to be installed or
       removed from.
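
       For example, with placeholder package and host names:

          ceph-deploy pkg --install ntp,smartmontools node1 node2
          ceph-deploy pkg --remove smartmontools node1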

OPTIONS
       --address
              IP address of the host node to be added to the cluster.

       --adjust-repos
              Install packages modifying source repos.

       --ceph-conf
              Use (or reuse) a given ceph.conf file.

       --cluster
              Name of the cluster.

       --dev  Install a bleeding edge build from Git branch or tag (default:
              master).

       --cluster-network
              Specify the (internal) cluster network.

       --dmcrypt
              Encrypt [data-path] and/or journal devices with dm-crypt.

       --dmcrypt-key-dir
              Directory where dm-crypt keys are stored.

       --install
              Comma-separated package(s) to install on remote hosts.

       --fs-type
              Filesystem to use to format disk (xfs, btrfs or ext4). Note
              that support for btrfs and ext4 is no longer tested or
              recommended; please use xfs.

       --fsid Provide an alternate FSID for ceph.conf generation.

       --gpg-url
              Specify a GPG key url to be used with custom repos (defaults
              to ceph.com).

       --keyrings
              Concatenate multiple keyrings to be seeded on new monitors.

       --local-mirror
              Fetch packages and push them to hosts for a local repo mirror.

       --mkfs Inject keys to MONs on remote nodes.

       --no-adjust-repos
              Install packages without modifying source repos.

       --no-ssh-copykey
              Do not attempt to copy ssh keys.

       --overwrite-conf
              Overwrite an existing conf file on remote host (if present).

       --public-network
              Specify the public network for a cluster.

       --remove
              Comma-separated package(s) to remove from remote hosts.

       --repo Install repo files only (skips package installation).

       --repo-url
              Specify a repo url that mirrors/contains Ceph packages.

       --testing
              Install the latest development release.

       --username
              The username to connect to the remote host.

       --version
              The current installed version of ceph-deploy.

       --zap-disk
              Destroy the partition table and content of a disk.

AVAILABILITY
       ceph-deploy is part of Ceph, a massively scalable, open-source,
       distributed storage system. Please refer to the Ceph documentation for
       more information.

SEE ALSO
       ceph-mon(8), ceph-osd(8), ceph-volume(8), ceph-mds(8)

COPYRIGHT
       2010-2014, Inktank Storage, Inc. and contributors. Licensed under
       Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)

dev				 Aug 28, 2020			CEPH-DEPLOY(8)

