Ceph single node deployment on Fedora 23
A little while back Cole documented a minimal ceph deployment on Fedora. Unfortunately, since then the ‘mkcephfs’ command has been dropped in favour of the ‘ceph-deploy’ tool. There are various other blog posts talking about ceph-deploy, but none of them had quite the right set of commands to get a working single node deployment – the status would always end up in “HEALTH_WARN”, which is pretty much an error state for ceph. After much trial & error I finally figured out the steps that work on Fedora 23.
Even though we’re doing a single node deployment, the ‘ceph-deploy’ tool expects to be able to ssh into the local host as root, without password prompts. So before starting, make sure to install ssh keys and edit /etc/ssh/sshd_config to set PermitRootLogin to yes. Everything that follows should also be run as root.
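The ssh setup can be done along these lines – a minimal sketch, assuming a stock Fedora sshd_config and no existing root key (adjust the key path if you already have one):

```shell
# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# systemctl restart sshd
# ssh root@`hostname -f` true
```

The final ssh command should return without prompting for a password; if it prompts, ceph-deploy will fail later on.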
First, we need the ‘ceph-deploy’ tool installed
# dnf install ceph-deploy
ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there
# mkdir ceph-deploy
# cd ceph-deploy
Make sure that the hostname of the local machine is resolvable, both fully qualified and unqualified. If it is not, add entries to /etc/hosts so that it resolves. The first step simply creates the basic config file for ceph-deploy
# export CEPH_HOST=`hostname -f`
# ceph-deploy new $CEPH_HOST
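As noted above, the hostname must resolve both qualified and unqualified. If it does not, an /etc/hosts entry along these lines is enough (the address and names here are placeholders – substitute your machine’s own address and hostname):

```
192.168.1.66   t530wlan.example.org   t530wlan
```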
Since this will be a single node deployment there are 2 critical additions that must be made to the ceph.conf that was just created in the current directory
# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf
Without these two settings, the storage will never achieve a healthy status.
Now tell ceph-deploy to actually install the main ceph software. By default it will try to activate YUM repos hosted on ceph.com, but Fedora has everything needed, so the ‘--no-adjust-repos’ argument tells it not to add custom repos
# ceph-deploy install --no-adjust-repos $CEPH_HOST
With the software installed, the monitor service can be created and started
# ceph-deploy mon create-initial
Ceph can use storage on a block device, but for single node test deployments it is far easier to just point it to a local directory
# mkdir -p /srv/ceph/osd
# ceph-deploy osd prepare $CEPH_HOST:/srv/ceph/osd
# ceph-deploy osd activate $CEPH_HOST:/srv/ceph/osd
Assuming that completed without error, check that the cluster status shows HEALTH_OK
# ceph status
    cluster 7e7be62d-4c83-4b59-8c11-6b57301e8cb4
     health HEALTH_OK
     monmap e1: 1 mons at {t530wlan=192.168.1.66:6789/0}
            election epoch 2, quorum 0 t530wlan
     osdmap e5: 1 osds: 1 up, 1 in
      pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
            246 GB used, 181 GB / 450 GB avail
                  64 active+clean
If it displays “HEALTH_WARN”, don’t make the mistake of thinking that is merely a warning – chances are it is a fatal error that will prevent anything from working. If you did get errors, purge all traces of ceph before trying again
# ceph-deploy purgedata $CEPH_HOST
# ceph-deploy purge $CEPH_HOST
# ceph-deploy forgetkeys
# rm -rf /srv/ceph/osd
Once everything is working, it should be possible to use the ‘rbd’ command on the local node to set up volumes suitable for use with QEMU/KVM.
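For example, something along these lines should work – a sketch only: the image name ‘demo’ is made up, ‘rbd’ is the default pool created by ceph, and the size is in megabytes:

```shell
# rbd create demo --size 1024
# rbd ls
# qemu-img info rbd:rbd/demo
```

QEMU can then be pointed at the volume directly with a disk spec such as ‘rbd:rbd/demo’, with no intermediate image file needed.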