Wednesday, May 16, 2018

Installing Ceph (Luminous) distributed object store

Introduction

Ceph is a distributed storage system that has interfaces for object, block and file storage. This post describes the steps to install the 'Luminous' release (the latest as of May 2018) of Ceph on Ubuntu 16.04 (Xenial) machines and to store objects in the Ceph cluster using the Swift REST API.

Ceph Overview

Ceph can be used to store objects (any kind of binary data) in a cluster of commodity hardware machines. Ceph takes care of replication and consistency of the stored objects.

There are 4 kinds of Ceph services (daemons), each of which usually runs on a separate machine and can have multiple instances, depending on the amount of data to be stored and the level of redundancy required. The services are -

  1. Monitors (mons), which monitor the state of the cluster.
  2. Object Storage Devices (OSDs), which are nodes that store the object data directly onto disks (no filesystem required) using the BlueStore storage format.
  3. Ceph Object Gateway (radosgw) nodes, which allow clients to interact with the storage cluster using either the OpenStack Swift or the Amazon S3 REST APIs.
  4. Managers (mgr), which provide additional monitoring.

The Setup

The Machines

For the purposes of this lab, there will be 4 Ubuntu Server 16.04 (Xenial) virtual machines that will form the Ceph cluster and host the Ceph services. These virtual machines are created using Oracle VirtualBox. Alternatively, one could use bare-metal machines. I tried using the latest Ubuntu 18.04 (Bionic) release, but Ceph didn't install successfully as its repositories had not yet been updated for Bionic (as of mid May 2018).

There will also be another machine that will be used for Ceph administration and to run the client that will be used to test the storage cluster. In this lab, this machine is a Windows Subsystem for Linux (WSL) layer running Ubuntu 16.04 on a Windows 10 host. Alternatively, one could use a regular Ubuntu 16.04 virtual or bare-metal machine.

Each of the machines will run multiple Ceph services. This is alright for experimentation but not advisable for production.

The machines are -

  1. Ubuntu 16.04 on WSL that will be used for Ceph administration and to run the client that will test the storage cluster.
  2. xenial1 - runs mons, mgr, osd.
  3. xenial2 - runs osd.
  4. xenial3 - runs osd.
  5. xenial4 - runs radosgw.

Hard Disks

Each cluster machine will have one or two hard disks. The first disk, /dev/sda, will be used to install the Ubuntu 16.04 operating system. There is no need for LVM.

The machines that are OSDs will need an extra hard disk in addition to the one on which the operating system is installed. This extra disk, /dev/sdb, should not be formatted with any filesystem, as Ceph will format it with the BlueStore format to store the objects. In VirtualBox, this disk can be added in the virtual machine
Settings --> Storage --> SATA Controller --> Add Hard Disk --> Create New Disk.
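
To confirm that the extra disk is visible inside the virtual machine and carries no filesystem, the block devices can be listed with lsblk (device names can differ, so double-check before touching any disk) -
lsblk
If the disk had been used before, any old filesystem signatures can be cleared with wipefs; run this only against the spare disk, never the OS disk -
sudo wipefs --all /dev/sdb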

Passwordless sudo

Create a user for Ceph, e.g. cephuser. This user should have passwordless sudo, as Ceph will require it during installation. This can be done as -

echo "cephuser ALL = (root) NOPASSWD:ALL"|sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
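
If the cephuser account does not exist yet on the machine, create it first, for example -
sudo adduser cephuser
To verify, log in as cephuser and run sudo in non-interactive mode; it should succeed without prompting for a password -
sudo -n true && echo "passwordless sudo works"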

Virtual Machine Networking

For this lab, each virtual machine uses Bridged networking (instead of the default NAT), bridged to the internet-facing network adapter of the host machine. Each machine is assigned a static IP address by the network router's DHCP server, which has a MAC-to-IP-address reservation for each virtual machine.

The /etc/hosts file of the Ceph admin machine should contain mappings of the cluster host names and corresponding IP addresses. e.g.
192.168.1.132    xenial1
192.168.1.133    xenial2

One important thing to note is to use the same host names in the hosts file as the actual hostnames of the machines (i.e. the output of the hostname command). Failing to do so will cause unnecessary headaches later during the Ceph installation.

If the Ceph admin machine is a WSL instance, then one cannot directly edit the Ubuntu /etc/hosts file as it is overwritten by Windows. The correct way is to edit C:\Windows\System32\drivers\etc\hosts and Windows will automatically populate /etc/hosts when it starts the WSL instance.
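
A quick sanity check of the host name mappings is to resolve each name from the admin machine and compare the result with the actual IP address and hostname of that machine, e.g. -
getent hosts xenial1
ping -c 1 xenial1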

Packages to install on Ceph cluster hosts

There are some packages required by Ceph that are missing from a default Ubuntu Server 16.04 installation.

Python 2 is required. It can be installed as -
sudo apt-get install python-minimal

Add Ceph Repository

On all the machines, add the Ceph repository.

Add the Ceph release key -
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
Add the repository,
echo deb https://download.ceph.com/debian-luminous/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list
Update the package index so that apt sees the new repository,
sudo apt-get update 
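
To confirm that apt now sees the Luminous packages from the new repository, the candidate version can be queried, e.g. -
apt-cache policy ceph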

Clone the virtual machines

Once all the previous steps have been done on a single virtual machine, that machine can be cloned 3 times to create 3 more virtual machines (our lab setup requires 4 virtual machines in the Ceph cluster).

Ensure that the network adapter MAC address is different for each virtual machine.

Change the hostname for each virtual machine -
sudo hostnamectl set-hostname some_host_name
where some_host_name is one of xenial1, xenial2, xenial3 or xenial4.
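
The change can be verified with -
hostnamectl status
which should report the new name as the static hostname.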

Passwordless SSH into each cluster host

For installation, the Ceph admin node needs to be able to SSH into each of the cluster's hosts without a password. This can be done using SSH public key authentication by copying the public key of the admin node to each of the cluster's hosts.

If the Ceph admin node (the WSL host in our case) doesn't have SSH keys then generate them as -
ssh-keygen
Ensure that the user on the admin node is the same as that to be used to install Ceph on the cluster hosts (i.e. cephuser in our case).

Copy the SSH key into each of the cluster's virtual machines as -
ssh-copy-id xenial1 (change this hostname as required).
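
Note that ceph-deploy logs in to the cluster hosts using the same username that it is run as on the admin node. If that username cannot be made to match cephuser, an alternative is to tell SSH which user to use per host via ~/.ssh/config, for example -
Host xenial1
    Hostname xenial1
    User cephuser
with a similar entry for each cluster host.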

Installing Ceph

Ceph is installed remotely into each of the cluster's hosts (4 in our case) from the admin host, using the ceph-deploy tool.

Firstly, install ceph-deploy on the admin host -
sudo apt-get install ceph-deploy
Make a directory on the admin host for this cluster's installation, then run all installation commands from that directory.
e.g.
mkdir cluster-1
cd cluster-1

Create a new Ceph configuration and monitor node -
ceph-deploy new xenial1
The above command will set up host 'xenial1' as a monitor node.

In the newly created ceph.conf file, specify the subnet of the network that the cluster's virtual machines are attached to, by adding the following line,

public network = 192.168.1.0/24
Change the subnet in the above line as required for your own network.


Install Ceph on the monitor and OSD nodes,
ceph-deploy install --release=luminous xenial1 xenial2 xenial3

Deploy the monitor,
ceph-deploy mon create-initial

Copy the ceph.conf config file and admin keys to the cluster nodes,
ceph-deploy admin xenial1 xenial2 xenial3

Deploy the manager,
ceph-deploy mgr create xenial1

Create a BlueStore volume on /dev/sdb of each OSD virtual machine,
ceph-deploy osd create --data /dev/sdb xenial1
Repeat the above command for xenial2 and xenial3 hosts.


Check the health of the cluster by logging into the mon node and checking the status,

ssh xenial1
sudo ceph health
sudo ceph -s

If the health command reports HEALTH_OK then all is well with the cluster.
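
The individual OSDs and their placement can also be listed from the mon node, e.g. -
sudo ceph osd tree
which should show the three OSDs (one per host) in the 'up' state.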

Installing the Ceph Object Gateway

Once the basic cluster is up and running, one can add a Ceph Object Gateway (radosgw) node. This can be used to access the storage cluster using either the Swift or S3 REST APIs.

Provision a virtual machine 'xenial4' with Ubuntu Server 16.04 and set it up in the same way as the other virtual machines, i.e. it should have passwordless sudo, passwordless SSH from the admin node and the python-minimal package.

Install radosgw on xenial4 from the admin node,
ceph-deploy install --rgw --release=luminous xenial4

Add keyrings to allow this node to be used as an admin node (needed later for creating the REST API user),
ceph-deploy admin xenial4

Start all the other nodes of the Ceph cluster.

Create and start radosgw instance,
ceph-deploy rgw create xenial4

Open a browser and point it to xenial4, port 7480 to test that the Ceph Object Gateway is accessible.

e.g. http://xenial4:7480
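
If testing from a machine without a browser, curl can be used instead; an anonymous request should return a small XML response from the gateway, e.g. -
curl http://xenial4:7480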

To make the Ceph Object Gateway use the well-known HTTP port 80 (or any other port) instead of the default port 7480, edit ceph.conf in the cluster directory on the admin node and add,

[client.rgw.xenial4]
rgw_frontends = "civetweb port=80"

In the above config, change 'xenial4' to the hostname of your radosgw host.

Push the above config change to radosgw host,
ceph-deploy --overwrite-conf config push xenial4

Restart host xenial4 to make the radosgw server use the new port.
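
A full reboot is not strictly required; restarting just the radosgw service on xenial4 should also make it listen on the new port. With ceph-deploy on Luminous the systemd unit is normally named after the gateway instance, so something like -
sudo systemctl restart ceph-radosgw@rgw.xenial4
(check systemctl list-units if the unit name differs on your host).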

Swift API

Objects within the storage cluster can be accessed using either the OpenStack Swift or the Amazon S3 REST APIs. I prefer the Swift API as it is not proprietary and not tied to a single vendor.

Objects in the object store are placed within Swift containers (known as buckets in S3) and are identified by a unique object identifier.

Swift User and Secret Key

The first step is to create a Swift User and Secret Key on the radosgw host. These will be used for authentication by clients wishing to access the storage cluster.

On the radosgw host (xenial4 in our case), 

Create the user 'testuser' and a Swift subuser for it. The subuser must belong to an existing Ceph Object Gateway user, so create the user first and then the subuser,

sudo radosgw-admin user create --uid=testuser --display-name="Test User"
sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full


Create secret key,

sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

Among other things, the output of the above command will display the secret key. Copy it and use it in clients that wish to access the storage cluster.
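
If the key is misplaced, it can be displayed again at any time on the radosgw host with -
sudo radosgw-admin user info --uid=testuser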

Swift commandline client

There is a swift command-line application that can be used as a Swift API client to test the storage cluster. Install this app on the admin node (the WSL host in our case) as,

sudo python -m pip install python-swiftclient
If the above install fails due to pip or Python issues, search online for workarounds.

After the app is installed, to list all the containers for the given user,

swift -A http://xenial4/auth/1.0 -U testuser:swift -K your_secret_key_here  list
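
The same client can be used to create containers and upload objects; for example (the container and file names below are just placeholders) -
swift -A http://xenial4/auth/1.0 -U testuser:swift -K your_secret_key_here post my-container
swift -A http://xenial4/auth/1.0 -U testuser:swift -K your_secret_key_here upload my-container ./hello.txt
swift -A http://xenial4/auth/1.0 -U testuser:swift -K your_secret_key_here stat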

Use man swift to see all the other actions that are possible with this app.

Custom Swift client in Java

Clients can be written in various languages to interact with the Swift server provided by the Ceph Object Gateway. For Java clients, one can use the JOSS library http://joss.javaswift.org/.
The current version of this library is 0.10.2 (as of mid May 2018). It has the following dependencies -

httpclient-4.4.1
httpcore-4.4.4
slf4j-api-1.7.0
slf4j-log4j12-1.7.21
log4j-1.2.14
commons-logging
jackson-mapper-asl-1.9.13 (org.codehaus package)
jackson-core-asl-1.9.13 (org.codehaus package)
commons-codec-1.4
commons-io-2.4

The version numbers shown above for each dependency are the ones that I found to work; other versions might work too.

The idea is to create an Account using an AccountConfig, which contains the Swift credentials and server endpoint, and then use this account to manipulate containers and objects. Example code is shown in the Ceph documentation at http://docs.ceph.com/docs/luminous/radosgw/swift/java/.
