
Manually deploying a single-node Ceph 0.87 (giant) cluster on Ubuntu 12.04

Published: 2023-12-26 23:02:58

Reference: http://docs.ceph.com/docs/giant/install/manual-deployment/

The Ceph packages have already been downloaded and installed in advance, everything except ceph-deploy, since the deployment here is done by hand.

Environment

Hostname    IP address         OS              Software
ceph01      192.168.100.101    Ubuntu 12.04    Ceph giant

1 Set the hostname

root@ceph01:~# vi /etc/hostname
ceph01

 

2 Edit the hosts file

root@ceph01:~# vi /etc/hosts
127.0.0.1       localhost
127.0.1.1       ceph01
192.168.100.101 ceph01

 

3 Set up passwordless SSH login

root@ceph01:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
58:47:8d:56:db:90:aa:7c:be:8a:22:a6:d6:e4:c3:8d root@ceph01
The key's randomart image is:
+--[ RSA 2048]----+
|          .+o.   |
|         .o o+   |
|        ..... .  |
|       o ..      |
|      ..S.       |
|   .    o .      |
|  = o    o       |
| .oE.. .  .      |
|oo .... ....     |
+-----------------+
root@ceph01:~# ssh-copy-id ceph01
Warning: Permanently added 'ceph01' (ECDSA) to the list of known hosts.
root@ceph01's password:
Now try logging into the machine, with "ssh 'ceph01'", and check in: ~/.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
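If you prefer to script this step, the interactive ssh-keygen prompts above can be suppressed; a minimal sketch, writing to a temporary path so it cannot clobber a real key (the path is illustrative):

```shell
# Generate a 2048-bit RSA key pair non-interactively, with an empty
# passphrase (-N '') and no prompts (-q), at a throwaway location.
keyfile=$(mktemp -u /tmp/demo_id_rsa.XXXXXX)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keyfile"

# The .pub half is what ssh-copy-id appends to ~/.ssh/authorized_keys
# on the remote host.
ls "$keyfile" "$keyfile.pub"
```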

 

 

4 Edit the Ceph configuration file

root@ceph01:~# vi /etc/ceph/ceph.conf
[global]
fsid = 53eaacda-3558-4881-8e35-67f3741072dd
mon initial members = ceph01
mon host = 192.168.100.101

 

5 Create a keyring for the cluster and generate a monitor secret key

root@ceph01:~# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

 

6 Generate an administrator keyring, generate a client.admin user and add the user to the keyring

root@ceph01:~# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

 

7 Add the client.admin key to ceph.mon.keyring

root@ceph01:~# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
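At this point /tmp/ceph.mon.keyring holds both entries. It is a plain text file along these lines (the key values below are placeholders, not real secrets):

```
[mon.]
        key = <base64 secret>
        caps mon = "allow *"
[client.admin]
        key = <base64 secret>
        auid = 0
        caps mds = "allow"
        caps mon = "allow *"
        caps osd = "allow *"
```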

 

8 Generate a monitor map using the hostname, host IP address and FSID, and save it as /tmp/monmap:

root@ceph01:~# monmaptool --create --add ceph01 192.168.100.101 --fsid 53eaacda-3558-4881-8e35-67f3741072dd /tmp/monmap

 

9 Create the default data directory (or directories) on the monitor host. Note that with default settings ceph-mon expects this directory at /var/lib/ceph/mon/{cluster-name}-{hostname} (ceph-ceph01 here); if you keep the shorter path used below, set `mon data` in ceph.conf to match.

root@ceph01:~# mkdir /var/lib/ceph/mon/ceph01

 

10 Populate the monitor daemon with the monitor map and keyring.

root@ceph01:~# ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

 

11 Consider the settings for the Ceph configuration file. Common settings include the following:

[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
public network = {network}[, {network}]
cluster network = {network}[, {network}]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = {n}
filestore xattr use omap = true
osd pool default size = {n}  # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}
osd crush chooseleaf type = {n}

For example:
[global]
fsid = 53eaacda-3558-4881-8e35-67f3741072dd
mon initial members = ceph01
mon host = 192.168.100.101
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
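The pg num value is usually sized from a rule of thumb in the Ceph docs: roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A small sketch of that arithmetic (the OSD and replica counts are illustrative):

```shell
# Rule-of-thumb placement-group count: (osds * 100) / replicas,
# rounded up to the next power of two.
osds=2
replicas=1

target=$(( osds * 100 / replicas ))   # 200
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
    pg_num=$(( pg_num * 2 ))
done

echo "$pg_num"    # prints 256
```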

12 Create the done file

Mark that the monitor is created and ready to be started:

root@ceph01:~# touch /var/lib/ceph/mon/ceph01/done

 

13 Start the monitor

13.1 For Ubuntu, use Upstart:

root@ceph01:~# start ceph-mon id=ceph01

In this case, to allow the daemon to start at each reboot you must create two empty files: the done file from step 12 and the upstart file below:

root@ceph01:~# touch /var/lib/ceph/mon/ceph01/upstart

 

13.2 For Debian/CentOS/RHEL, use sysvinit:

# /etc/init.d/ceph start mon.ceph01

 

14 Verify the monitor

14.1 Verify that Ceph created the default pool.

root@ceph01:~# ceph osd lspools
0 rbd,

 

14.2 Verify that the monitor is running.

root@ceph01:~# ceph -s
    cluster 53eaacda-3558-4881-8e35-67f3741072dd
     health HEALTH_ERR 64 pgs stuck inactive; 64 pgs stuck unclean; no osds
     monmap e1: 1 mons at {ceph01=192.168.100.101:6789/0}, election epoch 2, quorum 0 ceph01
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

 

 

15 Add OSDs

15.1 Short form

15.1.1 Prepare the OSD

root@ceph01:~# ceph-disk prepare --cluster ceph --cluster-uuid 53eaacda-3558-4881-8e35-67f3741072dd --fs-type ext4 /dev/sdb

 

15.1.2 Activate the OSD

root@ceph01:~# ceph-disk activate /dev/sdb1

 

 

15.2 Long form

15.2.1 Generate a UUID

root@ceph01:~# uuidgen

 

15.2.2 Create the OSD

If no UUID is given, one will be set automatically when the OSD starts. The following command outputs the OSD number, which will be needed in subsequent steps.

root@ceph01:~# ceph osd create 72fb9a60-38a1-48b3-b1fe-6d3f7c26e9eb
1

 

15.2.3 Create the default directory for the new OSD.

root@ceph01:~# mkdir /var/lib/ceph/osd/ceph-1/

15.2.4 If the OSD is on a drive other than the OS drive, format it and mount it on the directory

root@ceph01:~# mkfs -t ext4 /dev/sdc
root@ceph01:~# mount -o user_xattr /dev/sdc /var/lib/ceph/osd/ceph-1/
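The mount above will not survive a reboot on its own; you would typically also add an /etc/fstab entry. A sketch matching the device and options used here:

```
# /etc/fstab (illustrative entry)
/dev/sdc  /var/lib/ceph/osd/ceph-1  ext4  defaults,user_xattr  0  2
```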

 

15.2.5 Initialize the OSD data directory

root@ceph01:~# ceph-osd -i 1 --mkfs --mkkey --osd-uuid 72fb9a60-38a1-48b3-b1fe-6d3f7c26e9eb

The directory must be empty before you run ceph-osd with the --mkkey option. In addition, the ceph-osd tool requires the --cluster option when a custom cluster name is used.

 

15.2.6 Register the OSD authentication key

If your cluster name differs from ceph, use your own cluster name instead:

root@ceph01:~# ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring

 

15.2.7 Add your Ceph node to the CRUSH map

root@ceph01:~# ceph osd crush add-bucket ceph01 host

 

15.2.8 Place the Ceph node under the root default

root@ceph01:~# ceph osd crush move ceph01 root=default

 

15.2.9 Add the OSD to the CRUSH map so that it can begin receiving data

You can also decompile the CRUSH map, add the OSD to the device list, add the host as a bucket (if it is not already in the CRUSH map), add the device as an item in the host, assign it a weight, recompile the map and set it.

root@ceph01:~# ceph osd crush add osd.1 1.0 host=ceph01
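The decompile-and-edit alternative mentioned above can be sketched as follows (standard ceph and crushtool commands; the edit of the text map is done by hand, so this is an outline rather than something to paste in):

```
ceph osd getcrushmap -o /tmp/crushmap.bin             # fetch the compiled map
crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt   # decompile to text
# edit /tmp/crushmap.txt: add osd.1 to the devices list, add the host
# bucket if missing, add osd.1 as an item with a weight
crushtool -c /tmp/crushmap.txt -o /tmp/crushmap.new   # recompile
ceph osd setcrushmap -i /tmp/crushmap.new             # install the new map
```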

 

15.2.10 Start the OSD

After you add an OSD to Ceph, the OSD is in your configuration. However, it is not yet running: it is down and in. You must start your new OSD before it can begin receiving data.

1 For Ubuntu, use Upstart:

root@ceph01:~# start ceph-osd id=1

 

2 For Debian/CentOS/RHEL, use sysvinit:

# /etc/init.d/ceph start osd.1

15.2.11 Verify

root@ceph01:~# ceph -w
root@ceph01:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.03722 root default
-2      0.03722         host ceph01
0       0.01813                 osd.0   up      1
1       0.01909                 osd.1   up      1