docs: reports of ceph
SproutNan authored and liuly0322 committed Jul 10, 2022
1 parent 9021fc3 commit a165f42
Showing 11 changed files with 1,053 additions and 0 deletions.
20 changes: 20 additions & 0 deletions _Lab4/README.md
@@ -0,0 +1,20 @@
# x-realism Lab4-Ceph

## Group Members

黄瑞轩(PB20111686)

叶升宇(PB20111701)

刘良宇(PB20000180)

许坤钊(PB20111714)

## File Descriptions

- deploy.md -> single-node deployment guide
- optimize.md -> single-node performance testing and tuning
- deploy_distri.md -> distributed deployment guide
- optimize_distri.md -> distributed performance testing
- The report is also published publicly: [Sprout's Zhihu article](https://zhuanlan.zhihu.com/p/539716105)

192 changes: 192 additions & 0 deletions _Lab4/deploy.md
@@ -0,0 +1,192 @@
# Ceph Single-Node Deployment Guide

## Deployment Environment

- VMware Workstation 16 Pro
- CentOS 7, 64-bit (CentOS 7.9)
- IP range: 192.168.153.0 (test machines start at 192.168.153.128)

## Deployment Steps

### Configure the Hostname

```shell
# Set the hostname
hostnamectl set-hostname ceph

# Add a hosts entry
echo "192.168.153.128 ceph" >> /etc/hosts

# Reboot
reboot
```

### Disable the Firewall and SELinux

```shell
# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```

### Configure Passwordless SSH

```shell
# Set up passwordless SSH for the single ceph node
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Set permissions to 644
chmod 644 ~/.ssh/authorized_keys
```
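
A quick sanity check (not part of the original report): the following should log in and exit without prompting for a password.

```shell
# Should return immediately with no password prompt
ssh ceph exit
```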

### Install ceph-deploy

```shell
# Install via yum
yum install -y ceph-deploy

# Check the version
ceph-deploy --version
```
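
If yum cannot find ceph-deploy, a Ceph repository usually has to be configured first. A minimal sketch; the mirror URL and release (rpm-nautilus) are assumptions, not taken from the original report:

```shell
# Hypothetical repo file; adjust the release path to match your target version
cat > /etc/yum.repos.d/ceph.repo << 'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF

yum makecache
```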

### Create the Ceph Cluster

Since this is a single-node deployment, the pool replica count must be set to 1: after creating the cluster, modify the ceph.conf file accordingly (the resulting file is sketched after the commands below).

```shell
# Create a directory to store the Ceph config and keys
mkdir -p /data/services/ceph

# Create the Ceph cluster
cd /data/services/ceph
ceph-deploy new ceph

# Single-node deployment: set the pool replica count to 1 in ceph.conf
[root@ceph ceph]# echo "osd pool default size = 1" >> ceph.conf
[root@ceph ceph]# echo "osd pool default min size = 1" >> ceph.conf
```
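
For reference, ceph.conf should now look roughly like this: `ceph-deploy new` fills in the fsid, monitor name, and monitor address, and the last two lines come from the echo commands above (a sketch; exact fields vary by release):

```shell
[root@ceph ceph]# cat ceph.conf
[global]
fsid = 098f5601-a1f1-4eb4-a150-8db0090bc9d7
mon_initial_members = ceph
mon_host = 192.168.153.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1
```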

### Install Ceph

```shell
# Install the L (Luminous) release packages
yum install -y ceph ceph-radosgw
```

### Initialize the Monitor

```shell
## Initialize the monitor
ceph-deploy mon create-initial

## Push the config file and admin keyring to the admin and Ceph nodes
ceph-deploy admin ceph

## Make sure the keyring is readable
chmod +r /etc/ceph/ceph.client.admin.keyring

cp /data/services/ceph/ceph* /etc/ceph/
chmod +r /etc/ceph/ceph*

# With the monitor running, check the cluster
[root@ceph ceph]# ceph -s
  cluster:
    id:     098f5601-a1f1-4eb4-a150-8db0090bc9d7
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum ceph (age 4m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
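
The `mon is allowing insecure global_id reclaim` warning is expected on a fresh cluster. Once no old clients need the insecure behavior, it can be cleared by disallowing the reclaim (a standard Ceph monitor option; whether it applies depends on the exact release installed):

```shell
# Clear the HEALTH_WARN by disallowing insecure global_id reclaim
ceph config set mon auth_allow_insecure_global_id_reclaim false
```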

### Deploy the Manager

```shell
## Deploy the mgr component
ceph-deploy mgr create ceph

## Check that the mgr status is active
[root@ceph ceph]# ceph -s
  cluster:
    id:     098f5601-a1f1-4eb4-a150-8db0090bc9d7
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 1
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum ceph (age 10m)
    mgr: ceph(active, since 103s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
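
Optionally, the mgr dashboard gives a web UI for the cluster. A minimal sketch, not part of the original report; on some builds the module requires the separate ceph-mgr-dashboard package:

```shell
# Enable the dashboard module and list the mgr service endpoints
ceph mgr module enable dashboard
ceph mgr services
```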

### Deploy the OSD

```shell
## Install lvm2
yum install -y lvm2

# Create a physical volume
pvcreate /dev/sdb

# Create a volume group
vgcreate ceph-pool /dev/sdb

# Create a logical volume for the single OSD
lvcreate -n osd0 -l 100%FREE ceph-pool

# Format it as xfs
mkfs.xfs /dev/ceph-pool/osd0

# Add the OSD node
ceph-deploy osd create --data /dev/ceph-pool/osd0 ceph

# Check the cluster status
[root@ceph ceph]# ceph -s
  cluster:
    id:     098f5601-a1f1-4eb4-a150-8db0090bc9d7
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim

  services:
    mon: 1 daemons, quorum ceph (age 71m)
    mgr: ceph(active, since 63m)
    osd: 1 osds: 1 up (since 5s), 1 in (since 5s)

  task status:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 199 GiB / 200 GiB avail
    pgs:

# List the OSDs with ceph osd tree
[root@ceph ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       0.00870 root default
-3       0.00870     host ceph
 0   hdd 0.00290         osd.0     up  1.00000 1.00000
```

At this point the single-node deployment is complete. A screenshot from the live machine:

![1](pic/1.png)
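
As a quick smoke test (not in the original report), one can create a pool and round-trip an object through rados; the pool name `testpool` and the PG count of 64 are illustrative:

```shell
# Create a pool, write an object, list it, and read it back
ceph osd pool create testpool 64
echo "hello ceph" > /tmp/hello.txt
rados -p testpool put hello /tmp/hello.txt
rados -p testpool ls
rados -p testpool get hello /tmp/hello.out && cat /tmp/hello.out
```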
71 changes: 71 additions & 0 deletions _Lab4/deploy_distri.md
@@ -0,0 +1,71 @@
# Ceph Distributed Deployment Guide

## Deployment Environment

- VMware Workstation 16 Pro
- CentOS 7, 64-bit (CentOS 7.9)
- IP range: 192.168.153.0 (test machines start at 192.168.153.128)

## Deployment Steps

Each machine here was cloned directly from the first machine (192.168.153.128), which already had Ceph installed; this avoids repeating the long installation and configuration steps. What follows is the extra configuration applied to each clone.

### Configure Hostnames

```shell
### Repeat on every machine; node 2 is shown as an example

# Set the hostname
hostnamectl set-hostname ceph2

# Add a hosts entry
echo "192.168.153.130 ceph2" >> /etc/hosts

# Reboot
reboot
```
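
Every node also needs to resolve every other node, so /etc/hosts on each machine should end up listing all cluster members. A sketch assuming three nodes; only the addresses of ceph and ceph2 appear in the original, so the address for ceph3 is an assumption:

```shell
# Run on every node; 192.168.153.131 for ceph3 is assumed
cat >> /etc/hosts << 'EOF'
192.168.153.128 ceph
192.168.153.130 ceph2
192.168.153.131 ceph3
EOF
```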

### Configure the Admin Host

```shell
# Configure the admin host (the first node, ceph) for passwordless SSH to every node
ssh-keygen
ssh-copy-id ceph2
ssh-copy-id ceph3
```

### Other Configuration

- Configure NTP on all nodes as described in [this article](https://zhuanlan.zhihu.com/p/390377674); one common approach is sketched below.
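
A minimal time-sync sketch using chrony, the CentOS 7 default; the package choice and commands are a common approach, not taken from the linked article:

```shell
# Run on every node: install chrony, start it, and verify the time sources
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources
```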

```shell
# Make sure the user on every node has sudo privileges
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
```

### Deployment

```shell
# Create a working directory on the admin node; all later steps run inside it
mkdir cluster; cd cluster

# Create the monitor node
ceph-deploy new ceph

# Generate the keys the monitor needs to probe the cluster, then push the
# generated ceph.client.admin.keyring and the config file to every node
ceph-deploy mon create-initial
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-deploy admin ceph ceph2 ceph3

# Deploy the mgr node
ceph-deploy mgr create ceph

# Deploy the osd nodes
ceph-deploy osd create ceph --data /dev/sdb
ceph-deploy osd create ceph2 --data /dev/sdb
ceph-deploy osd create ceph3 --data /dev/sdb
```

This completes the distributed deployment.
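
As a final check (not shown in the original), `ceph -s` on the admin node should now report one monitor, an active mgr, and three OSDs up and in:

```shell
# Verify cluster health and OSD placement across the three hosts
ceph -s
ceph osd tree
```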
