Magma Scale-Out#
This section describes how to scale out a Magma cluster. It assumes the existing cluster's master nodes are oushu1, oushu2, and oushu3, the Magma nodes are magma1, magma2, and magma3, and the newly added node is magma4.
Installation#
Configure the yum repository and install the lava command-line management tool (you must set up the yum repository yourself):
ssh magma4
# Fetch the repo file from the machine hosting the yum repository (assumed to be 192.168.1.10)
scp root@192.168.1.10:/etc/yum.repos.d/oushu.repo /etc/yum.repos.d/oushu.repo
# Append the yum repository host to /etc/hosts ("yumrepo" below is an illustrative hostname)
echo "192.168.1.10 yumrepo" >> /etc/hosts
yum clean all
yum makecache
yum install -y lava
Then install OushuDB using yum install:
yum install -y oushudb
Configuration#
1. System configuration
Append the following to the system configuration file /etc/sysctl.conf on the magma4 node:
kernel.shmmax = 3000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 200000
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 200000
net.netfilter.nf_conntrack_max = 524288
fs.nr_open = 3000000
kernel.threads-max = 798720
kernel.pid_max = 798720
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
net.core.somaxconn = 4096
kernel.core_pattern = /data1/oushudb/cores/core-%e-%s-%u-%g-%p-%t
If the cluster is to be deployed on the Kylin operating system, additionally append these network parameters:
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 40894464
net.ipv4.udp_mem = 9242685 12323580 18485370
net.ipv4.tcp_mem = 9240912 12321218 18481824
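The appended parameters can be loaded without a reboot by running `sysctl -p` as root. The sketch below is a hedged pre-check against a scratch copy (so it is safe to run anywhere): it confirms every appended line parses as `key = value` before you load the real file.

```shell
# Validate appended lines before loading them for real with:
#   sysctl -p /etc/sysctl.conf
# A scratch copy is used here so the sketch does not touch the live system.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.core.somaxconn = 4096
kernel.pid_max = 798720
EOF
# Every non-comment, non-blank line must look like "key = value".
bad=$(grep -vE '^\s*(#|$)|^[A-Za-z0-9_.-]+\s*=\s*\S' "$conf" | wc -l)
echo "malformed lines: $bad"
rm -f "$conf"
```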
To make debugging and analysis easier, we recommend allowing Magma to generate core dump files.
Create the file /etc/security/limits.d/oushu.conf:
touch /etc/security/limits.d/oushu.conf
and write the following into it:
* soft nofile 1048576
* hard nofile 1048576
* soft nproc 131072
* hard nproc 131072
oushu soft core unlimited
oushu hard core unlimited
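Settings in limits.d only take effect for new login sessions. After re-logging in as the oushu user, a quick check of the current session's limits (these commands only read, never modify, the limits):

```shell
# Expect nofile=1048576 and core=unlimited once the new
# limits.d file has been picked up by a fresh login session.
nofile=$(ulimit -n)
core=$(ulimit -c)
echo "nofile=$nofile core=$core"
```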
Add the original OushuDB cluster nodes' IPs to this node's /etc/hosts:
echo 192.168.1.11 oushu1 >>/etc/hosts
echo 192.168.1.12 oushu2 >>/etc/hosts
echo 192.168.1.13 oushu3 >>/etc/hosts
Add the Magma cluster nodes' IPs to this node's /etc/hosts:
echo 192.168.1.21 magma1 >>/etc/hosts
echo 192.168.1.22 magma2 >>/etc/hosts
echo 192.168.1.23 magma3 >>/etc/hosts
2. Cluster configuration
Under the oushu user, create a magmahosts file listing every machine in the Magma cluster:
touch ~/magmahosts
Write the following into magmahosts:
magma1
magma2
magma3
magma4
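Before exchanging keys, it is worth a quick sanity check that the host file has exactly one hostname per line with no duplicates. A small sketch (the file content mirrors this guide's example; against the real file you would point at ~/magmahosts instead of a temp copy):

```shell
hosts=$(mktemp)
printf '%s\n' magma1 magma2 magma3 magma4 > "$hosts"
dups=$(sort "$hosts" | uniq -d | wc -l)   # duplicate hostnames, expected 0
total=$(wc -l < "$hosts")                 # expected 4 for this example
echo "total=$total dups=$dups"
rm -f "$hosts"
```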
Exchange SSH keys with the cluster:
oushudb ssh-exkeys -f ~/magmahosts
Add this node's IP to /etc/hosts on every node in the cluster:
sudo su root
lava ssh -f ~/magmahosts -e "echo '192.168.1.24 magma4' >>/etc/hosts"
3. Storage directories
Following your deployment plan, create the storage directories. For example, the directory for the catalog cluster:
mkdir -p /data1/oushudb/magma_catalog
chown -R oushu:oushu /data1/oushudb
Create the directory for core dump files:
mkdir -p /data1/oushudb/cores
chmod 777 /data1/oushudb/cores
4. Edit the configuration files
Copy all configuration files from /usr/local/oushu/conf/ on an existing Magma node to the same path on this node.
Edit the magma-topology.yaml file. Suppose it currently contains:
nodes:
  - id: m1
    addr: 192.168.1.21
    label: { region: "regionA", zone: "zoneA" }
  - id: m2
    addr: 192.168.1.22
    label: { region: "regionA", zone: "zoneA" }
  - id: m3
    addr: 192.168.1.23
    label: { region: "regionA", zone: "zoneA" }
vsc:
  - name: vsc_catalog
    nodes: m1,m2,m3
    port: 6666
    num_ranges: 3
    num_replicas: 3
    data_dir: /data1/oushudb/magma_catalog
    log_dir:
    replica_locations: "regionA.zoneA:3"
    leader_preferences: "regionA.zoneA"
    conf_path:
  - name: vsc_default
    nodes: m1,m2,m3
    port: 6676
    num_ranges: 18
    num_replicas: 3
    data_dir: /data1/oushudb/magma_data
    log_dir:
    replica_locations: "regionA.zoneA:3"
    leader_preferences: "regionA.zoneA"
    conf_path:
Add this node under nodes, and add it to the relevant vsc entries. For example, to add the node to vsc_catalog:
nodes:
  - id: m1
    addr: 192.168.1.21
    label: { region: "regionA", zone: "zoneA" }
  - id: m2
    addr: 192.168.1.22
    label: { region: "regionA", zone: "zoneA" }
  - id: m3
    addr: 192.168.1.23
    label: { region: "regionA", zone: "zoneA" }
  - id: m4
    addr: 192.168.1.24
    label: { region: "regionA", zone: "zoneA" }
vsc:
  - name: vsc_catalog
    nodes: m1,m2,m3,m4
    port: 6666
    num_ranges: 3
    num_replicas: 3
    data_dir: /data1/oushudb/magma_catalog
    log_dir:
    replica_locations: "regionA.zoneA:3"
    leader_preferences: "regionA.zoneA"
    conf_path:
  .....
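Before distributing the edited file, a quick grep can confirm the new node landed in both places. A hedged sketch against a sample of the edited content (against the real file you would point `f` at /usr/local/oushu/conf/magma-topology.yaml):

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
nodes:
  - id: m4
    addr: 192.168.1.24
vsc:
  - name: vsc_catalog
    nodes: m1,m2,m3,m4
EOF
# The new id must appear under nodes: AND in each vsc's nodes list.
if grep -q 'id: m4' "$f" && grep -q 'nodes: m1,m2,m3,m4' "$f"; then
  result=ok
else
  result=missing
fi
echo "$result"
rm -f "$f"
```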
Distribute magma-topology.yaml to the other nodes:
lava scp -f ~/magmahosts /usr/local/oushu/conf/magma-topology.yaml =:/usr/local/oushu/conf/
Initialization#
Under the oushu user, run the following locally to start the node. For example, to add the node to vsc_catalog:
sudo su oushu
magma start node --vsc='vsc_catalog'
Within the magma cluster, run the parameter reload command:
magma reload vscluster --vsc='vsc_catalog'
Verification#
Within the magma cluster, run magma status to check the cluster state. Confirm that this node now appears and that every RG (replica group) is in the serving state, for example:
nodeaddress: 192.168.1.24:6666
topo: regionA.zoneA
vscname: vsc_catalog
compactstatus: 0,0,0,0
healthy: healthy
replicastatus:
RG:id=1,isLeader=0,raftGroupId=raft_0_group,raftMembers=(1,0,2),status=serving
RG:id=4,isLeader=0,raftGroupId=raft_1_group,raftMembers=(4,3,5),status=serving
RG:id=6,isLeader=1,raftGroupId=raft_2_group,raftMembers=(6,7,8),status=serving
RG:id=11,isLeader=0,raftGroupId=raft_3_group,raftMembers=(11,10,9),status=serving
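The "all ranges serving" check can be scripted. A sketch over the sample output above (against a live cluster you would pipe the output of `magma status` in place of the sample variable):

```shell
# Sample replicastatus lines, copied from the output above.
status='RG:id=1,isLeader=0,raftGroupId=raft_0_group,raftMembers=(1,0,2),status=serving
RG:id=4,isLeader=0,raftGroupId=raft_1_group,raftMembers=(4,3,5),status=serving
RG:id=6,isLeader=1,raftGroupId=raft_2_group,raftMembers=(6,7,8),status=serving
RG:id=11,isLeader=0,raftGroupId=raft_3_group,raftMembers=(11,10,9),status=serving'
total=$(printf '%s\n' "$status" | grep -c '^RG:')
serving=$(printf '%s\n' "$status" | grep -c 'status=serving')
echo "total=$total serving=$serving"
```

If `total` and `serving` match, every replica group on the new node is in service.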