Extending the cluster: bootstrap VMs
Documentation | |
---|---|
Name: | Extending the cluster: bootstrap VMs |
Description: | How to add more nodes to a running cluster (the beginning) |
Modification date: | 19/07/2019 |
Owner: | dodger |
Notify changes to: | Owner |
Tags: | ceph, object storage |
Escalate to: | Thefuckingbofh |
Variables used in this documentation
Name | Description | Sample |
---|---|---|
${THESERVER} | Variable used as the salt target, it can be a mask of servers (see sample) | export THESERVER="avmlp-osm-00[56]*" |
${NEWSERVERS} | Variable used for the CloneWars target and for ceph-deploy | export NEWSERVERS="avmlp-osm-005 avmlp-osm-006" |
${VMNAMESTART} | Variable used to perform a regex in salt execution, it will match the environment (avmlp, bvmlb, …) | export VMNAMESTART="avmlp" |
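Before starting, export the variables for this run. The values below are simply the samples from the table above; adjust them to the nodes you are actually adding:
<code bash>
# Sample values from the variables table, adjust to the nodes being added
export THESERVER="avmlp-osm-00[56]*"
export NEWSERVERS="avmlp-osm-005 avmlp-osm-006"
export VMNAMESTART="avmlp"
</code>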
Instructions
SALT part
Deploy the new VMs for the new nodes
Use CloneWars with the following options:
<code bash>
bash CloneWars.sh -c ${NUTANIXCLUSTER} -h ${THESERVER} -i ${THEIPADDRESS} -d 50GB -m 20 -O -r 4096 -v 2 -o 2
</code>
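A minimal sketch of one invocation. ${NUTANIXCLUSTER} and ${THEIPADDRESS} are not defined in this page, so the values below are purely hypothetical placeholders; use the real cluster name and the IP assigned to the new VM:
<code bash>
# Hypothetical values for illustration only; use the real cluster name and IP
export NUTANIXCLUSTER="nutanix-cluster-01"
export THEIPADDRESS="10.0.0.56"
bash CloneWars.sh -c ${NUTANIXCLUSTER} -h ${THESERVER} -i ${THEIPADDRESS} -d 50GB -m 20 -O -r 4096 -v 2 -o 2
</code>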
Run salt basic states
- Connect to salt-master
- Run the following sls:
<code bash>
salt "${THESERVER}" state.apply
salt "${THESERVER}" state.apply nsupdate
</code>
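Before applying the states it can help to confirm the new minions are accepted and responding; a quick check, not part of the original runbook:
<code bash>
# The new minions should answer True before states are applied
salt "${THESERVER}" test.ping
</code>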
Install yum-plugin-priorities
On all the servers:
<code bash>
salt "${THESERVER}" pkg.install yum-plugin-priorities
</code>
Install ceph-deploy
On the admin node:
<code bash>
salt "${THESERVER}" pkg.install ceph-deploy
</code>
Add ceph user
On all the servers:
<code bash>
salt "${THESERVER}" user.add ceph 1002
</code>
Check:
<code bash>
salt "${THESERVER}" user.info ceph
</code>
Add ceph user to sudoers
On all the servers:
<code bash>
salt "${THESERVER}" file.write /etc/sudoers.d/ceph \
    "ceph ALL = (root) NOPASSWD:ALL"
</code>
Check:
<code bash>
salt "${THESERVER}" cmd.run 'cat /etc/sudoers.d/ceph'
salt "${THESERVER}" cmd.run "sudo whoami" runas=ceph
</code>
Generate ssh keys
On all the servers:
<code bash>
salt "${THESERVER}" cmd.run \
    "ssh-keygen -q -N '' -f /home/ceph/.ssh/id_rsa" \
    runas=ceph
</code>
Populate ssh keys
Allow each node to ssh into itself:
<code bash>
salt "${THESERVER}" cmd.run "cp /home/ceph/.ssh/id_rsa.pub /home/ceph/.ssh/authorized_keys"
</code>
Get the pub keys from the ${NEWSERVERS} servers (${THESERVER} must match only the new servers):
<code bash>
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v "^${VMNAMESTART}" | sed 's/^[[:space:]]\{1,5\}//g' > auth_keys_oss.txt
</code>
Get the pub keys from all the cluster nodes (${THESERVER} must match all the nodes in the cluster):
<code bash>
salt "${THESERVER}" cmd.run "cat /home/ceph/.ssh/id_rsa.pub" | egrep -v "^${VMNAMESTART}" | sed 's/^[[:space:]]\{1,5\}//g' > all_cluster_nodes.txt
</code>
Populate the pub keys from the whole cluster to ${NEWSERVERS} (${THESERVER} must match only the ${NEWSERVERS} nodes):
<code bash>
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < all_cluster_nodes.txt
</code>
Populate the keys from ${NEWSERVERS} to the rest of the cluster (${THESERVER} must match all the nodes in the cluster: yes, you'll duplicate some keys, but it does not matter):
<code bash>
while read LINE ; do salt "${THESERVER}" file.append /home/ceph/.ssh/authorized_keys "${LINE}" ; done < auth_keys_oss.txt
</code>
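To confirm the key exchange worked, a passwordless ssh between nodes as the ceph user should now succeed. The target below is just the sample node name from the variables table; adjust as needed:
<code bash>
# Run as the ceph user on any cluster node; it should print the remote hostname
# without prompting for a password (node name is the sample value, adjust as needed)
ssh -o StrictHostKeyChecking=no avmlp-osm-005 hostname
</code>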
Ceph admin node part
Upload ceph.repo
Copy ceph.repo from the admin node to the new nodes:
<code bash>
for i in ${NEWSERVERS} ; do scp /etc/yum.repos.d/ceph.repo ${i}:/home/ceph/ ; ssh ${i} "sudo mv /home/ceph/ceph.repo /etc/yum.repos.d/" ; done
for i in ${NEWSERVERS} ; do ssh ${i} "sudo chown root. /etc/yum.repos.d/ceph.repo" ; done
for i in ${NEWSERVERS} ; do ssh ${i} "ls -l /etc/yum.repos.d/" ; done
</code>
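A quick way to confirm the repository is actually picked up on each new node (a hedged check, not in the original):
<code bash>
# The Ceph repository should appear in the repo list on every new node
for i in ${NEWSERVERS} ; do ssh ${i} "sudo yum repolist | grep -i ceph" ; done
</code>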
Install ceph
<code bash>
ceph-deploy install ${NEWSERVERS}
</code>
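Once ceph-deploy finishes, the ceph packages should be present on the new nodes; a simple version check, not part of the original steps:
<code bash>
# Each new node should report the installed Ceph version
for i in ${NEWSERVERS} ; do ssh ${i} "ceph --version" ; done
</code>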
DONE