...
- Procedure to be used if the VM was created using a volume (the migration procedure doesn't require a downtime)
- Procedure to be used if the VM was created using ephemeral storage (the migration procedure requires a downtime)
- Install the operating system using foreman; for this, use the hosts_all foreman hostgroup
- Disable SELinux
- Configure the management and data network
...
Once there are no more VMs on that compute node, reinstall the node with AlmaLinux9 using foreman
- The hostgroup must be hosts_all
- The kickstart to be used must be TBC
- TBC
When the node restarts after the reinstallation, make sure that SELinux is disabled:

```bash
[root@cld-np-19 ~]# getenforce
Disabled
```
Then update the packages (probably only puppet will be updated):

```bash
[root@cld-np-19 ~]# yum clean all
[root@cld-np-19 ~]# yum update -y
```
...
Once the node has been reinstalled with AlmaLinux9, configure the data network:
TBC
- The address to be used must be the original one
- The MTU must be 9000
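The exact data-network configuration is still marked TBC above. As an illustration only (the connection name `data0` is a placeholder, not from this procedure), setting the 9000-byte MTU with NetworkManager might look like this:

```shell
# Illustration only: "data0" is a hypothetical connection name.
# Reuse the node's original data-network address, as stated above.
nmcli connection modify data0 802-3-ethernet.mtu 9000
nmcli connection up data0
```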
...
If this is a DELL host, reinstall DELL OpenManage, as explained here
Check on Nagios that the relevant checks turn green
...
Then configure the node as an OpenStack compute node using puppet
Stop puppet:
```bash
systemctl stop puppet
```
In foreman move the host under the ComputeNode-Prod_Yoga hostgroup
Run puppet manually:
```bash
puppet agent -t
```
If the configuration fails reporting:

```
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
```
...
```bash
modprobe br_netfilter
```
and then rerun puppet
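Note (not stated in the original procedure): `modprobe` only loads the module into the running kernel. If the module also needs to survive a reboot, one common approach is a `modules-load.d` entry:

```shell
# Assumption: persist br_netfilter across reboots via systemd's modules-load.d
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```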
If the configuration fails reporting a problem because of a wrong dependency required by swtpm, then issue:
```bash
mv /etc/yum.repos.d/advanced-virtualization.repo /etc/yum.repos.d/advanced-virtualization.repo.old
yum install centos-release-advanced-virtualization
```
and then rerun puppet
Disable the compute node so that it doesn't accept new VMs:
...
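The code block for this step is elided here; presumably it is the `--disable` counterpart of the `--enable` command used at the end of this procedure. A sketch, assuming the standard OpenStack CLI:

```shell
# Hypothetical reconstruction: mark the nova-compute service on this host as
# disabled, so the scheduler stops placing new VMs on it
openstack compute service set --disable cld-np-19.cloud.pd.infn.it nova-compute
```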
...
Start and enable puppet:
```bash
systemctl start puppet; systemctl enable puppet
```
Enable the checks in Nagios for this host
Wait until all checks are OK (in particular the VM network and volume checks; remember to enable the node only for the time needed to force the run from Nagios, otherwise the check will fail)
- Add this node in the following scripts on cld-ctrl-01: /usr/local/bin/display_usage_of_hypervisors.sh, /usr/local/bin/host_aggregate.sh, /usr/local/bin/new_project.sh, /usr/local/bin/free_resources_compute_nodes.sh
- Add this node in the /etc/cron.daily/vm-log.sh script on cld-log
- Create the directory /var/disk-iscsi/qemu/cld-np-19 in cld-log
- Verify that an ssh from root@cld-log to root@cld-np-19 works without requiring a password
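One quick way to verify the passwordless login (`BatchMode` makes ssh fail instead of prompting for a password):

```shell
# Run on cld-log as root; exits non-zero instead of prompting if key auth fails
ssh -o BatchMode=yes root@cld-np-19 true && echo "passwordless ssh OK"
```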
Stop puppet on the 2 controller nodes:
```bash
systemctl stop puppet
```
Add the new compute node in cld-config:/var/puppet/puppet_yoga/controller_yoga/templates/aai_settings.py.erb
Run puppet on the first controller node (this will trigger a restart of httpd):
```bash
puppet agent -t
```
Run puppet on the second controller node (this will trigger a restart of httpd):
```bash
puppet agent -t
```
Start puppet on the two controller nodes:
```bash
systemctl start puppet
```
Enable the host:
```bash
openstack compute service set --enable cld-np-19.cloud.pd.infn.it nova-compute
```
- Add the host to the 'admin' aggregate
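The aggregate step is not spelled out above; assuming the standard OpenStack CLI, it would look like:

```shell
# Sketch: add the new compute node to the 'admin' host aggregate
openstack aggregate add host admin cld-np-19.cloud.pd.infn.it
```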
...