...
- On the second controller the services are active at version C
- Two of the three Percona DBs are shut down
- HAProxy is configured to point to the second controller
- All services on the first controller are stopped
- The packages on the first controller are updated to E without starting the services
- One service at a time is configured and started on the first controller, and HAProxy is reconfigured so that that service points to the first controller
- All services on the second controller are stopped, the second controller is updated to epoxy, and its services are started
- HAProxy is modified so that it points to both controllers
- The compute nodes are updated one at a time
...
Actions to perform before starting the installation of the release
Stop mysql on two of the three Percona DB nodes, keeping track of the shutdown order
Code Block language shell
[root@cld-db-test-06 ~]# systemctl stop mysql
[root@cld-db-test-05 ~]# systemctl stop mysql
- Modify the controllers' epoxy puppet class (service.pp) so that it does not start the services
Code Block language shell
WARNING: edit the puppet file service.pp so that it does not start the services once they are updated.
To do this set all the services to ensure => stopped, enable => false, and commit to git
- Modify HAProxy so that all services point to controller-02, which has the Caracal services active. To do this, edit the file on cld-config and run puppet on the three haproxy nodes
Code Block language shell title HAProxy
# on cld-config.cloud.pd.infn.it
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_spento01_acceso02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
# on cld-haproxy-test-01, -02 and -03
puppet agent -t
- On both controllers stop and disable puppet
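The step above can be sketched as follows. This is a dry run that only prints the commands for review: the `puppet` systemd unit name and the disable message are assumptions to adapt to the local setup.

```shell
# Commands to lock puppet out for the duration of the upgrade,
# collected in a variable and printed rather than executed here.
PUPPET_LOCK_CMDS='puppet agent --disable "Epoxy upgrade in progress"
systemctl stop puppet
systemctl disable puppet'
echo "$PUPPET_LOCK_CMDS"
```

The `puppet agent --disable <reason>` lockfile also blocks manual `puppet agent -t` runs, which is the point: nothing should re-apply the Caracal configuration mid-upgrade.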
...
Check whether openstack-client and openstack-selinux are installed
Code Block language bash
[root@controller-01 ~]# yum list installed | grep openstackclient
python-openstackclient-lang.noarch 6.6.1-1.el9s @centos-openstack-caracal
python3-openstackclient.noarch 6.6.1-1.el9s @centos-openstack-caracal
[root@controller-01 ~]# yum list installed | grep openstack-selinux
openstack-selinux.noarch 0.8.40-1.el9s @centos-openstack-zed
Check the kernel and ceph versions
Code Block language bash
[root@controller-01 ~]# yum list installed | grep kernel
kernel.x86_64 5.14.0-427.24.1.el9_4 @anaconda
kernel.x86_64 5.14.0-503.33.1.el9_5 @baseos
kernel-core.x86_64 5.14.0-427.24.1.el9_4 @anaconda
kernel-core.x86_64 5.14.0-503.33.1.el9_5 @baseos
kernel-headers.x86_64 5.14.0-503.33.1.el9_5 @appstream
kernel-modules.x86_64 5.14.0-427.24.1.el9_4 @anaconda
kernel-modules.x86_64 5.14.0-503.33.1.el9_5 @baseos
kernel-modules-core.x86_64 5.14.0-427.24.1.el9_4 @anaconda
kernel-modules-core.x86_64 5.14.0-503.33.1.el9_5 @baseos
kernel-srpm-macros.noarch 1.0-13.el9 @appstream
kernel-tools.x86_64 5.14.0-503.33.1.el9_5 @baseos
kernel-tools-libs.x86_64 5.14.0-503.33.1.el9_5 @baseos
[root@controller-01 ~]# yum list installed | grep ceph
blosc.x86_64 1.21.0-3.el9s @centos-ceph-pacific
centos-release-ceph-reef.noarch 1.0-1.el9 @extras
ceph-common.x86_64 2:18.2.4-2.el9s @centos-ceph-reef
[root@controller-01 ~]# uname -a
Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
Remove the Caracal release
Code Block language bash
yum remove centos-release-openstack-caracal.noarch
Install Epoxy
Code Block language bash
dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm
# (this is the package needed; it contains the epoxy repo)
#### as a check
[root@todelff ~]# rpm -qil rdo-release
Name        : rdo-release
Version     : epoxy
Release     : 1.el9s
Architecture: noarch
Install Date: Wed Mar 11 15:29:25 2026
Group       : System Environment/Base
Size        : 13372
License     : Apache2
Signature   : (none)
Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
Build Date  : Fri Mar 14 17:12:13 2025
Build Host  : doogie-n1.rdu2.centos.org
Packager    : CBS <cbs@centos.org>
Vendor      : CentOS Cloud SIG
URL         : https://github.com/rdo-infra/rdo-release
Summary     : RDO repository configuration
Description : This package contains the RDO repository
/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
/etc/yum.repos.d/ceph-reef.repo
/etc/yum.repos.d/messaging.repo
/etc/yum.repos.d/nfv-openvswitch.repo
/etc/yum.repos.d/rdo-release.repo
/etc/yum.repos.d/rdo-testing.repo
Save the configurations that are usually overwritten
Code Block language bash
export REL=caracal
cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
Update the packages
Code Block language bash
dnf update
## if the update fails with assorted problems:
[root@controller-01 ~]# dnf update
CentOS-9 - Ceph Reef 489 kB/s | 415 kB 00:00
OpenStack Epoxy Repository 2.5 MB/s | 1.7 MB 00:00
Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
Error:
 Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
  - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
  - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
  - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
 Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
  - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
  - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
  - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
 Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
  - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
  - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
  - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
# Therefore the following packages must be removed:
rpm -e --nodeps python3-keystone+memcache
rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
rpm -e --nodeps python3-oslo-messaging+amqp1
dnf update -y
- Save the old configurations
Code Block language bash title result collapse true
# The update downloads the new rpms: pay attention to these configuration files
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
- In puppet edit the file /var/puppet/puppet_epoxy_env_test/controller_epoxy/manifests/service.pp so that it does not start the services once they are updated. To do this set all the services to stopped
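The `.rpmnew` handling above follows one fixed pattern per file: keep the pre-upgrade config with a `.$REL` suffix, then adopt the file shipped by the new rpm and re-apply local changes afterwards. A minimal runnable sketch of that pattern on throwaway paths (everything under `/tmp/rpmnew-demo` is illustrative, not a real controller config):

```shell
# Demonstrate the backup-then-adopt pattern on a fake config file.
REL=caracal
DEMO=/tmp/rpmnew-demo
mkdir -p "$DEMO"
echo "old local settings" > "$DEMO/nova.conf"            # current (Caracal) config
echo "new upstream defaults" > "$DEMO/nova.conf.rpmnew"  # as shipped by the Epoxy rpm
cp "$DEMO/nova.conf" "$DEMO/nova.conf.$REL"              # keep the pre-upgrade copy
mv -f "$DEMO/nova.conf.rpmnew" "$DEMO/nova.conf"         # adopt the new file
```

After this, a `diff nova.conf.$REL nova.conf` shows exactly which local settings have to be ported into the new file.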
Code Block language bash
# in service.pp set, for all the services:
ensure => stopped, enable => false,
# and commit to git
In Foreman enable the Epoxy class: from the Foreman web page, modify the puppet class of controller-01 selecting Epoxy, then run puppet on the node
Code Block language bash
In https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host,
replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"
Then on the controller run:
puppet agent -t
At this point all services are configured on controller-01
KEYSTONE
Code Block language bash
# TODO: back up the keystone database
su -s /bin/sh -c "keystone-manage doctor" keystone
[root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
WARNING: `keystone.conf [cache] enabled` is not enabled. Caching greatly improves the performance of keystone, and it is highly recommended that you enable it.
su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
===============================================================================================
After controller-02 has been updated and httpd restarted, run the command
su -s /bin/sh -c "keystone-manage db_sync --contract" keystone
- PLACEMENT
Code Block language bash
1) su -s /bin/sh -c "placement-manage db sync" placement
2) start the services for keystone, placement and the dashboard
systemctl start httpd.service memcached.service shibd.service
3) on cld-config edit the HAProxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
4) run puppet on the three haproxy nodes
ssh root@cld-haproxy-test-01 / 02 / 03
puppet agent -t
5) stop and disable the services on controller-02
systemctl stop httpd.service memcached.service shibd.service
systemctl disable httpd.service memcached.service shibd.service
Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in)
GLANCE
Code Block language bash
WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?).
A zero-downtime procedure exists but is considered not production ready, or its documentation is not up to date
(https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html)
For glance it is probably better not to take risks and accept the downtime during the update
1) stop the glance service on controller-02
systemctl stop openstack-glance-api.service
systemctl disable openstack-glance-api.service
On controller-01 (already configured for Epoxy, because we ran puppet):
2) su -s /bin/sh -c "glance-manage db expand" glance
[root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
3) su -s /bin/sh -c "glance-manage db migrate" glance
[root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
Database is up to date. No migrations needed.
[root@controller-01 StartServices]#
4) systemctl start openstack-glance-api.service
Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>
5) modify HAProxy so that glance points to controller-01; on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9292)
6) run puppet on the three haproxy nodes
puppet agent -t
=============================================================
Once controller-02 has also been updated, run
su -s /bin/sh -c "glance-manage db contract" glance
- NOVA
Code Block language bash
su -s /bin/sh -c "nova-status upgrade check" nova
su -s /bin/sh -c "nova-manage api_db sync" nova
in nova-manage.log
2026-03-17 10:59:31.218 208205 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 10:59:31.219 208205 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
su -s /bin/sh -c "nova-manage db sync" nova
in nova-manage.log
2026-03-17 11:00:31.148 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:32.539 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
2026-03-17 11:00:32.746 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Context impl MySQLImpl.
2026-03-17 11:00:32.747 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Will assume non-transactional DDL.
2026-03-17 11:00:32.755 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:33.176 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
Start the service on controller-01
systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Modify HAProxy so that nova points to controller-01; on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8774, 8775, 6080)
Run puppet on the three haproxy nodes
puppet agent -t
Stop and disable the service on controller-02
systemctl stop \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl disable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
==============================================================================
Once controller-02 and all the compute nodes have been updated, run again
su -s /bin/sh -c "nova-manage db online_data_migrations" nova
NEUTRON
Code Block language bash
su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade (expand) for neutron ...
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 0e6eff810791 -> 175fa80908e1
INFO [alembic.runtime.migration] Running upgrade 175fa80908e1 -> 5bcb7b31ec7d
INFO [alembic.runtime.migration] Running upgrade 5bcb7b31ec7d -> ad80a9f07c5c
OK
Start the service
systemctl start neutron-server.service
systemctl start neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service
Modify HAProxy so that neutron points to controller-01; on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9696)
Run puppet on the three haproxy nodes
puppet agent -t
Stop and disable the service on controller-02
systemctl stop neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service
systemctl stop neutron-server.service
systemctl disable neutron-openvswitch-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service
systemctl disable neutron-server.service
CHECK:
[root@controller-01 neutron]# openstack server list
Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
....
=========================================================================
Once controller-02 has also been updated, run the command
su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
- CINDER
Code Block language bash
su -s /bin/sh -c "cinder-manage db sync" cinder
2026-03-17 11:57:43.085 212882 INFO cinder.db.migration [-] Applying migration(s)
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 11:57:43.132 212882 INFO cinder.db.migration [-] Migration(s) applied
Start the service on controller-01
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
Modify HAProxy so that cinder points to controller-01; on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 8776)
Run puppet on the three haproxy nodes
puppet agent -t
Stop and disable it on controller-02
systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl disable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
==============================================================================
Once controller-02 has been updated, re-run the online data migrations
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
- HEAT
Code Block language bash
su -s /bin/sh -c "heat-manage db_sync" heat
2026-03-17 12:27:45.669 216268 INFO heat.db.migration [-] Applying migration(s)
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.696 216268 INFO heat.db.migration [-] Migration(s) applied
Start the service on controller-01
systemctl start openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
Modify HAProxy so that heat points to controller-01; on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8000, 8004)
Run puppet on the three haproxy nodes
puppet agent -t
then stop and disable it on controller-02
systemctl stop openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
systemctl disable openstack-heat-api.service \
openstack-heat-api-cfn.service openstack-heat-engine.service
- DASHBOARD: nothing to do
At this point all services point to controller-01.
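To spot-check that every frontend still answers after the switch, the service ports mentioned in the steps above can be probed on the haproxy nodes. A dry-run sketch that only prints one check per port (the haproxy hostname is an example; run the printed `nc` commands, or an equivalent probe, by hand):

```shell
# Print one reachability check per service port behind haproxy.
# Ports: keystone 5000, placement 8778, dashboard 443, glance 9292,
# nova 8774/8775/6080, neutron 9696, cinder 8776, heat 8000/8004.
HAPROXY=cld-haproxy-test-01
for port in 5000 8778 443 9292 8774 8775 6080 9696 8776 8000 8004; do
  echo "nc -z $HAPROXY $port"
done
```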
...
Code Block language bash title result collapse true
# The update downloads the new rpms: pay attention to these configuration files
##
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
##
mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
- Change the class in Foreman to Epoxy
Code Block language bash
From the Foreman web page, modify the controller's puppet class selecting Epoxy
In https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host,
replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"
Then on the controller run puppet agent -t
At this point all services are configured
- run puppet on the node
Code Block language shell
puppet agent -t
- enable the services by modifying service.pp so that the services start
- on cld-config modify the haproxy file to use both controllers
...
- services
Code Block language shell
# in service.pp set for all the services
ensure => running, enable => true,
# and commit to git
- on cld-config modify the haproxy file to use both controllers
Code Block language shell
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg.orig /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
- run puppet on the three haproxy nodes
- run the DB contract or online data migration steps for the services that require them
Code Block language shell
# After the update of controller-02
...
su -s /bin/sh -c "keystone-manage db_sync --contract" keystone
...
su -s /bin/sh -c "glance-manage db contract" glance
...
su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
...
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
Start all the mysql instances of the percona cluster, bringing them up in the reverse order of the shutdown
Code Block language shell
[root@cld-db-test-05 ~]# systemctl start mysql
[root@cld-db-test-06 ~]# systemctl start mysql
Compute
Drain one node at a time.
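A hedged sketch of draining one compute node, printed as a dry run for review rather than executed: the hostname is an example, and the openstack CLI options shown (`compute service set --disable`, `server list --host`, `server migrate --live-migration`) should be checked against the installed client version.

```shell
# Build and print the drain commands for one compute node.
HOST=cld-np-test-01.cloud.pd.infn.it   # example node name
DRAIN_CMDS="openstack compute service set --disable --disable-reason \"Epoxy upgrade\" $HOST nova-compute
openstack server list --all-projects --host $HOST
# then live-migrate each instance still on the node:
# openstack server migrate --live-migration <server-id>"
echo "$DRAIN_CMDS"
```

Disabling the nova-compute service keeps the scheduler from placing new instances on the node while the existing ones are migrated away and the node is updated.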
...