...
KEYSTONE
# TODO: backup database keystone

su -s /bin/sh -c "keystone-manage doctor" keystone

[root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
WARNING: `keystone.conf [cache] enabled` is not enabled. Caching greatly improves the performance of keystone, and it is highly recommended that you enable it.

su -s /bin/sh -c "keystone-manage db_sync --expand" keystone

===============================================================================================
After controller-02 has been upgraded and httpd restarted, run:
su -s /bin/sh -c "keystone-manage db_sync --contract" keystone

PLACEMENT
1) su -s /bin/sh -c "placement-manage db sync" placement

2) Start the services for keystone, placement and the dashboard:
systemctl start httpd.service memcached.service shibd.service

3) On cld-config, edit the HAproxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg

4) Run puppet on the three HAproxy nodes:
ssh root@cld-haproxy-test-01 / 02 / 03
puppet agent -t

5) Stop and disable the services on controller-02:
systemctl stop httpd.service memcached.service shibd.service
systemctl disable httpd.service memcached.service shibd.service

Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in).
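The HAproxy edit in step 3 amounts to commenting out the controller-02 server line in each affected backend of haproxy_el9.cfg. A minimal sketch of one backend (the backend name, server names and options here are illustrative assumptions, not copied from the real file):

```
# keystone public API backend -- hypothetical names and options
backend keystone_public
    balance source
    server controller-01 controller-01.cloud.pd.infn.it:5000 check
    # controller-02 commented out while it is being upgraded:
    # server controller-02 controller-02.cloud.pd.infn.it:5000 check
```

The same pattern repeats for each port listed in the step (5000, 5001, 443, 8778, 11211).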
GLANCE
WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?). A zero-downtime procedure exists but is considered not production ready, or the documentation is not up to date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html). For glance it is probably better not to take risks and accept downtime during the update.

1) Stop the glance service on controller-02:
systemctl stop openstack-glance-api.service
systemctl disable openstack-glance-api.service

On controller-01 (already configured for Epoxy because puppet has run):

2) su -s /bin/sh -c "glance-manage db expand" glance

[root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.

3) su -s /bin/sh -c "glance-manage db migrate" glance

[root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
Database is up to date. No migrations needed.
[root@controller-01 StartServices]#

4) systemctl start openstack-glance-api.service

Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>

5) On cld-config, edit the HAproxy file so that glance points to controller-01:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9292)

6) Run puppet on the three HAproxy nodes:
puppet agent -t

=============================================================
When controller-02 has also been upgraded, run:
su -s /bin/sh -c "glance-manage db contract" glance
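The expand → migrate → contract ordering above is the part that is easy to get wrong. A dry-run sketch that only prints the three commands in the required sequence (nothing touches the database):

```shell
# Print the glance DB upgrade phases in the order used in this runbook.
# Pure echo: no command is actually executed.
glance_db_phases() {
    for phase in expand migrate contract; do
        echo "su -s /bin/sh -c \"glance-manage db ${phase}\" glance"
    done
}

glance_db_phases
```

Keep in mind that the contract phase must wait until controller-02 is also upgraded, as noted above.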
- ---→>>>> HERE
NOVA
su -s /bin/sh -c "nova-status upgrade check" nova

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

In nova-manage.log:
2026-03-17 10:59:31.218 208205 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 10:59:31.219 208205 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.

su -s /bin/sh -c "nova-manage db sync" nova

In nova-manage.log:
2026-03-17 11:00:31.148 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:32.539 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
2026-03-17 11:00:32.746 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Context impl MySQLImpl.
2026-03-17 11:00:32.747 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Will assume non-transactional DDL.
2026-03-17 11:00:32.755 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:33.176 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens

Start the services on controller-01:
systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

On cld-config, edit the HAproxy file so that nova points to controller-01:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8774, 8775, 6080)

Run puppet on the three HAproxy nodes:
puppet agent -t

Stop and disable the services on controller-02:
systemctl stop \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl disable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

==============================================================================
When controller-02 and all the compute nodes have also been upgraded, run again:
su -s /bin/sh -c "nova-manage db online_data_migrations" nova
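Since the same four nova units are started on controller-01 and later stopped and disabled on controller-02, a small helper that builds the systemctl line for a given action can avoid copy-paste slips. This is a sketch: it only prints the command, it does not run it.

```shell
# Build (but do not run) the systemctl command for the nova control-plane units.
# Usage: nova_units_cmd start|stop|disable
nova_units_cmd() {
    echo "systemctl $1" \
        "openstack-nova-api.service" \
        "openstack-nova-scheduler.service" \
        "openstack-nova-conductor.service" \
        "openstack-nova-novncproxy.service"
}

nova_units_cmd stop
```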
NEUTRON
su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron

Start the services:
systemctl start neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-l3-agent.service

On cld-config, edit the HAproxy file so that neutron points to controller-01:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9696)

Run puppet on the three HAproxy nodes:
puppet agent -t

Stop the services on controller-02:
systemctl stop neutron-server.service \
  neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl stop neutron-l3-agent.service

=========================================================================
When controller-02 has also been upgraded, run:
su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
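The "(check port …)" reminders in these sections can be automated with a quick TCP probe. A sketch using bash's /dev/tcp pseudo-device (the host name in the example is illustrative):

```shell
# Return success if a TCP connection to host $1, port $2 opens within 2 seconds.
check_port() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example (hypothetical): after the HAproxy switch, confirm neutron answers
# check_port cld-haproxy-test-01 9696 && echo "9696 reachable"
```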
CINDER
su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services on controller-01:
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

On cld-config, edit the HAproxy file so that cinder points to controller-01:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 8776)

Run puppet on the three HAproxy nodes:
puppet agent -t

Stop the services on controller-02:
systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

==============================================================================
When controller-02 has been upgraded, run the online data migrations again:
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
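online_data_migrations is commonly run in batches until it reports completion (by the usual OpenStack convention a non-zero exit means rows remain; treat that as an assumption and verify it against the installed release). A generic bounded retry helper:

```shell
# Re-run a command until it exits 0, up to $1 attempts; fail if it never does.
run_until_done() {
    max=$1; shift
    i=0
    while [ "$i" -lt "$max" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

# Intended use (not executed here; the --max_count flag name is an assumption):
# run_until_done 20 su -s /bin/sh -c "cinder-manage db online_data_migrations --max_count 1000" cinder
```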
HEAT
su -s /bin/sh -c "heat-manage db_sync --command expand" heat
su -s /bin/sh -c "heat-manage db_sync --command migrate_data" heat

Start the services on controller-01:
systemctl start openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service

On cld-config, edit the HAproxy file so that heat points to controller-01:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8000, 8004)

Run puppet on the three HAproxy nodes:
puppet agent -t

Then stop the services on controller-02:
systemctl stop openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service

===================================================================
When controller-02 has also been upgraded, run:
su -s /bin/sh -c "heat-manage db_sync --command contract" heat
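As a recap of the "(check ports …)" notes scattered through the sections above, a lookup helper collecting the ports per service group (the port numbers are taken from this runbook; the keystone/placement/dashboard grouping mirrors step 3 of the PLACEMENT section):

```shell
# Map a service (group) to the HAproxy ports this runbook says to check.
haproxy_ports() {
    case "$1" in
        keystone-placement-dashboard) echo "5000 5001 443 8778 11211" ;;
        glance)  echo "9292" ;;
        nova)    echo "8774 8775 6080" ;;
        neutron) echo "9696" ;;
        cinder)  echo "8776" ;;
        heat)    echo "8000 8004" ;;
        *) return 1 ;;
    esac
}

haproxy_ports nova
```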
DASHBOARD: nothing to do
...