...

Code Block
languagebash
systemctl stop puppet
systemctl disable puppet

Epoxy installation on controller-01

...

Detach all routers from the L3 agent of controller-01

...

Code Block
languagebash

...

cd /root/StartServices
./complete.sh stop
./complete.sh disable

...

Code Block
languagebash
[root@cld-db-test-04 backup]# mkdir /backup/BackupCaracalPrimaDellUpdate
[root@cld-db-test-04 ~]# mysqldump -u root -p --all-databases > /backup/130326/cld-db_test_04_caracal_dump_all.sql
[root@cld-db-test-04 ~]# /usr/local/bin/mysql_dump_separate_db
# then move the resulting files from /backup/mysql to /backup/130326 (otherwise they would be deleted in /backup/mysql)

...

Code Block
languagebash
# placement must be run before nova
su -s /bin/sh -c "placement-manage db online_data_migrations" placement
su -s /bin/sh -c "nova-manage db online_data_migrations" nova
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
Find the routers:
openstack router list
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+
| ID                                   | Name        | Status | State | Project                          | Distributed | HA   |
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+
| 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 | Lan         | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
| 9e31c216-0635-4d21-b7aa-63fe4aee875e | ext-to-vos  | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
| eaa80135-6b79-44e0-b637-cef88d09b85c | CloudVeneto | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+

Find the IP address in the external_gateway_info of each router and check which controller it is attached to:

openstack router show 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099

+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                           |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                              |
| availability_zone_hints |                                                                                                                                                 |
| availability_zones      | nova                                                                                                                                            |
| created_at              | 2018-11-28T16:06:24Z                                                                                                                            |
| description             |                                                                                                                                                 |
| distributed             | False                                                                                                                                           |
| enable_ndp_proxy        | None                                                                                                                                            |
| external_gateway_info   | {"network_id": "38356cfc-d83a-40f0-8604-09ddea12aa20", "external_fixed_ips": [{"subnet_id": "ec498b88-cbda-45d3-8f9f-174d335c6670",             |
|                         | "ip_address": "172.25.27.180"}], "enable_snat": false}                                                                                          |
...

[root@controller-02 ~]# ip netns exec qrouter-92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 ip addr show | grep 172.25.27.180
    inet 172.25.27.180/24 scope global qg-fcc0f7ca-b4

The same command on controller-01 returns nothing, so this router is on controller-02.

Proceed in the same way for the other routers.

If a router must be detached from controller-01 because it is active there, the commands to run are, for example:

# for i in $(openstack router list -f value -c ID); do echo $i; openstack network agent list --agent-type l3 --sort-column Host --router $i --long; done 
92e8b080-f3aa-4d9f-b3d4-613e0dbfd099
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
9e31c216-0635-4d21-b7aa-63fe4aee875e
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
eaa80135-6b79-44e0-b637-cef88d09b85c
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+


So, for example, to move the active router eaa80135-6b79-44e0-b637-cef88d09b85c, attached to the L3 agent aa34b512-89d8-4913-aee1-9f2d2fdf124c of controller-01, over to controller-02, we would run:
openstack network agent remove router --l3 aa34b512-89d8-4913-aee1-9f2d2fdf124c eaa80135-6b79-44e0-b637-cef88d09b85c;

After a while we will find:
eaa80135-6b79-44e0-b637-cef88d09b85c
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
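The inspection and detach steps above can be scripted. This is a minimal sketch, not part of the original procedure: it assumes admin credentials are sourced, that `AGENT01` is the controller-01 L3 agent ID from the listing above, and that the `HA State` column name matches the `--long` output shown.

```shell
# Hypothetical helper: detach every router whose ACTIVE L3 agent is the
# controller-01 agent, so that each one fails over to controller-02.
AGENT01="aa34b512-89d8-4913-aee1-9f2d2fdf124c"   # controller-01 L3 agent (from the listing above)
for r in $(openstack router list -f value -c ID); do
  state=$(openstack network agent list --agent-type l3 --router "$r" --long \
            -f value -c ID -c "HA State" \
          | awk -v a="$AGENT01" '$1 == a {print $2}')
  if [ "$state" = "active" ]; then
    echo "detaching router $r from $AGENT01"
    openstack network agent remove router --l3 "$AGENT01" "$r"
  fi
done
```

Afterwards, re-run the agent list to confirm the HA states have flipped.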


Epoxy installation on controller-01

  • On controller-01, stop and disable all the OpenStack services

    • Code Block
      languagebash
      cd /root/StartServices
      ./complete.sh stop
      ./complete.sh disable
  • Back up the DB (both all at once and one database at a time)
  • Code Block
    languagebash
    [root@cld-db-test-04 backup]# mkdir /backup/BackupCaracalPrimaDellUpdate
    [root@cld-db-test-04 ~]# mysqldump -u root -p --all-databases > /backup/130326/cld-db_test_04_caracal_dump_all.sql
    [root@cld-db-test-04 ~]# /usr/local/bin/mysql_dump_separate_db
    # then move the resulting files from /backup/mysql to /backup/130326 (otherwise they would be deleted in /backup/mysql)
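For reference, a per-database dump probably looks roughly like the following sketch; the actual /usr/local/bin/mysql_dump_separate_db script is site-specific and not shown here, so the output directory and credential handling (a `~/.my.cnf`) are assumptions.

```shell
# Hypothetical sketch of a per-database dump (one .sql file per database).
# Assumes credentials are available via ~/.my.cnf; OUT is illustrative.
OUT=/backup/130326
mkdir -p "$OUT"
for db in $(mysql -N -e 'SHOW DATABASES' \
            | grep -Ev '^(information_schema|performance_schema|sys)$'); do
  mysqldump "$db" > "$OUT/$db.sql"   # one dump file per database
done
```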
  • Run the online data migrations for the placement, nova and cinder DBs. WARNING: the migration can take a long time
  • Code Block
    languagebash
    # placement must be run before nova
    su -s /bin/sh -c "placement-manage db online_data_migrations" placement
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
  • Code Block
    languageshell
    titleexample
    collapsetrue
    50 rows matched query populate_instance_compute_id, 0 migrated
    +-------------------------------------+--------------+-----------+
    |              Migration              | Total Needed | Completed |
    +-------------------------------------+--------------+-----------+
    |     fill_virtual_interface_list     |      0       |     0     |
    |         migrate_empty_ratio         |      0       |     0     |
    |   migrate_quota_classes_to_api_db   |      0       |     0     |
    |    migrate_quota_limits_to_api_db   |      0       |     0     |
    |      migration_migrate_to_uuid      |      0       |     0     |
    |          populate_dev_uuids         |      0       |     0     |
    |     populate_instance_compute_id    |      50      |     0     |
    | populate_missing_availability_zones |      0       |     0     |
    |      populate_queued_for_delete     |      0       |     0     |
    |           populate_user_id          |      50      |     0     |
    |            populate_uuids           |      0       |     0     |
    +-------------------------------------+--------------+-----------+


  • Check that openstack-client and openstack-selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch                                6.6.1-1.el9s                     @centos-openstack-caracal       
    python3-openstackclient.noarch                                    6.6.1-1.el9s                     @centos-openstack-caracal       
    
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch                                          0.8.40-1.el9s                    @centos-openstack-zed           


  • Check the kernel and ceph versions

    Code Block
    languagebash
    [root@controller-01 ~]#  yum list installed | grep kernel
    kernel.x86_64                                                     5.14.0-427.24.1.el9_4            @anaconda                       
    kernel.x86_64                                                     5.14.0-503.33.1.el9_5            @baseos                         
    kernel-core.x86_64                                                5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-core.x86_64                                                5.14.0-503.33.1.el9_5            @baseos                         
    kernel-headers.x86_64                                             5.14.0-503.33.1.el9_5            @appstream                      
    kernel-modules.x86_64                                             5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules.x86_64                                             5.14.0-503.33.1.el9_5            @baseos                         
    kernel-modules-core.x86_64                                        5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules-core.x86_64                                        5.14.0-503.33.1.el9_5            @baseos                         
    kernel-srpm-macros.noarch                                         1.0-13.el9                       @appstream                      
    kernel-tools.x86_64                                               5.14.0-503.33.1.el9_5            @baseos                         
    kernel-tools-libs.x86_64                                          5.14.0-503.33.1.el9_5            @baseos                                       
    
    [root@controller-01 ~]#  yum list installed | grep ceph 
    blosc.x86_64                                                      1.21.0-3.el9s                    @centos-ceph-pacific            
    centos-release-ceph-reef.noarch                                   1.0-1.el9                        @extras                         
    ceph-common.x86_64                                                2:18.2.4-2.el9s                  @centos-ceph-reef             
    
    [root@controller-01 ~]# uname -a
    Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
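To double-check that the machine is actually running the newest installed kernel, a quick sketch based on the version strings above (the `rpm` query format is standard; the comparison relies on GNU `sort -V`):

```shell
# Compare the running kernel against the newest installed kernel package.
running=$(uname -r)
latest=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -n 1)
if [ "$running" = "$latest" ]; then
  echo "running the latest installed kernel: $running"
else
  echo "reboot needed: running $running, latest installed $latest"
fi
```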
  • Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch


  • Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  # this package is needed and provides the epoxy repo
    
    #### as a check
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    


  • Save the configurations that are usually overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Update the packages

    Code Block
    languagebash
    dnf update
    
    ## if the update fails with various problems:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # So the following packages must be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1 
    
    set gpgcheck=0 in /etc/yum.repos.d/EGI-trustanchors.repo
    
    dnf update -y
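The three `rpm -e` commands above follow a pattern: every conflicting package is a `+variant` subpackage of an updated base package. A hedged sketch that finds and removes them generically; the name pattern is an assumption derived from the error messages, so review the matched list before removing.

```shell
# Remove the conflicting "+variant" subpackages without dependency checks,
# then retry the update. Pattern derived from the dnf errors above.
for p in $(rpm -qa | grep -E '^python3-(keystone|oslo-messaging|requests)\+'); do
  echo "removing $p"
  rpm -e --nodeps "$p"
done
dnf update -y
```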
    
    


  • Save the old configurations
  • Code Block
    languagebash
    titleresult
    collapsetrue
    # The update downloads the new rpms: pay attention to these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/httpd/conf.d/openstack-dashboard.conf.rpmnew /etc/httpd/conf.d/openstack-dashboard.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
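The cp/mv pairs above can be generalized into a single loop: back up each config with a `.$REL` suffix and promote any `.rpmnew` left by the update. A sketch with an illustrative (shortened) file list; extend it with the full list above.

```shell
# For each config file: keep a pre-update copy, then adopt the rpm's new default.
REL=caracal
for f in /etc/nova/nova.conf /etc/neutron/neutron.conf /etc/cinder/cinder.conf; do
  [ -f "$f" ] && cp "$f" "$f.$REL"              # keep the pre-update config
  [ -f "$f.rpmnew" ] && mv -f "$f.rpmnew" "$f"  # adopt the new default shipped by the rpm
done
```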
  • In puppet, edit the file  /var/puppet/puppet_epoxy_env_test/controller_epoxy/manifests/service.pp  so that it does not start the services once they are updated. To do this, set all the services to stopped
  • Code Block
    languagebash
    # in service.pp set, for every service,
     ensure      => stopped,
     enable      => false,
    # and commit to git


  • In Foreman enable the Epoxy class: from the web page change the puppet class of controller-01 by selecting Epoxy, then run puppet on the node

Code Block
languagebash
In https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ControllerNode-Test" with "hosts_all/ControllerNode_Test-Epoxy"

Then, on the controller, run
puppet agent -t 

If there are problems with the certificates (usually after restoring the clone), see the procedure in https://confluence.infn.it/x/kw5-B

At this point all the services are configured on controller-01


  • KEYSTONE

    Code Block
    languagebash
    # TODO: backup database keystone
    
    su -s /bin/sh -c "keystone-manage doctor" keystone
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
    WARNING: `keystone.conf [cache] enabled` is not enabled.
        Caching greatly improves the performance of keystone, and it is highly
        recommended that you enable it.
    
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    ===============================================================================================
    After controller-02 has been updated and httpd has been restarted, the following command must be run
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
  • PLACEMENT
  • Code Block
    languagebash
    1) su -s /bin/sh -c "placement-manage db sync" placement 
    
    2) start the services for keystone, placement and the dashboard
    systemctl start httpd.service memcached.service shibd.service
    
    3) on cld-config edit the HAProxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):
    
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    4) run puppet on the three haproxy nodes
    ssh root@cld-haproxy-test-01 / 02 / 03
    puppet agent -t
    
    5) stop and disable the services on controller-02
    systemctl stop httpd.service memcached.service shibd.service
    systemctl disable httpd.service memcached.service shibd.service
    
    
    Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in)
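The "run puppet on the three haproxy nodes" step recurs for every service below. A small loop, with the hostnames expanded from the `cld-haproxy-test-01 / 02 / 03` shorthand used in this page (full domain names are an assumption of your ssh config):

```shell
# Run puppet on the three haproxy test nodes in sequence.
for n in 01 02 03; do
  ssh "root@cld-haproxy-test-$n" puppet agent -t
done
```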
    
    
  • GLANCE

    Code Block
    languagebash
    WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?). 
    A zero-downtime procedure exists, but it is considered not production ready, or the docs are not up to date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html) 
    
    For glance it is better not to take risks and to bring the service down on both controllers, therefore:
    
    1) stop the glance service on controller-02
    systemctl stop openstack-glance-api.service
    systemctl disable openstack-glance-api.service
    
    On controller-01 (already configured for Epoxy because we have run puppet):
    
    2) su -s /bin/sh -c "glance-manage db expand" glance
    
    [root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log 
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    
    3) su -s /bin/sh -c "glance-manage db migrate" glance
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
    2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    Database is up to date. No migrations needed.
    [root@controller-01 StartServices]# 
    
    4) systemctl start openstack-glance-api.service
    
    Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>
    
    
    5) Modify the HAProxy so that glance points to controller-01
    on cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9292)
    
    6) run puppet on the three haproxy nodes
    puppet agent -t
    
    =============================================================
    When controller-02 has also been updated, run
    su -s /bin/sh -c "glance-manage db contract" glance
    
    
  • NOVA    
    Code Block
    languagebash
    su -s /bin/sh -c "nova-status upgrade check" nova
    su -s /bin/sh -c "nova-manage api_db sync" nova
    
    in nova-manage.log
    2026-03-17 10:59:31.218 208205 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-17 10:59:31.219 208205 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    
    
    su -s /bin/sh -c "nova-manage db sync" nova
    
    in nova-manage.log
    2026-03-17 11:00:31.148 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
    2026-03-17 11:00:32.539 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
    2026-03-17 11:00:32.746 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Context impl MySQLImpl.
    2026-03-17 11:00:32.747 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Will assume non-transactional DDL.
    2026-03-17 11:00:32.755 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
    2026-03-17 11:00:33.176 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
    
    
    Start the services on controller-01
    systemctl start \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
    
    Modify the HAProxy so that nova points to controller-01
    on cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check ports 8774, 8775, 6080)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop and disable the services on controller-02
    
     systemctl stop \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
     
      systemctl disable \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service 
    
    ==============================================================================
    When controller-02 and all the compute nodes have also been updated, run again
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
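The final `online_data_migrations` pass may need several iterations before every row is migrated. A sketch of a retry loop, assuming the exit codes documented for `nova-manage db online_data_migrations` (0 = nothing left, 1 = a `--max-count` batch completed and more rows remain, anything else = error); verify the semantics for your release before relying on it.

```shell
# Repeat online_data_migrations in batches until nothing is left to migrate.
while true; do
  su -s /bin/sh -c "nova-manage db online_data_migrations --max-count 1000" nova
  rc=$?
  [ "$rc" -eq 0 ] && break       # all migrations complete
  [ "$rc" -eq 1 ] && continue    # batch done, more rows remain: run again
  echo "online_data_migrations failed (rc=$rc)" >&2
  break
done
```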
  • NEUTRON

    Code Block
    languagebash
    su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
    
    the console shows:
    INFO  [alembic.runtime.migration] Context impl MySQLImpl.
    INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
      Running upgrade (expand) for neutron ...
    INFO  [alembic.runtime.migration] Context impl MySQLImpl.
    INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
    INFO  [alembic.runtime.migration] Running upgrade 0e6eff810791 -> 175fa80908e1
    INFO  [alembic.runtime.migration] Running upgrade 175fa80908e1 -> 5bcb7b31ec7d
    INFO  [alembic.runtime.migration] Running upgrade 5bcb7b31ec7d -> ad80a9f07c5c
      OK
    
    
    Start the services
    
    systemctl start neutron-server.service 
    systemctl start neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    
    
    N.B. the log shows:  2026-03-31 11:16:14.500 882189 WARNING oslo_config.cfg [-] Deprecated: Option "api_paste_config" 
    
    
    # Detach the routers from controller-02
    
    for i in $(openstack router list -f value -c ID); do echo $i; openstack network agent list --agent-type l3 --sort-column Host --router $i --long; done 
    openstack network agent remove router --l3 aa34b512-89d8-4913-aee1-9f2d2fdf124c eaa80135-6b79-44e0-b637-cef88d09b85c;
    
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 9e31c216-0635-4d21-b7aa-63fe4aee875e
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 eaa80135-6b79-44e0-b637-cef88d09b85c
    
    # verify that each IP is now attached to controller-01
    ip netns exec qrouter-92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 ip a | grep 172.25.27.180
    ip netns exec qrouter-9e31c216-0635-4d21-b7aa-63fe4aee875e ip a | grep 90.147.77.210
    ip netns exec qrouter-eaa80135-6b79-44e0-b637-cef88d09b85c ip a | grep 90.147.143.145
    
    
    Modify the HAProxy so that neutron points to controller-01
    on cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9696)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop and disable the services on controller-02
    systemctl stop neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    systemctl stop neutron-server.service
    
    systemctl disable neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    systemctl disable neutron-server.service
     
    
CHECK:
    [root@controller-01 neutron]# openstack server list
    Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
    Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
    ....
    
    
    
    =========================================================================
Once controller-02 has also been upgraded, run the command
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
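Several steps in this runbook start a service and immediately rely on it. A small polling helper (not part of the original procedure) can make the "wait until the unit is really active" step explicit; `wait_for` and the `systemctl is-active` example in the comment are assumptions, and the `true`/`false` demo commands are stand-ins:

```shell
# Hedged helper sketch: retry a command until it succeeds or attempts run out.
# On a controller one might use: wait_for 30 systemctl is-active --quiet neutron-server
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0   # command succeeded
    sleep 1            # wait before retrying
  done
  return 1             # gave up
}

wait_for 3 true && echo "service is up"
wait_for 2 false || echo "gave up waiting"
```

This keeps the subsequent HAproxy switchover from racing a service that is still starting.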


  • CINDER
Code Block
languagebash
su -s /bin/sh -c "cinder-manage db sync" cinder

2026-03-17 11:57:43.085 212882 INFO cinder.db.migration [-] Applying migration(s)
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 11:57:43.132 212882 INFO cinder.db.migration [-] Migration(s) applied

Start the services on controller-01
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Modify HAproxy so that cinder points to controller-01
on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 8776)

Run puppet on the three haproxy nodes
puppet agent -t

Stop and disable it on controller-02
systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl disable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

==============================================================================
Once controller-02 has been upgraded, rerun the online_data_migrations
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder


  • HEAT
Code Block
languagebash
su -s /bin/sh -c "heat-manage db_sync" heat

2026-03-17 12:27:45.669 216268 INFO heat.db.migration [-] Applying migration(s)
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.696 216268 INFO heat.db.migration [-] Migration(s) applied


Start the services on controller-01
systemctl start openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service

Modify HAproxy so that heat points to controller-01
on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8000, 8004)

Run puppet on the three haproxy nodes
puppet agent -t
then stop and disable heat on controller-02

 systemctl stop openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service
 
systemctl disable openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service


  • DASHBOARD: nothing to do

At this point all services point to controller-01.

Installing Epoxy on controller-02

  • remove Caracal
  • Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch
  • install Epoxy
  • Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  # this rpm is required and provides the epoxy repo
    
    #### sanity check
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    
  • Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Code Block
    languagebash
    dnf update
    
    ## if the update fails with errors like these:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # In that case the following packages must be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1 
    
    dnf update -y
    
    If the update still gives problems,
    
    edit
    /etc/yum.repos.d/EGI-trustanchors.repo
    and set gpgcheck to 0 (i.e. disable the GPG check)
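As a concrete illustration of that edit, the snippet below toggles the flag on a scratch copy so it is safe to run anywhere; the section name is illustrative and the real target is /etc/yum.repos.d/EGI-trustanchors.repo:

```shell
# Demonstrate the gpgcheck toggle on a throwaway repo file.
repo=$(mktemp)
printf '[EGI-trustanchors]\nenabled=1\ngpgcheck=1\n' > "$repo"
sed -i 's/^gpgcheck=1$/gpgcheck=0/' "$repo"   # disable the GPG check
grep '^gpgcheck' "$repo"
rm -f "$repo"
```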
    
    
  • Code Block
    languagebash
    titleresult
    collapsetrue
    # The update pulls in new rpms: pay attention to these configuration files
    
    ## cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    ## mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
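The save-then-promote pattern above can also be written as a loop. The sketch below runs on a scratch directory so it is safe to try anywhere; `nova.conf` is only a stand-in for any of the files listed:

```shell
# Generic sketch of the .rpmnew handling: keep the locally modified file
# with the old-release suffix, then promote the packaged default.
REL=caracal
demo=$(mktemp -d)
printf 'local tuning\n' > "$demo/nova.conf"
printf 'packaged default\n' > "$demo/nova.conf.rpmnew"

for new in "$demo"/*.rpmnew; do
  conf=${new%.rpmnew}
  cp "$conf" "$conf.$REL"   # preserve the old config as *.caracal
  mv -f "$new" "$conf"      # promote the new packaged default
done

cat "$demo/nova.conf"       # now contains the packaged default
rm -rf "$demo"
```

On the controllers the same two commands are applied per file, as listed above; a loop only helps when many `.rpmnew` files appear at once.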
  • Switch the class in Foreman to Epoxy
  • Code Block
    languagebash
    From the Foreman web page, change the controller's puppet class selecting Epoxy:
    in https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ControllerNode-Test" with "hosts_all/ControllerNode_Test-Epoxy"
    
    Then run on the controller
    puppet agent -t 
    
    At this point all services are configured
  • run puppet on the node 
    Code Block
    languageshell
    puppet agent -t


  • activate the services by editing service.pp so that the services start 
    Code Block
    languageshell
    # in service.pp set, for every service,
       ensure      => running,
       enable      => true,
    # and commit to git
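For reference, a service.pp entry then looks roughly like the following Puppet resource (the `neutron-server` title is an example, not taken from the original file; only the ensure/enable values change between the "stopped" phase before the upgrade and this "running" phase):

```puppet
# Hypothetical service.pp entry; the service title is an example.
service { 'neutron-server':
  ensure => running,
  enable => true,
}
```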
  • re-enable puppet on the node
    Code Block
    languageshell
    systemctl start puppet
    systemctl enable puppet
  • on cld-config, modify the haproxy file so that both controllers are used 
    Code Block
    languageshell
    cp  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg.orig  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
  • run puppet on the three haproxy nodes
  • run the DB contract or online migrations for the services that require them 
    Code Block
    languageshell
    # After controller-02 has been upgraded
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
    
    su -s /bin/sh -c "glance-manage db contract" glance
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
    
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
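If the four commands above are wrapped in a script, running them under `set -e` makes sure a failed contract or migration stops the sequence instead of being skipped silently. The `echo`/`false` commands below are stand-ins for the real `*-manage` calls:

```shell
# Stand-in demonstration of running the post-upgrade DB steps under set -e.
(
  set -e
  echo "keystone contract ok"   # stand-in for a successful step
  false                         # a failing step aborts the subshell here
  echo "never reached"          # skipped: set -e already exited
)
echo "sequence exit status: $?"
```

The subshell keeps `set -e` from affecting the calling shell, while `$?` reports whether the whole sequence completed.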
    
-->> HERE 31/03 <--

  • verify the creation of new VMs. If contextualization does not work, failing with a connection error towards the metadata server, check whether the agent shows up in the network agent list and when its heartbeat was last executed.
    • if the date is old, remove the agent from both controllers and reboot
      Code Block
      languageshell
      [root@controller-02 nova]# openstack network agent list
      Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
      Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | ID                                   | Agent Type         | Host                             | Availability Zone | Alive | State | Binary                    |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | 03b6f400-d961-42cd-9df9-89e87dd58ca9 | Open vSwitch agent | controller-02.cloud.pd.infn.it   | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 10f518b3-d9a6-4adf-a482-20723682b5f5 | Metadata agent     | controller-02.cloud.pd.infn.it   | None              | XXX   | UP    | neutron-metadata-agent    |
      | 3241aa58-f697-478c-bacc-4e10d7cc43e7 | Open vSwitch agent | controller-01.cloud.pd.infn.it   | None              | XXX   | UP    | neutron-openvswitch-agent |
      | 7b34d1ad-99a7-4ca8-a1e6-82a90737a635 | Open vSwitch agent | t2-cld-nat-test.cloud.pd.infn.it | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 7c026284-8b62-420d-9163-464c3b28bf24 | Open vSwitch agent | compute-01.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 940d868e-8605-42e5-a731-b07e2a2a311e | DHCP agent         | controller-01.cloud.pd.infn.it   | nova              | XXX   | UP    | neutron-dhcp-agent        |
      | aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent           | controller-01.cloud.pd.infn.it   | nova              | XXX   | UP    | neutron-l3-agent          |
      | b60f9a09-06ad-4562-b1c9-72ef265200a6 | DHCP agent         | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-dhcp-agent        |
      | b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent           | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-l3-agent          |
      | be79d4c8-f24d-47f9-876b-09ed34614dc2 | Open vSwitch agent | compute-03.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | df3074d3-0add-4f78-a5f4-fde900e764f2 | Open vSwitch agent | compute-02.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | fd8b02e9-ca5f-43d4-b1fc-31163ba2b7b3 | Open vSwitch agent | compute-04.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      
      [root@controller-02 nova]# openstack network agent show 10f518b3-d9a6-4adf-a482-20723682b5f5
      Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
      Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
      +-------------------+----------------+
      | Field             | Value          |
      +-------------------+----------------+
      | admin_state_up    | UP             |
      | agent_type        | Metadata agent |
      | alive             | XXX            |
      | availability_zone | None           |
      +-------------------+----------------+

    Output of nova-manage db online_data_migrations:

    Code Block
    languagebash
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
    +-------------------------------------+--------------+-----------+
    | Migration                           | Total Needed | Completed |
    +-------------------------------------+--------------+-----------+
    | fill_virtual_interface_list         |      0       |     0     |
    | migrate_empty_ratio                 |      0       |     0     |
    | migrate_quota_classes_to_api_db     |      0       |     0     |
    | migrate_quota_limits_to_api_db      |      0       |     0     |
    | migration_migrate_to_uuid           |      0       |     0     |
    | populate_dev_uuids                  |      0       |     0     |
    | populate_instance_compute_id        |      50      |     0     |
    | populate_missing_availability_zones |      0       |     0     |
    | populate_queued_for_delete          |      0       |     0     |
    | populate_user_id                    |      50      |     0     |
    | populate_uuids                      |      0       |     0     |
    +-------------------------------------+--------------+-----------+

    Check whether openstack-client and openstack-selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch                                6.6.1-1.el9s                     @centos-openstack-caracal       
    python3-openstackclient.noarch                                    6.6.1-1.el9s                     @centos-openstack-caracal       
    
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch                                          0.8.40-1.el9s                    @centos-openstack-zed           

    Check the kernel and ceph versions

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep kernel
    kernel.x86_64                 5.14.0-427.24.1.el9_4   @anaconda
    kernel.x86_64                 5.14.0-503.33.1.el9_5   @baseos
    kernel-core.x86_64            5.14.0-427.24.1.el9_4   @anaconda
    kernel-core.x86_64            5.14.0-503.33.1.el9_5   @baseos
    kernel-headers.x86_64         5.14.0-503.33.1.el9_5   @appstream
    kernel-modules.x86_64         5.14.0-427.24.1.el9_4   @anaconda
    kernel-modules.x86_64         5.14.0-503.33.1.el9_5   @baseos
    kernel-modules-core.x86_64    5.14.0-427.24.1.el9_4   @anaconda
    kernel-modules-core.x86_64    5.14.0-503.33.1.el9_5   @baseos
    kernel-srpm-macros.noarch     1.0-13.el9              @appstream
    kernel-tools.x86_64           5.14.0-503.33.1.el9_5   @baseos
    kernel-tools-libs.x86_64      5.14.0-503.33.1.el9_5   @baseos
    
    [root@controller-01 ~]# yum list installed | grep ceph
    blosc.x86_64                      1.21.0-3.el9s     @centos-ceph-pacific
    centos-release-ceph-reef.noarch   1.0-1.el9         @extras
    ceph-common.x86_64                2:18.2.4-2.el9s   @centos-ceph-reef
    
    [root@controller-01 ~]# uname -a
    Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux

    Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch

    Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  # this rpm is required and provides the epoxy repo
    
    #### sanity check: "rpm -qil rdo-release" must report Version: epoxy, Release: 1.el9s
    #### and list /etc/yum.repos.d/rdo-release.repo (full output is shown in the controller-02 section)
  • Save the configurations that usually get overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Update the packages

    Code Block
    languagebash
    dnf update
    
    ## if the update fails with the python3-keystone+memcache,
    ## python3-oslo-messaging+amqp1 and python3-requests+use_chardet_on_py3
    ## conflicts (the same dnf errors shown in the controller-02 section),
    ## remove those packages first and rerun the update:
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1
    
    dnf update -y
  • Save the old configurations
  • Code Block
    languagebash
    titleresult
    collapsetrue
    # The update pulls in new rpms: pay attention to these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
  • In puppet, edit the file  /var/puppet/puppet_epoxy_env_test/controller_epoxy/manifests/service.pp  so that it does not start the services once they are updated. For this, set all services to stopped
  • Code Block
    languagebash
    # in service.pp set, for every service,
     ensure      => stopped,
     enable      => false,
    # and commit to git
  • In Foreman enable the Epoxy class: from the web page change the puppet class of controller-01 selecting Epoxy, then run puppet on the node
Code Block
languagebash
In https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"

Then run on the controller
puppet agent -t 

At this point all services are configured on controller-01

...

KEYSTONE

Code Block
languagebash
# TODO: back up the keystone database

su -s /bin/sh -c "keystone-manage doctor" keystone

[root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
WARNING: `keystone.conf [cache] enabled` is not enabled.
    Caching greatly improves the performance of keystone, and it is highly
    recommended that you enable it.

su -s /bin/sh -c "keystone-manage db_sync --expand" keystone

===============================================================================================
After controller-02 has been upgraded and httpd restarted, run the command
su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    

...

Code Block
languagebash
1) su -s /bin/sh -c "placement-manage db sync" placement 

2) start the services for keystone, placement and the dashboard
systemctl start httpd.service memcached.service shibd.service

3) on cld-config, modify the HAproxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):

cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg

4) run puppet on the three haproxy nodes 
ssh root@cld-haproxy-test-01 / 02/ 03
puppet agent -t

5) stop and disable the services on controller-02
systemctl stop httpd.service memcached.service shibd.service
systemctl disable httpd.service memcached.service shibd.service


Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in)

...

GLANCE

Code Block
languagebash
WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?).
A zero-downtime procedure exists, but it is considered not production ready, or the documentation is out of date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html)
For glance it is probably better not to take risks and accept a downtime during the update

1) stop the glance service on controller-02
systemctl stop openstack-glance-api.service
systemctl disable openstack-glance-api.service

On controller-01 (already configured for Epoxy because puppet has run):

2) su -s /bin/sh -c "glance-manage db expand" glance

[root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log 
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.

3) su -s /bin/sh -c "glance-manage db migrate" glance

[root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
Database is up to date. No migrations needed.
[root@controller-01 StartServices]# 

4) systemctl start openstack-glance-api.service

Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>


5) Modify HAproxy so that glance points to controller-01
on cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9292)

6) Run puppet on the three haproxy nodes
puppet agent -t
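The "run puppet on the three haproxy nodes" step recurs after every HAProxy config change in this page. A small helper loop (a sketch, assuming the cld-haproxy-test-01/02/03 hostnames used earlier on this page) avoids typing the three ssh commands by hand:

```shell
# Run "puppet agent -t" on the three HAProxy nodes. Pass "echo" as the first
# argument for a dry run that only prints the ssh commands instead of
# executing them; pass nothing to actually run them.
run_on_haproxies() {
  local prefix=$1
  local n
  for n in 01 02 03; do
    $prefix ssh "root@cld-haproxy-test-$n" puppet agent -t
  done
}

# Dry run: show what would be executed
run_on_haproxies echo
```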

=============================================================
When controller-02 has also been updated, run
su -s /bin/sh -c "glance-manage db contract" glance

...

Code Block
languagebash
su -s /bin/sh -c "nova-status upgrade check" nova
su -s /bin/sh -c "nova-manage api_db sync" nova

In nova-manage.log:
2026-03-17 10:59:31.218 208205 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 10:59:31.219 208205 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.


su -s /bin/sh -c "nova-manage db sync" nova

In nova-manage.log:
2026-03-17 11:00:31.148 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:32.539 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
2026-03-17 11:00:32.746 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Context impl MySQLImpl.
2026-03-17 11:00:32.747 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Will assume non-transactional DDL.
2026-03-17 11:00:32.755 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
2026-03-17 11:00:33.176 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens


Start the services on controller-01
systemctl start \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service

Modify HAProxy so that nova points to controller-01.
On cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8774, 8775, 6080)

Run puppet on the three haproxy nodes
puppet agent -t

Stop and disable the services on controller-02

 systemctl stop \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
 
  systemctl disable \
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service 
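The stop-then-disable pair is repeated for every service group in this runbook; a tiny helper (a sketch, not part of the original procedure) keeps the two systemctl calls together:

```shell
# Stop and then disable every systemd unit passed as an argument.
stop_and_disable() {
  systemctl stop "$@"
  systemctl disable "$@"
}

# Example (hypothetical invocation for the nova units above):
# stop_and_disable openstack-nova-api.service openstack-nova-scheduler.service \
#   openstack-nova-conductor.service openstack-nova-novncproxy.service
```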

==============================================================================
When controller-02 and all the compute nodes have also been updated, run again
su -s /bin/sh -c "nova-manage db online_data_migrations" nova

NEUTRON

Code Block
languagebash
su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron

INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade (expand) for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 0e6eff810791 -> 175fa80908e1
INFO  [alembic.runtime.migration] Running upgrade 175fa80908e1 -> 5bcb7b31ec7d
INFO  [alembic.runtime.migration] Running upgrade 5bcb7b31ec7d -> ad80a9f07c5c
  OK


Start the services

systemctl start neutron-server.service 
systemctl start neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service

Modify HAProxy so that neutron points to controller-01.
On cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 9696)

Run puppet on the three haproxy nodes
puppet agent -t

Stop and disable the services on controller-02
systemctl stop neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
systemctl stop neutron-server.service

systemctl disable neutron-openvswitch-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
systemctl disable neutron-server.service
 

CHECK (the zaqarclient warnings below also appear):
[root@controller-01 neutron]# openstack server list
Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
....



=========================================================================
When controller-02 has also been updated, run the command

su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron

...

      | binary            | neutron-metadata-agent                                                                                                             |
      | configuration     | {'log_agent_heartbeats': False, 'metadata_proxy_socket': '/var/lib/neutron/metadata_proxy', 'nova_metadata_host': '192.168.60.24', |
      |                   | 'nova_metadata_port': 8775}                                                                                                        |
      | created_at        | 2018-11-06 09:30:53                                                                                                                |
      | description       | None                                                                                                                               |
      | ha_state          | None                                                                                                                               |
      | host              | controller-02.cloud.pd.infn.it                                                                                                     |
      | id                | 10f518b3-d9a6-4adf-a482-20723682b5f5                                                                                               |
      | last_heartbeat_at | 2026-03-17 10:41:41                                                                                                                |
      | resources_synced  | None                                                                                                                               |
      | started_at        | 2026-03-09 10:56:21                                                                                                                |
      | topic             | N/A                                                                                                                                |
      +-------------------+------------------------------------------------------------------------------------------------------------------------------------+
      [root@controller-02 nova]#
      
      
    • After removing the metadata agents and rebooting, the situation is the following:
    • Code Block
      languageshell
      [root@controller-01 ~]# openstack network agent list
      Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
      Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | ID                                   | Agent Type         | Host                             | Availability Zone | Alive | State | Binary                    |
CINDER

Code Block
languagebash
su -s /bin/sh -c "cinder-manage db sync" cinder

2026-03-17 11:57:43.085 212882 INFO cinder.db.migration [-] Applying migration(s)
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 11:57:43.132 212882 INFO cinder.db.migration [-] Migration(s) applied

Start the services on controller-01
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Modify HAProxy so that cinder points to controller-01.
On cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 8776)

Run puppet on the three haproxy nodes
puppet agent -t

Stop and disable the services on controller-02
systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl disable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

==============================================================================
When controller-02 has been updated, rerun the online data migrations
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
HEAT
Code Block
languagebash
su -s /bin/sh -c "heat-manage db_sync" heat

2026-03-17 12:27:45.669 216268 INFO heat.db.migration [-] Applying migration(s)
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.696 216268 INFO heat.db.migration [-] Migration(s) applied


Start the services on controller-01
systemctl start openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service

Modify HAProxy so that heat points to controller-01.
On cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8000, 8004)

Run puppet on the three haproxy nodes
puppet agent -t
then stop and disable the services on controller-02

 systemctl stop openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service
 
systemctl disable openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service
DASHBOARD: nothing to do

At this point all the services point to controller-01.

Installazione Epoxy nel controller-02

  • remove Caracal
  • Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch
  • install Epoxy
  • Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  # this is the package needed and it contains the epoxy repo
    
    #### as a check:
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    
  • Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
    Code Block
    languagebash
    dnf update

    # If the update fails with various problems:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                               489 kB/s | 415 kB     00:00
    OpenStack Epoxy Repository                         2.5 MB/s | 1.7 MB     00:00
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error:
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

    # So the following packages must be removed:
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1

    dnf update -y
If dnf update still gives problems, edit /etc/yum.repos.d/EGI-trustanchors.repo and set gpgcheck to 0 (i.e. disable the check)
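Disabling the check corresponds to gpgcheck=0 in the repo file; a one-line sed wrapped in a helper (a sketch; the repo path is the one mentioned above) can apply the edit:

```shell
# Set gpgcheck=0 (disable GPG signature checking) in a yum repo file,
# e.g. /etc/yum.repos.d/EGI-trustanchors.repo
disable_gpgcheck() {
  sed -i 's/^gpgcheck=.*/gpgcheck=0/' "$1"
}
```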
  • Code Block
    languagebash
    titlerisultato
    collapsetrue
    # During the update the new rpms are downloaded: pay attention to these configuration files
    
    ## cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    ## mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
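The cp/mv pairs above all follow the same pattern: back up the current file with the $REL suffix, then promote the .rpmnew. A helper function, sketched here and not part of the original runbook, makes the pattern explicit and skips files that have no .rpmnew:

```shell
# Back up a config file with a release suffix and promote its .rpmnew, if any.
# Usage: promote_rpmnew /etc/nova/nova.conf caracal
promote_rpmnew() {
  local f=$1 rel=$2
  [ -f "$f.rpmnew" ] || return 0   # nothing to promote for this file
  cp "$f" "$f.$rel"                # keep the old (pre-update) version aside
  mv -f "$f.rpmnew" "$f"           # install the new default config
}
```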
  • Change the class in Foreman to Epoxy
  • Code Block
    languagebash
    From the Foreman web page, modify the puppet class of the controller selecting Epoxy:
    in https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"
    
    Then run on the controller
    puppet agent -t 
    
    At this point all the services are configured
  • run puppet on the node 
    Code Block
    languageshell
    puppet agent -t
  • enable the services by modifying service.pp so that the services start 
    Code Block
    languageshell
    # in service.pp change all the services to
       ensure      => running,
       enable      => true,
    # and commit to git
  • modify on cld-config the haproxy file so that both controllers are used 
    Code Block
    languageshell
    cp  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg.orig  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
  • run puppet on the three haproxy nodes
  • run the db contract or online migration for the services that require it 
    Code Block
    languageshell
    # After the update of controller-02
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
    
    su -s /bin/sh -c "glance-manage db contract" glance
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
    
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
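The four post-upgrade commands above can be wrapped in a loop that stops at the first failure, so the log shows exactly which step broke. This is a sketch under the assumption that each command runs as the service user whose name is the prefix before the first '-' (keystone, glance, neutron, cinder):

```shell
# Run the contract / online-migration steps in order; abort on first failure.
run_contracts() {
  local c svc
  for c in \
    'keystone-manage db_sync --contract' \
    'glance-manage db contract' \
    'neutron-db-manage upgrade --contract' \
    'cinder-manage db online_data_migrations'; do
    svc=${c%%-*}   # derive the service user: keystone, glance, neutron, cinder
    su -s /bin/sh -c "$c" "$svc" || { echo "FAILED: $c" >&2; return 1; }
  done
}
```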

...

      | ID                                   | Agent Type         | Host                             | Availability Zone | Alive | State | Binary                    |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | 03b6f400-d961-42cd-9df9-89e87dd58ca9 | Open vSwitch agent | controller-02.cloud.pd.infn.it   | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 3241aa58-f697-478c-bacc-4e10d7cc43e7 | Open vSwitch agent | controller-01.cloud.pd.infn.it   | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 7b34d1ad-99a7-4ca8-a1e6-82a90737a635 | Open vSwitch agent | t2-cld-nat-test.cloud.pd.infn.it | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 7c026284-8b62-420d-9163-464c3b28bf24 | Open vSwitch agent | compute-01.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 940d868e-8605-42e5-a731-b07e2a2a311e | DHCP agent         | controller-01.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-dhcp-agent        |
      | aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent           | controller-01.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-l3-agent          |
      | b60f9a09-06ad-4562-b1c9-72ef265200a6 | DHCP agent         | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-dhcp-agent        |
      | b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent           | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-l3-agent          |
      | be79d4c8-f24d-47f9-876b-09ed34614dc2 | Open vSwitch agent | compute-03.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | df3074d3-0add-4f78-a5f4-fde900e764f2 | Open vSwitch agent | compute-02.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | fd8b02e9-ca5f-43d4-b1fc-31163ba2b7b3 | Open vSwitch agent | compute-04.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      
      


Start all the mysql DBs of the percona cluster, bringing them up in the reverse order of shutdown

...

Put one node at a time in drain.

openstack compute service set --disable compute-01.cloud.pd.infn.it nova-compute

openstack compute service list


For the single node in drain, migrate the VMs with live migration when possible (otherwise the VM is shut down and migrated)
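To review the migrations before launching them, the server IDs on the drained node can first be turned into a list of commands (a sketch; it assumes `openstack server list --host <node> -f value -c ID` produces one ID per line, and uses the `openstack server migrate --live-migration` form):

```shell
# Read server IDs (one per line) on stdin and print the corresponding
# live-migration commands, so they can be inspected before being executed.
gen_live_migrations() {
  local id
  while read -r id; do
    [ -n "$id" ] && echo "openstack server migrate --live-migration $id"
  done
}

# Usage sketch (hypothetical):
#   openstack server list --host compute-01.cloud.pd.infn.it -f value -c ID \
#     | gen_live_migrations
```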

In Foreman, change the class to Epoxy

...