
From Caracal to Epoxy

Outline of the procedure

Broadly speaking, the procedure will be:

  • On the second controller the services are running at version C (Caracal)
  • Two of the three Percona DB nodes are shut down
  • HAProxy is configured to point to the second controller
  • All the services on the first controller are stopped
  • The packages are updated to E (Epoxy) and the configuration files are updated on the first controller (without starting the services)
        On the first controller: keystone-manage db_sync --expand (the --expand in practice makes the DB usable by both version C and version E)
        On the first controller keystone is started
        Keystone is stopped on the second controller
        On the first controller: keystone-manage db_sync --contract
        The same for all the other services
        At the end the second controller is updated and the services are started there as well
  • One service at a time is configured and started on the first controller, and HAProxy is configured to point to the first controller for that service
  • All the services on the second controller are stopped and the second controller is updated to Epoxy, starting the services
  • HAProxy is changed so that it points to both controllers
  • The compute nodes are updated one at a time
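
As a reference, the expand / switch / contract pattern that each service goes through can be sketched as follows. This is only a minimal sketch using keystone as the example; the exact commands for every service are in the sections below, and the hostnames are simply the ones used in this page.

Code Block
languagebash
# Sketch of the rolling-upgrade pattern, assuming the packages on controller-01
# are already at Epoxy and its services are still stopped.

# 1) On controller-01: make the schema usable by both Caracal and Epoxy
su -s /bin/sh -c "keystone-manage db_sync --expand" keystone

# 2) Start the service on controller-01 (keystone runs under httpd) and point
#    HAProxy at controller-01 for this service
systemctl start httpd

# 3) Stop the Caracal-side service on controller-02
ssh controller-02 systemctl stop httpd

# 4) Once nothing at version Caracal writes to the DB any more, contract the schema
su -s /bin/sh -c "keystone-manage db_sync --contract" keystone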

Preliminary checks and release installation on controller-01

Actions to take before starting the installation of the release

  • Stop mysql on two of the three Percona nodes, keeping note of the shutdown order (a verification sketch is given right after this list)

    Code Block
    languageshell
    [root@cld-db-test-06 ~]# systemctl stop mysql
    [root@cld-db-test-05 ~]# systemctl stop mysql

  • Check that all OpenStack services are stopped (MS no)
  • Run the online data migration of the placement, nova (and cinder) databases

    Code Block
    languagebash
    #placement must be done before nova
    su -s /bin/sh -c "placement-manage db online_data_migrations" placement
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder

  • Edit the Epoxy puppet class of the controllers (service.pp) so that it does not start the services once they are updated. To do this set all the services to:

    Code Block
    languageshell
    ensure      => stopped,
    enable      => false,
    # and commit to git

  • Check whether openstack-client and selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch    6.6.1-1.el9s    @centos-openstack-caracal
    python3-openstackclient.noarch        6.6.1-1.el9s    @centos-openstack-caracal
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch              0.8.40-1.el9s

  • Change HAProxy so that, for all services, it points to controller-02, which has the Caracal services running. To do this edit the file in cld-config and run puppet on the three haproxy nodes

    Code Block
    languageshell
    titleHAProxy
    # in cld-config.cloud.pd.infn.it
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_spento01_acceso02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    # in cld-haproxy-test-01.pd.infn.it, 02 and 03
    puppet agent -t

  • On both controllers stop and disable puppet

Code Block
languagebash
systemctl stop puppet
systemctl disable puppet
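
Before and after stopping the two Percona nodes it can be useful to verify the Galera cluster state. A minimal sketch, assuming root access to mysql on the DB nodes (the wsrep_* status variables are standard Galera / Percona XtraDB Cluster ones):

Code Block
languagebash
# On the node that stays up: check cluster size and status before stopping the others
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"   # expected: Primary

# Stop mysql on two of the three nodes, noting the order
ssh cld-db-test-06 systemctl stop mysql
ssh cld-db-test-05 systemctl stop mysql

# On the surviving node the cluster size should now be 1 and the status still Primary
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"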

Detach all routers from the L3 agent of controller-01

Code Block
languageshell
Find which routers exist
openstack router list
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+
| ID                                   | Name        | Status | State | Project                          | Distributed | HA   |
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+
| 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 | Lan         | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
| 9e31c216-0635-4d21-b7aa-63fe4aee875e | ext-to-vos  | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
| eaa80135-6b79-44e0-b637-cef88d09b85c | CloudVeneto | ACTIVE | UP    | 56c3f5c047e74a78a71438c4412e6e13 | False       | True |
+--------------------------------------+-------------+--------+-------+----------------------------------+-------------+------+

For each router, find the IP address in its external_gateway_info and check on which controller it is attached

openstack router show 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099

+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                           |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                              |
| availability_zone_hints |                                                                                                                                                 |
| availability_zones      | nova                                                                                                                                            |
| created_at              | 2018-11-28T16:06:24Z                                                                                                                            |
| description             |                                                                                                                                                 |
| distributed             | False                                                                                                                                           |
| enable_ndp_proxy        | None                                                                                                                                            |
| external_gateway_info   | {"network_id": "38356cfc-d83a-40f0-8604-09ddea12aa20", "external_fixed_ips": [{"subnet_id": "ec498b88-cbda-45d3-8f9f-174d335c6670",             |
|                         | "ip_address": "172.25.27.180"}], "enable_snat": false}                                                                                          |
...

[root@controller-02 ~]# ip netns exec qrouter-92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 ip addr show | grep 172.25.27.180
    inet 172.25.27.180/24 scope global qg-fcc0f7ca-b4

the same command on controller-01 returns nothing, so the router is on controller-02;

the same applies to the other routers.
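
The same check can also be scripted; a possible sketch (it assumes jq is available and that the openstack client emits external_gateway_info as structured JSON):

Code Block
languagebash
# For every router, extract the external gateway IP and check on which
# controller the corresponding qrouter namespace holds it
for r in $(openstack router list -f value -c ID); do
  ip=$(openstack router show "$r" -f json -c external_gateway_info \
       | jq -r '.external_gateway_info.external_fixed_ips[0].ip_address')
  echo "router $r  gateway $ip"
  for h in controller-01 controller-02; do
    ssh "$h" "ip netns exec qrouter-$r ip addr show 2>/dev/null | grep -q $ip" \
      && echo "  -> attached on $h"
  done
done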

If a router has to be detached from controller-01 because it is active there, the commands to run are, for example:

# for i in $(openstack router list -f value -c ID); do echo $i; openstack network agent list --agent-type l3 --sort-column Host --router $i --long; done 
92e8b080-f3aa-4d9f-b3d4-613e0dbfd099
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
9e31c216-0635-4d21-b7aa-63fe4aee875e
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
eaa80135-6b79-44e0-b637-cef88d09b85c
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+


So, for example, to move the active router eaa80135-6b79-44e0-b637-cef88d09b85c, attached to the L3 agent aa34b512-89d8-4913-aee1-9f2d2fdf124c of controller-01, to controller-02 we would run:
openstack network agent remove router --l3 aa34b512-89d8-4913-aee1-9f2d2fdf124c eaa80135-6b79-44e0-b637-cef88d09b85c;

after a while we will find:
eaa80135-6b79-44e0-b637-cef88d09b85c
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| ID                                   | Agent Type | Host                           | Availability Zone | Alive | State | Binary           | HA State |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+
| aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent   | controller-01.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | standby  |
| b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent   | controller-02.cloud.pd.infn.it | nova              | :-)   | UP    | neutron-l3-agent | active   |
+--------------------------------------+------------+--------------------------------+-------------------+-------+-------+------------------+----------+


Epoxy installation on controller-01

  • On controller-01 stop and disable all the OpenStack services

    • Code Block
      languagebash
      cd /root/StartServices
      ./complete.sh stop
      ./complete.sh disable
  • Back up the database (both as a whole and per single DB)
  • Code Block
    languagebash
    [root@cld-db-test-04 backup]# mkdir /backup/BackupCaracalPrimaDellUpdate
    [root@cld-db-test-04 ~]# mysqldump -u root -p --all-databases > /backup/130326/cld-db_test_04_caracal_dump_all.sql
    [root@cld-db-test-04 ~]# /usr/local/bin/mysql_dump_separate_db
    # and move the resulting files from /backup/mysql to /backup/130326 (otherwise they would be deleted in /backup/mysql)
  • Run the online data migration of the placement, nova and cinder databases. WARNING: the migration can take a long time
  • Code Block
    languagebash
    #placement must be done before nova
    su -s /bin/sh -c "placement-manage db online_data_migrations" placement
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
  • Code Block
    languageshell
    titleexample
    collapsetrue
    50 rows matched query populate_instance_compute_id, 0 migrated
    +-------------------------------------+--------------+-----------+
    |              Migration              | Total Needed | Completed |
    +-------------------------------------+--------------+-----------+
    |     fill_virtual_interface_list     |      0       |     0     |
    |         migrate_empty_ratio         |      0       |     0     |
    |   migrate_quota_classes_to_api_db   |      0       |     0     |
    |    migrate_quota_limits_to_api_db   |      0       |     0     |
    |      migration_migrate_to_uuid      |      0       |     0     |
    |          populate_dev_uuids         |      0       |     0     |
    |     populate_instance_compute_id    |      50      |     0     |
    | populate_missing_availability_zones |      0       |     0     |
    |      populate_queued_for_delete     |      0       |     0     |
    |           populate_user_id          |      50      |     0     |
    |            populate_uuids           |      0       |     0     |
    +-------------------------------------+--------------+-----------+



  • Check whether openstack-client and selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch                                6.6.1-1.el9s                     @centos-openstack-caracal       
    python3-openstackclient.noarch                                    6.6.1-1.el9s                     @centos-openstack-caracal       
    
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch                                          0.8.40-1.el9s                    @centos-openstack-zed           


  • Check the kernel and ceph versions

    Code Block
    languagebash
    [root@controller-01 ~]#  yum list installed | grep kernel
    kernel.x86_64                                                     5.14.0-427.24.1.el9_4            @anaconda                       
    kernel.x86_64                                                     5.14.0-503.33.1.el9_5            @baseos                         
    kernel-core.x86_64                                                5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-core.x86_64                                                5.14.0-503.33.1.el9_5            @baseos                         
    kernel-headers.x86_64                                             5.14.0-503.33.1.el9_5            @appstream                      
    kernel-modules.x86_64                                             5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules.x86_64                                             5.14.0-503.33.1.el9_5            @baseos                         
    kernel-modules-core.x86_64                                        5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules-core.x86_64                                        5.14.0-503.33.1.el9_5            @baseos                         
    kernel-srpm-macros.noarch                                         1.0-13.el9                       @appstream                      
    kernel-tools.x86_64                                               5.14.0-503.33.1.el9_5            @baseos                         
    kernel-tools-libs.x86_64                                          5.14.0-503.33.1.el9_5            @baseos                                       
    
    [root@controller-01 ~]#  yum list installed | grep ceph 
    blosc.x86_64                                                      1.21.0-3.el9s                    @centos-ceph-pacific            
    centos-release-ceph-reef.noarch                                   1.0-1.el9                        @extras                         
    ceph-common.x86_64                                                2:18.2.4-2.el9s                  @centos-ceph-reef             
    
    [root@controller-01 ~]# uname -a
    Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
  • Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch


  • Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  (this is needed and it contains the epoxy repo)
    
    #### as a check
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    


  • Save the configurations that are usually overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Update the packages

    Code Block
    languagebash
    dnf update
    
    ## if the update fails with various problems:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # So the following packages must be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1 
    
    set gpgcheck=0 in /etc/yum.repos.d/EGI-trustanchors.repo
    
    dnf update -y
    
    


  • Save the old configurations
  • Code Block
    languagebash
    titleresult
    collapsetrue
    # The update downloads the new rpms: pay attention to these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/httpd/conf.d/openstack-dashboard.conf.rpmnew /etc/httpd/conf.d/openstack-dashboard.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
  • In puppet edit the file /var/puppet/puppet_epoxy_env_test/controller_epoxy/manifests/service.pp so that it does not start the services once they are updated. To do this set all the services to stopped
  • Code Block
    languagebash
    # in service.pp set, for all the services,
     ensure      => stopped,
     enable      => false,
    # and commit to git


  • In Foreman enable the Epoxy class: from the web page change the puppet class of controller-01 selecting Epoxy, then run puppet on the node

Code Block
languagebash
In https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ControllerNode-Test" with "hosts_all/ControllerNode_Test-Epoxy"

Then run on the controller
puppet agent -t 

If there are problems with the certificates (usually after restoring the clone), see the procedure at https://confluence.infn.it/x/kw5-B

At this point all services are configured on controller-01


  • KEYSTONE

    Code Block
    languagebash
    # TODO: backup database keystone
    
    su -s /bin/sh -c "keystone-manage doctor" keystone
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
    WARNING: `keystone.conf [cache] enabled` is not enabled.
        Caching greatly improves the performance of keystone, and it is highly
        recommended that you enable it.
    
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    ===============================================================================================
    After controller-02 has been updated and httpd restarted, the following command must be run
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
  • PLACEMENT
  • Code Block
    languagebash
    1) su -s /bin/sh -c "placement-manage db sync" placement 
    
    2) start the services for keystone, placement and dashboard
    systemctl start httpd.service memcached.service shibd.service
    
    3) in cld-config edit the HAProxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):
    
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    4) run puppet on the three haproxy nodes
    ssh root@cld-haproxy-test-01 / 02 / 03
    puppet agent -t
    
    5) stop and disable the services on controller-02
    systemctl stop httpd.service memcached.service shibd.service
    systemctl disable httpd.service memcached.service shibd.service
    
    
    Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in)
    
    
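
A quick sanity check after keystone, placement and the dashboard have been switched to controller-01 might be the following (sketch, run with the admin credentials loaded):

Code Block
languagebash
# If HAProxy and keystone on controller-01 are healthy, a token is issued
openstack token issue
openstack endpoint list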
  • GLANCE

    Code Block
    languagebash
    WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?).
    A procedure exists but it is considered not production ready, or the documentation is not up to date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html)
    
    For glance it is better not to take risks and to bring the service down on both controllers, therefore:
    
    1) stop the glance service on controller-02
    systemctl stop openstack-glance-api.service
    systemctl disable openstack-glance-api.service
    
    On controller-01 (already configured for Epoxy because we have run puppet):
    
    2) su -s /bin/sh -c "glance-manage db expand" glance
    
    [root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log 
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    
    3) su -s /bin/sh -c "glance-manage db migrate" glance
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
    2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    Database is up to date. No migrations needed.
    [root@controller-01 StartServices]# 
    
    4) systemctl start openstack-glance-api.service
    
    Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>
    
    
    5) Change HAProxy so that glance points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9292)
    
    6) run puppet on the three haproxy nodes
    puppet agent -t
    
    =============================================================
    When controller-02 has also been updated, run
    su -s /bin/sh -c "glance-manage db contract" glance
    
    
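
Once glance points to controller-01, a quick functional check might be the following (sketch; IMAGE_ID is a placeholder):

Code Block
languagebash
# Listing the images exercises the glance API through HAProxy
openstack image list
# Optional, heavier check: download one image
# openstack image save --file /tmp/test.img IMAGE_ID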
  • NOVA    
    Code Block
    languagebash
    su -s /bin/sh -c "nova-status upgrade check" nova
    su -s /bin/sh -c "nova-manage api_db sync" nova
    
    in nova-manage.log
    2026-03-17 10:59:31.218 208205 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-17 10:59:31.219 208205 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    
    
    su -s /bin/sh -c "nova-manage db sync" nova
    
    in nova-manage.log
    2026-03-17 11:00:31.148 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
    2026-03-17 11:00:32.539 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
    2026-03-17 11:00:32.746 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Context impl MySQLImpl.
    2026-03-17 11:00:32.747 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Will assume non-transactional DDL.
    2026-03-17 11:00:32.755 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade 13863f4e1612 -> d60bddf7a903, add_constraint_instance_share_avoid_duplicates
    2026-03-17 11:00:33.176 208229 INFO alembic.runtime.migration [None req-22977721-3f23-4cb3-ac86-834aa11e3b59 - - - - - -] Running upgrade d60bddf7a903 -> 2903cd72dc14, add_tls_port_to_console_auth_tokens
    
    
    Start the service on controller-01
    systemctl start \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
    
    Change HAProxy so that nova points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check ports 8774, 8775, 6080)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop and disable the service on controller-02
    
     systemctl stop \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
     
      systemctl disable \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service 
    
    ==============================================================================
    When controller-02 and all the compute nodes have also been updated, run again
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
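
Once nova points to controller-01, the control-plane services can be checked with (sketch):

Code Block
languagebash
# The Epoxy services on controller-01 should be up; the ones on controller-02
# will be reported as down once they have been stopped
openstack compute service list
su -s /bin/sh -c "nova-status upgrade check" nova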
  • NEUTRON

    Code Block
    languagebash
    su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
    
    the following appears on screen:
    INFO  [alembic.runtime.migration] Context impl MySQLImpl.
    INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
      Running upgrade (expand) for neutron ...
    INFO  [alembic.runtime.migration] Context impl MySQLImpl.
    INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
    INFO  [alembic.runtime.migration] Running upgrade 0e6eff810791 -> 175fa80908e1
    INFO  [alembic.runtime.migration] Running upgrade 175fa80908e1 -> 5bcb7b31ec7d
    INFO  [alembic.runtime.migration] Running upgrade 5bcb7b31ec7d -> ad80a9f07c5c
      OK
    
    
    Start the service
    
    systemctl start neutron-server.service 
    systemctl start neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    
    
    N.B. the log shows:  2026-03-31 11:16:14.500 882189 WARNING oslo_config.cfg [-] Deprecated: Option "api_paste_config" 
    
    
    #Detach the routers from controller-02
    
    for i in $(openstack router list -f value -c ID); do echo $i; openstack network agent list --agent-type l3 --sort-column Host --router $i --long; done 
    openstack network agent remove router --l3 aa34b512-89d8-4913-aee1-9f2d2fdf124c eaa80135-6b79-44e0-b637-cef88d09b85c;
    
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 92e8b080-f3aa-4d9f-b3d4-613e0dbfd099
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 9e31c216-0635-4d21-b7aa-63fe4aee875e
    openstack network agent remove router --l3 b91764b8-58a2-4ad6-a8fc-fd20aa664571 eaa80135-6b79-44e0-b637-cef88d09b85c
    
    #check that the IP is now attached to controller-01
    ip netns exec qrouter-92e8b080-f3aa-4d9f-b3d4-613e0dbfd099 ip a | grep 172.25.27.180
    ip netns exec qrouter-9e31c216-0635-4d21-b7aa-63fe4aee875e ip a | grep 90.147.77.210
    ip netns exec qrouter-eaa80135-6b79-44e0-b637-cef88d09b85c ip a | grep 90.147.143.145
    
    
    Change HAProxy so that neutron points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9696)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop and disable the service on controller-02
    systemctl stop neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    systemctl stop neutron-server.service
    
    systemctl disable neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service neutron-l3-agent.service
    systemctl disable neutron-server.service
     
    
    CHECK: 
    [root@controller-01 neutron]# openstack server list
    Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
    Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
    ....
    
    
    
    =========================================================================
    When controller-02 has also been updated, run the command
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron


  • CINDER
Code Block
languagebash
su -s /bin/sh -c "cinder-manage db sync" cinder

2026-03-17 11:57:43.085 212882 INFO cinder.db.migration [-] Applying migration(s)
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 11:57:43.088 212882 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 11:57:43.132 212882 INFO cinder.db.migration [-] Migration(s) applied

Start the service on controller-01
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

Change HAProxy so that cinder points to controller-01
in cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check port 8776)

Run puppet on the three haproxy nodes
puppet agent -t

Stop and disable it on controller-02
systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl disable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service

==============================================================================
When controller-02 has been updated, re-run the online data migrations
su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder


  • HEAT
Code Block
languagebash
su -s /bin/sh -c "heat-manage db_sync" heat

2026-03-17 12:27:45.669 216268 INFO heat.db.migration [-] Applying migration(s)
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.682 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
2026-03-17 12:27:45.689 216268 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
2026-03-17 12:27:45.696 216268 INFO heat.db.migration [-] Migration(s) applied


Start the service on controller-01
systemctl start openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service

Change HAProxy so that heat points to controller-01
in cld-config:
cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
(check ports 8000, 8004)

Run puppet on the three haproxy nodes
puppet agent -t
then stop and disable it on controller-02

 systemctl stop openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service
 
systemctl disable openstack-heat-api.service \
  openstack-heat-api-cfn.service openstack-heat-engine.service


  • DASHBOARD: nothing to do

At this point all services point to controller-01.

Epoxy installation on controller-02

  • remove Caracal
  • Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch
  • install Epoxy
  • Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  (this is needed and it contains the epoxy repo)
    
    #### as a check
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    
  • Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Code Block
    languagebash
    dnf update
    
    ## if the update fails with various problems:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # So the following packages must be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1 
    
    dnf update -y
    
    If the update still gives problems,
    
    edit
    /etc/yum.repos.d/EGI-trustanchors.repo
    and set gpgcheck=0 (i.e. disable the GPG check)
    
    
  • Code Block
    languagebash
    titleresult
    collapsetrue
    # The update downloads the new rpms: pay attention to these configuration files
    
    ## cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    ## mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
  • Change the class in Foreman to Epoxy
  • Code Block
    languagebash
    From the Foreman web page, change the puppet class of the controller selecting Epoxy:
    in https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ControllerNode-Test" with "hosts_all/ControllerNode_Test-Epoxy"
    
    Then run on the controller
    puppet agent -t 
    
    At this point all services are configured
  • run puppet on the node
    Code Block
    languageshell
    puppet agent -t


  • enable the services by editing service.pp so that the services start
    Code Block
    languageshell
    # in service.pp change all the services to
       ensure      => running,
       enable      => true,
    # and commit to git
  • re-enable puppet on the node
    Code Block
    languageshell
    systemctl start puppet
    systemctl enable puppet
  • in cld-config edit the haproxy file so that both controllers are used
    Code Block
    languageshell
    cp  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg.orig  /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
  • run puppet on the three haproxy nodes
  • run the DB contract or online migration for the services that require it
    Code Block
    languageshell
    # After controller-02 has been updated
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
    
    su -s /bin/sh -c "glance-manage db contract" glance
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
    
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
    
-->> HERE 31/03 <--

  • verify the creation of new VMs. If contextualization does not work, giving a connection error towards the metadata server, check whether the agent appears in the network agent list and when its heartbeat was last run.
    • if the date is old, remove the agent from the two controllers and reboot
      Code Block
      languageshell
      [root@controller-02 nova]# openstack network agent list
      Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
      Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | ID                                   | Agent Type         | Host                             | Availability Zone | Alive | State | Binary                    |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      | 03b6f400-d961-42cd-9df9-89e87dd58ca9 | Open vSwitch agent | controller-02.cloud.pd.infn.it   | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 10f518b3-d9a6-4adf-a482-20723682b5f5 | Metadata agent     | controller-02.cloud.pd.infn.it   | None              | XXX   | UP    | neutron-metadata-agent    |
      | 3241aa58-f697-478c-bacc-4e10d7cc43e7 | Open vSwitch agent | controller-01.cloud.pd.infn.it   | None              | XXX   | UP    | neutron-openvswitch-agent |
      | 7b34d1ad-99a7-4ca8-a1e6-82a90737a635 | Open vSwitch agent | t2-cld-nat-test.cloud.pd.infn.it | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 7c026284-8b62-420d-9163-464c3b28bf24 | Open vSwitch agent | compute-01.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | 940d868e-8605-42e5-a731-b07e2a2a311e | DHCP agent         | controller-01.cloud.pd.infn.it   | nova              | XXX   | UP    | neutron-dhcp-agent        |
      | aa34b512-89d8-4913-aee1-9f2d2fdf124c | L3 agent           | controller-01.cloud.pd.infn.it   | nova              | XXX   | UP    | neutron-l3-agent          |
      | b60f9a09-06ad-4562-b1c9-72ef265200a6 | DHCP agent         | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-dhcp-agent        |
      | b91764b8-58a2-4ad6-a8fc-fd20aa664571 | L3 agent           | controller-02.cloud.pd.infn.it   | nova              | :-)   | UP    | neutron-l3-agent          |
      | be79d4c8-f24d-47f9-876b-09ed34614dc2 | Open vSwitch agent | compute-03.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | df3074d3-0add-4f78-a5f4-fde900e764f2 | Open vSwitch agent | compute-02.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      | fd8b02e9-ca5f-43d4-b1fc-31163ba2b7b3 | Open vSwitch agent | compute-04.cloud.pd.infn.it      | None              | :-)   | UP    | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------------------------------+-------------------+-------+-------+---------------------------+
      
      [root@controller-02 nova]# openstack network agent show 10f518b3-d9a6-4adf-a482-20723682b5f5
      Could not load 'message_list': module 'zaqarclient.queues.v2.cli' has no attribute 'OldListMessages'
      Could not load 'message_post': module 'zaqarclient.queues.v2.cli' has no attribute 'OldPostMessages'
      +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
      | Field             | Value                                                                                                                                                 |
      +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
      | admin_state_up    | UP                                                                                                                                                    |
      | agent_type        | Metadata agent                                                                                                                                        |
      | alive             | XXX                                                                                                                                                   |
      | availability_zone | None                                                                                                                                                  |
      | binary            | neutron-metadata-agent                                                                                                                                |
      | configuration     | {'log_agent_heartbeats': False, 'metadata_proxy_socket': '/var/lib/neutron/metadata_proxy', 'nova_metadata_host': '192.168.60.24',                    |
      |                   | 'nova_metadata_port': 8775}                                                                                                                           |
      | created_at        | 2018-11-06 09:30:53                                                                                                                                   |
      | description       | None                                                                                                                                                  |
      | ha_state          | None                                                                                                                                                  |
      | host              | controller-02.cloud.pd.infn.it                                                                                                                        |
      | id                | 10f518b3-d9a6-4adf-a482-20723682b5f5                                                                                                                  |
      | last_heartbeat_at | 2026-03-17 10:41:41                                                                                                                                   |
      | resources_synced  | None                                                                                                                                                  |
      | started_at        | 2026-03-09 10:56:21                                                                                                                                   |
      | topic             | N/A                                                                                                                                                   |
      +-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
      [root@controller-02 nova]# 
      
      
    • After removing the metadata agents and rebooting, the situation is as follows: 
    • Code Block
      languageshell
      [root@controller-01 ~]# openstack network agent list
      Could not load ...
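
A compact way to spot stale agents is to print the last heartbeat of each one; a minimal sketch using only the fields shown in the output above:

Code Block
languagebash
# Print host, binary and last heartbeat of every network agent
for a in $(openstack network agent list -f value -c ID); do
  echo "=== $a"
  openstack network agent show "$a" -f value -c host -c binary -c last_heartbeat_at
done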

    Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch

    Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  (may be needed)
    dnf install centos-release-openstack-epoxy
  • Save the configurations that are usually overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Update the packages

    Code Block
    languagebash
    dnf update -y
    dnf upgrade -y
    
    to be verified; write down the output
    
    
    
    Code Block
    languagebash
    titleresult
    collapsetrue
    # TO BE VERIFIED FOR EPOXY 
    # The update downloads the new rpms: pay attention to these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
  • Code Block
    languagebash
    # Update the configurations with puppet
    
    puppet agent -t 
    # the services must remain stopped, though... check
  • KEYSTONE

    Code Block
    languagebash
    #crudini --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_xx_yyy@192.168.60.88:4306/keystone
    #crudini --set /etc/keystone/keystone.conf token provider fernet
    
    su -s /bin/sh -c "keystone-manage doctor" keystone
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    Restart httpd
    systemctl start httpd
    
    
    Do we stop Keystone on controller2 and upgrade it at the end together with all the other services, or do we upgrade each service individually?
    
    If upgrading the service on controller2: 
    
    systemctl stop httpd
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm   # may be needed
    dnf install centos-release-openstack-epoxy
    dnf update openstack-keystone httpd python3-mod_wsgi
    
    and what do we do about the configuration files?
    
    systemctl start httpd
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone   
    
    From the OpenStack documentation:
    Update your configuration files (/etc/keystone/) on all nodes (except the first node, which you’ve already done) with those corresponding to the latest release.
    Upgrade all keystone nodes to the next release, and restart them one at a time. During this step, you’ll have a mix of releases operating side by side, both writing to the database.
    As the next release begins writing to the new schema, database triggers will also migrate the data to the old schema, keeping both data schemas in sync.
    Run keystone-manage db_sync --contract to remove the old schema and all data migration triggers.
    When this process completes, the database will no longer be able to support the previous release.
    
    I would upgrade controller2 entirely at the end;
    in that case, after controller2 has been upgraded and keystone restarted, run the command
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
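    
    # Optional sanity check (addition, not in the original notes): once keystone is running on
    # controller1, verify that tokens can still be issued; assumes admin credentials are sourced
    # (e.g. an admin openrc file):
    openstack token issue
    openstack endpoint list --service identity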
  • GLANCE

    Code Block
    languagebash
    # crudini --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_xx_yyy@192.168.60.88:5306/glance     
    
    su -s /bin/sh -c "glance-manage db expand" glance
    
    su -s /bin/sh -c "glance-manage db migrate" glance
    
    systemctl start openstack-glance-api.service
    
    Stop the service on controller2
    
    systemctl stop openstack-glance-api.service
    
    Once controller2 has also been upgraded, run
    su -s /bin/sh -c "glance-manage db contract" glance
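    
    # Optional check (addition, not in the original notes): verify that the image API answers and
    # ask glance-manage whether further expand/migrate/contract steps are pending (admin credentials sourced):
    openstack image list
    su -s /bin/sh -c "glance-manage db check" glance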
    
    
  • PLACEMENT

    Code Block
    languagebash
    crudini --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_xx_yyy@192.168.60.88:6306/placement
    
    su -s /bin/sh -c "placement-manage db sync" placement    # to be verified: other guides say only the sync is needed, placement-manage has no expand/contract subcommands
    
    start the placement service on controller1 and stop it on controller2
    systemctl stop httpd
    
    Once controller2 has been upgraded, re-run the online data migration (placement has no contract step)
    su -s /bin/sh -c "placement-manage db online_data_migration" placement
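    
    # Optional check (addition, not in the original notes): placement ships a generic upgrade-check
    # command that can be run once the service is back up on controller1:
    placement-status upgrade check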
    
    
  • NOVA

    Code Block
    languagebash
    #crudini --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova_api
    #crudini --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova
    #crudini --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_zzz@192.168.60.223:5672
    
    su -s /bin/sh -c "nova-status upgrade check" nova
    su -s /bin/sh -c "nova-manage api_db sync" nova
    su -s /bin/sh -c "nova-manage db sync" nova
    
    Start the services on controller1 and stop them on controller2 
    systemctl start/stop \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
    
    Once controller2 and all the compute nodes have also been upgraded, run again
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
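    
    # Optional check (addition, not in the original notes): once the services are up on controller1,
    # verify them and re-run the upgrade check (admin credentials sourced):
    openstack compute service list
    su -s /bin/sh -c "nova-status upgrade check" nova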
  • NEUTRON

    Code Block
    languagebash
    #crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_xx_yyy@192.168.60.88:5306/neutron
    #crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_zzz@192.168.60.223:5672
    #crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
    #crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
    #crudini --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True 
    
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks *
    #crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
    
    #su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
     
    su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
    
    Stop the services on controller2
    systemctl stop neutron-server.service \
      neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    
    systemctl stop neutron-l3-agent.service
    
    Once controller2 has also been upgraded, run the command
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
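    
    # Optional check (addition, not in the original notes): neutron ships an upgrade-check command,
    # and the agents can be listed once neutron-server is up on controller1 (admin credentials sourced):
    neutron-status upgrade check
    openstack network agent list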

    CINDER

    Code Block
    languagebash
    #crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_xx_yyy@192.168.60.88:5306/cinder
    #crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_zzz@192.168.60.223:5672
    
    su -s /bin/sh -c "cinder-manage db sync" cinder
    
    Start the services on controller1 and stop them on controller2
    systemctl start/stop openstack-cinder-api.service openstack-cinder-scheduler.service
    Once controller2 has been upgraded, re-run the online_data_migrations
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder
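    
    # Optional check (addition, not in the original notes), with admin credentials sourced:
    cinder-status upgrade check
    openstack volume service list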

    HEAT

    Code Block
    languagebash
    crudini --set /etc/heat/heat.conf database connection mysql+pymysql://heat:HEAT_xx_yyy@192.168.60.88:4306/heat
    crudini --set /etc/heat/heat.conf DEFAULT transport_url rabbit://openstack:RABBIT_zzz@192.168.60.223:5672
    
    su -s /bin/sh -c "heat-manage db_sync --command expand" heat
    su -s /bin/sh -c "heat-manage db_sync --command migrate_data" heat
    
    Start the services on controller1 and stop them on controller2
    systemctl start/stop openstack-heat-api.service \
      openstack-heat-api-cfn.service openstack-heat-engine.service
    
    Once controller2 has also been upgraded
    su -s /bin/sh -c "heat-manage db_sync --command contract" heat
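    
    # Optional check (addition, not in the original notes), with admin credentials sourced:
    heat-status upgrade check
    openstack orchestration service list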
    DASHBOARD: nothing to do
    


Start all the mysql databases of the percona cluster, bringing them up in the reverse order of the shutdown

Code Block
languageshell
[root@cld-db-test-05 ~]# systemctl start mysql
[root@cld-db-test-06 ~]# systemctl start mysql
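
Quick check (addition, not in the original notes) that the Percona cluster is healthy again after the restarts; assumes root access to mysql on one of the nodes:

Code Block
languageshell
# expect wsrep_cluster_size = 3 and wsrep_cluster_status = Primary
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';"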



Compute

Put the nodes in drain one at a time.

openstack compute service set --disable compute-01.cloud.pd.infn.it nova-compute

openstack compute service list


For the node in drain, migrate its VMs with live migration where possible (otherwise shut them down and migrate); see the sketch below.
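
A minimal sketch (addition, not in the original notes) of draining one node; the VM ID and the node name are placeholders, and admin credentials are assumed to be sourced:

Code Block
languagebash
# list the VMs still running on the drained node
openstack server list --all-projects --host compute-01.cloud.pd.infn.it

# live-migrate one VM, letting the scheduler pick the destination
openstack server migrate --live-migration <VM_ID>

# check where the VM landed and that it is ACTIVE again
openstack server show <VM_ID> -c status -c OS-EXT-SRV-ATTR:host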

In foreman change the host's class to the one for Epoxy

Run puppet 


For nodes with VMs that cannot be migrated, see how the update was handled in the past




----HERE---

Configure the controller via puppet

  • in foreman https://cld-config.cloud.pd.infn.it/hosts/controller-01.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ControllerNode-Test" with "hosts_all/ControllerNode-Test_Epoxy", then run puppet on the node

Install the openstack-heat-ui and python3-osc-placement packages


Code Block
languagebash
[root@controller-01 yum.repos.d]# yum install openstack-heat-ui
Last metadata expiration check: 1:28:25 ago on Thu 23 Jan 2025 04:37:45 PM CET.
Dependencies resolved.
==============================================================================================================================================================================================
 Package                                                Architecture                     Version                                     Repository                                          Size
==============================================================================================================================================================================================
Installing:
 openstack-heat-ui                                      noarch                           11.0.0-2.el9s                               centos-openstack-caracal                           892 k
Installing dependencies:
 python3-XStatic-Angular-UUID                           noarch                           0.0.4.0-13.el9s                             centos-openstack-caracal                            13 k
 python3-XStatic-Angular-Vis                            noarch                           4.16.0.0-10.el9s                            centos-openstack-caracal                            13 k
 python3-XStatic-FileSaver                              noarch                           1.3.2.0-10.el9s                             centos-openstack-caracal                            13 k
 python3-XStatic-JS-Yaml                                noarch                           3.8.1.0-11.el9s                             centos-openstack-caracal                            13 k
 python3-XStatic-Json2yaml                              noarch                           0.1.1.0-10.el9s                             centos-openstack-caracal                            13 k
 xstatic-angular-uuid-common                            noarch                           0.0.4.0-13.el9s                             centos-openstack-caracal                            11 k
 xstatic-angular-vis-common                             noarch                           4.16.0.0-10.el9s                            centos-openstack-caracal                           9.6 k
 xstatic-filesaver-common                               noarch                           1.3.2.0-10.el9s                             centos-openstack-caracal                            11 k
 xstatic-js-yaml-common                                 noarch                           3.8.1.0-11.el9s                             centos-openstack-caracal                            30 k
 xstatic-json2yaml-common                               noarch                           0.1.1.0-10.el9s                             centos-openstack-caracal                           9.2 k

Transaction Summary
=====================================================


[root@controller-01 keystone]# yum install python3-osc-placement
Last metadata expiration check: 2:05:32 ago on Thu 23 Jan 2025 04:37:45 PM CET.
Dependencies resolved.
==============================================================================================================================================================================================
 Package                                            Architecture                        Version                                   Repository                                             Size
==============================================================================================================================================================================================
Installing:
 python3-osc-placement                              noarch                              4.3.0-1.el9s                              centos-openstack-caracal                               51 k

Transaction Summary
============================================================

Rabbit for nova and neutron

In Caracal we decided to use a dedicated RabbitMQ for the nova service, one for the neutron service, and one for all the other services. The cell therefore has to be redefined

...

Code Block
languagebash
[root@controller-01 etc]# nova-manage cell_v2 list_cells --verbose
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+
|  Name |                 UUID                 |                   Transport URL                    |                      Database Connection                       | Disabled |
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                     none://///                     | mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova_cell0 |  False   |
| cell1 | 8fc9fbbe-697a-4d92-9ff6-cba3feb50b8e | rabbit://openstack:RABBIT_zzz@192.168.60.223:5672 |    mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova    |  False   |
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+


[root@controller-01 etc]# nova-manage cell_v2 update_cell --cell 8fc9fbbe-697a-4d92-9ff6-cba3feb50b8e --transport-url rabbit://openstack:RABBIT_zzz@192.168.60.225:5672 --database_connection mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova


[root@controller-01 etc]# nova-manage cell_v2 list_cells --verbose
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+
|  Name |                 UUID                 |                   Transport URL                    |                      Database Connection                       | Disabled |
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                     none://///                     | mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova_cell0 |  False   |
| cell1 | 8fc9fbbe-697a-4d92-9ff6-cba3feb50b8e | rabbit://openstack:RABBIT_zzz@192.168.60.225:5672 |    mysql+pymysql://nova:NOVA_xx_yyy@192.168.60.88:6306/nova    |  False   |
+-------+--------------------------------------+----------------------------------------------------+----------------------------------------------------------------+----------+

Update ceph to reef, enabling the epel repo


Code Block
languagebash
[root@controller-01 log]# yum update \*ceph\* --enablerepo=epel
Last metadata expiration check: 1:23:23 ago on Mon 07 Apr 2025 12:52:12 PM CEST.
Dependencies resolved.
===================================================================================================================================================================================================================
 Package                                                  Architecture                              Version                                              Repository                                           Size
===================================================================================================================================================================================================================
Upgrading:
 abseil-cpp                                               x86_64                                    20211102.0-4.el9                                     epel                                                551 k
 ceph-common                                              x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                     18 M
 grpc-data                                                noarch                                    1.46.7-10.el9                                        epel                                                 19 k
 libarrow                                                 x86_64                                    9.0.0-13.el9                                         epel                                                4.4 M
 libarrow-doc                                             noarch                                    9.0.0-13.el9                                         epel                                                 25 k
 libcephfs2                                               x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    691 k
 librados2                                                x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    3.2 M
 libradosstriper1                                         x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    457 k
 librbd1                                                  x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    2.9 M
 librgw2                                                  x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    4.4 M
 parquet-libs                                             x86_64                                    9.0.0-13.el9                                         epel                                                838 k
 python3-ceph-argparse                                    x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                     46 k
 python3-ceph-common                                      x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    130 k
 python3-cephfs                                           x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    163 k
 python3-grpcio                                           x86_64                                    1.46.7-10.el9                                        epel                                                2.0 M
 python3-rados                                            x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    320 k
 python3-rbd                                              x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    299 k
 python3-rgw                                              x86_64                                    2:18.2.4-2.el9s                                      centos-ceph-reef                                    100 k
 re2                                                      x86_64                                    1:20211101-20.el9                                    epel                                                191 k
 thrift                                                   x86_64                                    0.15.0-4.el9                                         epel                                                1.6 M

Transaction Summary
===================================================================================================================================================================================================================
Upgrade  20 Packages

Re-enable the puppet service

Code Block
languagebash
systemctl enable puppet

Reboot the node

Code Block
languagebash
shutdown -r now

Remember that the GPU calendar must be installed by hand