...

  • Modify the Epoxy puppet class of the controllers (service.pp) so that it does not start the services
    Code Block
    languageshell
    WARNING: edit the puppet service.pp file so that it does not start the services once they have been updated. To do this, set every service to
     ensure      => stopped,
     enable      => false,
    and commit to git
  • Modify the HAProxy configuration so that, for all services, it points to controller-02 (which still has the Caracal services active). To do this, edit the file in cld-config and run puppet on the three haproxy nodes
    Code Block
    languageshell
    titleHAProxy
    in cld-config
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_spento01_acceso02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    in cld-haproxy-test-01 - 02 - 03 
    puppet agent -t
  • On both controllers, stop and disable puppet
    Code Block
    languagebash
    systemctl stop puppet
    systemctl disable puppet


  • On controller-01, stop and disable all the OpenStack services
    Code Block
    languagebash
    cd /root/StartServices
    ./complete.sh stop
    ./complete.sh disable
  • Back up the database (both a full dump and each database separately). PUT THE COMPUTE NODES IN DRAIN? (a possible approach is sketched after the backup block)
  • Code Block
    languagebash
    [root@cld-db-test-04 backup]# mkdir /backup/BackupCaracalPrimaDellUpdate
    [root@cld-db-test-04 ~]# mysqldump -u root -p --all-databases > /backup/130326/cld-db_test_04_caracal_dump_all.sql
    [root@cld-db-test-04 ~]# /usr/local/bin/mysql_dump_separate_db
    # then move the resulting files from /backup/mysql to /backup/130326 (otherwise they would get deleted in /backup/mysql)
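
    If the compute nodes are to be drained, a possible approach (a sketch, not part of the original procedure; the host name is only an example) is to disable nova-compute on each hypervisor so that no new instances get scheduled there:
    Code Block
    languagebash
    # stop scheduling new instances on a compute node (example host name)
    openstack compute service set --disable --disable-reason "epoxy upgrade" cld-compute-test-01.cloud.pd.infn.it nova-compute
    # verify the state of the compute services
    openstack compute service list --service nova-compute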
  • Run the online data migrations for the placement, nova and cinder databases. WARNING: the migration can take a long time (a batched variant is sketched after this block)
  • Code Block
    languagebash
    #placement must be done before nova
    su -s /bin/sh -c "placement-manage db online_data_migrations" placement
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder

...

  • Check whether openstack-client and openstack-selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch                                6.6.1-1.el9s                     @centos-openstack-caracal       
    python3-openstackclient.noarch                                    6.6.1-1.el9s                     @centos-openstack-caracal       
    
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch                                          0.8.40-1.el9s                    @centos-openstack-zed           


  • Check the kernel and ceph versions

    Code Block
    languagebash
    [root@controller-01 ~]#  yum list installed | grep kernel
    kernel.x86_64                                                     5.14.0-427.24.1.el9_4            @anaconda                       
    kernel.x86_64                                                     5.14.0-503.33.1.el9_5            @baseos                         
    kernel-core.x86_64                                                5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-core.x86_64                                                5.14.0-503.33.1.el9_5            @baseos                         
    kernel-headers.x86_64                                             5.14.0-503.33.1.el9_5            @appstream                      
    kernel-modules.x86_64                                             5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules.x86_64                                             5.14.0-503.33.1.el9_5            @baseos                         
    kernel-modules-core.x86_64                                        5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules-core.x86_64                                        5.14.0-503.33.1.el9_5            @baseos                         
    kernel-srpm-macros.noarch                                         1.0-13.el9                       @appstream                      
    kernel-tools.x86_64                                               5.14.0-503.33.1.el9_5            @baseos                         
    kernel-tools-libs.x86_64                                          5.14.0-503.33.1.el9_5            @baseos                                       
    
    [root@controller-01 ~]#  yum list installed | grep ceph 
    blosc.x86_64                                                      1.21.0-3.el9s                    @centos-ceph-pacific            
    centos-release-ceph-reef.noarch                                   1.0-1.el9                        @extras                         
    ceph-common.x86_64                                                2:18.2.4-2.el9s                  @centos-ceph-reef             
    
    [root@controller-01 ~]# uname -a
    Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
  • Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch


  • Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm  (this is the package needed: it contains the epoxy repo)
    
    #### as a check
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    


  • Save the configuration files that are usually overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
  • Update the packages

    Code Block
    languagebash
    dnf update -y
    
    ## if the update fails with various dependency problems:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # So the following packages must be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1 
    
    dnf update -y
    
    
    Code Block
    languagebash
    titleresult
    collapsetrue
    # TO BE VERIFIED FOR EPOXY 
    # The update downloads the new rpms: pay attention to these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
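
    As a quick check (not part of the original procedure) that no configuration file was missed, any leftover rpmnew/rpmsave copies under /etc can be listed:
    Code Block
    languagebash
    # list package-manager config copies still to be reconciled
    find /etc -name "*.rpmnew" -o -name "*.rpmsave"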
  • Code Block
    languagebash
    Update the configurations with puppet 
    WARNING: check that the puppet service.pp file does not start the services once they have been updated. To this end every service must be set to
     ensure      => stopped,
     enable      => false,
    and committed to git
    
    From the Foreman web page, change the controller's puppet class by selecting Epoxy:
    in https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"
    
    Then, on the controller, run
    puppet agent -t 
    
    At this point all the services are configured (a dry-run check is sketched below)
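
    Optionally (not in the original procedure), a dry run can show what puppet would change before anything is applied:
    Code Block
    languagebash
    # preview the changes puppet would apply, without applying them
    puppet agent -t --noop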


  • KEYSTONE

    Code Block
    languagebash
    # TODO: backup database keystone
    
    su -s /bin/sh -c "keystone-manage doctor" keystone
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
    WARNING: `keystone.conf [cache] enabled` is not enabled.
        Caching greatly improves the performance of keystone, and it is highly
        recommended that you enable it.
    
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    ===============================================================================================
    Dopo l'aggiornamento del controller2controller-02 e fatto ripartire httpd, si deve eseguire il comando
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
  • PLACEMENT
  • Code Block
    languagebash
    procedure:
    
    1) su -s /bin/sh -c "placement-manage db sync" placement 
    
    2) start the services for keystone, placement and dashboard
    systemctl start httpd.service memcached.service shibd.service
    
    3) in cld-config edit the HAProxy file so that the three services keystone, placement and dashboard (memcached) point to controller-01, commenting out controller-02 (check ports 5000, 5001, 443, 8778, 11211):
    
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    4) run puppet on the three haproxy nodes 
    ssh root@cld-haproxy-test-01 / 02 / 03
    puppet agent -t
    
    5) stop and disable puppet on controller-02
    systemctl stop puppet
    systemctl disable puppet
    
    6) stop and disable the services on controller-02
    systemctl stop httpd.service memcached.service shibd.service
    systemctl disable httpd.service memcached.service shibd.service
    
    
    Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio will step in)
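
    A quick functional check at this point (not in the original procedure; assumes admin credentials are available in an rc file on the controller) could be:
    Code Block
    languagebash
    # load admin credentials (path is only an example)
    source /root/admin-openrc.sh
    # keystone answers: request a token and list the service endpoints
    openstack token issue
    openstack endpoint list
    # placement answers (needs the osc-placement plugin, if installed)
    openstack resource provider list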
    
    
  • GLANCE

    Code Block
    languagebash
    WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?). 
    
    A zero-downtime procedure exists, but it is considered not production ready, or the documentation is not up to date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html) 
    For glance it is probably better not to take the risk and accept the downtime during the update
    
    1) stop the glance service on controller-02
    systemctl stop openstack-glance-api.service
    systemctl disable openstack-glance-api.service
    
    2) On controller-01 (already configured for Epoxy because we have run puppet):
    
    su -s /bin/sh -c "glance-manage db expand" glance
    
    [root@controller-01 StartServices]# cat /var/log/glance/glance-manage.log 
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:30:38.111 173040 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    
    
    3) su -s /bin/sh -c "glance-manage db migrate" glance
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "glance-manage db migrate" glance
    2026-03-16 17:31:33.469 173073 INFO alembic.runtime.migration [-] Context impl MySQLImpl.
    2026-03-16 17:31:33.470 173073 INFO alembic.runtime.migration [-] Will assume non-transactional DDL.
    Database is up to date. No migrations needed.
    [root@controller-01 StartServices]# 
    
    
    4) systemctl start openstack-glance-api.service
    
    Mar 16 17:31:56 controller-01.cloud.pd.infn.it glance-api[173117]: 2026-03-16 17:31:56.058 173117 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set with>
    
    
    5) Modify the HAProxy so that glance points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9292)
    
    6) run puppet on the three haproxy nodes
    puppet agent -t
    
    =============================================================
    When controller-02 has also been upgraded, run
    su -s /bin/sh -c "glance-manage db contract" glance
    
    
  • ---→>>>> HERE
  • NOVA
    Code Block
    languagebash
    su -s /bin/sh -c "nova-status upgrade check" nova
    su -s /bin/sh -c "nova-manage api_db sync" nova
    su -s /bin/sh -c "nova-manage db sync" nova
    
    Start the services on controller-01 
    systemctl start \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
    
    Modify the HAProxy so that nova points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check ports 8774, 8775, 6080)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop the services on controller-02
    
     systemctl stop \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
     
    
    ==============================================================================
    When controller-02 and all the compute nodes have also been upgraded, run again
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
  • NEUTRON

    Code Block
    languagebash
    su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
    
    Start the services 
    
    systemctl start neutron-server.service \
      neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    
    systemctl start neutron-l3-agent.service
    
    Modify the HAProxy so that neutron points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 9696)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop the services on controller-02
    systemctl stop neutron-server.service \
      neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    
    systemctl stop neutron-l3-agent.service
    
    
    =========================================================================
    When controller-02 has also been upgraded, run the command
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron


  • CINDER

    Code Block
    languagebash
    su -s /bin/sh -c "cinder-manage db sync" cinder
    
    Start the services on controller-01 
    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
    
    Modify the HAProxy so that cinder points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check port 8776)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    
    Stop them on controller-02
    
    systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
    
    ==============================================================================
    When controller-02 has been upgraded, re-run the online_data_migrations
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder


  • HEAT

    Code Block
    languagebash
    su -s /bin/sh -c "heat-manage db_sync --command expand" heat
    su -s /bin/sh -c "heat-manage db_sync --command migrate_data" heat
    
    Start the services on controller-01 
    systemctl start openstack-heat-api.service \
      openstack-heat-api-cfn.service openstack-heat-engine.service
    
    Modify the HAProxy so that heat points to controller-01
    in cld-config:
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/servizio_httpd_glance_nova_neutron_cinder_heat_acceso01_spento02.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    (check ports 8000, 8004)
    
    Run puppet on the three haproxy nodes
    puppet agent -t
    and stop the services on controller-02
    
     systemctl stop openstack-heat-api.service \
      openstack-heat-api-cfn.service openstack-heat-engine.service
     
    ===================================================================
    When controller-02 has also been upgraded, run
    su -s /bin/sh -c "heat-manage db_sync --command contract" heat


  • DASHBOARD: nothing to do

...