...

  • Check that openstack-client and openstack-selinux are installed

    Code Block
    languagebash
    [root@controller-01 ~]# yum list installed | grep openstackclient
    python-openstackclient-lang.noarch                                6.6.1-1.el9s                     @centos-openstack-caracal       
    python3-openstackclient.noarch                                    6.6.1-1.el9s                     @centos-openstack-caracal       
    
    [root@controller-01 ~]# yum list installed | grep openstack-selinux
    openstack-selinux.noarch                                          0.8.40-1.el9s                    @centos-openstack-zed           


  • Check the kernel and ceph versions

    Code Block
    languagebash
    [root@controller-01 ~]#  yum list installed | grep kernel
    kernel.x86_64                                                     5.14.0-427.24.1.el9_4            @anaconda                       
    kernel.x86_64                                                     5.14.0-503.33.1.el9_5            @baseos                         
    kernel-core.x86_64                                                5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-core.x86_64                                                5.14.0-503.33.1.el9_5            @baseos                         
    kernel-headers.x86_64                                             5.14.0-503.33.1.el9_5            @appstream                      
    kernel-modules.x86_64                                             5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules.x86_64                                             5.14.0-503.33.1.el9_5            @baseos                         
    kernel-modules-core.x86_64                                        5.14.0-427.24.1.el9_4            @anaconda                       
    kernel-modules-core.x86_64                                        5.14.0-503.33.1.el9_5            @baseos                         
    kernel-srpm-macros.noarch                                         1.0-13.el9                       @appstream                      
    kernel-tools.x86_64                                               5.14.0-503.33.1.el9_5            @baseos                         
    kernel-tools-libs.x86_64                                          5.14.0-503.33.1.el9_5            @baseos                                       
    
    [root@controller-01 ~]#  yum list installed | grep ceph 
    blosc.x86_64                                                      1.21.0-3.el9s                    @centos-ceph-pacific            
    centos-release-ceph-reef.noarch                                   1.0-1.el9                        @extras                         
    ceph-common.x86_64                                                2:18.2.4-2.el9s                  @centos-ceph-reef             
    
    [root@controller-01 ~]# uname -a
    Linux controller-01.cloud.pd.infn.it 5.14.0-503.33.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Mar 20 03:39:23 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
  • Remove the Caracal release

    Code Block
    languagebash
    yum remove centos-release-openstack-caracal.noarch
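After removing the release package it is worth confirming that no repo definition still references Caracal. A small sketch; the /etc/yum.repos.d path is the standard location, not taken from this page:

```shell
# List files in a repo directory that still mention the old release.
# An empty result means the Caracal repo definitions are gone.
leftover_repos() {
    grep -ril 'caracal' "$1" 2>/dev/null
}

leftover_repos /etc/yum.repos.d || true   # empty output = nothing left
```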


  • Install Epoxy

    Code Block
    languagebash
    dnf install -y https://trunk.rdoproject.org/rdo_release/rdo-release.el9s.rpm   # this package is required: it ships the Epoxy repo
    
    #### verify the installation:
    [root@todelff ~]# rpm -qil rdo-release
    Name        : rdo-release
    Version     : epoxy
    Release     : 1.el9s
    Architecture: noarch
    Install Date: Wed Mar 11 15:29:25 2026
    Group       : System Environment/Base
    Size        : 13372
    License     : Apache2
    Signature   : (none)
    Source RPM  : rdo-release-epoxy-1.el9s.src.rpm
    Build Date  : Fri Mar 14 17:12:13 2025
    Build Host  : doogie-n1.rdu2.centos.org
    Packager    : CBS <cbs@centos.org>
    Vendor      : CentOS Cloud SIG
    URL         : https://github.com/rdo-infra/rdo-release
    Summary     : RDO repository configuration
    Description :
    This package contains the RDO repository
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Messaging
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-NFV
    /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
    /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9
    /etc/yum.repos.d/ceph-reef.repo
    /etc/yum.repos.d/messaging.repo
    /etc/yum.repos.d/nfv-openvswitch.repo
    /etc/yum.repos.d/rdo-release.repo
    /etc/yum.repos.d/rdo-testing.repo
    
    


  • Save the configuration files that are usually overwritten

    Code Block
    languagebash
    export REL=caracal
    cp /etc/httpd/conf.d/openstack-dashboard.conf /etc/httpd/conf.d/openstack-dashboard.conf.$REL
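The same step generalizes to every file touched later in this procedure; a loop sketch (the file list is illustrative, take it from the cp list further down):

```shell
# Back up each existing config file with a release suffix; files
# missing on this node are skipped silently.
REL=caracal
backup_configs() {
    local f
    for f in "$@"; do
        if [ -f "$f" ]; then
            cp -p "$f" "$f.$REL"
        fi
    done
}

# Illustrative usage:
backup_configs /etc/httpd/conf.d/openstack-dashboard.conf /etc/nova/nova.conf
```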
  • Update the packages

    Code Block
    languagebash
    dnf update -y
    
    ## the update fails with several dependency conflicts:
    [root@controller-01 ~]# dnf update
    CentOS-9 - Ceph Reef                                                                                                                         489 kB/s | 415 kB     00:00    
    OpenStack Epoxy Repository                                                                                                                   2.5 MB/s | 1.7 MB     00:00    
    Last metadata expiration check: 0:00:01 ago on Fri 13 Mar 2026 10:41:31 AM CET.
    Error: 
     Problem 1: cannot install both python3-keystone-1:27.0.0-1.el9s.noarch from openstack-epoxy and python3-keystone-1:25.0.0-1.el9s.noarch from @System
      - package python3-keystone+memcache-1:25.0.0-1.el9s.noarch from @System requires python3-keystone = 1:25.0.0-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-keystone-1:25.0.0-1.el9s.noarch
      - problem with installed package python3-keystone+memcache-1:25.0.0-1.el9s.noarch
     Problem 2: cannot install both python3-oslo-messaging-16.1.0-1.el9s.noarch from openstack-epoxy and python3-oslo-messaging-14.7.2-1.el9s.noarch from @System
      - package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch from @System requires python3-oslo-messaging = 14.7.2-1.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-oslo-messaging-14.7.2-1.el9s.noarch
      - problem with installed package python3-oslo-messaging+amqp1-14.7.2-1.el9s.noarch
     Problem 3: cannot install both python3-requests-2.32.3-4.el9s.noarch from openstack-epoxy and python3-requests-2.31.0-3.el9s.noarch from @System
      - package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch from @System requires python3-requests = 2.31.0-3.el9s, but none of the providers can be installed
      - cannot install the best update candidate for package python3-requests-2.31.0-3.el9s.noarch
      - problem with installed package python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    
    # The following packages must therefore be removed
    
    rpm -e --nodeps python3-keystone+memcache
    rpm -e --nodeps python3-requests+use_chardet_on_py3-2.31.0-3.el9s.noarch
    rpm -e --nodeps python3-oslo-messaging+amqp1
    
    dnf update -y
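The packages to remove are exactly the `name+feature` extras subpackages that appear in dnf's conflict report. Rather than typing them by hand, they can be extracted from a saved copy of the error output (a sketch; the log path in the usage comment is hypothetical):

```shell
# Extract the "+feature" extras subpackages (e.g.
# python3-keystone+memcache) from dnf's conflict report, so the
# rpm -e --nodeps list can be built instead of typed by hand.
conflicting_extras() {
    grep -oE '[a-z0-9-]+\+[a-z0-9_]+-[0-9][^ ]*\.noarch' "$1" \
        | sed -E 's/-[0-9].*//' \
        | sort -u
}

# Hypothetical usage: save the errors with "dnf update 2> /tmp/dnf-errors.txt", then:
# for p in $(conflicting_extras /tmp/dnf-errors.txt); do rpm -e --nodeps "$p"; done
```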
    
    
    Code Block
    languagebash
    titleresult
    collapsetrue
    # TO BE VERIFIED FOR EPOXY
    # During the update new rpms are downloaded: watch out for these configuration files
    
    cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.$REL
    cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.$REL
    cp /etc/nova/nova.conf /etc/nova/nova.conf.$REL
    cp /etc/placement/placement.conf /etc/placement/placement.conf.$REL
    cp /etc/heat/heat.conf /etc/heat/heat.conf.$REL
    cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.$REL
    cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.$REL
    cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.$REL
    cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.$REL
    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.$REL
    cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.$REL
    cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.$REL
    cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.$REL
    cp /etc/httpd/conf.d/auth_openidc.conf /etc/httpd/conf.d/auth_openidc.conf.$REL
    
    mv -f /etc/openstack-dashboard/local_settings.rpmnew /etc/openstack-dashboard/local_settings
    mv -f /etc/neutron/neutron.conf.rpmnew /etc/neutron/neutron.conf
    mv -f /etc/nova/nova.conf.rpmnew /etc/nova/nova.conf
    mv -f /etc/placement/placement.conf.rpmnew /etc/placement/placement.conf
    mv -f /etc/heat/heat.conf.rpmnew /etc/heat/heat.conf
    mv -f /etc/neutron/dhcp_agent.ini.rpmnew /etc/neutron/dhcp_agent.ini
    mv -f /etc/neutron/l3_agent.ini.rpmnew /etc/neutron/l3_agent.ini
    mv -f /etc/neutron/metadata_agent.ini.rpmnew /etc/neutron/metadata_agent.ini
    mv -f /etc/neutron/plugins/ml2/ml2_conf.ini.rpmnew /etc/neutron/plugins/ml2/ml2_conf.ini
    mv -f /etc/neutron/plugins/ml2/openvswitch_agent.ini.rpmnew /etc/neutron/plugins/ml2/openvswitch_agent.ini
    mv -f /etc/keystone/keystone.conf.rpmnew /etc/keystone/keystone.conf
    mv -f /etc/glance/glance-api.conf.rpmnew /etc/glance/glance-api.conf
    mv -f /etc/cinder/cinder.conf.rpmnew /etc/cinder/cinder.conf
    mv -f /etc/httpd/conf.d/auth_openidc.conf.rpmnew /etc/httpd/conf.d/auth_openidc.conf
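The cp/mv pairs above all follow the same pattern (save the current file with the release suffix, then adopt the .rpmnew from the updated package); a loop keeps the two lists from drifting apart (a sketch, using the same file list as above):

```shell
# For each config file that received a .rpmnew during the update:
# keep the pre-upgrade version with a release suffix, then replace
# the file with the packaged .rpmnew.
REL=caracal
adopt_rpmnew() {
    local f
    for f in "$@"; do
        if [ -f "$f.rpmnew" ]; then
            cp -p "$f" "$f.$REL"
            mv -f "$f.rpmnew" "$f"
        fi
    done
}

# Illustrative usage:
# adopt_rpmnew /etc/nova/nova.conf /etc/neutron/neutron.conf /etc/keystone/keystone.conf
```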
  • Update the configurations with puppet

    Code Block
    languagebash
    # WARNING: edit puppet's service.pp so that it does not start the
    # services once they are updated (see init.pp)
    
    Switch the controller's puppet class to the Epoxy one.
    In Foreman, first set all services to
     ensure      => stopped,
     enable      => false,
    and commit the change to git.
    
    Then, from the Foreman web UI, replace the controller's puppet class with the Epoxy one:
    at https://cld-config.cloud.pd.infn.it/hosts/controller-xx.cloud.pd.infn.it edit the host, replacing the hostgroup "hosts_all/ComputeNode-Test" with "hosts_all/ComputeNode-Test_Epoxy"
    
    Finally, on the controller, run
    puppet agent -t
    


  • KEYSTONE

    Code Block
    languagebash
    # TODO: back up the keystone database
    
    su -s /bin/sh -c "keystone-manage doctor" keystone
    
    [root@controller-01 StartServices]# su -s /bin/sh -c "keystone-manage doctor" keystone
    WARNING: `keystone.conf [cache] enabled` is not enabled.
        Caching greatly improves the performance of keystone, and it is highly
        recommended that you enable it.
    
    su -s /bin/sh -c "keystone-manage db_sync --expand" keystone
    
    After controller2 has also been updated and httpd restarted, run
    su -s /bin/sh -c "keystone-manage db_sync --contract" keystone    
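keystone-manage also provides `db_sync --check`, which reports the pending phase through its exit status (0 complete, 2 expand needed, 3 migrate needed, 4 contract needed, per the upstream upgrade docs; verify the codes on this release). A small helper makes the rolling-upgrade state explicit:

```shell
# Translate the exit status of "keystone-manage db_sync --check"
# into the next action of the expand/migrate/contract sequence.
keystone_db_phase() {
    case "$1" in
        0) echo "db up to date" ;;
        2) echo "run: keystone-manage db_sync --expand" ;;
        3) echo "run: keystone-manage db_sync --migrate" ;;
        4) echo "run: keystone-manage db_sync --contract" ;;
        *) echo "unexpected status: $1" ;;
    esac
}

# On the controller (needs a live keystone database):
# su -s /bin/sh -c "keystone-manage db_sync --check" keystone; keystone_db_phase $?
```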
  • PLACEMENT
  • Code Block
    languagebash
    su -s /bin/sh -c "placement-manage db sync" placement 
    
    Use the HAproxy config file that points the three services keystone, placement and dashboard (memcached) to controller-01, commenting out controller-02.
    On cld-config the file is /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/controller01httpd.cfg
    
    procedure:
    
    1) start the services for keystone, placement and the dashboard
    systemctl start httpd.service memcached.service shibd.service
    
    2) on cld-config, copy the file /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/controller01httpd.cfg to /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    cp /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/controller01httpd.cfg /etc/puppetlabs/code/environments/production/modules/cloudtest_haproxy/files/haproxy_el9.cfg
    
    3) run puppet on the three haproxy nodes
    ssh root@cld-haproxy-test-01 / 02 / 03
    puppet agent -t
    
    4) stop and disable puppet on controller-02
    systemctl stop puppet
    systemctl disable puppet
    
    5) stop and disable the services on controller-02
    systemctl stop httpd.service memcached.service shibd.service
    systemctl disable httpd.service memcached.service shibd.service
    
    
    Check that everything works at the dashboard level, in particular the GPU booking calendar (if it does not work, Sergio steps in)
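Before declaring the switch done, a quick TCP probe of the services now served only by controller-01 helps. The ports below are the usual defaults (5000 keystone, 8778 placement, 443 dashboard), not taken from this deployment's haproxy config:

```shell
# Probe host:port pairs over TCP using bash's /dev/tcp and report
# which ones answer within 2 seconds.
probe_endpoints() {
    local hp host port
    for hp in "$@"; do
        host=${hp%:*}
        port=${hp#*:}
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "$hp OK"
        else
            echo "$hp DOWN"
        fi
    done
}

# probe_endpoints controller-01:5000 controller-01:8778 controller-01:443
```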
    
    
  • GLANCE

    Code Block
    languagebash
    WARNING: check whether there is a required order for the glance update (can two different releases run at the same time?)
    
    A zero-downtime DB upgrade procedure exists, but it is considered not production ready, or the documentation is out of date (https://docs.openstack.org/glance/2025.1/admin/zero-downtime-db-upgrade.html)
    For glance it is probably safer not to take risks and accept the downtime during the update
    
    su -s /bin/sh -c "glance-manage db expand" glance
    su -s /bin/sh -c "glance-manage db migrate" glance
    
    systemctl start openstack-glance-api.service
    
    Modify HAproxy so that glance points to controller1
    
    Stop the service on controller2
    systemctl stop openstack-glance-api.service
    
    When controller2 has also been updated, run
    su -s /bin/sh -c "glance-manage db contract" glance
    
    
  • NOVA
    Code Block
    languagebash
    su -s /bin/sh -c "nova-status upgrade check" nova
    su -s /bin/sh -c "nova-manage api_db sync" nova
    su -s /bin/sh -c "nova-manage db sync" nova
    
    Start the services on controller1
    systemctl start \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
    
    Modify HAproxy so that nova points to controller1
    
    Stop the services on controller2
    
     systemctl stop \
        openstack-nova-api.service \
        openstack-nova-scheduler.service \
        openstack-nova-conductor.service \
        openstack-nova-novncproxy.service
     
    When controller2 and all the compute nodes have also been updated, run again
    su -s /bin/sh -c "nova-manage db online_data_migrations" nova
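`db online_data_migrations` is designed to be run in batches until it reports completion; the nova-manage exit codes (0 = nothing left to migrate, 1 = some migrations ran or remain when batching with --max-count, 2 = errors) allow a simple retry loop. A sketch; the same loop fits the cinder-manage command further down:

```shell
# Re-run a batched migration command until it reports completion
# (exit 0). Exit 1 means more rows remain; anything else is an error.
run_online_migrations() {
    local rc
    while true; do
        "$@"
        rc=$?
        case $rc in
            0) echo "migrations complete"; return 0 ;;
            1) echo "more migrations pending, re-running..." ;;
            *) echo "migration error (exit $rc)" >&2; return "$rc" ;;
        esac
    done
}

# On the controller, once controller2 and the compute nodes are updated:
# run_online_migrations su -s /bin/sh -c "nova-manage db online_data_migrations --max-count 1000" nova
```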
  • NEUTRON

    Code Block
    languagebash
    su -s /bin/sh -c "neutron-db-manage upgrade --expand" neutron
    
    Start the services
    
    systemctl start neutron-server.service \
      neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    
    systemctl start neutron-l3-agent.service
    
    Modify HAproxy so that neutron points to controller1
    
    Stop the services on controller2
    systemctl stop neutron-server.service \
      neutron-openvswitch-agent.service neutron-dhcp-agent.service \
      neutron-metadata-agent.service
    
    systemctl stop neutron-l3-agent.service
    
    When controller2 has also been updated, run
    
    su -s /bin/sh -c "neutron-db-manage upgrade --contract" neutron
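After restarting the agents it is worth confirming they all report alive. A sketch that counts dead agents from the machine-readable output of `openstack network agent list`; the Alive column values ("true" in value format, ":-)" in table format) are assumptions based on the usual CLI output:

```shell
# Count lines on stdin that do not look like an alive agent marker.
dead_agents() {
    awk '$0 != "true" && $0 != ":-)"' | wc -l
}

# Usage after restarting the agents:
# openstack network agent list -f value -c Alive | dead_agents   # 0 means all alive
```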


  • CINDER

    Code Block
    languagebash
    su -s /bin/sh -c "cinder-manage db sync" cinder
    
    Start the services on controller1
    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
    
    Modify HAproxy so that the services point to controller1
    
    Stop them on controller2
    
    systemctl stop openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
    
    When controller2 has been updated, rerun the online data migrations
    su -s /bin/sh -c "cinder-manage db online_data_migrations" cinder


  • HEAT

    Code Block
    languagebash
    su -s /bin/sh -c "heat-manage db_sync --command expand" heat
    su -s /bin/sh -c "heat-manage db_sync --command migrate_data" heat
    
    Start the services on controller1
    systemctl start openstack-heat-api.service \
      openstack-heat-api-cfn.service openstack-heat-engine.service
    
    Modify HAproxy so that the services point to controller1
    
    and stop them on controller2
    
     systemctl stop openstack-heat-api.service \
      openstack-heat-api-cfn.service openstack-heat-engine.service
     
    When controller2 has also been updated, run
    su -s /bin/sh -c "heat-manage db_sync --command contract" heat


  • DASHBOARD: nothing to do

...