...

In the following instructions, the node to be reinstalled is cld-np-19, an INFN node.


First of all, take note of the IP addresses used by the VM for the management and data networks (192.168.60.x and 192.168.61.x) and of the relevant interfaces.
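
One way to record this information before starting (a minimal sketch using standard iproute2 commands, to be run on the node before the reinstallation):

Code Block
languagebash
# List the IPv4 addresses and the interfaces they are bound to,
# restricted to the management and data subnets
ip -4 -o addr show | grep -E '192\.168\.6[01]\.'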

...

  • The hostgroup must be hosts_all
  • The kickstart to be used must be TBC
  • TBC


When the node restarts after the update, make sure that SELinux is disabled:
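
A quick way to check (assuming a standard EL-style setup where SELinux is configured via /etc/selinux/config):

Code Block
languagebash
# Should report "Disabled"
getenforce
# Also check the persistent setting
grep '^SELINUX=' /etc/selinux/config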

...

Code Block
languagebash
modprobe br_netfilter

and then rerun puppet:
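
For example (the same manual agent run used later on this page for the controller nodes):

Code Block
languagebash
puppet agent -t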


If the procedure terminated without errors, enable puppet and reboot the host:

Code Block
languagebash
systemctl enable puppet
shutdown -r now


Enable the Nagios LVS check:


Log in to cld-nagios.cloud.pd.infn.it:


cd /etc/nagios/objects/


Edit cloudcomputenodes.cfg (or whichever file defines this compute node) and add a passive check:


Code Block
define service{
        use                             LVS       ; Name of service template to use
        host_name                       cld-np-19
        service_description             LVS
        freshness_threshold             28800
        }


Make sure there are no problems in the configuration:


Code Block
[root@cld-nagios objects]# nagios -v /etc/nagios/nagios.cfg


and restart Nagios to apply the new check:


Code Block
[root@cld-nagios objects]# systemctl restart nagios






  • Enable the checks in Nagios for this host

  • Wait until all checks are OK (in particular the VM network and volume ones): remember to enable the node only for the time needed to force the check run from Nagios; otherwise the check will fail.
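
    If the node has to be enabled only temporarily to let the Nagios checks run, a possible sequence is the following (a sketch, reusing the enable/disable form of the command shown further down this page):

    Code Block
    languagebash
    # Temporarily enable the compute service so the checks can succeed
    openstack compute service set --enable cld-np-19.cloud.pd.infn.it nova-compute
    # Force the service checks from the Nagios web interface, then disable the service again
    openstack compute service set --disable cld-np-19.cloud.pd.infn.it nova-compute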

  • Add this node in the following scripts on cld-ctrl-01: /usr/local/bin/display_usage_of_hypervisors.sh, /usr/local/bin/host_aggregate.sh, /usr/local/bin/new_project.sh, /usr/local/bin/free_resources_compute_nodes.sh
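
    A quick way to verify that the node appears in all of these scripts (a sketch; it only checks that the hostname is mentioned in each file):

    Code Block
    languagebash
    # Each file should report at least one match
    grep -c cld-np-19 /usr/local/bin/display_usage_of_hypervisors.sh \
                      /usr/local/bin/host_aggregate.sh \
                      /usr/local/bin/new_project.sh \
                      /usr/local/bin/free_resources_compute_nodes.sh
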
  • Add this node in the /etc/cron.daily/vm-log.sh script on cld-log
  • Create the directory /var/disk-iscsi/qemu/cld-np-19 on cld-log:
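
    For example, as root:

    Code Block
    languagebash
    mkdir -p /var/disk-iscsi/qemu/cld-np-19
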
  • Verify that an ssh from root@cld-log to root@cld-np-19 works without requiring a password:
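
    One way to test this from cld-log (a sketch; BatchMode makes ssh fail instead of prompting for a password, and the FQDN is assumed to resolve as in the other steps):

    Code Block
    languagebash
    # Prints the remote hostname only if key-based authentication works
    ssh -o BatchMode=yes root@cld-np-19.cloud.pd.infn.it hostname
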
  • Stop puppet on the 2 controller nodes:

    Code Block
    languagebash
    systemctl stop puppet


  • Add the new compute node in cld-config:/var/puppet/puppet_yoga/controller_yoga/templates/aai_settings.py.erb

  • Run puppet on the first controller node (this will trigger a restart of httpd):

    Code Block
    languagebash
    puppet agent -t


  • Run puppet on the second controller node (this will trigger a restart of httpd):

    Code Block
    languagebash
    puppet agent -t
    


  • Start puppet on the two controller nodes:

    Code Block
    languagebash
    systemctl start puppet 
    


  • Enable the host:

    Code Block
    languagebash
    openstack compute service set --enable cld-np-19.cloud.pd.infn.it nova-compute
    


  • Add the host to the 'admin' aggregate:
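
    A possible command (a sketch, assuming the host is registered with its FQDN as in the previous step):

    Code Block
    languagebash
    openstack aggregate add host admin cld-np-19.cloud.pd.infn.it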

...