Add New Compute Node to Existing OpenStack using Packstack

OpenStack is a reliable cloud solution that provides extensibility and scalability. That means if the cloud is running out of resources for new tenants and instances, it can easily be extended with new hypervisors (Compute nodes), practically on-line.

In this tutorial we will extend an existing OpenStack installation (Controller node, Compute node) with a new Compute0 node on-line, without shutting down the existing nodes. The easiest and fastest way to extend an existing OpenStack cloud on-line is Packstack, the automated installer script.

Existing nodes:
Controller node: 192.168.2.4, CentOS 7
Compute node: 192.168.2.5, CentOS 7

New node:
Compute0 node: 192.168.2.8, CentOS 7

Steps:

1. Modify answer file on Controller node

Log in to the Controller node as root and back up your existing answers.txt file:

[root@controller ~]# cp /root/answers.txt /root/answers.txt.backup

Modify the following parameters in the existing answers.txt file to look like below:

[root@controller ~]# vim /root/answers.txt
EXCLUDE_SERVERS=192.168.2.4,192.168.2.5
CONFIG_COMPUTE_HOSTS=192.168.2.5,192.168.2.8

Note: Ensure you have set the correct IPs in the EXCLUDE_SERVERS parameter to prevent the existing nodes from being accidentally re-installed!

Attached is the answers.txt file we used when adding the new Compute0 node.
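
If you prefer to change both parameters non-interactively, here is a quick sed sketch (assuming both parameters are already present in /root/answers.txt):

[root@controller ~]# sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=192.168.2.4,192.168.2.5/' /root/answers.txt
[root@controller ~]# sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.2.5,192.168.2.8/' /root/answers.txt
[root@controller ~]# grep -E '^(EXCLUDE_SERVERS|CONFIG_COMPUTE_HOSTS)=' /root/answers.txt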

2. Prepare new node for OpenStack deployment

Prepare the new hardware (example commands follow the note below):
– install CentOS 7 64-bit on the new hardware
– disable and stop the NetworkManager service
– install the openstack/juno repository from RDO

Note: check the article Install OpenStack Juno on CentOS 7 / RHEL 7 for details on how to prepare a node for OpenStack deployment.
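
A minimal preparation sketch for the new node (the RDO release RPM URL below is illustrative; use the Juno repository package referenced in the article above):

[root@compute0 ~]# systemctl stop NetworkManager
[root@compute0 ~]# systemctl disable NetworkManager
[root@compute0 ~]# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno.rpm
[root@compute0 ~]# yum -y update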

3. Add new node to existing OpenStack

Add the new Compute0 node using the Packstack installer script:

[root@controller ~]# packstack --answer-file=/root/answers.txt

Deployment of OpenStack on the new node may take a while:

Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20151121-205216-By3WFS/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
root@192.168.2.8's password: 
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Preparing servers                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                      [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Redis manifest entries                        [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding Nagios server manifest entries                [ DONE ]
Adding Nagios host manifest entries                  [ DONE ]
Adding post install manifest entries                 [ DONE ]
Installing Dependencies                              [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.2.8_prescript.pp
192.168.2.8_prescript.pp:                            [ DONE ]       
Applying 192.168.2.8_chrony.pp
192.168.2.8_chrony.pp:                               [ DONE ]    
Applying 192.168.2.8_nova.pp
192.168.2.8_nova.pp:                                 [ DONE ]  
Applying 192.168.2.8_neutron.pp
192.168.2.8_neutron.pp:                              [ DONE ]     
Applying 192.168.2.8_nagios_nrpe.pp
192.168.2.8_nagios_nrpe.pp:                          [ DONE ]         
Applying 192.168.2.8_postscript.pp
192.168.2.8_postscript.pp:                           [ DONE ]        
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.4. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.2.4/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://192.168.2.4/nagios username: nagiosadmin, password: 72659a0e75ee4f48
 * The installation log file is available at: /var/tmp/packstack/20151121-205216-By3WFS/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20151121-205216-By3WFS/manifests

4. Verify new node

Check if the Compute0 node is now included in the OpenStack cloud. Source the admin keystone file to import the OpenStack admin credentials into your session variables:

[root@controller ~]# source /root/keystonerc_admin 

Check the existing nodes:

[root@controller ~(keystone_admin)]# nova-manage host list
host                     	zone           
controller               	internal       
compute                  	nova           
compute0                 	nova           

Verify that the nova-compute service is running on the new Compute0 node:

[root@controller ~(keystone_admin)]# nova-manage service list
Binary           Host        Zone      Status     State   Updated_At
nova-consoleauth controller  internal  enabled    ':-)'   2015-11-21 22:06:36
nova-scheduler   controller  internal  enabled    ':-)'   2015-11-21 22:06:38
nova-conductor   controller  internal  enabled    ':-)'   2015-11-21 22:06:37
nova-cert        controller  internal  enabled    ':-)'   2015-11-21 22:06:38
nova-compute     compute     nova      enabled    ':-)'   2015-11-21 22:06:37
nova-compute     compute0    nova      enabled    ':-)'   2015-11-21 22:06:37
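
The same can be verified with the regular nova client, if you prefer it over the nova-manage commands (with keystonerc_admin sourced):

[root@controller ~(keystone_admin)]# nova service-list
[root@controller ~(keystone_admin)]# nova hypervisor-list

nova service-list should show nova-compute on compute0 in the up state, and nova hypervisor-list should include compute0 among the registered hypervisors.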

5. Configure new node

Execute the commands below on the Compute0 node only, to configure it for internal network traffic.

Create the eth1 interface configuration file (if it doesn’t exist):

[root@compute0 ~]# touch /etc/sysconfig/network-scripts/ifcfg-eth1

Check the MAC address of the eth1 interface:

[root@compute0 ~]# ip addr show eth1 | grep link/ether
   link/ether 52:54:00:44:74:d7 brd ff:ff:ff:ff:ff:ff

Modify the ifcfg-eth1 file to look like below (use the MAC address of your eth1 interface):

HWADDR=52:54:00:44:74:d7
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
NM_CONTROLLED=no
ONBOOT=yes

Bring up the eth1 interface:

[root@compute0 ~]# ifup eth1

Check if the eth1 interface is now up:

[root@compute0 ~]# ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
   link/ether 52:54:00:44:74:d7 brd ff:ff:ff:ff:ff:ff
   inet6 fe80::5054:ff:fe44:74d7/64 scope link
   valid_lft forever preferred_lft forever

Add the eth1 interface as a port to the br-eth1 OVS bridge:

[root@compute0 ~]# ovs-vsctl add-port br-eth1 eth1

Verify the new OVS configuration, including the eth1 port attached to the br-eth1 OVS bridge:

[root@compute0 ~]# ovs-vsctl show
81941570-6478-47b3-b3ed-abb105fe3ff6
    Bridge br-int
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    ovs_version: "2.3.1"
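
Alternatively, a quick check that lists only the ports attached to the br-eth1 bridge:

[root@compute0 ~]# ovs-vsctl list-ports br-eth1

The output should include the eth1 and phy-br-eth1 ports.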

Restart the network service:

[root@compute0 ~]# systemctl restart network
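
From the Controller node you can additionally check that the Neutron Open vSwitch agent on compute0 has registered and is alive (with keystonerc_admin sourced):

[root@controller ~(keystone_admin)]# neutron agent-list

The new compute0 host should appear in the list with its Open vSwitch agent marked as alive.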

6. Test functionality of new node

To test the operability of the newly added Compute0 node, launch an instance on it and check whether the instance gets an internal IP address from the Neutron DHCP server on the internal network.
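
A minimal sketch of such a test from the Controller node, assuming a cirros image, an m1.tiny flavor and an internal network named int_net already exist in your cloud (adjust the names to your environment):

[root@controller ~(keystone_admin)]# nova boot --flavor m1.tiny --image cirros \
    --nic net-id=$(neutron net-show int_net -F id -f value) \
    --availability-zone nova:compute0 test-compute0
[root@controller ~(keystone_admin)]# nova list

The --availability-zone nova:compute0 parameter forces the scheduler to place the instance on the new node; nova list should then show the instance in ACTIVE state with an IP address from the internal subnet.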

And that’s it, we have just added another Compute node to the existing OpenStack cloud 🙂