OpenStack Kilo 3 Node Installation (Controller, Network, Compute) on CentOS 7

In this tutorial we will install the OpenStack Kilo release from the RDO repository on three nodes (Controller, Network, Compute) running the CentOS 7 operating system, using the packstack automated installer. The installation uses a VLAN-based internal software network infrastructure for communication between instances.

Environment used:
public network (Floating IP network): 192.168.2.0/24
internal network (on each node): no IP space, physical connection only (eth1)
controller node public IP: 192.168.2.12 (eth0)
network node public IP: 192.168.2.13 (eth0)
compute node public IP: 192.168.2.14 (eth0)
OS version (each node): CentOS Linux release 7.2.1511 (Core)
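Although this tutorial addresses the nodes by IP address throughout, it can be convenient to map the hostnames as well (the names below match the shell prompts used in the listings). Entries for /etc/hosts on each node:

```
192.168.2.12    controller
192.168.2.13    network
192.168.2.14    compute
```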

Controller node interfaces configuration before OpenStack installation:

[root@controller ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:00:cb:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.12/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 78230sec preferred_lft 78230sec
    inet6 fe80::5054:ff:fe00:cb3f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:14:1f:a8 brd ff:ff:ff:ff:ff:ff

Network node interfaces configuration before OpenStack installation:

[root@network ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.13/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 79917sec preferred_lft 79917sec
    inet6 fe80::5054:ff:fe31:b1ca/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff

Compute node interfaces configuration before OpenStack installation:

[root@compute ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.14/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 84744sec preferred_lft 84744sec
    inet6 fe80::5054:ff:fe53:9d7b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff

Steps:

1. Update the system on all nodes (Controller, Network, Compute); reboot afterwards if a new kernel was installed:

[root@controller ~]# yum update
[root@network ~]# yum update
[root@compute ~]# yum update

2. Install RDO repository (Controller node):

[root@controller ~]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm

Verify installed RDO package:

[root@controller ~]# rpm -qa | grep rdo-release
rdo-release-kilo-1.noarch

3. Install packstack automated installer (Controller node):

[root@controller ~]# yum install openstack-packstack

4. Disable and Stop NetworkManager on all nodes (Controller, Network, Compute)

Neutron does not (as of the OpenStack Kilo release) support NetworkManager, so we have to stop and disable it on all nodes. Make sure the legacy network service stays enabled in its place (systemctl enable network), otherwise the nodes may come up without connectivity after a reboot:

[root@controller ~]# systemctl stop NetworkManager
[root@controller ~]# systemctl disable NetworkManager
[root@network ~]# systemctl stop NetworkManager
[root@network ~]# systemctl disable NetworkManager
[root@compute ~]# systemctl stop NetworkManager
[root@compute ~]# systemctl disable NetworkManager

5. Generate answer file for packstack automated installation (Controller node):

[root@controller ~]# packstack --gen-answer-file=/root/answers.txt

Back up the answer file (/root/answers.txt) before we start modifying it:

[root@controller ~]# cp /root/answers.txt /root/answers.txt.backup

Now edit the answer file (/root/answers.txt) and modify the parameters below (Controller node):

CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_NAGIOS_INSTALL=n
CONFIG_CONTROLLER_HOST=192.168.2.12
CONFIG_COMPUTE_HOSTS=192.168.2.14
CONFIG_NETWORK_HOSTS=192.168.2.13
CONFIG_USE_EPEL=y
CONFIG_RH_OPTIONAL=n
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_PROVISION_DEMO=n
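The edits above can also be applied non-interactively with sed. The sketch below demonstrates the pattern on a stand-in file with made-up default values (the real defaults in your generated answers.txt will differ); on the Controller node you would run the same sed expressions against /root/answers.txt, extending the list to cover the remaining parameters:

```shell
# Stand-in answer file for demonstration only; on the Controller node,
# target /root/answers.txt instead. The values on the right-hand side of
# the substitutions are the ones used in this tutorial.
demo=/tmp/answers-demo.txt
cat > "$demo" <<'EOF'
CONFIG_NAGIOS_INSTALL=y
CONFIG_CONTROLLER_HOST=10.0.0.1
CONFIG_COMPUTE_HOSTS=10.0.0.1
CONFIG_NETWORK_HOSTS=10.0.0.1
CONFIG_PROVISION_DEMO=y
EOF

# Replace each parameter line wholesale, anchored at start of line.
sed -i \
    -e 's/^CONFIG_NAGIOS_INSTALL=.*/CONFIG_NAGIOS_INSTALL=n/' \
    -e 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.168.2.12/' \
    -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.2.14/' \
    -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.2.13/' \
    -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
    "$demo"

grep '^CONFIG_CONTROLLER_HOST' "$demo"
```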

Here we attach the answers.txt file used during our 3 node OpenStack Kilo installation.

Note: we left the rest of the parameters at their default values, as they are not critical for the installation to succeed, but feel free to modify them according to your needs.

6. Install OpenStack Kilo using packstack (Controller node)

Launch packstack automated installation (Controller node):

[root@controller ~]# packstack --answer-file=/root/answers.txt

The installation takes about an hour. Packstack prompts for the root password of each node (Controller, Network, Compute) so that it can deploy OpenStack services on all nodes using Puppet:

Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160320-230116-mT1aV6/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
root@192.168.2.12's password: 
root@192.168.2.13's password: 
root@192.168.2.14's password: 
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Setting up CACERT                                    [ DONE ]
Adding AMQP manifest entries                         [ DONE ]
Adding MariaDB manifest entries                      [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Adding Keystone manifest entries                     [ DONE ]
Adding Glance Keystone manifest entries              [ DONE ]
Adding Glance manifest entries                       [ DONE ]
Adding Cinder Keystone manifest entries              [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Cinder manifest entries                       [ DONE ]
Adding Nova API manifest entries                     [ DONE ]
Adding Nova Keystone manifest entries                [ DONE ]
Adding Nova Cert manifest entries                    [ DONE ]
Adding Nova Conductor manifest entries               [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Adding Nova Compute manifest entries                 [ DONE ]
Adding Nova Scheduler manifest entries               [ DONE ]
Adding Nova VNC Proxy manifest entries               [ DONE ]
Adding OpenStack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries                  [ DONE ]
Adding Neutron FWaaS Agent manifest entries          [ DONE ]
Adding Neutron LBaaS Agent manifest entries          [ DONE ]
Adding Neutron API manifest entries                  [ DONE ]
Adding Neutron Keystone manifest entries             [ DONE ]
Adding Neutron L3 manifest entries                   [ DONE ]
Adding Neutron L2 Agent manifest entries             [ DONE ]
Adding Neutron DHCP Agent manifest entries           [ DONE ]
Adding Neutron Metering Agent manifest entries       [ DONE ]
Adding Neutron Metadata Agent manifest entries       [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Adding OpenStack Client manifest entries             [ DONE ]
Adding Horizon manifest entries                      [ DONE ]
Adding Swift Keystone manifest entries               [ DONE ]
Adding Swift builder manifest entries                [ DONE ]
Adding Swift proxy manifest entries                  [ DONE ]
Adding Swift storage manifest entries                [ DONE ]
Adding Swift common manifest entries                 [ DONE ]
Adding MongoDB manifest entries                      [ DONE ]
Adding Redis manifest entries                        [ DONE ]
Adding Ceilometer manifest entries                   [ DONE ]
Adding Ceilometer Keystone manifest entries          [ DONE ]
Adding post install manifest entries                 [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.2.12_prescript.pp
Applying 192.168.2.13_prescript.pp
Applying 192.168.2.14_prescript.pp
192.168.2.12_prescript.pp:                           [ DONE ]        
192.168.2.14_prescript.pp:                           [ DONE ]        
192.168.2.13_prescript.pp:                           [ DONE ]        
Applying 192.168.2.12_chrony.pp
Applying 192.168.2.13_chrony.pp
Applying 192.168.2.14_chrony.pp
192.168.2.13_chrony.pp:                              [ DONE ]     
192.168.2.12_chrony.pp:                              [ DONE ]     
192.168.2.14_chrony.pp:                              [ DONE ]     
Applying 192.168.2.12_amqp.pp
Applying 192.168.2.12_mariadb.pp
192.168.2.12_amqp.pp:                                [ DONE ]      
192.168.2.12_mariadb.pp:                             [ DONE ]      
Applying 192.168.2.12_keystone.pp
Applying 192.168.2.12_glance.pp
Applying 192.168.2.12_cinder.pp
192.168.2.12_keystone.pp:                            [ DONE ]       
192.168.2.12_glance.pp:                              [ DONE ]       
192.168.2.12_cinder.pp:                              [ DONE ]       
Applying 192.168.2.12_api_nova.pp
192.168.2.12_api_nova.pp:                            [ DONE ]       
Applying 192.168.2.12_nova.pp
Applying 192.168.2.14_nova.pp
192.168.2.12_nova.pp:                                [ DONE ]   
192.168.2.14_nova.pp:                                [ DONE ]   
Applying 192.168.2.12_neutron.pp
Applying 192.168.2.13_neutron.pp
Applying 192.168.2.14_neutron.pp
192.168.2.14_neutron.pp:                             [ DONE ]      
192.168.2.12_neutron.pp:                             [ DONE ]      
192.168.2.13_neutron.pp:                             [ DONE ]      
Applying 192.168.2.12_osclient.pp
Applying 192.168.2.12_horizon.pp
192.168.2.12_osclient.pp:                            [ DONE ]       
192.168.2.12_horizon.pp:                             [ DONE ]       
Applying 192.168.2.12_ring_swift.pp
192.168.2.12_ring_swift.pp:                          [ DONE ]         
Applying 192.168.2.12_swift.pp
192.168.2.12_swift.pp:                               [ DONE ]    
Applying 192.168.2.12_mongodb.pp
Applying 192.168.2.12_redis.pp
192.168.2.12_mongodb.pp:                             [ DONE ]      
192.168.2.12_redis.pp:                               [ DONE ]      
Applying 192.168.2.12_ceilometer.pp
192.168.2.12_ceilometer.pp:                          [ DONE ]         
Applying 192.168.2.12_postscript.pp
Applying 192.168.2.13_postscript.pp
Applying 192.168.2.14_postscript.pp
192.168.2.13_postscript.pp:                          [ DONE ]         
192.168.2.12_postscript.pp:                          [ DONE ]         
192.168.2.14_postscript.pp:                          [ DONE ]         
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.12. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.2.12/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * Because of the kernel update the host 192.168.2.12 requires reboot.
 * The installation log file is available at: /var/tmp/packstack/20160320-230116-mT1aV6/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20160320-230116-mT1aV6/manifests

Time to test our installation. To log in to Horizon (the OpenStack Dashboard), enter the following address in your web browser:

http://192.168.2.12/dashboard

You should see the Dashboard login screen; log in with the admin credentials (in our case: admin/password).
Network interfaces configuration on Controller node right after OpenStack installation:

[root@controller ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:00:cb:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.12/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 86123sec preferred_lft 86123sec
    inet6 fe80::5054:ff:fe00:cb3f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:14:1f:a8 brd ff:ff:ff:ff:ff:ff

Network interfaces configuration on Network node right after OpenStack installation:

[root@network ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.13/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 86119sec preferred_lft 86119sec
    inet6 fe80::5054:ff:fe31:b1ca/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether 6a:a3:0b:de:21:96 brd ff:ff:ff:ff:ff:ff
5: br-ex:  mtu 1500 qdisc noop state DOWN 
    link/ether 16:72:f3:cc:df:47 brd ff:ff:ff:ff:ff:ff
6: br-int:  mtu 1500 qdisc noop state DOWN 
    link/ether 8a:bf:ea:78:6d:46 brd ff:ff:ff:ff:ff:ff
7: br-eth1:  mtu 1500 qdisc noop state DOWN 
    link/ether a2:19:a2:50:ed:46 brd ff:ff:ff:ff:ff:ff

Network interfaces configuration on Compute node right after OpenStack installation:

[root@compute ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.14/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe53:9d7b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fee3:b2d4/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether ea:2e:e8:dd:5b:a7 brd ff:ff:ff:ff:ff:ff
5: br-eth1:  mtu 1500 qdisc noop state DOWN 
    link/ether b2:c8:a3:20:45:4d brd ff:ff:ff:ff:ff:ff
6: br-int:  mtu 1500 qdisc noop state DOWN 
    link/ether 8a:ec:90:42:80:43 brd ff:ff:ff:ff:ff:ff

Services on Controller node:

[root@controller ~]# systemctl list-unit-files | grep openstack
openstack-ceilometer-alarm-evaluator.service  enabled 
openstack-ceilometer-alarm-notifier.service   enabled 
openstack-ceilometer-api.service              enabled 
openstack-ceilometer-central.service          enabled 
openstack-ceilometer-collector.service        enabled 
openstack-ceilometer-notification.service     enabled 
openstack-ceilometer-polling.service          disabled
openstack-cinder-api.service                  enabled 
openstack-cinder-backup.service               enabled 
openstack-cinder-scheduler.service            enabled 
openstack-cinder-volume.service               enabled 
openstack-glance-api.service                  enabled 
openstack-glance-registry.service             enabled 
openstack-glance-scrubber.service             disabled
openstack-keystone.service                    disabled
openstack-losetup.service                     enabled 
openstack-nova-api.service                    enabled 
openstack-nova-cert.service                   enabled 
openstack-nova-conductor.service              enabled 
openstack-nova-console.service                disabled
openstack-nova-consoleauth.service            enabled 
openstack-nova-metadata-api.service           disabled
openstack-nova-novncproxy.service             enabled 
openstack-nova-scheduler.service              enabled 
openstack-nova-xvpvncproxy.service            disabled
openstack-swift-account-auditor.service       enabled 
openstack-swift-account-auditor@.service      disabled
openstack-swift-account-reaper.service        enabled 
openstack-swift-account-reaper@.service       disabled
openstack-swift-account-replicator.service    enabled 
openstack-swift-account-replicator@.service   disabled
openstack-swift-account.service               enabled 
openstack-swift-account@.service              disabled
openstack-swift-container-auditor.service     enabled 
openstack-swift-container-auditor@.service    disabled
openstack-swift-container-reconciler.service  disabled
openstack-swift-container-replicator.service  enabled 
openstack-swift-container-replicator@.service disabled
openstack-swift-container-updater.service     enabled 
openstack-swift-container-updater@.service    disabled
openstack-swift-container.service             enabled 
openstack-swift-container@.service            disabled
openstack-swift-object-auditor.service        enabled 
openstack-swift-object-auditor@.service       disabled
openstack-swift-object-expirer.service        enabled 
openstack-swift-object-replicator.service     enabled 
openstack-swift-object-replicator@.service    disabled
openstack-swift-object-updater.service        enabled 
openstack-swift-object-updater@.service       disabled
openstack-swift-object.service                enabled 
openstack-swift-object@.service               disabled
openstack-swift-proxy.service                 enabled 
[root@controller ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service                    disabled
neutron-l3-agent.service                      disabled
neutron-metadata-agent.service                disabled
neutron-netns-cleanup.service                 disabled
neutron-ovs-cleanup.service                   disabled
neutron-server.service                        enabled 
[root@controller ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service                   disabled

Services on Network node:

[root@network ~]# systemctl list-unit-files | grep openstack
[root@network ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service             enabled 
neutron-l3-agent.service               enabled 
neutron-metadata-agent.service         enabled 
neutron-netns-cleanup.service          disabled
neutron-openvswitch-agent.service      enabled 
neutron-ovs-cleanup.service            enabled 
neutron-server.service                 disabled
[root@network ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service            enabled 

Services on Compute node:

[root@compute ~]# systemctl list-unit-files | grep openstack
openstack-ceilometer-compute.service   enabled 
openstack-ceilometer-polling.service   disabled
openstack-nova-compute.service         enabled 
[root@compute ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service             disabled
neutron-l3-agent.service               disabled
neutron-metadata-agent.service         disabled
neutron-netns-cleanup.service          disabled
neutron-openvswitch-agent.service      enabled 
neutron-ovs-cleanup.service            enabled 
neutron-server.service                 disabled
[root@compute ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service            enabled 

OVS configuration on Controller node:

[root@controller ~]# ovs-vsctl show
-bash: ovs-vsctl: command not found

OVS configuration on Network node:

[root@network ~]# ovs-vsctl show
b2afe2a2-5573-4108-9c2a-347e8d91183e
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.1"

OVS configuration on Compute node:

[root@compute ~]# ovs-vsctl show
413d0132-aff0-4d98-ad6f-50b64b4bb13f
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
    ovs_version: "2.3.1"

As we can see, the majority of the network interfaces were created on the Network node, which is now responsible for handling inbound and outbound traffic. All the critical Neutron services are now installed on the Network node.

Now we will attach the physical network interfaces as ports to the OVS (Open vSwitch) bridges on the OpenStack nodes.

Note: we will not modify the interfaces on the Controller node, as the Controller does not run any network-related OpenStack services, so it is not involved in any traffic to / from OpenStack instances.

7. Configure network interfaces and Open vSwitch (OVS)

Back up the existing network interface files and create a new one (ifcfg-br-ex) on the Network node:

[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.backup
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup

Modify ifcfg-eth0 file on Network node to look like below:

DEVICE=eth0
ONBOOT=yes

Modify ifcfg-br-ex file on Network node to look like below:

DEVICE=br-ex
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.2.13
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=yes
DNS1=8.8.8.8
DNS2=8.8.4.4

Modify ifcfg-eth1 file on Network node to look like below:

DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
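The three files above can also be written in one pass. The sketch below writes them to a scratch directory (an assumption, so it can be tried safely); on the real Network node you would point DIR at /etc/sysconfig/network-scripts instead:

```shell
# Write the three interface files shown above. DIR is a scratch directory
# for demonstration; use /etc/sysconfig/network-scripts on the Network node.
DIR=/tmp/network-scripts-demo
mkdir -p "$DIR"

# eth0 keeps no IP configuration of its own; br-ex takes over the address.
cat > "$DIR/ifcfg-eth0" <<'EOF'
DEVICE=eth0
ONBOOT=yes
EOF

cat > "$DIR/ifcfg-br-ex" <<'EOF'
DEVICE=br-ex
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.2.13
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=yes
DNS1=8.8.8.8
DNS2=8.8.4.4
EOF

# eth1 stays unnumbered: it only carries the VLAN tenant traffic.
cat > "$DIR/ifcfg-eth1" <<'EOF'
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
EOF
```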

Connect eth0 interface as a port to br-ex bridge on Network node:

Note: the command below triggers a network restart, so you will lose the network connection for a moment! The connection should come back up, provided you modified the ifcfg-eth0 and ifcfg-br-ex files correctly.

[root@network ~]# ovs-vsctl add-port br-ex eth0; systemctl restart network

Now let’s connect the eth1 interface as a port to the br-eth1 bridge on the Network node (this will restart the network too):

[root@network ~]# ovs-vsctl add-port br-eth1 eth1; systemctl restart network

Verify new network interfaces configuration on Network node after our modifications (public IP is now assigned to br-ex interface):

[root@network ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:31:b1:ca brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe31:b1ca/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:00:0c:98 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe00:c98/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether 6a:a3:0b:de:21:96 brd ff:ff:ff:ff:ff:ff
5: br-ex:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 16:72:f3:cc:df:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.13/24 brd 192.168.2.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::1472:f3ff:fecc:df47/64 scope link 
       valid_lft forever preferred_lft forever
6: br-int:  mtu 1500 qdisc noop state DOWN 
    link/ether 8a:bf:ea:78:6d:46 brd ff:ff:ff:ff:ff:ff
7: br-eth1:  mtu 1500 qdisc noop state DOWN 
    link/ether a2:19:a2:50:ed:46 brd ff:ff:ff:ff:ff:ff

Verify OVS configuration on Network node. Now port eth0 should be assigned to br-ex and port eth1 should be assigned to br-eth1:

[root@network ~]# ovs-vsctl show
dbc8c4e4-d717-482c-83da-3f3aafe50ed5
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.4.0"

Back up the ifcfg-eth1 network interface file on the Compute node:

[root@compute ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup

Modify ifcfg-eth1 file on Compute node to look like below:

DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no

Connect eth1 interface as a port to br-eth1 bridge on Compute node (this will restart network):

[root@compute ~]# ovs-vsctl add-port br-eth1 eth1; systemctl restart network

Verify network interfaces configuration on Compute node after our modifications (eth1 interface should be UP now):

[root@compute ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:53:9d:7b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.14/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 85399sec preferred_lft 85399sec
    inet6 fe80::5054:ff:fe53:9d7b/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:e3:b2:d4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fee3:b2d4/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether 1e:45:da:ed:53:db brd ff:ff:ff:ff:ff:ff
5: br-eth1:  mtu 1500 qdisc noop state DOWN 
    link/ether b2:c8:a3:20:45:4d brd ff:ff:ff:ff:ff:ff
6: br-int:  mtu 1500 qdisc noop state DOWN 
    link/ether 8a:ec:90:42:80:43 brd ff:ff:ff:ff:ff:ff

Verify OVS configuration on Compute node. Now port eth1 should be assigned to br-eth1:

[root@compute ~]# ovs-vsctl show
413d0132-aff0-4d98-ad6f-50b64b4bb13f
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
    ovs_version: "2.3.1"

On the Compute node, set the VNC proxy client IP address in the /etc/nova/nova.conf file:

vncserver_proxyclient_address=192.168.2.14

Note: this parameter allows us to connect to an instance’s console via VNC in the Dashboard.
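This change can also be scripted with sed. The sketch below demonstrates the substitution on a stand-in file with a made-up placeholder value; on the Compute node you would target /etc/nova/nova.conf, where the option lives in the [DEFAULT] section:

```shell
# Stand-in nova.conf for demonstration only; on the Compute node,
# target /etc/nova/nova.conf instead.
demo=/tmp/nova-demo.conf
printf '[DEFAULT]\nvncserver_proxyclient_address=127.0.0.1\n' > "$demo"

# Replace the existing value with the Compute node's public IP.
sed -i \
    's/^vncserver_proxyclient_address=.*/vncserver_proxyclient_address=192.168.2.14/' \
    "$demo"

grep '^vncserver_proxyclient_address' "$demo"
```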

Restart openstack-nova-compute service on Compute node:

[root@compute ~]# systemctl restart openstack-nova-compute

8. Verify OpenStack services

After a packstack-based OpenStack installation, the file /root/keystonerc_admin is created on the Controller node. This file contains the admin credentials and other authentication parameters needed to operate and maintain our cloud. It looks like this:

unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.168.2.12:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne

Let’s source this file to import the OpenStack admin credentials into shell environment variables, so that we are not prompted for a password each time we invoke an OpenStack command:

[root@controller ~]# source /root/keystonerc_admin 
[root@controller ~(keystone_admin)]#

Note: after sourcing the file, our prompt should now include the keystone_admin phrase.
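Sourcing simply exports the OS_* variables into the current shell, where the OpenStack clients pick them up. A quick way to confirm the import worked, demonstrated here with a stand-in credentials file (on the Controller you would source /root/keystonerc_admin itself):

```shell
# Stand-in credentials file for demonstration; on the Controller node,
# source /root/keystonerc_admin instead.
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.2.12:5000/v2.0
EOF

# Import the variables into the current shell and list them.
. /tmp/keystonerc_demo
env | grep '^OS_'
```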

Verify OpenStack status:

[root@controller ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 741c84ddf2e648d0938a8009417707eb |   admin    |   True  |    root@localhost    |
| 72829abede3048deb8f59757d663bd76 | ceilometer |   True  | ceilometer@localhost |
| 349ec7422da94c61915e0efda2916c38 |   cinder   |   True  |   cinder@localhost   |
| 874ff05c243a465ab8c00d558f390e56 |   glance   |   True  |   glance@localhost   |
| d818b9abe3174c04a2cfb0955d1f0751 |  neutron   |   True  |  neutron@localhost   |
| 4f8f8913302148168b048053f808730b |    nova    |   True  |    nova@localhost    |
| 752db7525a1f474dad33141323297977 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
== Nova managed services ==
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2016-03-24T19:41:24.000000 | -               |
| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-03-24T19:41:24.000000 | -               |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-03-24T19:41:24.000000 | -               |
| 4  | nova-cert        | controller | internal | enabled | up    | 2016-03-24T19:41:24.000000 | -               |
| 5  | nova-compute     | compute    | nova     | enabled | up    | 2016-03-24T19:41:24.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+----+-------+------+
| ID | Label | Cidr |
+----+-------+------+
+----+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
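The "inactive (disabled on boot)" entries above are expected: in this 3-node layout the compute services run on the compute node and the Neutron agents on the network node, so they are not active on the controller. A quick way to pull out just those entries (simulated here with two sample lines; on the controller, pipe the real `openstack-status` output instead):

```shell
# Filter the status report down to the services marked inactive.
printf '%s\n' \
  'openstack-nova-api:                     active' \
  'openstack-nova-compute:                 inactive  (disabled on boot)' \
  | grep inactive
# -> openstack-nova-compute:                 inactive  (disabled on boot)
```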

Verify Neutron agent list:

[root@controller ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 38cfed83-711e-47e7-a519-a8121f8e54a9 | Open vSwitch agent | network | ":-)" | True           | neutron-openvswitch-agent |
| 5fe9033c-58c2-4998-b65b-19c4e0435c28 | L3 agent           | network | ":-)" | True           | neutron-l3-agent          |
| 78ff9457-a950-4d61-9879-83cbf01cbd5c | Metadata agent     | network | ":-)" | True           | neutron-metadata-agent    |
| 7cbb9bc6-ef97-4e0e-9620-421bc30d390d | DHCP agent         | network | ":-)" | True           | neutron-dhcp-agent        |
| cf34648f-754e-4d2c-984b-27ee7d726a72 | Open vSwitch agent | compute | ":-)" | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
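All five agents should report `:-)` in the alive column (two Open vSwitch agents, plus the L3, DHCP and metadata agents on the network node). That check can be done mechanically; the table below is a trimmed simulation of the output above — against a live controller, pipe the real `neutron agent-list` output instead:

```shell
# Count the agents whose alive column shows ":-)".
printf '%s\n' \
  '| 38cfed83 | Open vSwitch agent | network | :-) |' \
  '| 5fe9033c | L3 agent           | network | :-) |' \
  '| cf34648f | Open vSwitch agent | compute | :-) |' \
  | grep -c ':-)'
# -> 3
```

On the real deployment the count should be 5; any agent showing `xxx` instead of `:-)` is down and worth investigating on its node.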

Verify host list:

[root@controller ~(keystone_admin)]# nova-manage host list
host                     	zone           
controller               	internal       
compute                  	nova        

Verify service list:

[root@controller ~(keystone_admin)]# nova-manage service list
Binary            Host           Zone          Status     State   Updated_At
nova-consoleauth  controller     internal      enabled    ":-)"   2016-03-24 19:49:24
nova-scheduler    controller     internal      enabled    ":-)"   2016-03-24 19:49:24
nova-conductor    controller     internal      enabled    ":-)"   2016-03-24 19:49:24
nova-cert         controller     internal      enabled    ":-)"   2016-03-24 19:49:24
nova-compute      compute        nova          enabled    ":-)"   2016-03-24 19:49:24

…and that’s it, we have successfully installed and configured OpenStack Kilo on three nodes.
We can now create a Project (Tenant) and launch OpenStack instances.