OpenStack Newton VXLAN based installation on 3 CentOS 7 nodes

OpenStack is a free and open source cloud computing platform, originally developed as a joint project of Rackspace Hosting and NASA, built on many well-known technologies such as Linux KVM, LVM, iSCSI, MariaDB (MySQL), RabbitMQ and Python Django.

In our previous articles we presented OpenStack installations based on VLAN internal networking.

In this article we will install the OpenStack Newton release from the RDO repository on three CentOS 7 based nodes (Controller, Network, Compute). This time, unlike in our previous articles, we will use VXLAN based internal networking for communication between Nova instances.


Environment used:
public network (Floating IP network): 192.168.2.0/24
internal network (on each node): no IP space, physical connection only (eth1)
Controller node public IP: 192.168.2.21 (eth0)
Network node public IP: 192.168.2.22 (eth0, later moved to br-ex)
Compute node public IP: 192.168.2.23 (eth0)
OS version (each node): CentOS Linux release 7.2.1511 (Core)

Steps:

1. Prerequisites for Newton OpenStack installation

In order for the OpenStack Newton release installation to run smoothly and complete without errors, some network interface configuration must be done on all OpenStack nodes prior to the installation.

Note: the public network interfaces (eth0) should have a static IP configuration; both the public (eth0) and internal (eth1) interfaces should have corresponding config files (ifcfg-eth0, ifcfg-eth1) and both should be in state UP; the NetworkManager service should be stopped and disabled on all OpenStack nodes.
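A quick way to check these prerequisites on each node (shown here on the Controller; eth0/eth1 are the interface names of our environment):

[root@controller ~]# ip link set eth1 up      # bring the internal interface up if it is still down
[root@controller ~]# ip link show eth1        # should report state UP
[root@controller ~]# systemctl is-active NetworkManager      # expected: inactive
[root@controller ~]# systemctl is-enabled NetworkManager     # expected: disabled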

Controller node interfaces configuration before OpenStack installation:

[root@controller ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bd:87:d1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.21/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:febd:87d1/64 scope global mngtmpaddr dynamic 
       valid_lft 872sec preferred_lft 272sec
    inet6 fe80::5054:ff:febd:87d1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:05:c7:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe05:c7de/64 scope link 
       valid_lft forever preferred_lft forever
[root@controller ~]# ip route show
default via 192.168.2.1 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
169.254.0.0/16 dev eth1  scope link  metric 1003 
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.21 
[root@controller ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eth0
UUID=650ea528-3b38-4973-9648-d577bfb53ecb
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.2.21
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=no
NM_CONTROLLED=no
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
HWADDR=52:54:00:05:c7:de
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
NM_CONTROLLED=no
ONBOOT=yes
[root@controller ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

Network node interfaces configuration before OpenStack installation:

[root@network ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:65:52:e1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.22/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:fe65:52e1/64 scope global mngtmpaddr dynamic 
       valid_lft 887sec preferred_lft 287sec
    inet6 fe80::5054:ff:fe65:52e1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:25:ea:93 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe25:ea93/64 scope link 
       valid_lft forever preferred_lft forever
[root@network ~]# ip route show
default via 192.168.2.1 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
169.254.0.0/16 dev eth1  scope link  metric 1003 
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.22 
[root@network ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@network ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eth0
UUID=650ea528-3b38-4973-9648-d577bfb53ecb
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.2.22
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=no
NM_CONTROLLED=no
[root@network ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
HWADDR=52:54:00:25:ea:93
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
NM_CONTROLLED=no
ONBOOT=yes
[root@network ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

Compute node interfaces configuration before OpenStack installation:

[root@compute ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:da:9b:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.23/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:feda:9bcf/64 scope global mngtmpaddr dynamic 
       valid_lft 893sec preferred_lft 293sec
    inet6 fe80::5054:ff:feda:9bcf/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:97:e7:90 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe97:e790/64 scope link 
       valid_lft forever preferred_lft forever
[root@compute ~]# ip route show
default via 192.168.2.1 dev eth0 
169.254.0.0/16 dev eth0  scope link  metric 1002 
169.254.0.0/16 dev eth1  scope link  metric 1003 
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.23 
[root@compute ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eth0
UUID=650ea528-3b38-4973-9648-d577bfb53ecb
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.2.23
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=no
NM_CONTROLLED=no
[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
HWADDR=52:54:00:97:e7:90
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
NM_CONTROLLED=no
ONBOOT=yes
[root@compute ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

Stop and disable NetworkManager on all OpenStack nodes (if not yet disabled):

[root@controller ~]# systemctl stop NetworkManager
[root@controller ~]# systemctl disable NetworkManager
[root@network ~]# systemctl stop NetworkManager
[root@network ~]# systemctl disable NetworkManager
[root@compute ~]# systemctl stop NetworkManager
[root@compute ~]# systemctl disable NetworkManager
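With NetworkManager disabled, the legacy network service must manage the interfaces, so make sure it is enabled and running on every node (shown here on the Controller, repeat on the other nodes):

[root@controller ~]# systemctl enable network
[root@controller ~]# systemctl restart network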

Update system on all OpenStack nodes (Controller, Network, Compute):

[root@controller ~]# yum update
[root@network ~]# yum update
[root@compute ~]# yum update
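If the update pulled in a new kernel, reboot the node so that it boots into it. You can compare the running kernel with the newest installed one:

[root@controller ~]# uname -r
[root@controller ~]# rpm -q kernel --last | head -1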

2. Install OpenStack Newton RDO repository (Controller node)

Install RDO repository RPM package:

[root@controller ~]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-newton/rdo-release-newton-3.noarch.rpm
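A quick sanity check that the repository got registered:

[root@controller ~]# yum repolist enabled | grep -i openstack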

3. Install packstack automated installer (Controller node)

[root@controller ~]# yum install openstack-packstack

4. Generate answer file for packstack automated installation (Controller node)

[root@controller ~]# packstack --gen-answer-file=/root/answers.txt
Packstack changed given value  to required value /root/.ssh/id_rsa.pub

Backup answer file (/root/answers.txt) file before we start modifying it:

[root@controller ~]# cp /root/answers.txt /root/answers.txt.backup

5. Edit answer file (Controller only)

Edit the answer file (/root/answers.txt) and modify its parameters to look like below:

CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_NAGIOS_INSTALL=n
CONFIG_CONTROLLER_HOST=192.168.2.21
CONFIG_COMPUTE_HOSTS=192.168.2.23
CONFIG_NETWORK_HOSTS=192.168.2.22
CONFIG_USE_EPEL=y
CONFIG_RH_OPTIONAL=n
CONFIG_STORAGE_HOST=192.168.2.21
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-eth1
CONFIG_PROVISION_DEMO=n

Here you can find the complete answers.txt file used during our VXLAN based 3 node OpenStack Newton installation using packstack.

Note: we left the rest of the parameters at their default values, as they are not critical for the installation to succeed. Feel free to modify them according to your needs (if you know what you’re doing).
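If you prefer to script the modifications instead of editing the file by hand, a simple shell loop can set the parameters; this is just a sketch, relying on the fact that packstack writes every parameter into answers.txt with a default value (only a few parameters are shown, extend the list as needed):

[root@controller ~]# for i in \
    CONFIG_NAGIOS_INSTALL=n \
    CONFIG_CONTROLLER_HOST=192.168.2.21 \
    CONFIG_COMPUTE_HOSTS=192.168.2.23 \
    CONFIG_NETWORK_HOSTS=192.168.2.22 \
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan; do
      key=${i%%=*}; value=${i#*=}
      sed -i "s|^${key}=.*|${key}=${value}|" /root/answers.txt
  done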

6. Install OpenStack Newton using packstack (Controller only)

Launch packstack automated installation:

[root@controller ~]# packstack --answer-file=/root/answers.txt --timeout=600

The installation takes about 2 hours (depending on the hardware). Packstack will prompt us for the root password of each node (Controller, Network, Compute) in order to deploy OpenStack services on all nodes using Puppet:
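Tip: since the installation is long, it can be handy to follow the installation log from a second terminal while packstack runs (the directory name under /var/tmp/packstack is generated per run):

[root@controller ~]# tail -f /var/tmp/packstack/*/openstack-setup.log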

Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20161017-234845-DJwu4I/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
root@192.168.2.23's password: 
root@192.168.2.22's password: 
root@192.168.2.21's password: 
Setting up ssh keys                                  [ DONE ]
Preparing servers                                    [ DONE ]
Pre installing Puppet and discovering hosts' details [ DONE ]
Preparing pre-install entries                        [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Setting up CACERT                                    [ DONE ]
Preparing AMQP entries                               [ DONE ]
Preparing MariaDB entries                            [ DONE ]
Fixing Keystone LDAP config parameters to be undef if empty[ DONE ]
Preparing Keystone entries                           [ DONE ]
Preparing Glance entries                             [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Preparing Cinder entries                             [ DONE ]
Preparing Nova API entries                           [ DONE ]
Creating ssh keys for Nova migration                 [ DONE ]
Gathering ssh host keys for Nova migration           [ DONE ]
Preparing Nova Compute entries                       [ DONE ]
Preparing Nova Scheduler entries                     [ DONE ]
Preparing Nova VNC Proxy entries                     [ DONE ]
Preparing OpenStack Network-related Nova entries     [ DONE ]
Preparing Nova Common entries                        [ DONE ]
Preparing Neutron LBaaS Agent entries                [ DONE ]
Preparing Neutron API entries                        [ DONE ]
Preparing Neutron L3 entries                         [ DONE ]
Preparing Neutron L2 Agent entries                   [ DONE ]
Preparing Neutron DHCP Agent entries                 [ DONE ]
Preparing Neutron Metering Agent entries             [ DONE ]
Checking if NetworkManager is enabled and running    [ DONE ]
Preparing OpenStack Client entries                   [ DONE ]
Preparing Horizon entries                            [ DONE ]
Preparing Swift builder entries                      [ DONE ]
Preparing Swift proxy entries                        [ DONE ]
Preparing Swift storage entries                      [ DONE ]
Preparing Gnocchi entries                            [ DONE ]
Preparing MongoDB entries                            [ DONE ]
Preparing Redis entries                              [ DONE ]
Preparing Ceilometer entries                         [ DONE ]
Preparing Aodh entries                               [ DONE ]
Preparing Puppet manifests                           [ DONE ]
Copying Puppet modules and manifests                 [ DONE ]
Applying 192.168.2.21_controller.pp
192.168.2.21_controller.pp:                          [ DONE ]         
Applying 192.168.2.22_network.pp
192.168.2.22_network.pp:                             [ DONE ]      
Applying 192.168.2.23_compute.pp
192.168.2.23_compute.pp:                             [ DONE ]      
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.21. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.2.21/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * The installation log file is available at: /var/tmp/packstack/20161017-234845-DJwu4I/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20161017-234845-DJwu4I/manifests

7. Verify (briefly) OpenStack Newton installation

Let’s quickly verify the OpenStack installation by logging in to Horizon (the OpenStack Dashboard). Type the following URL in your web browser:

http://192.168.2.21/dashboard

You should see the Horizon login screen.

Check that you can log in to Horizon using the admin credentials you provided in the answer file (answers.txt); in our case it’s admin/password.
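You can also do a quick sanity check from the command line; the command below just prints the HTTP status code of the Dashboard page (expect 200, or a redirect code to the login page):

[root@controller ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.2.21/dashboard/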

Also verify the network setup and the services running on all OpenStack nodes right after the packstack installation.

Controller node configuration (right after installation):

[root@controller ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:bd:87:d1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.21/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:febd:87d1/64 scope global mngtmpaddr dynamic 
       valid_lft 893sec preferred_lft 293sec
    inet6 fe80::5054:ff:febd:87d1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:05:c7:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe05:c7de/64 scope link 
       valid_lft forever preferred_lft forever
[root@controller ~]# ovs-vsctl show
-bash: ovs-vsctl: command not found
[root@controller ~]# systemctl list-unit-files | grep openstack
openstack-aodh-api.service                    disabled
openstack-aodh-evaluator.service              enabled 
openstack-aodh-listener.service               enabled 
openstack-aodh-notifier.service               enabled 
openstack-ceilometer-api.service              disabled
openstack-ceilometer-central.service          enabled 
openstack-ceilometer-collector.service        enabled 
openstack-ceilometer-notification.service     enabled 
openstack-ceilometer-polling.service          disabled
openstack-cinder-api.service                  enabled 
openstack-cinder-backup.service               enabled 
openstack-cinder-scheduler.service            enabled 
openstack-cinder-volume.service               enabled 
openstack-glance-api.service                  enabled 
openstack-glance-glare.service                disabled
openstack-glance-registry.service             enabled 
openstack-glance-scrubber.service             disabled
openstack-gnocchi-api.service                 disabled
openstack-gnocchi-metricd.service             enabled 
openstack-gnocchi-statsd.service              enabled 
openstack-losetup.service                     enabled 
openstack-nova-api.service                    enabled 
openstack-nova-cert.service                   enabled 
openstack-nova-conductor.service              enabled 
openstack-nova-console.service                disabled
openstack-nova-consoleauth.service            enabled 
openstack-nova-metadata-api.service           disabled
openstack-nova-novncproxy.service             enabled 
openstack-nova-os-compute-api.service         disabled
openstack-nova-scheduler.service              enabled 
openstack-nova-xvpvncproxy.service            disabled
openstack-swift-account-auditor.service       enabled 
openstack-swift-account-auditor@.service      disabled
openstack-swift-account-reaper.service        enabled 
openstack-swift-account-reaper@.service       disabled
openstack-swift-account-replicator.service    enabled 
openstack-swift-account-replicator@.service   disabled
openstack-swift-account.service               enabled 
openstack-swift-account@.service              disabled
openstack-swift-container-auditor.service     enabled 
openstack-swift-container-auditor@.service    disabled
openstack-swift-container-reconciler.service  disabled
openstack-swift-container-replicator.service  enabled 
openstack-swift-container-replicator@.service disabled
openstack-swift-container-updater.service     enabled 
openstack-swift-container-updater@.service    disabled
openstack-swift-container.service             enabled 
openstack-swift-container@.service            disabled
openstack-swift-object-auditor.service        enabled 
openstack-swift-object-auditor@.service       disabled
openstack-swift-object-expirer.service        enabled 
openstack-swift-object-reconstructor.service  disabled
openstack-swift-object-reconstructor@.service disabled
openstack-swift-object-replicator.service     enabled 
openstack-swift-object-replicator@.service    disabled
openstack-swift-object-updater.service        enabled 
openstack-swift-object-updater@.service       disabled
openstack-swift-object.service                enabled 
openstack-swift-object@.service               disabled
openstack-swift-proxy.service                 enabled 
[root@controller ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service                    disabled
neutron-l3-agent.service                      disabled
neutron-linuxbridge-cleanup.service           disabled
neutron-metadata-agent.service                disabled
neutron-netns-cleanup.service                 disabled
neutron-ovs-cleanup.service                   disabled
neutron-server.service                        enabled 
[root@controller ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service                   disabled

Network node configuration (right after installation):

[root@network ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:65:52:e1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.22/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:fe65:52e1/64 scope global mngtmpaddr dynamic 
       valid_lft 862sec preferred_lft 262sec
    inet6 fe80::5054:ff:fe65:52e1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:25:ea:93 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe25:ea93/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 16:d5:12:cb:99:78 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 62:82:13:1f:f0:4b brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 32:bc:61:9f:9c:45 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 36:3d:7c:3d:83:49 brd ff:ff:ff:ff:ff:ff
8: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 72:0d:31:16:a1:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::700d:31ff:fe16:a140/64 scope link 
       valid_lft forever preferred_lft forever
[root@network ~]# ovs-vsctl show
6ba6fce3-ff81-4d9e-a0b5-8ad6987e2488
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge "br-eth1"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a80217"
            Interface "vxlan-c0a80217"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.22", out_key=flow, remote_ip="192.168.2.23"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.5.0"
[root@network ~]# systemctl list-unit-files | grep openstack
[root@network ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service             enabled 
neutron-l3-agent.service               enabled 
neutron-linuxbridge-cleanup.service    disabled
neutron-metadata-agent.service         enabled 
neutron-metering-agent.service         enabled 
neutron-netns-cleanup.service          disabled
neutron-openvswitch-agent.service      enabled 
neutron-ovs-cleanup.service            enabled 
neutron-server.service                 disabled
[root@network ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service            enabled

Compute node configuration (right after installation):

[root@compute ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:da:9b:cf brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.23/24 brd 192.168.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:5054:ff:feda:9bcf/64 scope global mngtmpaddr dynamic 
       valid_lft 870sec preferred_lft 270sec
    inet6 fe80::5054:ff:feda:9bcf/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:97:e7:90 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe97:e790/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 8e:77:5e:52:c6:82 brd ff:ff:ff:ff:ff:ff
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether d2:68:77:1e:49:42 brd ff:ff:ff:ff:ff:ff
6: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 02:15:8d:73:56:48 brd ff:ff:ff:ff:ff:ff
7: br-eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 9e:b3:d2:94:c1:48 brd ff:ff:ff:ff:ff:ff
[root@compute ~]# ovs-vsctl show
6b8ed57a-4e2a-4ddd-ae5f-cc6872a83067
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a80216"
            Interface "vxlan-c0a80216"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.23", out_key=flow, remote_ip="192.168.2.22"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge "br-eth1"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
    ovs_version: "2.5.0"
[root@compute ~]# systemctl list-unit-files | grep openstack
openstack-ceilometer-compute.service   enabled 
openstack-ceilometer-polling.service   disabled
openstack-nova-compute.service         enabled 
[root@compute ~]# systemctl list-unit-files | grep neutron
neutron-dhcp-agent.service             disabled
neutron-l3-agent.service               disabled
neutron-linuxbridge-cleanup.service    disabled
neutron-metadata-agent.service         disabled
neutron-netns-cleanup.service          disabled
neutron-openvswitch-agent.service      enabled 
neutron-ovs-cleanup.service            enabled 
neutron-server.service                 disabled
[root@compute ~]# systemctl list-unit-files | grep ovs
neutron-ovs-cleanup.service            enabled

8. Configure public network interface and Open vSwitch (OVS) on Network node

Time to create Open vSwitch (OVS) bridges and bind them to the physical network interfaces on the OpenStack nodes.

Note: we will not perform any modifications on the Controller node interfaces, as the Controller does not run any network related OpenStack services. We will also not modify the Compute node interfaces; this was already done by packstack, driven by the appropriate parameters in the answer file (answers.txt).

Back up the eth0 interface file and use it as a template for the new br-ex interface file on the Network node:

[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.backup
[root@network ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex

Modify ifcfg-eth0 file on Network node to look like below:

DEVICE=eth0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex

Modify ifcfg-br-ex file on Network node to look like below:

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=br-ex
DEVICE=br-ex
ONBOOT=yes
IPADDR=192.168.2.22
PREFIX=24
GATEWAY=192.168.2.1
PEERDNS=no
NM_CONTROLLED=no

Verify the ifcfg-eth1 file on the Network node; it should look like below (no modifications needed):

DEVICE=eth1
NAME=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eth1
ONBOOT=yes
BOOTPROTO=none

Connect eth0 interface as a port to br-ex bridge on Network node:

Note: the command below will trigger a network restart, so you will lose the network connection for a while! The connection should come back up, provided you modified the ifcfg-eth0 and ifcfg-br-ex files correctly.

[root@network ~]# ovs-vsctl add-port br-ex eth0; systemctl restart network
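If the connection does not come back (usually because of a typo in one of the ifcfg files), you can revert the change from the local console; a sketch, assuming the backup made earlier:

[root@network ~]# ovs-vsctl del-port br-ex eth0
[root@network ~]# cp /root/ifcfg-eth0.backup /etc/sysconfig/network-scripts/ifcfg-eth0
[root@network ~]# rm /etc/sysconfig/network-scripts/ifcfg-br-ex
[root@network ~]# systemctl restart network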

Network interfaces configuration on Network node after our modifications (public IP is now assigned to br-ex interface):

[root@network ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:65:52:e1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe65:52e1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:25:ea:93 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe25:ea93/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 0a:a0:3b:a2:69:d9 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 62:82:13:1f:f0:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.22/24 brd 192.168.2.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 2a01:112f:5d4:6b00:6082:13ff:fe1f:f04b/64 scope global mngtmpaddr dynamic 
       valid_lft 898sec preferred_lft 298sec
    inet6 fe80::6082:13ff:fe1f:f04b/64 scope link 
       valid_lft forever preferred_lft forever
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 32:bc:61:9f:9c:45 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
    link/ether 36:3d:7c:3d:83:49 brd ff:ff:ff:ff:ff:ff
11: br-eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether de:3f:ed:a1:d0:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::dc3f:edff:fea1:d047/64 scope link 
       valid_lft forever preferred_lft forever
[root@network ~]# ovs-vsctl show
6ba6fce3-ff81-4d9e-a0b5-8ad6987e2488
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-c0a80217"
            Interface "vxlan-c0a80217"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.2.22", out_key=flow, remote_ip="192.168.2.23"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth1"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    ovs_version: "2.5.0"

9. Set up VNC server on Compute node

On the Compute node, set the VNC proxy client IP address in the /etc/nova/nova.conf file:

vncserver_proxyclient_address=192.168.2.23

Note: the above parameter allows us to connect to an instance via the VNC console in the Dashboard.
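Instead of editing the file by hand you can set the parameter with openstack-config; a sketch, assuming the openstack-utils package is installed and that on Newton the option lives in the [vnc] section of nova.conf:

[root@compute ~]# openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.2.23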

Restart openstack-nova-compute service on Compute node:

[root@compute ~]# systemctl restart openstack-nova-compute

10. Verify OpenStack services

After a packstack based OpenStack installation, the file /root/keystonerc_admin is created on the Controller node. This file contains admin credentials and other authentication parameters needed to operate and maintain our cloud:

[root@controller ~]# cat keystonerc_admin 
unset OS_SERVICE_TOKEN
    export OS_USERNAME=admin
    export OS_PASSWORD=password
    export OS_AUTH_URL=http://192.168.2.21:5000/v2.0
    export PS1='[\u@\h \W(keystone_admin)]\$ '
    
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne

Let’s source this file to import the OpenStack admin credentials into the shell environment:

[root@controller ~]# source keystonerc_admin 
[root@controller ~(keystone_admin)]#
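A quick way to confirm that the imported credentials actually work is to request a token (the python-openstackclient package should have been installed by packstack on the client host):

[root@controller ~(keystone_admin)]# openstack token issue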

Now perform a brief verification of the OpenStack components.

List Nova hosts by service:

[root@controller ~(keystone_admin)]# nova host-list
+------------+-------------+----------+
| host_name  | service     | zone     |
+------------+-------------+----------+
| controller | cert        | internal |
| controller | consoleauth | internal |
| controller | scheduler   | internal |
| controller | conductor   | internal |
| compute    | compute     | nova     |
+------------+-------------+----------+

List Nova services by host:

[root@controller ~(keystone_admin)]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller | internal | enabled | up    | 2016-10-22T21:47:29.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2016-10-22T21:47:29.000000 | -               |
| 5  | nova-scheduler   | controller | internal | enabled | up    | 2016-10-22T21:47:29.000000 | -               |
| 6  | nova-conductor   | controller | internal | enabled | up    | 2016-10-22T21:47:29.000000 | -               |
| 7  | nova-compute     | compute    | nova     | enabled | up    | 2016-10-22T21:47:31.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

List Neutron services by host:

[root@controller ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+---------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------------------+-------+----------------+---------------------------+
| 01f6a3e8-8146-436a-85bc-35a0a99fd5ee | Metadata agent     | network |                   | ":-)" | True           | neutron-metadata-agent    |
| 06156c38-a2fe-4d9c-8265-841eeb8fdeea | Open vSwitch agent | compute |                   | ":-)" | True           | neutron-openvswitch-agent |
| 07bc741a-2ab3-4559-a0e8-793af5cdbc98 | DHCP agent         | network | nova              | ":-)" | True           | neutron-dhcp-agent        |
| 1899b923-6e13-41c4-9c8a-c31bb1cb206c | Metering agent     | network |                   | ":-)" | True           | neutron-metering-agent    |
| 23747441-b1cb-4259-abf3-95bc89261795 | L3 agent           | network | nova              | ":-)" | True           | neutron-l3-agent          |
| a884d1f8-c0a5-4f16-b4ae-a7adefbc0a87 | Open vSwitch agent | network |                   | ":-)" | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+---------+-------------------+-------+----------------+---------------------------+

We have just finished an OpenStack Newton VXLAN based installation on three CentOS 7 nodes: Controller, Network and Compute. You can now create project tenants and launch instances in your new OpenStack cloud.
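As a first step, you might map a floating IP (external) network onto br-ex; the sketch below uses example allocation pool boundaries from our 192.168.2.0/24 public network, adjust them to your environment:

[root@controller ~(keystone_admin)]# neutron net-create ext_net --router:external
[root@controller ~(keystone_admin)]# neutron subnet-create ext_net 192.168.2.0/24 --name ext_subnet \
    --disable-dhcp --gateway 192.168.2.1 \
    --allocation-pool start=192.168.2.100,end=192.168.2.150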



12 thoughts on “OpenStack Newton VXLAN based installation on 3 CentOS 7 nodes”

  1. Robetus December 1, 2016 at 02:18

    Thank you for this wonderful writeup. Can I use http://www.tuxfixer.com/add-new-compute-node-to-existing-openstack-using-packstack/ to add a compute node to this installation?

    • Grzegorz Juszczak December 1, 2016 at 10:08

      Hi Robetus

Yes, you can use the “Add new compute node…” tutorial. Your installation is VXLAN based and that tutorial is about adding a node to a VLAN based installation, but the whole procedure should be analogous for a VXLAN setup.

  2. Robetus December 1, 2016 at 14:27

    Thank you. Would you consider creating a tutorial for adding a node with a VXLAN setup?

    • Grzegorz Juszczak December 2, 2016 at 12:01

      Hi Robetus
      I will consider making such a tutorial.
      Thank you for remark.

  3. Ronaldo December 29, 2016 at 18:43

    Hello
On the eth1 interface of the Network and Compute hosts, is it necessary to assign an IP or not?
    Thank you

    • Grzegorz Juszczak December 30, 2016 at 20:19

      Hello
      It’s not mandatory.
I used to assign an IP to eth1 from time to time just for testing purposes, but OpenStack only needs the physical connection on the eth1 interface, since all the layer 3 stuff (like IP addressing) is handled by Open vSwitch (OVS).

  4. Amjad April 17, 2017 at 09:00

    Hi

How do you configure an alias NIC? Let’s assume you have eth1 and create an alias NIC eth1:0; how do you configure this in the packstack answer file?

    • Grzegorz Juszczak May 15, 2017 at 22:32

      Hi Amjad

      Never tried packstack installation including aliases.

      regards
      GJ

  5. Amjad September 10, 2017 at 22:14

    Hi
I tried to follow the same steps for OpenStack Ocata; all networks and bridges were created, but the instance cannot get an IP.

Could you please do the same for Ocata.

    Thanks

    • Grzegorz Juszczak September 13, 2017 at 00:28

Thanks Amjad, I will try to write a tutorial regarding Ocata within the upcoming few weeks.

      • Amjad September 13, 2017 at 21:06

        Hi

Thanks for the reply. I can now get an IP for the instance, but it cannot reach the external network.

  6. Amjad September 10, 2017 at 22:26

To add more details: I used untagged VLANs on all nodes; how do I configure everything to use VLANs, e.g.:
    eth0
    eth0.100

    Thanks
