Install OpenStack Juno on CentOS 7 / RHEL 7

OpenStack is a free and open source cloud computing platform originally developed as a joint project of Rackspace Hosting and NASA. Users primarily deploy it as an Infrastructure as a Service (IaaS) solution. An OpenStack cloud builds on many well-known technologies, such as Linux KVM, LVM, iSCSI, MariaDB (MySQL), RabbitMQ and Python Django.

OpenStack architecture overview:

  1. Horizon: web browser user interface (dashboard) based on Python Django for creating and managing instances (virtual machines)
  2. Keystone: authentication and authorization framework
  3. Neutron: network connectivity as a service
  4. Cinder: persistent block storage for instances based on LVM
  5. Nova: instances management system based on Linux KVM
  6. Glance: registry for instance images
  7. Swift: file storage for cloud
  8. Ceilometer: metering engine that collects data for billing and analysis
  9. Heat: orchestration service for template-based instance deployment

In this tutorial we will install the OpenStack Juno release from the RDO repository on two nodes (a controller node and a compute node) running CentOS 7 / RHEL 7.

Environment used:
public network (Floating IP network): 192.168.2.0/24
internal network: no IP space, physical connection only (eth1)
public controller IP: 192.168.2.4 (eth0)
public compute IP: 192.168.2.5 (eth0)
[Diagram: two-node OpenStack topology – public network 192.168.2.0/24 on eth0, internal network on eth1]

Controller node network interfaces configuration before OpenStack installation:

[root@controller ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cf:f6:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.4/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 42065sec preferred_lft 42065sec
    inet6 fe80::5054:ff:fecf:f6ef/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:c6:92:ee brd ff:ff:ff:ff:ff:ff

Compute node network interfaces configuration before OpenStack installation:

[root@compute ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:4d:fa:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.5/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 42039sec preferred_lft 42039sec
    inet6 fe80::5054:ff:fe4d:fa06/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:fe:a6:c8 brd ff:ff:ff:ff:ff:ff

First of all, stop and disable NetworkManager and enable the legacy network service on both nodes (controller and compute):

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
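
To confirm the change took effect, you can quickly check both units (an optional sanity check: NetworkManager should report disabled and inactive, network should report enabled):

systemctl is-enabled NetworkManager network
systemctl is-active NetworkManager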

Update your system on both nodes (controller and compute):

yum update

Install RDO repository (controller node):

yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-juno/rdo-release-juno-1.noarch.rpm
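
You can verify that the repository was registered before continuing (the repository id may differ slightly between RDO release packages, so the grep below simply looks for "juno"):

yum repolist enabled | grep -i juno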

Install packstack automated installer (controller node):

yum install openstack-packstack

Generate answer file for packstack automated installation (controller node):

packstack --gen-answer-file=/root/answers.txt
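
The generated file contains a large number of CONFIG_* parameters. To quickly locate the ones we are going to change in the next step, you can grep for them, for example:

grep -E 'CONFIG_NTP_SERVERS|CONFIG_COMPUTE_HOSTS|CONFIG_KEYSTONE_ADMIN_PW|CONFIG_NEUTRON_ML2' /root/answers.txt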

Edit the answer file (/root/answers.txt) and modify the parameters below (controller node):

CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_COMPUTE_HOSTS=192.168.2.5
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1

For reference, the answers.txt file used during this installation is attached.
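
If you prefer to script these changes instead of editing the file by hand, a small sed loop like the one below should work (a minimal sketch; adjust the values to your own environment before running it):

while IFS='=' read -r key value; do
    sed -i "s|^${key}=.*|${key}=${value}|" /root/answers.txt
done <<'EOF'
CONFIG_NTP_SERVERS=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org
CONFIG_COMPUTE_HOSTS=192.168.2.5
CONFIG_KEYSTONE_ADMIN_PW=password
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
EOF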

Launch packstack automated installation (controller node):

packstack --answer-file=/root/answers.txt
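
Optionally, you may want to launch packstack inside a screen session, so that a dropped SSH connection does not interrupt the long-running installer (install screen first if it is not present):

yum install screen
screen -S packstack

Then run the packstack command above inside that session.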

The installation will take about 1-1.5 hours; we will be prompted for the root password of all nodes (in our case: controller and compute):

Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20150412-171545-6LJ0WP/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
root@192.168.2.4's password: 
root@192.168.2.5's password: 
Setting up ssh keys                                  [ DONE ]
Discovering hosts' details                           [ DONE ]
Adding pre install manifest entries                  [ DONE ]
Installing time synchronization via NTP              [ DONE ]
Preparing servers                                    [ DONE ]
...

After a successful installation we should get output similar to the following:

...
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.4. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://192.168.2.4/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://192.168.2.4/nagios username: nagiosadmin, password: 72659a0e75ee4f48
 * The installation log file is available at: /var/tmp/packstack/20150412-171545-6LJ0WP/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20150412-171545-6LJ0WP/manifests

Test your installation – log in to Horizon (the OpenStack Dashboard) by typing the following address in your web browser:

http://192.168.2.4/dashboard

You should see the dashboard login screen; enter the username and password (in our case: admin/password):
[Screenshot: OpenStack dashboard login screen]
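
If you only have console access, you can also confirm from the command line that the dashboard answers HTTP requests (expect a 200 response or a redirect to the login page):

curl -I http://192.168.2.4/dashboard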

Let's go back to the console and connect the OVS (Open vSwitch) bridges to the physical network interfaces on both nodes.

After the OpenStack installation we have the following network interfaces on the controller node:

[root@controller ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:cf:f6:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.4/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 41134sec preferred_lft 41134sec
    inet6 fe80::5054:ff:fecf:f6ef/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:c6:92:ee brd ff:ff:ff:ff:ff:ff
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether 72:b8:b8:de:3a:f7 brd ff:ff:ff:ff:ff:ff
5: br-int:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 0e:f7:ad:b9:21:48 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::cf7:adff:feb9:2148/64 scope link 
       valid_lft forever preferred_lft forever
6: br-eth1:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether f2:d0:68:22:b2:46 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f0d0:68ff:fe22:b246/64 scope link 
       valid_lft forever preferred_lft forever
7: br-ex:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 76:7a:de:52:ec:42 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::747a:deff:fe52:ec42/64 scope link 
       valid_lft forever preferred_lft forever

…and on the compute node:

[root@compute ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:4d:fa:06 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.5/24 brd 192.168.2.255 scope global dynamic eth0
       valid_lft 40548sec preferred_lft 40548sec
    inet6 fe80::5054:ff:fe4d:fa06/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:fe:a6:c8 brd ff:ff:ff:ff:ff:ff
6: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether 2e:11:a9:be:7b:cc brd ff:ff:ff:ff:ff:ff
7: br-int:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether a2:b9:7e:04:cd:48 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a0b9:7eff:fe04:cd48/64 scope link 
       valid_lft forever preferred_lft forever
8: br-eth1:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 36:8c:69:06:42:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::348c:69ff:fe06:424b/64 scope link 
       valid_lft forever preferred_lft forever

Run the following commands on the controller node:

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.backup
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-br-ex
cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup

Modify the ifcfg-eth0 file on the controller node to look like this:

DEVICE=eth0
HWADDR=52:54:00:CF:F6:EF
ONBOOT=yes

Modify the ifcfg-br-ex file on the controller node to look like this:

DEVICE=br-ex
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.2.4
PREFIX=24

Modify the ifcfg-eth1 file on the controller node to look like this:

DEVICE=eth1
HWADDR=52:54:00:C6:92:EE
TYPE=Ethernet
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes

Connect the eth0 interface to the br-ex bridge on the controller node.
The command below triggers a network restart, so you will lose the network connection for a while! The connection should come back up as long as you modified the ifcfg-eth0 and ifcfg-br-ex files correctly.

ovs-vsctl add-port br-ex eth0; systemctl restart network
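
If you are connected to the controller over SSH through eth0, keep the add-port and the network restart in one compound command, as above; an alternative is to wrap them in nohup so a dropped session cannot interrupt them halfway (the log file name below is just an example):

nohup sh -c 'ovs-vsctl add-port br-ex eth0; systemctl restart network' > /root/br-ex-switch.log 2>&1 &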

Now let's connect the eth1 interface to the br-eth1 bridge (this will restart the network too):

ovs-vsctl add-port br-eth1 eth1; systemctl restart network

Your network interface configuration on the controller node should now look like the output below (the public IP is now assigned to the br-ex interface):

[root@controller ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:cf:f6:ef brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fecf:f6ef/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1:  mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:c6:92:ee brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fec6:92ee/64 scope link 
       valid_lft forever preferred_lft forever
4: ovs-system:  mtu 1500 qdisc noop state DOWN 
    link/ether ea:c6:b3:ff:17:ba brd ff:ff:ff:ff:ff:ff
5: br-eth1:  mtu 1500 qdisc noop state DOWN 
    link/ether f2:d0:68:22:b2:46 brd ff:ff:ff:ff:ff:ff
6: br-ex:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 76:7a:de:52:ec:42 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.4/24 brd 192.168.2.255 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 fe80::747a:deff:fe52:ec42/64 scope link 
       valid_lft forever preferred_lft forever
7: br-int:  mtu 1500 qdisc noop state DOWN 
    link/ether 0e:f7:ad:b9:21:48 brd ff:ff:ff:ff:ff:ff

Check the OVS configuration on the controller node. Port eth0 should now be attached to br-ex and port eth1 to br-eth1:

[root@controller ~]# ovs-vsctl show
0dcba8a0-bebe-4785-82d6-7c67619874cd
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
    ovs_version: "2.1.3"
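
Before editing the eth1 configuration on the compute node, you may want to back up the original file first, mirroring what we did on the controller:

cp /etc/sysconfig/network-scripts/ifcfg-eth1 /root/ifcfg-eth1.backup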

Modify the ifcfg-eth1 file on the compute node to look like this:

DEVICE=eth1
HWADDR=52:54:00:FE:A6:C8
TYPE=Ethernet
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes

Now, on the compute node, connect the eth1 interface to the br-eth1 bridge (this will restart the network):

ovs-vsctl add-port br-eth1 eth1; systemctl restart network

Check the OVS configuration on the compute node. Port eth1 should now be attached to br-eth1:

[root@compute ~]# ovs-vsctl show
cc9e8eff-ea10-40dc-adeb-2d6ee6fc9ed9
    Bridge br-int
        fail_mode: secure
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    ovs_version: "2.1.3"

Time to check the status and basic functionality of our new OpenStack cloud.
After every OpenStack installation, a /root/keystonerc_admin file is created on the controller node. It contains the admin credentials and other authentication parameters needed to operate and maintain the cloud, and looks like this:

[root@controller ~]# cat /root/keystonerc_admin 
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.168.2.4:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '

Let's source this file to export the OpenStack admin credentials as environment variables, so we are not prompted for credentials each time we invoke an OpenStack command:

[root@controller ~]# source /root/keystonerc_admin
[root@controller ~(keystone_admin)]# 
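
With the credentials loaded, OpenStack client commands authenticate without prompting. A quick check using the clients shipped with Juno (both should return without an authentication error; the instance list will simply be empty at this point):

[root@controller ~(keystone_admin)]# keystone tenant-list
[root@controller ~(keystone_admin)]# nova list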

Check the OpenStack status on the controller node to ensure that the mandatory services are running:

[root@controller ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
...

Verify the host list on the controller node:

[root@controller ~(keystone_admin)]# nova-manage host list
host                     	zone           
controller               	internal       
compute                  	nova           

Verify the services on the cloud hosts (execute on the controller node):

[root@controller ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth controller                           internal         enabled    :-)   2015-04-12 22:27:24
nova-scheduler   controller                           internal         enabled    :-)   2015-04-12 22:27:25
nova-conductor   controller                           internal         enabled    :-)   2015-04-12 22:27:24
nova-cert        controller                           internal         enabled    :-)   2015-04-12 22:27:21
nova-compute     compute                              nova             enabled    :-)   2015-04-12 22:27:24
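
Since networking in this setup is handled by Neutron with Open vSwitch, it is also worth confirming that the Neutron agents are alive (the neutron client is installed on the controller by packstack; the Open vSwitch agent should be listed for both hosts and show :-) in the alive column):

[root@controller ~(keystone_admin)]# neutron agent-list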

Our OpenStack cloud is now installed and ready to use 🙂
Next, find out how to create a project (tenant) in OpenStack and launch instances.