Deploy project tenant in OpenStack using Heat orchestration stack

Heat is the OpenStack Orchestration service. It implements an orchestration engine that launches multiple composite cloud applications based on templates in the form of text files that can be treated like code. Heat reads YAML (.yaml, .yml) files and performs the tasks described in them inside the OpenStack environment. Using Heat Orchestration we can create instances, networks, or even whole tenants with a single mouse click in the OpenStack dashboard (Horizon), provided we have previously prepared a YAML file with the Heat instructions to be performed in the OpenStack cloud.

In this tutorial we will create an example .yaml file for Heat orchestration containing the instructions and components needed to deploy a project tenant in OpenStack and launch instances inside that tenant. Next, we will create our stack on a single OpenStack all-in-one node based on the CentOS 7.3 operating system.
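To give a taste of what such a file looks like, here is a minimal sketch of a Heat Orchestration Template (HOT) that creates a network, a subnet, and a single instance; the resource names, CIDR, image, and flavor are assumptions for illustration, not the template from the full tutorial:

```yaml
heat_template_version: 2016-10-14

description: Minimal example - one network, one subnet, one instance

resources:
  tenant_net:
    type: OS::Neutron::Net
    properties:
      name: demo-net            # assumed name

  tenant_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: tenant_net }
      cidr: 192.168.10.0/24     # assumed subnet

  demo_instance:
    type: OS::Nova::Server
    properties:
      name: demo-vm
      image: cirros             # image must already exist in Glance
      flavor: m1.tiny
      networks:
        - network: { get_resource: tenant_net }
```

A stack based on such a template can then be launched from Horizon or with `openstack stack create -t template.yaml mystack`.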
Continue reading “Deploy project tenant in OpenStack using Heat orchestration stack”

Create tenant in OpenStack Newton using command line interface

OpenStack comes out of the box with its own dashboard called Horizon. Horizon provides a GUI which lets us manage our OpenStack environment in a fairly easy and intuitive way. However, basic tasks like tenant creation or instance commissioning can be time consuming when performed in Horizon. Using the command line interface with previously prepared command templates can be more efficient and faster.

In this tutorial we present how to create a project tenant in OpenStack Newton using the command line interface and launch CirrOS based instances inside the tenant.

Some time ago the OpenStack community introduced a new tool called OpenStackClient (OSC) with its openstack command utility to unify OpenStack management; it encompasses the Compute, Identity, Image, Object Storage, and Block Storage APIs. The keystone command utility has since been withdrawn from OpenStack as deprecated and replaced by the aforementioned openstack command utility. In this tutorial for the Newton release we are going to use openstack commands where possible to become familiar with the OpenStackClient CLI.
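As a preview of the OSC style used throughout the tutorial, the commands below sketch creating a project and a user with it; the credentials file path, project name, user name, and password are assumptions for illustration:

```shell
# Source admin credentials first (path is an assumption):
source ~/keystonerc_admin

# Create a project (tenant) and a user assigned to it:
openstack project create --description "Demo tenant" demo
openstack user create --project demo --password secret demouser
openstack role add --project demo --user demouser _member_

# Verify the result:
openstack project list
openstack user list
```

Note how the same `openstack` entry point replaces the old per-service clients (`keystone`, `nova`, `glance`) for these tasks.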
Continue reading “Create tenant in OpenStack Newton using command line interface”

Install OpenStack Newton All In One with Heat Service on CentOS 7

In an OpenStack all-in-one configuration all OpenStack node roles (controller node, compute node, network node) are installed on a single machine. The all-in-one configuration can be deployed quickly for testing purposes and is often recommended for developers to test their applications on top of an OpenStack environment.

In this tutorial we install the OpenStack Newton release from the RDO repository, including the Heat Orchestration service, on a single node (all-in-one installation) based on CentOS 7 / RHEL 7 using the packstack installer script.
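The core of such an installation boils down to a few commands; the sketch below assumes a freshly installed, updated CentOS 7 node with working network access:

```shell
# Enable the RDO Newton repository and install the installer:
yum install -y centos-release-openstack-newton
yum install -y openstack-packstack

# All-in-one install with the Heat service enabled:
packstack --allinone --os-heat-install=y
```

The full tutorial covers the preparation steps (disabling NetworkManager, firewall considerations) that should precede running packstack.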
Continue reading “Install OpenStack Newton All In One with Heat Service on CentOS 7”

OpenStack Newton VXLAN based installation on 3 CentOS 7 nodes

OpenStack is a free and open source cloud computing platform, originally developed as a joint project of Rackspace Hosting and NASA, built on many well-known technologies like Linux KVM, LVM, iSCSI, MariaDB (MySQL), RabbitMQ, and Python Django.

In our previous articles we presented OpenStack installations based on VLAN internal networking.

In this article we will install the OpenStack Newton release from the RDO repository on three CentOS 7 based nodes (Controller, Network, Compute), but this time, unlike in our previous articles, we will use VXLAN based internal networking for communication between Nova instances.
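The switch from VLAN to VXLAN tenant networking is controlled by a handful of packstack answer-file options; the fragment below is a sketch of the relevant keys, where the multicast group, VNI range, and tunnel interface name are assumptions that depend on your environment:

```ini
# Excerpt from a packstack answer file (values are assumptions):
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
```

With these settings, traffic between instances on different nodes is carried over VXLAN tunnels established on the given interface instead of tagged VLANs.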

Continue reading “OpenStack Newton VXLAN based installation on 3 CentOS 7 nodes”

Download Scientific Linux OpenStack KVM qcow2 image by tuxfixer.com

Scientific Linux is a Fermilab-sponsored, stable, scalable, and extensible operating system for scientific computing, based on Red Hat Enterprise Linux. It has even been loaded onto systems at the International Space Station. The two most famous experiments that depend on Scientific Linux are the Collider Detector and DZero experiments at Fermilab and the Large Hadron Collider experiments at CERN.

Below you can find a Scientific Linux OpenStack/KVM 64-bit qcow2 image based on the SL 7.2 (Nitrogen) x86_64 release.
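Once downloaded, the image can be uploaded to Glance so it is available for launching instances; the image name and local filename below are assumptions, adjust them to the actual file you downloaded:

```shell
# Upload the qcow2 image to Glance (filename is an assumption):
openstack image create "Scientific-Linux-7.2" \
  --disk-format qcow2 --container-format bare \
  --file sl-7.2-x86_64.qcow2 --public
```

After the upload, `openstack image list` should show the image in an active state.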
Continue reading “Download Scientific Linux OpenStack KVM qcow2 image by tuxfixer.com”

Download Trisquel Linux OpenStack KVM qcow2 image by tuxfixer.com

Trisquel GNU/Linux is an elegant and lightweight Linux distribution derived from Ubuntu, which provides a fully free software system without proprietary software or firmware. Trisquel uses a modified kernel from the Ubuntu distribution with the non-free code removed, and is listed by the Free Software Foundation as a distribution that contains only free software.

Below you can find a Trisquel GNU/Linux OpenStack/KVM 64-bit qcow2 image based on the Trisquel Mini 7.0 x86_64 release.
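Besides uploading it to OpenStack, the image can be booted directly on a plain KVM host; the sketch below assumes libvirt with virt-install is available and uses hypothetical paths and sizing (Trisquel 7 is based on Ubuntu 14.04, hence the os-variant):

```shell
# Import the qcow2 image as a KVM guest (paths/sizes are assumptions):
virt-install --name trisquel-mini \
  --memory 1024 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/trisquel-mini-7.0.qcow2,format=qcow2 \
  --import --os-variant ubuntu14.04 \
  --network network=default --graphics vnc
```

The `--import` flag skips OS installation and boots straight from the existing disk image.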
Continue reading “Download Trisquel Linux OpenStack KVM qcow2 image by tuxfixer.com”

OpenStack Horizon Error: Unable to get network agents info

The OpenStack Dashboard error Unable to get network agents info, often seen in Horizon, is a result of Neutron related problems.

Usually the problem is caused by a Neutron service failure due to service operation timeouts.

The screenshot below presents the OpenStack Dashboard error Unable to get network agents info.
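A quick way to confirm the Neutron side of the problem is to inspect the service and agent state on the Controller; this is a generic diagnostic sketch, not the exact procedure from the full post:

```shell
# Check whether the Neutron server is up on the Controller:
systemctl status neutron-server

# Restart it and verify that the agents report as alive:
systemctl restart neutron-server
openstack network agent list
```

If agents show as dead after a restart, the full article walks through further troubleshooting.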
Continue reading “OpenStack Horizon Error: Unable to get network agents info”

Install and Configure OpenStack Mitaka with GlusterFS on CentOS 7

OpenStack can use different backend technologies for the Cinder Volume service to create volumes for instances running in the cloud. The default and most common backend used for the Cinder service is LVM (Logical Volume Manager), but it has one basic disadvantage: it is slow and overloads the server that serves LVM (usually the Controller), especially during volume operations like volume deletion. OpenStack supports other Cinder backend technologies, like GlusterFS, which is a more sophisticated and reliable solution: it provides redundancy and does not occupy the Controller's resources, because it usually runs on separate dedicated servers.

In this tutorial we are going to deploy a VLAN based OpenStack Mitaka on three CentOS 7 nodes (Controller, Network, Compute) using the Packstack installer script and integrate it with an already existing redundant GlusterFS storage based on two Gluster peers.

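The integration itself centers on pointing Cinder at the Gluster volume; the fragment below is a sketch of the relevant cinder.conf section for Mitaka (where the GlusterFS driver is still shipped), with assumed backend and share names:

```ini
# /etc/cinder/cinder.conf excerpt (backend/share names are assumptions)
[DEFAULT]
enabled_backends = glusterfs

[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
volume_backend_name = glusterfs
```

The shares file listed above then contains one Gluster volume per line, e.g. `gluster1:/cinder-volumes`, and the Cinder services are restarted afterwards.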
Continue reading “Install and Configure OpenStack Mitaka with GlusterFS on CentOS 7”

OpenStack: Unable to Connect to Horizon Dashboard

Pretty often, after rebooting the Controller node or powering it on after a shutdown, Horizon (the OpenStack Dashboard) stops responding, even though we know it worked before the reboot. This issue may be caused by the httpd service (Apache) entering a failed state right after the Controller node is powered on.

This results in the browser failing to connect to Horizon (an "Unable to connect" error page in Chrome).
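The first things to check in this situation are the Apache service state and its boot-time behavior; this is a generic sketch of the usual recovery steps:

```shell
# Check whether Apache failed to start on the Controller:
systemctl status httpd

# Restart it and make sure it starts automatically on boot:
systemctl restart httpd
systemctl enable httpd
```

If httpd keeps failing, its journal (`journalctl -u httpd`) usually shows the underlying cause, which the full article examines.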
Continue reading “OpenStack: Unable to Connect to Horizon Dashboard”

KVM OpenStack Error: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]

A few times during KVM based OpenStack (Mitaka, Newton) automated installations using packstack we encountered DB synchronization errors.

It turned out that these installation errors appeared due to a slow network and/or poor performance of the KVM virtualized hardware used to build the OpenStack virtual nodes (Controller, Network, Compute).
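One possible recovery path, sketched below, is to run the failed DB sync by hand and then resume the installation; the answer file path is an assumption, use the one packstack generated for your run:

```shell
# Re-run the Nova DB sync manually on the Controller:
su -s /bin/sh -c "nova-manage db sync" nova

# Then re-run packstack with the same answer file to resume:
packstack --answer-file=/root/packstack-answers.txt
```

The full article discusses why slow virtualized storage makes these syncs exceed packstack's command timeout in the first place.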
Continue reading “KVM OpenStack Error: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]”