The OpenStack Dashboard error "Unable to get network agents info", often seen in Horizon, is a result of Neutron-related problems.
Usually the problem is caused by a Neutron service failure due to service operation timeouts.
The screenshot below presents the OpenStack Dashboard error "Unable to get network agents info":
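A quick way to confirm this (a sketch, assuming a Packstack-style deployment where the admin credentials file is /root/keystonerc_admin) is to list the Neutron agents and check the Neutron server on the Controller:

```shell
# Load admin credentials (the path is deployment-specific)
source /root/keystonerc_admin

# List Neutron agents; dead agents show "xxx" in the alive column
neutron agent-list

# Check and, if needed, restart the Neutron server on the Controller
systemctl status neutron-server -l
systemctl restart neutron-server
```

If the agents come back alive after the restart, Horizon should display the Network Agents tab again.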
Continue reading “OpenStack Horizon Error: Unable to get network agents info”
OpenStack can use different backend technologies for the Cinder Volume Service to create volumes for Instances running in the cloud. The default and most common backend used for the Cinder Service is LVM (Logical Volume Manager), but it has one basic disadvantage: it is slow and overloads the server which serves LVM (usually the Controller), especially during volume operations like volume deletion. OpenStack supports other Cinder backend technologies, like GlusterFS, which is a more sophisticated and reliable solution: it provides redundancy and does not occupy the Controller's resources, because it usually runs on separate dedicated servers.
In this tutorial we are going to deploy a VLAN-based OpenStack Mitaka on three CentOS 7 nodes (Controller, Network, Compute) using the Packstack installer script and integrate it with an already existing redundant GlusterFS storage based on two Gluster peers.
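A minimal sketch of what such an integration looks like in /etc/cinder/cinder.conf on the Controller, assuming the Gluster volume is named cinder-volumes and gluster1 is one of the peers (both names are illustrative):

```ini
[DEFAULT]
enabled_backends = glusterfs

[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = /var/lib/cinder/glusterfs
```

The file /etc/cinder/glusterfs_shares then lists the shares, one per line, e.g. gluster1:/cinder-volumes.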
Continue reading “Install and Configure OpenStack Mitaka with GlusterFS on CentOS 7”
Pretty often after rebooting the Controller node or powering it on after a shutdown, Horizon (the OpenStack Dashboard) is not responding, even though we know it worked before the reboot. This issue may be caused by the httpd service (Apache) entering a failed state right after the Controller node is powered on.
This results in the browser being unable to connect to Horizon:
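A quick check on the Controller (a sketch; assumes a systemd-based CentOS 7 node) is:

```shell
# Check whether Apache entered a failed state after boot
systemctl status httpd -l

# Restart Apache and verify it stays active
systemctl restart httpd
systemctl is-active httpd
```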
Continue reading “OpenStack: Unable to Connect to Horizon Dashboard”
A few times during KVM-based OpenStack (Mitaka, Newton) automated installations using Packstack we encountered DB synchronization errors.
It turned out that these installation errors appeared due to a slow network and/or poor performance of the KVM virtualized hardware used to build the OpenStack virtual nodes (Controller, Network, Compute).
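One way to recover (a sketch; the answer file path is an example, and this assumes the failure was a transient timeout rather than a broken database) is to re-run the sync by hand and then resume the installation:

```shell
# Inspect the Nova logs for the failed sync on the Controller
grep -i error /var/log/nova/nova-manage.log

# Re-run the DB sync manually as the nova user
su -s /bin/sh -c "nova-manage db sync" nova

# Re-run Packstack with the same answer file to resume the install
packstack --answer-file=/root/packstack-answers.txt
```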
Continue reading “KVM OpenStack Error: /Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]”
If you transfer qcow2 images frequently across OpenStack clouds or between KVM and OpenStack environments, you will notice that they can quickly grow large. Luckily, qcow2 image size can be decreased to reasonable values using the qemu-img tool. Below we present how to shrink an OpenStack/KVM qcow2 image.
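The basic technique (a sketch; the file names are illustrative) is to re-pack the image with qemu-img convert, which drops unused clusters, and optionally compress the data with -c:

```shell
# Re-pack the image: unused clusters are dropped, -c compresses data
qemu-img convert -O qcow2 -c original.qcow2 shrunk.qcow2

# Compare sizes and verify the new image
ls -lh original.qcow2 shrunk.qcow2
qemu-img info shrunk.qcow2
```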
Continue reading “How to shrink OpenStack qcow2 image using qemu-img”
Kali Linux is a Debian-derived Linux distribution designed for digital forensics and penetration testing. Kali Linux can run natively when installed on a computer's hard disk, can be booted from a live CD or live USB, or can run as a KVM virtual machine or an OpenStack instance using a qcow2 image.
Below you can purchase the Kali Linux OpenStack / KVM 64-bit qcow2 Image Bundle of ready-to-use images based on the Kali Linux ISO with preinstalled Metapackages (more info).
The bundle includes two groups of images: cloud-init based images (with a -ci suffix) offering passwordless SSH access using a key pair and a dedicated user, ideal for the OpenStack platform, and non-cloud-init images with standard SSH root access using a password, dedicated for OpenStack and KVM.
Continue reading “Download Kali Linux OpenStack KVM qcow2 image by tuxfixer.com”
Administration of large-scale production cloud environments requires managing dozens of customers' virtual servers (OpenStack Instances) in the cloud on a daily basis. Manually configuring multiple newly created Instances in the OpenStack cloud at a time would be problematic for cloud administrators. Luckily, OpenStack is equipped with a metadata service cooperating with the so-called cloud-init script, which together do the magic of automated mass configuration of Instances.
The metadata service usually runs on the Controller node in a multi-node environment and is accessible to the Instances running on Compute nodes, letting them retrieve instance-specific data, like IP address or hostname. Instances access the metadata service at http://169.254.169.254. The HTTP request from an Instance hits either the router or the DHCP namespace, depending on the route in the Instance. The metadata proxy service adds the Instance IP address and Router ID to the request and sends it to neutron-metadata-agent. The neutron-metadata-agent service forwards the request to the nova-api-metadata server, adding some new headers (e.g. the Instance ID) to the request. The metadata service supports two sets of APIs: an OpenStack metadata API and an EC2-compatible API.
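Both APIs can be queried directly from inside a running Instance (these URLs only resolve inside the cloud; they will not work from your workstation):

```shell
# OpenStack metadata API: all metadata as a single JSON document
curl http://169.254.169.254/openstack/latest/meta_data.json

# EC2-compatible API: individual keys
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/local-ipv4
```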
Continue reading “Configure OpenStack Instance at boot using cloud-init and user data”
OpenStack Snapshots can be used to back up an Instance before some critical changes are made to the Instance OS, or to migrate an Instance to a new OpenStack cloud.
In this tutorial we will create a snapshot of an existing Instance to launch it in a different cloud, but you can also create a snapshot just to back up the Instance and restore its state later in the same cloud, if needed.
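The command-line flow (a sketch; the instance and image names are illustrative and the credentials file path is deployment-specific) looks like this:

```shell
source /root/keystonerc_admin

# Snapshot a running Instance; the snapshot is stored as a Glance image
nova image-create --poll my-instance my-instance-snap

# Download the snapshot to a file for transfer to another cloud
glance image-download --file my-instance-snap.qcow2 my-instance-snap

# On the target cloud: upload the image, then launch an Instance from it
glance image-create --name my-instance-snap --disk-format qcow2 \
  --container-format bare --file my-instance-snap.qcow2
```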
Continue reading “OpenStack: Create Instance Snapshot to backup or migrate Instance”
After an OpenStack installation it can turn out that the IP allocation pool of the subnet we have just created is too small. If the allocation pool belongs to a public / provider network, we will quickly run out of free Floating IPs. Moreover, the OpenStack Dashboard (Horizon) does not provide the ability to extend or modify the IP allocation pool of an already created subnet with already allocated IPs. But we can use a dirty workaround and manually edit the MariaDB database which stores the OpenStack configuration data.
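The workaround boils down to editing the ipallocationpools table in the neutron database (a sketch; the new last_ip value and the subnet ID placeholder are illustrative, and you should back up the database before touching it):

```sql
-- connect first with: mysql -u root -p neutron
-- Locate the pool row belonging to our subnet
SELECT id, subnet_id, first_ip, last_ip FROM ipallocationpools;

-- Extend the upper bound of the pool
UPDATE ipallocationpools
   SET last_ip = '192.168.2.200'
 WHERE subnet_id = '<our-subnet-id>';
```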
Continue reading “How to Extend Subnet Allocation Pool in OpenStack”
In this tutorial we will install the OpenStack Kilo release from the RDO repository on three nodes (Controller, Network, Compute) based on the CentOS 7 operating system, using the Packstack automated script. The installation utilizes a VLAN-based internal software network infrastructure for communication between instances.
public network (Floating IP network): 192.168.2.0/24
internal network (on each node): no IP space, physical connection only (eth1)
controller node public IP: 192.168.2.12 (eth0)
network node public IP: 192.168.2.13 (eth0)
compute node public IP: 192.168.2.14 (eth0)
OS version (each node): CentOS Linux release 7.2.1511 (Core)
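The layout above maps to a handful of Packstack answer-file parameters (an excerpt sketch; the VLAN range and bridge names are example values, not requirements):

```ini
CONFIG_CONTROLLER_HOST=192.168.2.12
CONFIG_NETWORK_HOSTS=192.168.2.13
CONFIG_COMPUTE_HOSTS=192.168.2.14
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1200:1300
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1
```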
Continue reading “OpenStack Kilo 3 Node Installation (Controller, Network, Compute) on CentOS 7”