Developing cluster software is complicated if you must actually run a whole cluster on a set of physical machines. This calls for a development environment that is self-contained and can be run without any setup.

The goals for the Cloud Integrated Advanced Orchestrator development environment are that it:

  • Requires minimal setup by the user
  • Does not affect the user’s development system in any manner (i.e. the user can keep the firewall rules, SELinux setup, … intact)
  • Supports modes that allow it to run on a range of devices, from powerful workstations to less powerful laptops
  • Provides the ability to validate all code changes the user makes against the project’s release criteria

This page documents a way to set up an entire Cloud Integrated Advanced Orchestrator cluster inside a single machine. This cluster-in-a-machine mode is ideal for developers who want to build the project from source, make changes, and perform quick end-to-end functional integration testing without requiring multiple machines/VMs, creating a custom networking environment, or maintaining a bevy of physical machines and a physical network.

We support two modes of operation:

  • Configurable Cloud VM (ccloudvm) mode: a virtual machine is automatically created and launched, and the virtual cluster is set up and tested within that virtual machine
  • Bare metal mode: the virtual cluster is set up on the host machine itself

The ccloudvm mode is the preferred mode of development on systems that have the resources and CPU capabilities needed, as it fully isolates the Cloud Integrated Advanced Orchestrator virtual cluster and sets up an environment in which it is known to work seamlessly. In addition, the ccloudvm mode does not require any changes to the user’s network firewall setup. However, ccloudvm mode does require VT-x nesting to be supported by the host.

The bare metal mode is the highest performance mode, but may require some network firewall modification. It also uses fewer resources and can run on machines whose CPUs do not support VT-x nesting.
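To decide between the two modes, you can check whether the host supports nested virtualization before starting. The sketch below reads the KVM kernel module's nesting flag on an Intel host; the path is standard Linux, though the exact output ("Y" vs "1") varies by kernel version.

```shell
# Check whether KVM nested virtualization (required by ccloudvm mode) is
# enabled; the flag reads "Y" or "1" when nesting is on.
nested=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested" ]; then
    cat "$nested"
else
    echo "kvm_intel not loaded; on AMD hosts check /sys/module/kvm_amd/parameters/nested"
fi
```

If nesting is off or unavailable, use bare metal mode instead.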

In both modes the cluster is configured in a special all-in-one development mode in which cluster nodes have dual roles (i.e. the launcher can be a Network Node and a Compute Node at the same time).

In the text below, machine refers to the ccloudvm VM in the case of ccloudvm mode, and to the host system in the case of bare metal mode.

Components running on the Machine

  1. Controller
  2. Scheduler
  3. Compute+Network Node Agent (i.e. CN + NN Launcher)
  4. Workloads (Containers and VMs)
  5. Mock OpenStack Services
  6. Machine Local DHCP Server

The machine acts as the compute node, network node, ciao-controller, and ciao-scheduler, and also hosts the other OpenStack and DHCP services.

Graphical Overview

When the system is functioning, the overall setup manifests as follows:

As you can see below, the cluster runs on an isolated virtual network resident inside the machine. Hence the cluster is invisible outside the machine and completely self-contained.

  |                                                                            |
  |                                                                            |
  |                                                                            |
  |                                                [Tenant VMs]  [CNCI VMs]    |
  |                                                   |  |  |       ||         |
  |                                   Tenant Bridges ----------     ||         |
  |                                                       |         ||         |
  |                                                       |         ||         |
  |      [scheduler] [controller]  [CN+NN Launcher]       |         ||         |
  |           ||       ||             ||                  |         ||         |
  |           ||       ||             ||                  |         ||         |
  |           ||       ||             ||                  |         ||         |
  |           ||       ||             ||                  |         ||         |
  |           ||       ||             ||                  |         ||         |
  |           ||       ||             ||      [DHCP/DNS   |         ||         |
  |           ||       ||             ||        Server]   |         ||         |
  |           ||       ||             ||           ||     |         ||         |
  |  ------------------------------------------------------------------------  |
  |           Host Local Network Bridge + macvlan (ciao_br, ciaovlan)          |
  |                                                                            |
  |                                                                            |
                              Development Machine

Install Go

On the host, install the latest release of Go for your distribution (see Installing Go).

Getting Started with Configurable Cloud VM (ccloudvm)

Ccloudvm is a small utility for setting up a VM that contains everything you need to run Single VM. All you need to have installed on your machine is:

  • Go 1.8 or greater

Once Go is installed you simply need to type

go get
$(go env GOPATH)/bin/ccloudvm setup
$(go env GOPATH)/bin/ccloudvm create ciao

ccloudvm will install some needed dependencies on your local PC such as qemu and xorriso. It will then download an Ubuntu Cloud Image and create a VM based on this image. It will boot the VM and install in that VM everything you need to run Single VM, including docker, ceph, go, gcc, etc. When ccloudvm create has finished you can connect to the newly created VM with

$GOPATH/bin/ccloudvm connect

Your host’s GOPATH is mounted inside the VM. Thus you can edit Go code on your host machine and test in Single VM.


One of the nice things about using ccloudvm is that it is proxy aware. When you run ccloudvm create, ccloudvm looks in its environment for proxy variables such as http_proxy, https_proxy and no_proxy. If it finds them it ensures that these proxies are correctly configured for all the software that it installs and uses inside the VM, e.g., apt, docker, wget, ciao. So if your development machine is sitting behind a proxy, ensure you have your proxy environment variables set before running ccloudvm.
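For example, on a machine behind a corporate proxy you might export something like the following before running ccloudvm create. The proxy URLs here are placeholders, not real values; substitute your site's actual proxy settings.

```shell
# Placeholder proxy settings -- replace proxy.example.com:8080 with your
# site's actual proxy before running "ccloudvm create ciao".
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1
```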

Getting Started with Bare Metal

Install Docker

Install the latest Docker release for your distribution, following the instructions from Docker (see Installing Docker).

Install Cloud Integrated Advanced Orchestrator dependencies

Install the following packages which are required:

  1. qemu-system-x86_64 and qemu-img, to launch the VMs and create qcow images
  2. gcc, required to build some of the dependencies
  3. dnsmasq, required to set up a test DHCP server

On Clear Linux all of these dependencies can be satisfied by installing the following bundles:

swupd bundle-add cloud-control go-basic os-core-dev kvm-host os-installer
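On Debian or Ubuntu systems, a roughly equivalent set can be installed with the packages below. The package names are an assumption based on common distribution naming, not taken from the project's documentation; verify them against your release.

```shell
# Assumed Debian/Ubuntu package names covering qemu-system-x86_64, qemu-img,
# gcc, and dnsmasq; requires root and network access.
sudo apt-get install qemu-system-x86 qemu-utils gcc dnsmasq
```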

Set up passwordless sudo

Set up passwordless sudo for the user who will be running the script below.
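One common way to do this is a drop-in file under /etc/sudoers.d. In the sketch below the username dev is a placeholder; substitute your own. Note that this grants that user unrestricted passwordless root, which is acceptable for a disposable development machine but not for shared systems.

```shell
# Grant passwordless sudo to the placeholder user "dev" via a sudoers drop-in,
# then validate the file's syntax with visudo before relying on it.
echo 'dev ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/dev-nopasswd
sudo chmod 0440 /etc/sudoers.d/dev-nopasswd
sudo visudo -cf /etc/sudoers.d/dev-nopasswd
```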

Cluster External Network Access

If you desire to provide external network connectivity to the workloads, then the host needs to act as a gateway to the Internet. The host needs to enable IPv4 forwarding and ensure all traffic exiting the cluster via the host is NATed.

This assumes the host has a single network interface. For multi-homed systems, the setup is more complicated and needs an appropriate routing setup, which is outside the scope of this document. If you have a custom firewall configuration, you will need to set things up appropriately.

Very simplistically, this can be done as follows:

#$device is the network interface on the host
iptables -t nat -A POSTROUTING -o $device -j MASQUERADE 

echo 1 > /proc/sys/net/ipv4/ip_forward

Download and build the sources

Download and build the Cloud Integrated Advanced Orchestrator sources:

cd $GOPATH/src
go get -v -u -tags debug

You should see no errors.

Verify that the Cloud Integrated Advanced Orchestrator is fully functional

Now that you have the machine set up (either a bare metal setup or a ccloudvm VM setup), you can quickly verify all aspects of the cluster, including VM launch, container launch, and networking.

These steps are performed inside the machine.

To do this simply run the following:

cd $GOPATH/src/
. ~/local/
#Cleanup any previous setup
#Set up the test environment
. ~/local/
#Perform a full cluster test

The script will:

  • Create multiple instances of Tenant VMs and Containers
  • Test network connectivity between containers
  • Test SSH reachability into VMs with private and external IPs
  • Delete all the VMs and Containers that were created

If the script reports success, it indicates that the changes the developer has made have not broken any functionality across the Cloud Integrated Advanced Orchestrator components.

To quickly test any changes you make, rerun the script and observe that it reports no failures.

Prior to submitting a change request to the Cloud Integrated Advanced Orchestrator, please run the BAT tests below to ensure your changes meet the ciao acceptance criteria. The time needed for ./ and ./ to build the project from source, configure its components into a virtual cluster, and then launch and tear down containers and VMs is on the order of one minute of total elapsed time.

Ongoing Usage

Once it’s finished, the script leaves behind a virtual cluster which can be used to perform manual tests. These tests are performed using the ciao tool.

The ciao tool requires that some environment variables be set before it will work properly. These variables contain the URLs of the various Cloud Integrated Advanced Orchestrator services and the credentials needed to access those services. The script creates a shell source file that contains valid values for the newly set up cluster. To initialise these variables you just need to source that file, e.g.,

. ~/local/

To check that everything is working, try the following command:

ciao list workloads

Running the BAT tests

The ciao project includes a set of acceptance tests that must pass before each release is made. The tests perform various tasks such as listing workloads, creating and deleting instances, etc. These tests can be run inside the machine:

# Source the file if you have not already done so
. ~/local/
cd $GOPATH/src/
test-cases -v ./...

For more information on the BAT tests please see the README.

Cleanup / Teardown

To cleanup and tear down the cluster:

cd $GOPATH/src/
#Cleanup any previous setup
. ~/local/

Known Issues with Bare Metal

  • Does not work on Fedora due to default firewall rules. Issue #526

In order to allow the traffic required by the test cases, you can add temporary rules like the ones shown below.

iptables -I INPUT   1 -p tcp -m tcp --dport 8888 -j ACCEPT
iptables -I INPUT   1 -p 47 -j ACCEPT
iptables -I OUTPUT  1 -p 47 -j ACCEPT
iptables -I INPUT   1 -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -I OUTPUT  1 -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -I FORWARD 1 -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -I FORWARD 1 -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -I FORWARD 1 -p udp -m udp --dport 67:68 -j ACCEPT
iptables -I FORWARD 1 -p udp -m udp --dport 123 -j ACCEPT
iptables -I FORWARD 1 -p udp -m udp --dport 53 -j ACCEPT
iptables -I FORWARD 1 -p udp -m udp --dport 5355 -j ACCEPT
iptables -I FORWARD 1 -p icmp -j ACCEPT

And delete them after the tests using

iptables -D INPUT   -p tcp -m tcp --dport 8888 -j ACCEPT
iptables -D INPUT   -p 47 -j ACCEPT
iptables -D OUTPUT  -p 47 -j ACCEPT
iptables -D INPUT   -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -D OUTPUT  -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -D FORWARD -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -D FORWARD -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -D FORWARD -p udp -m udp --dport 67:68 -j ACCEPT
iptables -D FORWARD -p udp -m udp --dport 123 -j ACCEPT
iptables -D FORWARD -p udp -m udp --dport 53 -j ACCEPT
iptables -D FORWARD -p udp -m udp --dport 5355 -j ACCEPT
iptables -D FORWARD -p icmp -j ACCEPT
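Since the insert and delete command lists above must stay in sync rule for rule, one optional convenience is to drive both from a single shared rule table. The sketch below is not part of the project's tooling; the function name and the IPTABLES override are assumptions for illustration.

```shell
# Sketch: apply or remove the temporary firewall rules from one shared list,
# so the -I and -D invocations cannot drift apart. Requires root to run for
# real; set IPTABLES=echo for a dry run that just prints the commands.
IPTABLES=${IPTABLES:-iptables}

ciao_test_rules() {    # usage: ciao_test_rules insert|delete
    local mode=$1 chain rule
    while read -r chain rule; do
        [ -z "$chain" ] && continue
        if [ "$mode" = insert ]; then
            $IPTABLES -I "$chain" 1 $rule
        else
            $IPTABLES -D "$chain" $rule
        fi
    done <<'EOF'
INPUT -p tcp -m tcp --dport 8888 -j ACCEPT
INPUT -p 47 -j ACCEPT
OUTPUT -p 47 -j ACCEPT
INPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
FORWARD -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
FORWARD -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT
FORWARD -p udp -m udp --dport 67:68 -j ACCEPT
FORWARD -p udp -m udp --dport 123 -j ACCEPT
FORWARD -p udp -m udp --dport 53 -j ACCEPT
FORWARD -p udp -m udp --dport 5355 -j ACCEPT
FORWARD -p icmp -j ACCEPT
EOF
}
```

Run ciao_test_rules insert before the tests and ciao_test_rules delete afterwards.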