Introducing the OpenVolcano architecture

OpenVolcano is based on a highly modular and extensible software architecture. Its main logical building blocks, as depicted in the figure below, are the StratoV and the Caldera, which represent the control and data planes of the architecture, respectively. Physically, these building blocks, and the elements composing them, can be located on multiple machines spread across the telecom provider's edge network, in order to guarantee the best allocation of the applications and functions needed to realize the personal cloud services. The communication among the various modules and the stakeholders (users, telecom operator, and cloud service provider) is mainly performed through RESTful APIs and web interfaces, with additional OpenFlow (OF) connections and proprietary protocols where needed.


The StratoV includes the control-plane features. As shown in the figure above, from the functional point of view, the StratoV can be divided into three layers, devoted, respectively, to the collection of monitoring data and configuration, to the elaboration of resource allocations and forwarding rules, and to the commit of such rules.
The network manager is physically located on a remote/separate server, but it belongs to the control-plane architecture in every respect, and more specifically to the Data Collection and Configuration layer. The network manager is in charge of the long-term configuration/optimization of the available physical and logical resources, to properly satisfy the bandwidth and quality levels required by the different cloud services instantiated over time. In more detail, it collects the monitored data coming from the data-plane elements in an internal database, which provides a full view of the deployed services and of the state of the infrastructure resources. Thanks to this information, the Long-term Analytics task can predict service demand, plan resource provisioning, prevent congestion and possible failures, and maximize energy savings.
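To make the role of the Long-term Analytics task more concrete, the sketch below predicts the next demand sample for a service with an exponential moving average over its history and flags when provisioned capacity should be raised. The function names, the smoothing approach, and the headroom factor are illustrative assumptions, not part of OpenVolcano.

```python
def predict_demand(history, alpha=0.5):
    """Exponentially weighted moving average of past demand samples."""
    if not history:
        return 0.0
    estimate = history[0]
    for sample in history[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

def needs_provisioning(history, capacity, headroom=1.2):
    """True if the predicted demand, times a safety headroom, exceeds capacity."""
    return predict_demand(history) * headroom > capacity
```

A real predictor would also exploit seasonality (e.g., daily traffic patterns), but the capacity check is the same: compare the forecast against what is currently provisioned.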
The database collects the information provided by OpenStack regarding the users and their subscribed services. The database can also detect modifications in the service subscriptions and notify the control functionalities of them through a REST interface.
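A hypothetical sketch of this notification flow: the database diffs successive snapshots of the subscribed services and pushes each change to the control plane. The `SubscriptionWatcher` name and event layout are assumptions; in OpenVolcano the notifications would travel over the REST interface rather than a local callback.

```python
def diff_subscriptions(old, new):
    """Return (added, removed, changed) service subscriptions between snapshots."""
    added = {k: v for k, v in new.items() if k not in old}
    removed = {k: v for k, v in old.items() if k not in new}
    changed = {k: v for k, v in new.items() if k in old and old[k] != v}
    return added, removed, changed

class SubscriptionWatcher:
    def __init__(self, notify):
        self.snapshot = {}
        self.notify = notify  # callable standing in for a POST to the REST endpoint

    def poll(self, current):
        """Compare the current subscriptions with the last snapshot and emit events."""
        added, removed, changed = diff_subscriptions(self.snapshot, current)
        for event, items in (("added", added), ("removed", removed), ("changed", changed)):
            for user, service in items.items():
                self.notify({"event": event, "user": user, "service": service})
        self.snapshot = dict(current)
```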
The pyroclast block provides the same functionality as the OpenStack Nova and Neutron modules. In more detail, it is exploited by the cloud service provider to manage the service chains, including the computation and networking aspects needed for proper service support.
The Elaboration layer comprises all the functionalities needed to aggregate, elaborate, and deliver the updated physical and logical configuration for the data-plane devices. It is composed of three main modules.
The cinder module is in charge of aggregating the monitored data and detecting possible faults. This module can issue synchronous requests for information to the data-plane devices and the network manager, and verify the proper behavior of all network components. The conduit module can be considered the center of the control plane, as it is responsible for receiving the information related to monitoring and services, forwarding it for further elaboration, and then providing the obtained configuration to the second control stage.
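A minimal sketch of what cinder's aggregation and fault detection could look like: per-device samples are averaged, and a device is flagged as suspect when it stops reporting or its load crosses a threshold. The field names and thresholds below are illustrative assumptions.

```python
from collections import defaultdict

def aggregate(samples):
    """samples: list of {"device": id, "cpu": float}. Returns the mean CPU per device."""
    totals = defaultdict(lambda: [0.0, 0])
    for s in samples:
        totals[s["device"]][0] += s["cpu"]
        totals[s["device"]][1] += 1
    return {dev: total / count for dev, (total, count) in totals.items()}

def detect_faults(expected_devices, aggregated, cpu_threshold=0.95):
    """A device is suspect if it reported no samples or exceeds the CPU threshold."""
    silent = set(expected_devices) - set(aggregated)
    overloaded = {d for d, cpu in aggregated.items() if cpu > cpu_threshold}
    return silent | overloaded
```

In the architecture, the resulting suspect set would be what cinder signals to the conduit for recovery.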
The crater module is in charge of the real-time configuration of the logical and virtual resources, which can be triggered by the current users' locations and by the monitoring of parameters like power consumption or CPU utilization. The data used to feed this engine come both from the aggregated monitoring information provided by cinder and from the deployed services as reported by the database. Exploiting these data, a number of Consolidation and Orchestration algorithms calculate the optimal resource allocation, according to the required QoE/QoS and the estimated workload/traffic volumes, together with the corresponding OpenFlow rules.
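To illustrate the consolidation idea, the toy routine below packs VM loads onto as few servers as possible with a first-fit-decreasing heuristic, so that emptied servers can be put into low-power states. The actual crater algorithms also weigh QoS constraints and traffic estimates; this sketch and its names are assumptions.

```python
def consolidate(vm_loads, server_capacity):
    """Pack VMs (first-fit decreasing); returns a list of servers, each a list of (vm, load)."""
    servers = []  # each entry: [remaining_capacity, [(vm, load), ...]]
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if server[0] >= load:          # first server with room left
                server[0] -= load
                server[1].append((vm, load))
                break
        else:                              # no server fits: power on a new one
            servers.append([server_capacity - load, [(vm, load)]])
    return [assignments for _, assignments in servers]
```

Here the output placement would then be translated into migration directives and OF rules for the Commit stage.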
In the case of faults, cinder signals to the conduit the presence of a device or sub-element that is not behaving properly, and the conduit sends a request to the crater. The device is then removed from the graph of the available resources, and a new configuration is computed accordingly.
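This recovery step can be illustrated as follows: the failed device is dropped from the resource graph and a replacement path is computed on what remains. The adjacency-map representation and the breadth-first search are illustrative, not the actual crater data structures.

```python
from collections import deque

def remove_device(graph, failed):
    """Return a copy of the adjacency map without the failed node."""
    return {n: [m for m in adj if m != failed]
            for n, adj in graph.items() if n != failed}

def shortest_path(graph, src, dst):
    """Breadth-first search; returns a node list, or None if dst is unreachable."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```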
The communication of the re-calculated OF rules and allocations is again managed by the conduit, which transmits them to the Commit stage, where they are made available to the data-plane elements. Specifically, the OF rules computed by crater are sent to magma, which provides the typical OF controller capabilities and is responsible for updating the flow tables of the elements devoted to forwarding (e.g., physical and virtual OF switches).
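A simplified model of what magma pushes: flow entries with a match, a priority, and actions, inserted into a switch's flow table and looked up highest-priority-first, as in OpenFlow. The entry layout and the table-miss behaviour shown here are assumptions, not the OF 1.3 wire format.

```python
def install_rule(table, priority, match, actions):
    """Insert a flow entry, keeping the table sorted by descending priority."""
    table.append({"priority": priority, "match": match, "actions": actions})
    table.sort(key=lambda e: -e["priority"])

def lookup(table, packet):
    """Return the actions of the first (highest-priority) matching entry."""
    for entry in table:
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["actions"]
    return ["drop"]  # table-miss behaviour assumed for this sketch
```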
Finally, lava abstracts the virtualization libraries deployed at the data plane, in order to make them transparent to the computed resource allocation. In this way, the rules can be computed by crater without knowledge of the virtualization platform (libvirt, Capedwarf, etc.) deployed on each server.
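One way to picture this abstraction is a common driver interface behind which each virtualization backend hides, so that crater's decisions stay platform-agnostic. The classes and method names below are assumptions for illustration; real backends would wrap libvirt or Capedwarf calls.

```python
class VirtDriver:
    """Backend-agnostic interface that lava could expose to the control plane."""
    def start(self, instance): raise NotImplementedError
    def stop(self, instance): raise NotImplementedError

class LibvirtDriver(VirtDriver):
    def start(self, instance): return f"libvirt: booting VM {instance}"
    def stop(self, instance): return f"libvirt: destroying VM {instance}"

class CapedwarfDriver(VirtDriver):
    def start(self, instance): return f"capedwarf: deploying app {instance}"
    def stop(self, instance): return f"capedwarf: undeploying app {instance}"

def apply_allocation(driver, to_start, to_stop):
    """Apply a computed allocation through whichever backend is deployed."""
    return [driver.start(i) for i in to_start] + [driver.stop(i) for i in to_stop]
```

The allocation logic only ever calls `apply_allocation`; swapping the driver swaps the platform.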


The Caldera of OpenVolcano represents the data plane and can be composed of up to four modules. As shown in Figure 2, the servers in the data plane are connected among themselves and to the StratoV through OpenFlow switches. The servers composing the Caldera are built on top of a Linux architecture. Netlink is used for communication between the kernel and user-space, where most of our implementations take place: user-space libraries have been shown to provide massive performance improvements, at the price of a reduced number of readily available network functionalities with respect to kernel-level forwarding. Aside from Netlink, which is mainly used for the exchange of generic data between kernel and user-space (e.g., requests for acknowledgement, echo, or FIB updates), the other interfaces between kernel and user-space are the /proc and /dev filesystems. The ACPI standard is used to represent the power management capabilities of the hardware components (packet processing engines, network cards, cores, etc.).
Packets arriving through the NICs are managed by high-performance packet processing applications based on DPDK. All packets received through the NICs are then handled by the quake module, a software OpenFlow switch implemented in user-space according to the OF 1.3 specification, which will be described in more detail in Section VI. KVM has been chosen to implement the hypervisor in charge of handling the VMs according to the directives sent by conduit, as sketched in the previous section. The geyser module provides data on the server's current status to the crater module, after the aggregation operated by cinder. Such data include the current energy state (according to the ACPI standard), CPU load, and memory usage, which can be monitored both globally and at the single-process level. This communication is managed through the GAL REST interface.
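As an illustration of the kind of data geyser could gather from the /proc filesystem, the sketch below derives CPU utilization from deltas between two `/proc/stat` CPU lines and memory usage from `/proc/meminfo`. Only the parsing is shown; the packaging over the GAL REST interface is omitted, and the field choices are assumptions.

```python
def cpu_utilization(stat_before, stat_after):
    """Fraction of non-idle time between two 'cpu ...' lines of /proc/stat."""
    t0 = [int(v) for v in stat_before.split()[1:]]
    t1 = [int(v) for v in stat_after.split()[1:]]
    total = sum(t1) - sum(t0)
    idle = t1[3] - t0[3]  # the 4th field of the cpu line is idle time
    return (total - idle) / total if total else 0.0

def mem_used_kb(meminfo):
    """Used memory in kB, from the MemTotal and MemAvailable lines of /proc/meminfo."""
    fields = {}
    for line in meminfo.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            fields[key] = int(rest.split()[0])
    return fields["MemTotal"] - fields["MemAvailable"]
```

In a running system, the two snapshots would come from reading `/proc/stat` at the start and end of a monitoring interval.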
The geyser and quake modules are present in every server composing the data plane, while additional libraries can be available depending on the deployed service. In the example reported in Figure 2, libvirt is used to manage VM migrations, while other libraries, such as Capedwarf, allow the management of PaaS instances.