Infra plugins

Status Check

The statuscheck infrastructure plugin monitors the overall status of a CN-Infra based app by collecting and aggregating partial statuses of the agent's plugins. The status is exposed to external clients via ETCD (datasync) and HTTP, as shown in the following diagram:

status check

Overall Agent Status

The overall Agent Status is aggregated from all Plugins’ Status (logical AND for each Plugin Status success/error).
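
The aggregation rule above can be sketched as follows. This is an illustrative stand-in only; PluginStatus and overallStatus are hypothetical names, not the actual cn-infra types:

```go
package main

import "fmt"

// PluginStatus is a simplified stand-in for the per-plugin status
// collected by statuscheck (hypothetical type, for illustration only).
type PluginStatus struct {
	Name string
	OK   bool
}

// overallStatus aggregates per-plugin statuses with a logical AND:
// the agent is OK only if every plugin reports success.
func overallStatus(plugins []PluginStatus) bool {
	for _, p := range plugins {
		if !p.OK {
			return false
		}
	}
	return true
}

func main() {
	statuses := []PluginStatus{
		{Name: "GOVPP", OK: true},
		{Name: "ETCD", OK: false},
	}
	// One plugin in error -> overall agent status is in error.
	fmt.Println(overallStatus(statuses))
}
```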

The agent's current overall status can be retrieved from ETCD under the following key: /vnf-agent/<agent-label>/check/status/v1/agent

$ etcdctl get /vnf-agent/<agent-label>/check/status/v1/agent

To verify the agent status via HTTP (e.g. for Kubernetes liveness and readiness probes), use the /liveness and /readiness URLs:

$ curl -X GET http://localhost:9191/liveness
$ curl -X GET http://localhost:9191/readiness

To change the HTTP server port (default 9191), use the http-port option of the agent, e.g.:

$ vpp-agent -http-port 9090

Plugin Status

A plugin may use the PluginStatusWriter.ReportStateChange API to PUSH status information at any time. For optimum performance, statuscheck then propagates the status report to external clients only if it has changed since the last update.

Alternatively, a plugin may choose the PULL-based approach and define a probe function passed to the PluginStatusWriter.Register API. statuscheck will then periodically probe the plugin for its current status. Again, the status is propagated further only if it has changed since the last enquiry.

It is recommended not to mix the PULL-based and PUSH-based approaches within the same plugin.
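
The two approaches can be sketched with a mock writer. The type and method names below mirror the API described above but are simplified stand-ins, not the real cn-infra statuscheck package:

```go
package main

import "fmt"

// pluginState is a simplified status value (illustration only).
type pluginState string

const (
	stateOK    pluginState = "OK"
	stateError pluginState = "ERROR"
)

// statusWriter mocks the statuscheck writer; it remembers the last
// reported state per plugin so unchanged reports are not propagated.
type statusWriter struct{ last map[string]pluginState }

// ReportStateChange models the PUSH approach: the plugin calls it
// whenever its state changes.
func (w *statusWriter) ReportStateChange(plugin string, s pluginState) {
	if w.last[plugin] == s {
		return // unchanged since last update -> not propagated further
	}
	w.last[plugin] = s
	fmt.Printf("propagating %s -> %s\n", plugin, s)
}

// Register models the PULL approach: statuscheck stores the probe
// function and invokes it periodically.
func (w *statusWriter) Register(plugin string, probe func() pluginState) func() {
	return func() { w.ReportStateChange(plugin, probe()) }
}

func main() {
	w := &statusWriter{last: map[string]pluginState{}}

	// PUSH: the plugin reports explicitly.
	w.ReportStateChange("myplugin", stateOK)

	// PULL: statuscheck invokes the probe on a timer (simulated here).
	poll := w.Register("otherplugin", func() pluginState { return stateError })
	poll()
	poll() // second poll: state unchanged, so nothing is propagated
}
```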

To retrieve the current status of a plugin from ETCD, use the following key template: /vnf-agent/<agent-label>/check/status/v1/plugin/<PLUGIN_NAME>

For example, to retrieve the status of the GoVPP plugin, use:

$ etcdctl get /vnf-agent/<agent-label>/check/status/v1/plugin/GOVPP
{"state":2,"last_change":1496322205,"last_update":1496322361,"error":"VPP disconnected"}

Push Plugin Status: status check push

Pull Plugins Status - PROBING: status check pull

Index Map

The idxmap package provides an enhanced mapping structure to help in the following use cases:

  • Exposing read-only access to plugin local data for other plugins
  • Secondary indexing
  • Data caching for key-value store (such as ETCD)

For a more detailed description see the godoc.

Exposing Plugin-Local Information Use Case

App plugins often need to expose structured information to other plugins inside the agent (see the following diagram). Structured data stored in idxmap is available for (possibly concurrent) read access by other plugins:

  1. either via lookup using primary keys or secondary indices;
  2. or via watching for data changes in the map using channels or callbacks (subscribe for changes and receive notification once an item is added, changed or removed).

idxmap local
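
Both lookup styles can be sketched with a toy version of the map. The names (indexedMap, GetValue, ListNames) are illustrative stand-ins, not the actual idxmap API:

```go
package main

import "fmt"

// item is a sample value stored in the map.
type item struct {
	Name string // primary key
	IP   string // secondary index
}

// indexedMap is a toy idxmap: a primary map plus a secondary
// index from IP address to item names (illustration only).
type indexedMap struct {
	byName map[string]item
	byIP   map[string][]string
}

func newIndexedMap() *indexedMap {
	return &indexedMap{byName: map[string]item{}, byIP: map[string][]string{}}
}

// Put stores the item and updates the secondary index.
func (m *indexedMap) Put(it item) {
	m.byName[it.Name] = it
	m.byIP[it.IP] = append(m.byIP[it.IP], it.Name)
}

// GetValue looks up an item by its primary key.
func (m *indexedMap) GetValue(name string) (item, bool) {
	it, ok := m.byName[name]
	return it, ok
}

// ListNames looks up item names by the secondary index.
func (m *indexedMap) ListNames(ip string) []string {
	return m.byIP[ip]
}

func main() {
	m := newIndexedMap()
	m.Put(item{Name: "if0", IP: "10.0.0.1"})
	m.Put(item{Name: "if1", IP: "10.0.0.1"})
	fmt.Println(m.ListNames("10.0.0.1")) // prints [if0 if1]
}
```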

Caching Use Case

It is useful to have the data from a key-value store cached when you need to:

  • minimize the number of lookups into the key-value store
  • execute lookups by secondary indexes for key-value stores that do not necessarily support secondary indexing (e.g. ETCD)

CacheHelper turns idxmap (injected as field IDX) into an indexed local copy of remotely stored key-value data. CacheHelper watches the target key-value store for data changes and resync events. Received key-value pairs are transformed into the name-value (+ secondary indices if defined) pairs and stored into the injected idxmap instance. For a visual explanation, see the diagram below:

idxmap cache
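
The watch-and-mirror flow above can be sketched as follows. The kvEvent type and cache are hypothetical simplifications; the real change events come from cn-infra's datasync and feed an idxmap instance:

```go
package main

import "fmt"

// kvEvent models a change event received from a key-value store
// watcher (hypothetical type, for illustration only).
type kvEvent struct {
	Key    string
	Value  string
	Delete bool
}

// cache mirrors remote key-value data locally, the way CacheHelper
// keeps the injected idxmap in sync with the store.
type cache map[string]string

// apply updates the local copy from one received change event.
func (c cache) apply(ev kvEvent) {
	if ev.Delete {
		delete(c, ev.Key)
		return
	}
	c[ev.Key] = ev.Value
}

func main() {
	events := make(chan kvEvent, 3)
	events <- kvEvent{Key: "/vnf-agent/vpp1/if/if0", Value: "up"}
	events <- kvEvent{Key: "/vnf-agent/vpp1/if/if1", Value: "down"}
	events <- kvEvent{Key: "/vnf-agent/vpp1/if/if1", Delete: true}
	close(events)

	c := cache{}
	for ev := range events {
		c.apply(ev)
	}
	fmt.Println(len(c)) // only if0 remains in the cache
}
```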

The constructor that combines CacheHelper with idxmap to build the cache from this example can be found in the godoc as well.

Log Manager

The Log Manager plugin allows viewing and modifying the log levels of loggers via a REST API.

API:

  • List all registered loggers: curl -X GET http://<host>:<port>/log/list
  • Set the log level of a registered logger: curl -X PUT http://<host>:<port>/log/<logger-name>/<log-level>

<log-level> is one of: debug, info, warning, error, fatal, panic

<host> and <port> are determined by the configuration of rest.Plugin.

Config file

  • The logger config file is composed of two parts: the default level applied to all plugins, and a map where each logger can have its own log level defined.


The initial log level can be set using the environment variable INITIAL_LOGLVL. This variable overrides the default-level from the configuration file. However, per-logger definitions in the loggers section override the value set by the environment variable for the loggers they name.
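
For illustration, a config file with both parts might look as follows (the key names default-level and loggers follow the structure described above; treat the exact spelling as an assumption if your version differs):

```yaml
# default level applied to all plugins' loggers
default-level: info

# per-logger overrides; these take precedence over default-level
# and over the INITIAL_LOGLVL environment variable
loggers:
  - name: "agentcore"
    level: debug
  - name: "kafka"
    level: error
```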


Tracer

A simple utility for measuring and logging the time taken by various events. To create a new tracer, call:

t := NewTracer(name string, log logging.Logger)

The Tracer object can store a new entry with t.LogTime(entity string, start Time), where entity is a string representation of the measured object (the name of a function, a structure, or just a simple string) and start is the start time. The tracer can measure a repeating event (e.g. in a loop); every occurrence is stored with its particular index.

Use t.Get() to read all measurements. The Trace object contains a list of entries and overall time duration.

The last method is t.Clear(), which removes all entries from the internal database.


Messaging/Kafka

The client package provides single-purpose clients for publishing synchronous/asynchronous messages and for consuming selected topics.

The mux package uses these clients and allows sharing their access to Kafka brokers among multiple entities. This package also implements the generic messaging API defined in the parent package.


The minimal supported version of Kafka is determined by the sarama library; currently Kafka 0.10 and 0.9, although older releases are still likely to work.

If you don't have Kafka installed locally, you can use a docker image for testing:

sudo docker run -p 2181:2181 -p 9092:9092 --name kafka --rm \
 --env ADVERTISED_HOST= --env ADVERTISED_PORT=9092 spotify/kafka

Kafka plugin

Kafka plugin provides access to Kafka brokers.


  • The location of the Kafka configuration file can be defined either by the command line flag kafka-config or via the KAFKA_CONFIG environment variable.

Status Check

  • The Kafka plugin has a mechanism to periodically check the connection status of the Kafka server.


The multiplexer instance has access to the Kafka brokers. To share this access, it allows connections to be created. Two connection types are available: one supports messages of type []byte, the other proto.Message. Both allow creating several SyncPublishers and AsyncPublishers, which implement the BytesPublisher or ProtoPublisher interface, respectively. The connections also provide an API for consuming messages implementing the BytesMessage or ProtoMessage interface, respectively.

    +-----------------+                                  +---------------+
    |                 |                                  |               |
    |  Kafka brokers  |        +--------------+     +----| SyncPublisher |
    |                 |        |              |     |    |               |
    +--------^--------+    +---| Connection   <-----+    +---------------+
             |             |   |              |
   +---------+----------+  |   +--------------+
   |  Multiplexer       |  |
   |                    <--+
   | SyncProducer       <--+   +--------------+
   | AsyncProducer      |  |   |              |
   | Consumer           |  |   | Connection   <-----+    +----------------+
   |                    |  +---|              |     |    |                |
   |                    |      +--------------+     +----| AsyncPublisher |
   +--------------------+                                |                | 
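
The difference between the sync and async publishing styles can be sketched with a mock broker. This is an illustration of the calling pattern only; the real mux wraps sarama producers and the names below (broker, syncPublish, asyncPublish) are hypothetical:

```go
package main

import "fmt"

// broker stands in for the multiplexer's shared access to the
// Kafka brokers (mock, for illustration only).
type broker struct{ log []string }

// syncPublish models a SyncPublisher: it blocks until the message
// is "delivered" and returns the error directly.
func (b *broker) syncPublish(topic, msg string) error {
	b.log = append(b.log, topic+": "+msg)
	return nil
}

// asyncPublish models an AsyncPublisher: it queues the message and
// reports the delivery result on a channel.
func (b *broker) asyncPublish(topic, msg string, done chan<- error) {
	b.log = append(b.log, topic+": "+msg)
	done <- nil
}

func main() {
	b := &broker{}

	// SyncPublisher-style call: error returned directly.
	if err := b.syncPublish("status", "hello"); err != nil {
		fmt.Println("sync publish failed:", err)
	}

	// AsyncPublisher-style call: result delivered asynchronously.
	done := make(chan error, 1)
	b.asyncPublish("status", "world", done)
	fmt.Println("async result:", <-done)

	fmt.Println(len(b.log), "messages published")
}
```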

Process manager

The process manager plugin provides a set of methods to create plugin-defined process instances, together with methods to manage and monitor them. There are several ways to obtain a process instance via the ProcessManager API:

  • New process with options: using the method NewProcess(<cmd>, <options>...), which requires a command and a set of optional parameters.
  • New process from template: using the method NewProcessFromTemplate(<tmp>), which requires a template as a parameter.
  • Attach to an existing process: using the method AttachProcess(<pid>). The process ID is required in order to attach.



Since the application (the management plugin) is the parent of all processes, terminating the application stops all started processes as well. This can be changed with the Detach option (see process options).

Process management methods:

  • Start() starts the plugin-defined process, stores the instance and performs an initial status file read
  • Restart() tries to gracefully stop the process (force-stops it if that fails) and then starts it again. If the instance is not running, it is simply started.
  • Stop() stops the instance using the SIGTERM signal. The process is not guaranteed to be stopped. Note that child processes (if not detached) may end up defunct if stopped this way.
  • StopAndWait() stops the instance using the SIGTERM signal and waits until the process completes.
  • Kill() force-stops the process using the SIGKILL signal and releases all the resources used.
  • Wait() waits until the process completes.
  • Signal() allows the user to send a custom signal to the process. Note that some signals may cause unexpected behavior in process handling.

Process monitor methods:

  • IsAlive() returns true if the process is running
  • GetNotificationChan() returns the channel to which process status notifications are sent. Useful when a process is created via a template with the ‘notify’ field set to true; in other cases, the channel is provided by the user.
  • GetName() returns the process name as defined in the status file
  • GetPid() returns the process ID
  • UpdateStatus() updates the internal status of the plugin and returns the actual status file
  • GetCommand() returns the original process command. Always empty for attached processes.
  • GetArguments() returns the original arguments the process was run with. Always empty for attached processes.
  • GetStartTime() returns the timestamp of when the process was last started
  • GetUpTime() returns the process up-time in nanoseconds

Status watcher

Every process is watched for status changes (regardless of how it was created), whether the process is running or not. The watcher uses standard statuses (running, sleeping, idle, etc.). The state is read from the process status file, and every change is reported. The plugin also defines three plugin-wide statuses:

  • Initial - only for newly created processes; the process command was defined but not yet started
  • Terminated - the process is not running or does not respond
  • Unavailable - the process is running but the status cannot be obtained

The process status is polled periodically, and notifications can be sent to a user-defined channel. In case the process was created via a template, the channel is initialized in the plugin and can be obtained via GetNotificationChan().

Process options

The following options are available for processes. All options can be defined in the API method as well as in the template, and all of them are optional.

Args: takes a string array as a parameter; the process will be started with the given arguments.

Restarts: takes a number as a parameter; defines the count of automatic restarts when the process state becomes terminated.


Usability of this option is limited when defined via template (only the standard os.Stdout and os.Stderr can be used).

Detach: no parameters; the started process is detached from the parent application and given to the current user. This setting allows the process to keep running even after the parent has terminated.

EnvVar: can be used to define environment variables (for example, os.Environ to pass all of them).

Template: requires a name and a run-on-startup flag. This setup creates a template upon process creation. The template path has to be set in the plugin.

Notify: allows the user to provide a notification channel for status changes.


Templates

A template is a file which defines the process configuration for the plugin manager. All templates should be stored in the path defined in the plugin config file.

./process-manager-plugin -process-manager-config=<path-to-file>

The template can be either written by hand using proto model, or generated with the Template option while creating a new process.

On plugin init, all templates are read, and those with run-on-startup set to ‘true’ are also started immediately. The template contains several fields defining the process name, command, arguments and all the other fields from the options.

The plugin API allows templates to be read directly with GetTemplate(<name>) or GetAllTemplates(). The template object can be used as a parameter to start a new process.

Service Label

The servicelabel is a small Core Agent Plugin, which other plugins can use to obtain the microservice label, i.e. the string used to identify the particular VNF. The label is primarily used to prefix keys in ETCD data store so that the configurations of different VNFs do not get mixed up.


  • the service label can be set either by the command line flag microservice-label or by the environment variable MICROSERVICE_LABEL


Example of retrieving and using the microservice label:

plugin.Label = servicelabel.GetAgentLabel()
dbw.Watch(dataChan, cfg.SomeConfigKeyPrefix(plugin.Label))
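
The prefixing described above can be sketched with a small helper. The keyPrefix function below is illustrative; the actual prefix helpers live in cn-infra's servicelabel package:

```go
package main

import "fmt"

// keyPrefix shows how the microservice label prefixes ETCD keys so
// that different VNFs' configurations stay separated (hypothetical
// helper, for illustration only).
func keyPrefix(label string) string {
	return "/vnf-agent/" + label + "/"
}

func main() {
	// prints /vnf-agent/vpp1/check/status/v1/agent
	fmt.Println(keyPrefix("vpp1") + "check/status/v1/agent")
}
```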