Exporting data out of LAVA

LAVA supports two methods of extracting data, XML-RPC and the REST API, and with both the results are available whilst the job is running.

In addition, LAVA has two methods of pushing information about activity within LAVA: notifications and publishing events.


LAVA makes the test results available directly from the instance, without needing to go through lava-tool. The results for any test job which the user can view can be downloaded in CSV or YAML format.

For example, the results for test job number 123 are available as CSV using: https://validation.linaro.org/results/123/csv. The same results for job number 123 are available as YAML using: https://validation.linaro.org/results/123/yaml

If you know the test definition name, you can download the results for that specific test definition only in the same way: use https://validation.linaro.org/results/123/singlenode-advanced/csv for the data in CSV format and https://validation.linaro.org/results/123/singlenode-advanced/yaml for the YAML format.
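Once downloaded, the exported CSV can be processed with standard tooling. A minimal sketch in Python 3, assuming the results CSV contains `name` and `result` columns (the exact column set depends on the LAVA version, and the sample data here is hypothetical):

```python
import csv
import io

def summarise_results(csv_text):
    """Count results per outcome ("pass", "fail", ...) in an exported CSV."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        outcome = row["result"]
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts

# Hypothetical extract of https://validation.linaro.org/results/123/csv
sample = """name,result
linux-linaro-ubuntu-pwd,pass
linux-linaro-ubuntu-uname,pass
linux-linaro-ubuntu-vmstat,fail
"""
print(summarise_results(sample))  # {'pass': 2, 'fail': 1}
```

The same approach works for the YAML export with a YAML parser in place of the csv module.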

Some test jobs can be restricted to particular users or groups of users. The results of these test jobs are restricted in the same way. To download these results, you will need to specify your username and one of your Authentication Tokens - remember to quote the URL if using it on the command line or the & will likely be interpreted by your shell:


$ curl 'https://validation.linaro.org/results/123/singlenode-advanced/yaml?user=user.name&token=yourtokentextgoeshereononeverylongline'

Use the Username as specified in your Profile - this may differ from the username you use when logging in with LDAP.
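The same authenticated download can be scripted. A sketch in Python 3 that builds the URL and lets `urllib` handle the query-string quoting (the username and token values are placeholders):

```python
from urllib.parse import urlencode

def results_url(instance, job_id, user, token, fmt="yaml"):
    """Build an authenticated results URL for a LAVA instance."""
    # urlencode takes care of quoting the user and token values.
    query = urlencode({"user": user, "token": token})
    return "https://%s/results/%s/%s?%s" % (instance, job_id, fmt, query)

url = results_url("validation.linaro.org", 123, "user.name", "yourtokentext")
print(url)
# https://validation.linaro.org/results/123/yaml?user=user.name&token=yourtokentext
```

The resulting URL can then be fetched with `urllib.request.urlopen()` or passed to curl, avoiding manual shell quoting of the `&`.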


Take care of your tokens - avoid using personal tokens in scripts and test definitions or other files that end up in public git repositories. Wherever supported, use https:// when using a token.


LAVA uses XML-RPC to communicate between dispatchers and the server, and XML-RPC methods are available to query various information in LAVA.


When using XML-RPC to communicate with a remote server, check whether https:// can be used to protect the token. http:// connections to a remote XML-RPC server will transmit the token in plaintext. Not all servers have https:// configured. If a token becomes compromised, log in to that LAVA instance and delete the token before creating a new one.

The general structure of an XML-RPC call can be shown in this python snippet:

import xmlrpclib
import json

config = json.dumps({ ... })
server = xmlrpclib.ServerProxy("http://username:API-Key@localhost/RPC2")
jobid = server.scheduler.submit_job(config)

XML-RPC can also be used to query data anonymously:

import xmlrpclib
server = xmlrpclib.ServerProxy("http://sylvester.codehelp/RPC2")
print server.system.listMethods()

Individual XML-RPC commands are documented on the API Help page.

User specified notifications

Users can receive notifications about submitted test jobs by adding a notify block to the test job submission.

The basic setup of the notifications in job definitions has criteria, verbosity, recipients and compare blocks.

Criteria tell the system when the notifications should be sent, and verbosity tells the system how detailed the email notification should be.

Recipient methods accept email and irc options.

Here’s an example notification setup. For more information, please see User notifications in LAVA.

Example test job notification

notify:
  criteria:
    status: incomplete
  verbosity: quiet
  recipients:
  - to:
     user: neil.williams
     method: irc
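The recipient block above uses the irc method; an email recipient follows the same shape. A sketch (the criteria values and the address are placeholders, and the exact schema may vary between LAVA versions):

```yaml
notify:
  criteria:
    status: complete
  verbosity: verbose
  recipients:
  - to:
     method: email
     email: user@example.com
```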

Event notifications

Event notifications are handled by the lava-publisher service on the master. By default, event notifications are disabled.


lava-publisher is distinct from the publishing API. Publishing events covers status changes for devices and test jobs. The publishing API covers copying files from test jobs to external sites.

http://ivoire.dinauz.org/linaro/bus/ is an example of the status change information which can be made available using lava-publisher. Events include:

  • metadata on the instance which was the source of the event
  • description of a status change on that instance.

Example metadata

  • Date and time
  • Topic, for example org.linaro.validation.staging.device
  • the uuid of the message
  • Username

The topic is intended to allow receivers of the event to use filters on incoming events and is configurable by the admin of each instance.
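Because the topic is a dotted string, a receiver can discard uninteresting events before parsing the payload. A sketch of a simple prefix filter in Python (the topic values are examples; each instance configures its own):

```python
def topic_matches(topic, prefix):
    """True if the event topic equals the prefix or is nested under it."""
    return topic == prefix or topic.startswith(prefix + ".")

print(topic_matches("org.linaro.validation.staging.device",
                    "org.linaro.validation.staging"))   # True
print(topic_matches("org.linaro.validation.production.device",
                    "org.linaro.validation.staging"))   # False
```

With pyzmq, a similar effect is available at the socket level, since ZeroMQ SUB sockets filter on a byte prefix of the first message frame: `sock.setsockopt(zmq.SUBSCRIBE, b"org.linaro.validation.staging")` instead of subscribing to everything with `b""`.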

Example device notification

   "device": "staging-qemu05",
   "device_type": "qemu",
   "health_status": "Pass",
   "job": 156223,
   "pipeline": true,
   "status": "Idle"

Event notifications are disabled by default and must be configured before being enabled.

Write your own event notification client

It is quite straightforward to receive events from lava-publisher.

Users can embed this example piece of code in their own local client app to listen to job and/or device events and act according to the returned data.

This script can also be used standalone from command line but is otherwise only an example.

python zmq_client.py -j 357 -p tcp:// -t 1200

zmq_client.py script:

import argparse
import yaml
import logging
import re
import signal
import time
import zmq
from zmq.utils.strtypes import b, u

FINISHED_JOB_STATUS = ["Complete", "Incomplete", "Canceled"]

class JobEndTimeoutError(Exception):
    """ Raise when the specified job does not finish in certain timeframe. """

class Timeout():
    """ Timeout error class with ALARM signal. Accepts time in seconds. """
    class TimeoutError(Exception):
        pass

    def __init__(self, sec):
        self.sec = sec

    def __enter__(self):
        signal.signal(signal.SIGALRM, self.timeout_raise)
        signal.alarm(self.sec)

    def __exit__(self, *args):
        # Cancel any pending alarm on exit.
        signal.alarm(0)

    def timeout_raise(self, *args):
        raise Timeout.TimeoutError()

class JobListener():

    def __init__(self, url):
        self.context = zmq.Context.instance()
        self.sock = self.context.socket(zmq.SUB)

        # Subscribe to all events; filter in wait_for_job_end().
        self.sock.setsockopt(zmq.SUBSCRIBE, b"")
        self.sock.connect(url)

    def wait_for_job_end(self, job_id, timeout=None):
        try:
            with Timeout(timeout):
                while True:
                    msg = self.sock.recv_multipart()
                    try:
                        (topic, uuid, dt, username, data) = msg[:]
                    except ValueError:
                        # Dropping invalid message
                        continue

                    data = yaml.safe_load(data)
                    if "job" in data:
                        if data["job"] == job_id:
                            if data["status"] in FINISHED_JOB_STATUS:
                                return data

        except Timeout.TimeoutError:
            raise JobEndTimeoutError(
                "JobListener timed out after %s seconds." % timeout)

def main():
    # Parse the command line
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--publisher", default="tcp://",
                        help="Publisher host and port")
    parser.add_argument("-j", "--job-id", type=int,
                        help="Job ID to wait for")
    parser.add_argument("-t", "--timeout", type=int,
                        help="Timeout in seconds")

    options = parser.parse_args()

    listener = JobListener(options.publisher)
    print listener.wait_for_job_end(options.job_id, options.timeout)

if __name__ == '__main__':
    main()

Download or view zmq_client.py

If you are interested in using event notifications for a custom frontend, you might want also to look at the code for the ReactOWeb example website: https://github.com/ivoire/ReactOWeb

Extending the client to submit and wait

You may want to expand this example to use the XML-RPC API to submit a test job and retrieve the publisher port at the same time. It is up to you to decide how to protect the token used for the submission:

import xmlrpclib

username = "USERNAME"
token = "TOKEN_STRING"
hostname = "HOSTNAME"
scheme = "https"  # or http if https is not available for this instance.
server = xmlrpclib.ServerProxy("%s://%s:%s@%s/RPC2" % (scheme, username, token, hostname))
port = server.scheduler.get_publisher_event_socket()

At this point, port will be 5500 or whatever the instance has configured as the port for event notifications. The publisher details can then be constructed as:

publisher = "tcp://%s:%s" % (hostname, port)

If the YAML test job submission is in a file called job.yaml, the example can be continued to load and submit this test job:

with open('job.yaml', 'r') as filedata:
    data = filedata.read()
job_id = server.scheduler.submit_job(data)

If the job is a MultiNode job, job_id will be a list of the sub-job IDs, and you will need to decide which job to monitor.
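A sketch of handling both return shapes, so the same monitoring code works for single-node and MultiNode submissions (which sub-job to monitor remains the caller's choice; normalising to a list simply defers that decision):

```python
def job_ids_to_monitor(job_id):
    """Normalise submit_job()'s return value to a list of job ids.

    scheduler.submit_job returns a single id for an ordinary job and
    a list of ids for a MultiNode job.
    """
    if isinstance(job_id, list):
        return job_id
    return [job_id]

print(job_ids_to_monitor(156223))            # [156223]
print(job_ids_to_monitor([156223, 156224]))  # [156223, 156224]
```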