LNST Task API

After writing LNST tasks in XML for a while, you probably keep repeating to yourself:

"XML is such a bad language for writing a sequence of commands, it's so limited. This should be done differently!!"

And you're quite right. We got tired of this, too. Therefore we've added a new feature that allows you to use your own Python program to drive the test execution, with the ability to access all of the LNST infrastructure - machine properties, command execution, template functions, background execution - from the program. Besides the LNST infrastructure, you can benefit from the Python language facilities such as loops, conditional execution and whatever else Python provides in its standard library.

Note that the machine requirements description still needs to be provided as XML code in the recipe.

1. Usage in LNST recipe

Let's assume that the Python program you're using for the task execution is called task_check_ping.py and that your current directory contains a recipe my_recipe.xml:

$ ls .
my_recipe.xml
task_check_ping.py

To include the python script in your recipe, use the following code:

<lnstrecipe>
    <network>
        <host id="1">
            <interfaces>
                <eth label="ttnet" id="testiface">
                    <addresses>
                        <address value="192.168.100.240/24"/>
                    </addresses>
                </eth>
            </interfaces>
        </host>
        <host id="2">
            <interfaces>
                <eth label="ttnet" id="testiface">
                    <addresses>
                        <address value="192.168.100.215/24"/>
                    </addresses>
                </eth>
            </interfaces>
        </host>
    </network>

    <!-- This is it! -->
    <task python="task_check_ping.py"/>
</lnstrecipe>

NOTE: The python task pathnames are relative to the path of the recipe that references them.

1.1 Content of the task_check_ping.py

    # Mandatory import, the ctl handle contains the API
    from lnst.Controller.Task import ctl

    # Get handles for the machines
    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    # Set a config option (persistent=True)
    m1.config("/proc/sys/net/ipv4/conf/all/forwarding", "0", True)

    # run a shell command
    devname = m2.get_devname("testiface")
    m1.run("echo %s" % devname, timeout=30)

    # prepare a module for execution
    ping_mod = ctl.get_module("IcmpPing", options={
                                  "addr": m2.get_ip("testiface", 0),
                                  "count": 40,
                                  "interval": 0.2,
                                  "limit_rate": 95})

    # run the module twice on machine one
    ping_test = m1.run(ping_mod, timeout=30)
    ping_test = m1.run(ping_mod, bg=True, timeout=30)

    # make the controller wait
    ctl.wait(5)

    # interrupt the process
    ping_test.intr()

As you can see, all you need to do to access the LNST API is import the controller handle:

   from lnst.Controller.Task import ctl

Let's have a closer look at the API itself.

2. Controller API

The controller handle provides the following methods:

ControllerAPI
get_host(self, host_id) to get a HostAPI handle for the machine from the recipe spec with a specific id
get_hosts(self) returns a dictionary that maps host_ids to HostAPI objects
get_module(self, name, options) to get a test module API handle
wait(self, seconds) to make the controller wait for the specified number of seconds
get_alias(self, alias) gets the value of a previously defined alias
connect_PerfRepo(self, mapping_file, url, username, password) connects to a PerfRepo instance, returns a PerfRepoAPI object
get_configuration(self) returns the configuration parsed from the recipe file
get_mapping(self) returns the mapping of the recipe configuration to pool machines

The get_host() method returns a handle that is needed when you want to access information about the interfaces or run a command on the machine. See the Host API section for details on how to do that, or look at the examples below.

The get_hosts() method returns a dictionary that maps host_ids of all configured hosts to a HostAPI object.
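For instance, to run the same command on every configured host you can iterate over this dictionary. A minimal sketch (the uname command is just an illustration):

    hosts = ctl.get_hosts()
    for host_id, host in hosts.items():
        # run a trivial command on each host from the recipe
        host.run("uname -r")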

The get_module() method simply returns a Test Module handle that can be run on a test machine. It takes an optional options dictionary to initialize the Test Module.

The wait() method is the equivalent of the <ctl_wait> tag in the recipe xml. It takes one parameter, seconds, whose value tells the controller how long it should wait before continuing the task execution.

The get_alias() method can be used to access aliases defined in the highest namespace of the recipe -- <lnstrecipe>.

The connect_PerfRepo() method is used to get an API object used to communicate with a PerfRepo instance. The first call will create the connection; consecutive calls will only return the already existing connection. If you want to connect to multiple PerfRepo instances, you can create a new PerfRepoAPI object manually. The parameter mapping_file specifies the file to read PerfRepo mapping data from. The parameters url, username and password are optional; when not set, the method will read them from your configuration file.

The get_configuration() and get_mapping() methods are mostly helper functions used when preparing data to send to PerfRepo, but since they could be useful on their own, they were made public.
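A minimal sketch of inspecting both (the log messages are only illustrative):

    import logging

    # the recipe configuration as parsed by the controller
    config = ctl.get_configuration()
    # the match of the recipe configuration to pool machines
    mapping = ctl.get_mapping()

    logging.info("configuration: %s" % str(config))
    logging.info("mapping: %s" % str(mapping))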

Examples

Controller API's get_host() example
   m1 = ctl.get_host("1")
   m2 = ctl.get_host("2")

   ifc_ip = m2.get_ip("testiface")
   ifc_hwaddr = m2.get_hwaddr("testiface")

   m1.run("ping -c 5 " + ifc_ip)
   m1.run("arp -n | grep -i " + ifc_hwaddr)
Controller API's get_module() example

The following example shows the IcmpPing module initialization and its execution on the test machine.

   m1 = ctl.get_host("1")
   m2 = ctl.get_host("2")

   ping_module = ctl.get_module("IcmpPing", options={
                                  "addr": m2.get_ip("testiface", 0),
                                  "count": 40,
                                  "interval": 0.2,
                                  "limit_rate": 95})
   m1.run(ping_module)
Controller API's wait() example
   import logging

   logging.info("I'll wait for 10 seconds")
   ctl.wait(10)
   logging.info("Task execution continues ...")
Controller API's get_alias() example
   my_value = ctl.get_alias("my_alias")

This will return the value of my_alias, defined like this:

<lnstrecipe>
    <define>
        <alias name="my_alias" value="my_value"/>
    </define>
    ...
</lnstrecipe>

3. Host API

The Host API provides the following methods:

HostAPI
config(self, option, value, persistent=False, netns=None) to configure values under /proc or /sys directories
run(self, what, **kwargs) to run commands or test modules on the test machines
get_interface(self, interface_id) returns an InterfaceAPI object that can be used to access the specified interface
get_interfaces(self) returns a dictionary mapping interface_id to InterfaceAPI objects of all configured interfaces on the host
get_devname(self, interface_id) returns the device name of the specified interface
get_hwaddr(self, interface_id) returns the hardware address of the specified interface
get_ip(self, interface_id, addr_index=0) returns an ip address of the specified interface
get_prefix(self, interface_id, addr_index=0) returns the network prefix length of an address of the specified interface
sync_resources(self, modules=[], tools=[]) to manually synchronize resources to the test machine

The config() is the equivalent of the <config> command used in the recipe xml. It takes the parameter option defining the path to a file under the /proc or /sys directory, the parameter value containing the value the option should be set to, an optional persistent flag to make the value persistent between individual tasks, and an optional netns parameter that controls which network namespace should be configured.

The run() is the equivalent of the <run> command used in the recipe xml. The parameter what is used to pass either a Test Module handle obtained through the Controller API's get_module() method or a string containing a command to be run on the command line. This method takes the following keyword arguments that modify its behaviour. For usage, see the examples section below.

run() **kwargs
bg boolean if set to True the command will be run in the background
expect ["pass"|"fail"] if set to "fail" the command is expected to fail - in other words, if it succeeds, the test case is considered failed
timeout integer time limit in seconds
tool string run the command from a tool (the same as the from attribute in recipe xml)
netns string run the command in the specified network namespace

The last method, sync_resources(), can be used when the controller is run with the -r (--reduce-sync) option enabled. In this case the controller will skip resource synchronization for the python task and the user is expected to manually synchronize the required resources by calling this method.

Examples

Host API's config() example
    m1 = ctl.get_host("1")

    m1.config("/proc/sys/net/ipv4/conf/all/forwarding", "0", True)
Host API's run() examples

The first example shows how to run the netcat tool on a test machine's command line.

    m1 = ctl.get_host("1")

    ifc_ipaddr = m1.get_ip("testiface")
    m1.run("nc -l %s" % ifc_ipaddr)

As you may have noticed, the example above is not very useful since it would block the task execution, so the following example modifies it a bit and runs the program in the background.

    m1 = ctl.get_host("1")

    ifc_ipaddr = m1.get_ip("testiface")
    nc_cmd = m1.run("nc -l %s" % ifc_ipaddr, bg=True)

    ctl.wait(30)

    nc_cmd.kill()

Still not very useful, right? So the next example extends the previous one and shows how to run a Test Module from the LNST Test Module library; in this case it is the NetCat module, the client-side counterpart to the netcat tool.

    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    listen_port = 1234

    ifc_ipaddr = m1.get_ip("testiface")
    nc_cmd = m1.run("nc -l %s %s" % (ifc_ipaddr, listen_port), bg=True)

    nc_module = ctl.get_module("NetCat", options={
                                   "addr": ifc_ipaddr,
                                   "port": listen_port,
                                   "duration": 30})
    m2.run(nc_module)

    nc_cmd.wait()

The last example shows how to run 3rd party tools. We'll be using the tcp_conn tool packaged within LNST.

    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    port_range = "10000-10050"
    ifc_addr = m2.get_ip("testiface")

    # running the 3rd party tool
    server = m2.run("./tcp_listen -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn", bg=True)
    client = m1.run("./tcp_connect -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn")

    # wait one minute and interrupt the tcp_conn tools
    ctl.wait(60)
    client.intr()    
    server.intr()
Host API's get_devname() example

The following example shows how to use the get_devname() method together with config() to set the device's forwarding state.

    import logging

    m1 = ctl.get_host("1")
    devname = m1.get_devname("testiface")

    logging.info("enabling forwarding on interface %s" % devname)
    m1.config("/proc/sys/net/ipv4/conf/%s/forwarding" % devname, "1")
Host API's get_hwaddr() example

In the following example, the get_hwaddr() method is used to check that the hardware address of testiface makes it into the ARP cache.

   m1 = ctl.get_host("1")
   m2 = ctl.get_host("2")

   ifc_ip = m2.get_ip("testiface")
   ifc_hwaddr = m2.get_hwaddr("testiface")

   m1.run("ping -c 5 " + ifc_ip)
   m1.run("arp -n | grep -i " + ifc_hwaddr)
Host API's get_ip() example

This example demonstrates the get_ip() method, used to get the IP address of the testiface interface and pass it as a parameter to netcat.

    m1 = ctl.get_host("1")

    ifc_ipaddr = m1.get_ip("testiface")
    m1.run("nc -l %s" % ifc_ipaddr)
Host API's get_prefix() example

This example demonstrates the get_prefix() method, used to get the network prefix length of an address of the testiface interface.

    m1 = ctl.get_host("1")

    ifc_addr = m1.get_ip("testiface")
    ifc_addr_netmask = m1.get_prefix("testiface")
    m1.run("ip route add %s/%s dev gre0" % (ifc_addr, ifc_addr_netmask))
Host API's sync_resources() example

This example demonstrates the sync_resources() method, used to synchronize the IcmpPing module and the multicast tool to the slave machine with id 1.

    m1 = ctl.get_host("1")
    m1.sync_resources(modules=["IcmpPing"], tools=["multicast"])

4. Module API

The Module API is an API class representing a test module. Other than that it doesn't do much; it's just a convenient way to store the module name and module options. An instance of this class is passed as a parameter to the run() method of the HostAPI. It can be used more than once, and the set_options() method can be used to change the options between runs, as sketched below.
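A minimal sketch of such reuse, assuming set_options() takes the same options dictionary as get_module():

    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    # prepare the module with an initial set of options
    ping_mod = ctl.get_module("IcmpPing", options={
                                  "addr": m2.get_ip("testiface", 0),
                                  "count": 10})
    m1.run(ping_mod)

    # reuse the same instance with changed options
    ping_mod.set_options({"addr": m2.get_ip("testiface", 0),
                          "count": 40,
                          "interval": 0.2})
    m1.run(ping_mod)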

5. Process API

The ProcessAPI provides the following methods for handling commands executed on the test machines:

ProcessAPI
passed(self) returns a boolean result of the process
get_result(self) returns the whole command result
wait(self) blocking wait until the command returns
intr(self) interrupt the command by sending the SIGINT signal
kill(self) kill the command by sending the SIGKILL signal

The methods wait(), intr() and kill() are used when a user runs a command or test module in the background.

Keep in mind that after calling the kill() method the command's results are disposed of, since this is considered an intentional termination of the process. That also means that LNST sets the command's result as passed.
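The passed() and get_result() methods can be used to branch the task on a command's outcome. A minimal sketch, reusing the IcmpPing module from the earlier examples:

    import logging

    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    ping_mod = ctl.get_module("IcmpPing", options={
                                  "addr": m2.get_ip("testiface", 0),
                                  "count": 10})
    ping_test = m1.run(ping_mod)

    # branch on the boolean result of the command
    if ping_test.passed():
        logging.info("ping passed: %s" % str(ping_test.get_result()))
    else:
        logging.info("ping failed")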

Process API's wait() example
    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    listen_port = 1234

    ifc_ipaddr = m1.get_ip("testiface")
    nc_cmd = m1.run("nc -l %s %s" % (ifc_ipaddr, listen_port), bg=True)

    nc_module = ctl.get_module("NetCat", options={
                                   "addr": ifc_ipaddr,
                                   "port": listen_port,
                                   "duration": 30})
    m2.run(nc_module)

    nc_cmd.wait()
Process API's intr() example
    m1 = ctl.get_host("1")
    m2 = ctl.get_host("2")

    port_range = "10000-10050"
    ifc_addr = m2.get_ip("testiface")

    # running the 3rd party tool
    server = m2.run("./tcp_listen -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn", bg=True)
    client = m1.run("./tcp_connect -p %s -a %s -c" % (port_range, ifc_addr), tool="tcp_conn")

    # wait one minute and interrupt the tcp_conn tools
    ctl.wait(60)
    client.intr()    
    server.intr()
Process API's kill() example

The following example shows the usage of the kill() method. It starts the stress program in the background, keeps it running for one minute and finally kills it.

    m1 = ctl.get_host("1")

    stress_bg = m1.run("stress -m 20 -c 4", bg=True)
    ctl.wait(60)
    stress_bg.kill()

6. Interface API

InterfaceAPI instances provide access to interface attributes such as the hardware address or ip addresses. The class supports the following methods:

InterfaceAPI
get_id(self) returns the id of the interface that was used in the recipe
get_network(self) returns the network label that was used in the recipe
get_devname(self) returns the device name of the interface
get_hwaddr(self) returns the hardware address of the interface
get_ip(self, ip_index) returns an ip address specified by its index; the value is just the address, without the network mask
get_ips(self) returns a list of all ip addresses of the interface, values are tuples of the address and the network mask
get_prefix(self, ip_index) returns the network mask length of the ip address specified by the index
get_mtu(self) returns the MTU (Maximum Transmission Unit) value of the network device
set_mtu(self, mtu) configures the MTU (Maximum Transmission Unit) value of the network device
set_link_up(self) sets the link of the interface up
set_link_down(self) sets the link of the interface down
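
A minimal sketch of working with an InterfaceAPI handle (the MTU value is only illustrative):

    import logging

    m1 = ctl.get_host("1")

    # get a handle for the interface defined in the recipe
    iface = m1.get_interface("testiface")
    logging.info("device %s (%s) has MTU %s" % (iface.get_devname(),
                                                iface.get_hwaddr(),
                                                iface.get_mtu()))

    # temporarily lower the MTU, then restore it
    old_mtu = iface.get_mtu()
    iface.set_mtu(1280)
    m1.run("ip link show %s" % iface.get_devname())
    iface.set_mtu(old_mtu)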

7. PerfRepo

We recently added prototype support for talking to PerfRepo instances, which can be used to store results of performance tests. This section describes how to do that and how it works. As mentioned, this is a very new feature, so expect bugs.

Prepare PerfRepo

To write a working task that talks to PerfRepo you first need to do some preparation on the PerfRepo side. All results (called "TestExecutions" in PerfRepo) are bound to a "Test", therefore if you want to save a TestExecution you need to reference an existing Test object. You can easily create one through the web UI, so that shouldn't be a problem. You also need to define "Metrics" for the Test. Here we define a simple convention:

  • for each value you want to store, create 3 Metrics named value_name, value_name_min and value_name_max
  • each TestExecution you save needs to define all the Metrics of the Test it is associated with
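
As a minimal sketch, storing one measured value under this convention could look like this (the mapping file name, Test uid, TestExecution name and the numbers are hypothetical):

    perf_api = ctl.connect_PerfRepo("perfrepo_mapping")

    result = perf_api.new_result("my_test_uid", "my_test_execution")

    # one stored value means three Metrics: the value and its min/max
    result.add_value("throughput", 915.4)
    result.add_value("throughput_min", 890.1)
    result.add_value("throughput_max", 940.7)

    perf_api.save_result(result)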

PerfRepo is capable of more complex workflows, but for now we want to keep it simple while we figure out the details.

Next we also have basic support for comparing results against baselines. Baselines in PerfRepo are bound to "Reports" so if you want to use this functionality you will need to create a "Metric history report" and define a new baseline. LNST will use the latest baseline defined to do the comparison.

PerfRepoAPI and PerfRepoResult

These are the two classes you will be using. PerfRepoAPI is the API that simplifies talking to PerfRepo and PerfRepoResult simplifies the creation of TestExecution objects.

PerfRepoAPI
connected(self) returns True if a connection to PerfRepo was established
connect(self, url, username, password) connects to a PerfRepo instance
new_result(self, testUid, name) creates a new PerfRepoResult object, testUid identifies a Test object stored in PerfRepo and name is the name of a newly created TestExecution object
save_result(self, result) creates a new TestExecution object in PerfRepo, result is a PerfRepoResult object
get_baseline(self, report_id) returns information about the newest baseline of the specified Report
compare_to_baseline(self, result, report_id, metric_name) gets the newest baseline from the specified report and compares it to the given result; only the specified metric (with its '_min' and '_max' variants) is compared, based on the metric type (lower/higher is better); returns True if the result is better than the baseline
compare_testExecutions(self, first, second, metric_name) compares two TestExecution objects, this is a helper function used by compare_to_baseline()
PerfRepoResult
add_value(self, val_name, value) adds a new value to the TestExecution being created, val_name specifies the Metric and value is a float number
set_configuration(self, configuration=None) when configuration is None, calls the ControllerAPI function get_configuration(). The configuration dictionary is transformed into a list of (key, value) pairs where key is a dot separated string that describes the position of the value in the original dictionary (for example, a value nested under keys "m1" and "if1" becomes the pair ("m1.if1", value)). This method is called automatically during PerfRepoResult object initialization. The configuration is sent to PerfRepo as parameters of the TestExecution object and describes the measured result.
set_mapping(self, mapping=None) same as the previous method, but with recipe to pool match information. Also called automatically on object initialization.
set_tags(self, tags) adds a list of tags to the TestExecution object
get_testExecution(self) returns the created TestExecution object, used by PerfRepoAPI when sending data to PerfRepo

Example of use

#run some tests and get a result

#first you connect to a PerfRepo instance; the arguments beyond the
#mapping file are optional and default to your configuration file
perf_api = ctl.connect_PerfRepo(mapping_file, url, username, password)

#you then create an object representing your new result, you need to
#specify the TestName (TestUid) that the result will be associated with
#and a name for this particular TestExecution
result = perf_api.new_result(test_name, test_run_name)

#you can now add the measured values
result.add_value(metric_name, performance_result)
#and set some tags
result.set_tags(list_of_string_tags)

#when you're finished, save_result() sends the object to PerfRepo which
#stores it in the database
perf_api.save_result(result)

#finally you can compare your result against a baseline, baselines are
#associated with reports so you need to know its id (user ids are not
#supported yet but I expect we will have that in the future)
perf_api.compare_to_baseline(result, report_id_with_a_baseline, metric_name)