[REP-2014] Benchmarking performance in ROS 2 #364

Merged
merged 37 commits into from
Aug 24, 2023
Merged
Changes from 7 commits
Commits
Show all changes
37 commits
Select commit Hold shift + click to select a range
6a5c201
Initial commit for REP-2014, Benchmarking performance in ROS 2
vmayoral Sep 29, 2022
8f0601b
Add introduction and motivation
vmayoral Oct 11, 2022
98b6883
Add rest of the sections
vmayoral Oct 11, 2022
b5e6bfb
Remove .vscode files
vmayoral Oct 11, 2022
220e873
Minor adjustments in text, rewrite for clarity
vmayoral Oct 11, 2022
1e62af6
Add License
vmayoral Oct 12, 2022
4b98c34
ros_tracing now's in GitHub, point it out
vmayoral Oct 13, 2022
fa942f9
Address comments about tracing and benchmarking section
vmayoral Oct 15, 2022
7f8e848
Add examples about bandwidth measurements in ROS
vmayoral Oct 15, 2022
81428cc
Update rep-2014.rst
vmayoral Oct 15, 2022
3f69558
Update rep-2014.rst
vmayoral Oct 15, 2022
87e75f2
Update rep-2014.rst
vmayoral Oct 15, 2022
f27b6b7
Update rep-2014.rst
vmayoral Oct 15, 2022
10e1aa4
Add reference_system to the prior work list
vmayoral Oct 15, 2022
456e7e1
Fix URL typo
vmayoral Oct 15, 2022
5cf1801
Update rep-2014.rst
vmayoral Oct 17, 2022
d8ad74e
Update rep-2014.rst
vmayoral Oct 17, 2022
37d3c9c
Update rep-2014.rst
vmayoral Oct 19, 2022
7833e03
Update rep-2014.rst
vmayoral Nov 2, 2022
8bf1576
Update rep-2014, typos
vmayoral Nov 2, 2022
35b89b3
Further clarify value for package maintainers
vmayoral Nov 2, 2022
cada0d8
Be more direct about how to approach benchmarking
vmayoral Nov 2, 2022
bf0b674
Re-work references format, APA
vmayoral Nov 3, 2022
0e34041
Add Ingo and Christophe as co-authors
vmayoral Nov 5, 2022
e7fcba4
Update rep-2014.rst
vmayoral Dec 6, 2022
1183abe
Update rep-2014.rst
vmayoral Mar 7, 2023
69f4679
Update rep-2014.rst
vmayoral Mar 7, 2023
b84368b
Add opaque/transaparent plots
vmayoral Jun 15, 2023
f992409
Merge branch 'rep-2014' of https://github.com/vmayoral/rep into rep-2014
vmayoral Jun 15, 2023
e458d4a
Add Rayza's contributions as an author
vmayoral Jun 15, 2023
8c8dc6a
Update rep-2014.rst
vmayoral Jun 15, 2023
2fe85e0
Update rep-2014.rst
vmayoral Jun 15, 2023
58aeba3
Update rep-2014.rst
vmayoral Jun 15, 2023
049fe78
Merge branch 'rep-2014' of https://github.com/vmayoral/rep into rep-2014
vmayoral Jun 15, 2023
2fe1e20
Remove 'Tracing and benchmarking' duplicated section
vmayoral Jun 15, 2023
a191cf5
Add clarification on reference impl.
vmayoral Jun 15, 2023
027ee88
Set the status to "Rejected".
clalancette Aug 24, 2023
238 changes: 238 additions & 0 deletions rep-2014.rst
@@ -0,0 +1,238 @@
REP: 2014
Title: Benchmarking performance in ROS 2
Author: Víctor Mayoral-Vilches <[email protected]>
Status: Draft
Type: Informational
Content-Type: text/x-rst
Created: 29-Sept-2022
Post-History: 11-Oct-2022


Abstract
========

This REP describes some principles and guidelines for benchmarking performance in ROS 2.

Can you make this "This Informational REP" to make it clear this is not a Standard REP?

Also, can you add "This REP then provides recommendations for tools to use when benchmarking ROS 2, and reference material for prior work done using those tools."



Motivation
==========

Benchmarking is the act of running a computer program to assess its relative performance. In the context of ROS 2, performance information can help roboticists design more efficient robotic systems and select the right hardware for their robotic application. It can also help them understand the trade-offs between different algorithms that implement the same capability and choose the best approach for their use case. Performance data can also be used to compare different versions of ROS 2 and to identify regressions. Finally, performance information can be used to help prioritize future development efforts.
Member

@vmayoral REPs are usually written with one sentence per line. For example: https://raw.githubusercontent.com/ros-infrastructure/rep/master/rep-2000.rst



The myriad combinations of robot hardware and robotics software make assessing robotic-system performance in an architecture-neutral, representative, and reproducible manner challenging. This REP attempts to provide some guidelines to help roboticists benchmark their systems in a consistent and reproducible manner by following a quantitative approach. This REP also provides a set of tools and examples to help guide roboticists while collecting and reporting performance data.

Value for stakeholders:

- Package maintainers can use these guidelines to integrate performance benchmarking data in their packages.

- Consumers can use the guidelines in this REP to benchmark ROS Nodes and Graphs in an architecture-neutral, representative, and reproducible manner, as well as use the corresponding performance data offered in ROS packages to set expectations about the capabilities of each.

- Hardware vendors and robot manufacturers can use these guidelines to show evidence of the performance of their system solutions with ROS in an architecture-neutral, representative, and reproducible manner.

These guidelines are intended to be a living document that can be updated as new information becomes available.


A quantitative approach
-----------------------
Performance must be studied with real examples and measurements on real robotic computations, rather than simply as a collection of definitions, designs and/or marketing actions. The quantitative approach [1]_ to robotics systems architecture fits well in this context and helps robotic architects come up with better-performing architectures through an empirical strategy.


Approaches to benchmark performance
-----------------------------------
There are different types of benchmarking approaches, especially when related to performance. The following definitions, inspired by [2]_, clarify the most popular terms:

- `Functional performance testing`: Functional performance is the measurable performance of the system’s functions, which the user can experience. For example, in a motion planning algorithm, measures could include the rate at which the algorithm produces a correct motion plan.

- `Non-functional performance testing`: The measurable performance of those aspects that don't belong to the system's functions. In the motion planning example, the latency of the planner, its memory consumption, the CPU usage, etc.

- `Black-box performance testing`: Measures performance by eliminating the layers above the *layer-of-interest* and replacing them with a specific test application that stimulates the layer-of-interest in the way you need. This allows gathering the measurement data right inside your specific test application. The acquisition of the data (the probe) resides inside the test application. A major design constraint in such a test is that you have to eliminate the “application” in order to test the system. Otherwise, the real application and the test application would likely interfere (see the sketch after the diagrams below).

- `Grey-box performance testing`: A more application-specific measurement approach which is capable of watching internal states of the system and can measure (probe) certain points in the system, thus generating the measurement data with minimal interference. It requires instrumenting the complete application.

Graphically depicted:

::

Probe Probe
+ +
| |
+--------|------------|-------+ +-----------------------------+
| | | | | |
| +--|------------|-+ | | |
| | v v | | | - latency <--------------+ Probe
| | | | | - throughput<--------------+ Probe
| | Function | | | - memory <--------------+ Probe
| | | | | - power <--------------+ Probe
| +-----------------+ | | |
| System under test | | System under test |
+-----------------------------+ +-----------------------------+


Functional Non-functional


+-------------+ +----------------------------+
| Test App. | | +-----------------------+ |
| + + + + | | | Application | |
+--|-|--|--|--+---------------+ | | <------------+ Probe
| | | | | | | +-----------------------+ |
| v v v v | | |
| Probes | | <------------+ Probe
| | | |
| System under test | | System under test |
| | | <------------+ Probe
| | | |
| | | |
+-----------------------------+ +----------------------------+


Black-Box Grey-box
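
As a minimal illustration of the black-box idea (not part of any existing package; the ``ping``/``pong`` topic names and the assumption that the system under test echoes messages from ``ping`` back on ``pong`` are made up for this sketch), a dedicated `rclpy` test application can stimulate the layer-of-interest and keep the probe inside itself:

::

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import Header


    class PingPongProbe(Node):
        """Test application replacing the real application layer: it stimulates
        the system under test and keeps the measurement probe inside itself."""

        def __init__(self):
            super().__init__('ping_pong_probe')
            self.samples = []  # round-trip latencies, in seconds
            self.pub = self.create_publisher(Header, 'ping', 10)
            self.sub = self.create_subscription(Header, 'pong', self.on_pong, 10)
            self.timer = self.create_timer(0.1, self.send_ping)

        def send_ping(self):
            msg = Header()
            msg.stamp = self.get_clock().now().to_msg()  # timestamp at send time
            self.pub.publish(msg)

        def on_pong(self, msg):
            sent_s = msg.stamp.sec + msg.stamp.nanosec * 1e-9
            now_s = self.get_clock().now().nanoseconds * 1e-9
            self.samples.append(now_s - sent_s)  # the probe lives in the test app


    def main():
        rclpy.init()
        node = PingPongProbe()
        try:
            rclpy.spin(node)
        except KeyboardInterrupt:
            pass
        finally:
            if node.samples:
                mean_ms = 1e3 * sum(node.samples) / len(node.samples)
                print(f'mean round-trip latency over {len(node.samples)} samples: {mean_ms:.3f} ms')
            node.destroy_node()
            rclpy.try_shutdown()


    if __name__ == '__main__':
        main()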


Tracing and benchmarking
^^^^^^^^^^^^^^^^^^^^^^^^

Contributor
Maybe this section should come earlier in the document, next to the definition of benchmarking (around line 20).

Tracing and benchmarking can be defined as follows:

- `tracing`: a technique used to understand what goes on in a running software system.

- `benchmarking`: a method of comparing the performance of various systems by running a common test.

From these definitions, one can see that benchmarking and tracing are connected in the sense that the test/benchmark will use a series of measurements for comparison. These measurements will come from tracing probes. In other words, tracing will collect data that will then be fed into a benchmark program for comparison.
Contributor

Measurements don't have to come from tracing, right? The "black box performance testing" section above seems to say that.

Contributor

The definition of tracing that is given above is so broad that it encompasses any kind of procedure "used to understand what goes on in a running software system". I haven't seen such a broad definition before, usually, tracing is defined as logging (partial) execution information while the system is running. Common usage is that these probes try to be very low overhead and log only information available directly at the tracepoint. This more narrow definition of tracing would not encompass all kinds of performance data capture I could envision (for example, another common approach is to compute internal statistics and make them available in various ways).

Unless it is really intended to restrict data capture to tracing, which would need a justification, I would suggest using the more common, narrow definition of the term in the definition section, and then edit this paragraph to encompass diverse methods of capturing data.

Contributor Author

@sloretz and @iluetkeb, see fa942f9. Let me know if that addresses your concerns or if you'd like further changes. Feel free to suggest them if appropriate.

Member

I agree with @iluetkeb. I usually mention something like "fast/low-overhead, low-level logging ..."

@vmayoral I'd explicitly add "low-overhead, low-level" before "logging ..." just to make sure people don't get it mixed up with typical user-comprehensible string logging.





Prior work
----------
There are various past efforts in the robotics community to benchmark ROS robotic systems. The following are some of the most representative ones:


- `ros2_benchmarking <https://github.com/piappl/ros2_benchmarking/>`_ : First implementation available for ROS 2, aimed to provide a framework to compare ROS and ROS 2 communications.
- `performance_test <https://gitlab.com/ApexAI/performance_test/>`_: A tool designed to measure inter- and intra-process communications. It runs at least one publisher and at least one subscriber, each one in an independent thread or process, and records different performance metrics. It also provides a way to generate a report with the results through a different package.
- `ros2-performance <https://github.com/irobot-ros/ros2-performance/>`_: Another framework to evaluate ROS communications, inspired by `performance_test`. There's a decent rationale in the form of a proposal, a good evaluation of prior work and a well-documented set of experiments.
- `system_metrics_collector <https://github.com/ros-tooling/system_metrics_collector/>`_: A lightweight and *real-time* metrics collector for ROS 2. Automatically collects and aggregates *CPU* % and *memory* % usage metrics for both system and ROS 2 processes. Data is aggregated in order to provide constant time average, min, max, sample count, and standard deviation values for each collected metric. *Deprecated*.


I would put the Deprecated comment at the beginning of the description to make it more visible.

- `ros2_latency_evaluation <https://github.com/Barkhausen-Institut/ros2_latency_evaluation/>`_: A tool for benchmarking the performance of a ROS 2 Node system in separate processes (initially focused on both inter-process and intra-process interactions, later narrowed in focus). Forked from `ros2-performance`.
- `ros2_timer_latency_measurement <https://github.com/hsgwa/ros2_timer_latency_measurement/>`_: A minimal *real-time safe* testing utility for measuring jitter and latency. Measures nanosleep latency between ROS child threads and latency of timer callbacks (also within ROS) across two different Linux kernel setups (`vanilla` and an `RT_PREEMPT`-patched kernel).
- `buildfarm_perf_tests <https://github.com/ros2/buildfarm_perf_tests/>`_: Tests which run regularly on the official ROS 2 buildfarm. Specifically, it extends `performance_test` with additional tests that measure additional metrics including CPU usage, memory, resident anonymous memory or virtual memory.
- `ros-tracing <https://github.com/ros2/ros2_tracing>`_: Tracing tools for ROS 2 built upon LTTng which allow collecting runtime execution information on real-time distributed systems, using the low-overhead LTTng tracer. Performance evaluation can be scripted out of the data collected from all these trace points. The ROS 2 core layers (rmw, rcl, rclcpp) have been instrumented with LTTng probes which allow collecting information about ROS 2 targets without the need to modify the ROS 2 core code (*system under test*). There are various publications available about *ros-tracing* [3]_ [4]_ and it is used actively to benchmark ROS 2 in real scenarios including perception and mapping [5]_, hardware acceleration [6]_ [7]_ or self-driving mobility [8]_.


Industry standards
------------------
There are no globally accepted industry standards for benchmarking robotic systems. The closest initiative to a standardization effort in robotics is the European H2020 project EUROBENCH, which aimed at creating the first benchmarking framework for robotic systems in Europe, focusing on bipedal locomotion. The project was completed in 2022 and the results are available in [9]_. The project has been a great success and has been used to benchmark a wide range of bipedal robotic systems through experiments; however, there are no public plans to extend the project to other types of robots, nor have the tools been used elsewhere.


When looking at other areas related to robotics, we find the MLPerf Inference and MLCommons initiatives, which are the closest to what we are trying to achieve in ROS 2. MLPerf Inference is an open source project that aims to define a common set of benchmarks for evaluating the performance of machine learning inference engines. MLCommons is an open source project that aims to define a common set of benchmarks for evaluating the performance of machine learning models. Both projects have been very successful and are widely used in the industry. The MLPerf Inference project was completed in 2021 and the resulting inference benchmarks are available in [10]_. The MLCommons project has become an industry standard in Machine Learning and the results are publicly disclosed in [11]_.


Performance metrics in robotics
===============================
Robots are deterministic machines and their performance should be understood by considering metrics such as the following:

- **latency**: time between the start and the completion of a task.

- **bandwidth or throughput**: the total amount of work done in a given time for a task.

- **power**: the electrical energy per unit of time consumed while executing a given task.

- **performance-per-watt**: total amount of work (generally *bandwidth* or *throughput*) that can be delivered for every watt of power consumed.

- **memory**: the amount of short-term data (not to be confused with storage) required while executing a given task.

These metrics can help determine the performance characteristics of a robotic system. Of most relevance for robotic systems, we often encounter the *real-time* and *determinism* characteristics, defined as follows:

- **real-time**: the ability to complete a task's computations while meeting time deadlines.
- **determinism**: the property that a task happens in exactly the same timeframe, each time.


For example, a robotic system may be able to perform a task in a short amount of time (*low latency*), but it may not be able to do it in *real-time*. In this case, the system would be considered to be *non-real-time* given the time deadlines imposed. On the other hand, a robotic system may be able to perform a task in *real-time*, but it may not be able to do it in a short amount of time. In this case, the system would be considered to be *non-interactive*. Finally, a robotic system may be able to perform a task in real-time and in a short amount of time, but it may consume a lot of *power*. In this case, the system would be considered to be *non-energy-efficient*.

In another example, a robotic system that can perform a task in 1 second with a power consumption of `2W` is twice as fast (*latency*) as another robotic system that can perform the same task in 2 seconds with a power consumption of `0.5W`. However, the second robotic system is twice as efficient as the first one. In this case, the solution that requires less power would be the best option from an energy efficiency perspective (with a higher *performance-per-watt*). Similarly, a robotic system that has a high bandwidth but consumes a lot of energy might not be the best option for a mobile robot that must operate for a long time on a battery.
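
To make the arithmetic in this example explicit, the following minimal sketch (plain Python, using the made-up numbers from the paragraph above) derives throughput, energy per task and performance-per-watt from latency and power figures:

::

    # Hypothetical measurements for two systems performing the same task.
    systems = {
        'system_a': {'latency_s': 1.0, 'power_w': 2.0},
        'system_b': {'latency_s': 2.0, 'power_w': 0.5},
    }

    for name, m in systems.items():
        throughput = 1.0 / m['latency_s']                # tasks per second
        energy_per_task = m['latency_s'] * m['power_w']  # joules per task
        perf_per_watt = throughput / m['power_w']        # tasks per second per watt
        print(f"{name}: {throughput:.2f} task/s, "
              f"{energy_per_task:.2f} J/task, "
              f"{perf_per_watt:.2f} task/s/W")

Running it shows that `system_a` completes tasks twice as fast while `system_b` needs half the energy per task, matching the trade-off described above.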

Therefore, it is important to consider several of these metrics together when benchmarking a robotic system. The metrics presented in this REP are intended to be used as a guideline, and should be adapted to the specific needs of a robot.


Methodology for benchmarking performance in ROS 2
=================================================

In this REP, we **recommend adopting a grey-box and non-functional benchmarking approach** to measure performance and allow evaluating individual ROS 2 nodes as well as complete computational graphs. To realize it in an architecture-neutral, representative, and reproducible manner, we also recommend using the Linux Trace Toolkit next generation (`LTTng <https://lttng.org/>`_) through the `ros-tracing` project, which leverages probes already inserted in the ROS 2 core layers and tools to facilitate benchmarking ROS 2 abstractions.
Contributor

This section seems to be the most important part of the REP. I'd recommend moving this to the top. I also recommend removing any definitions not used here.

Contributor Author

I agree about the importance but struggle to see how this could help a person not experienced in the topic. This paragraph leverages prior definitions, including what benchmarking is and what grey-box and non-functional approaches to it are. Without properly defining these things, understanding them properly might be very hard.

We somewhat bumped into such a scenario at ros-navigation/navigation2#2788 wherein, after a few interactions, even for experienced ROS developers, it became somewhat clear to me (due to the lack of familiarity with this approach) that the value of benchmarking things with this approach wasn't being understood properly. Lots of unnecessary demands were set because the approach was simply not understood, only to later take a much less rigorous approach to benchmark the nav stack. In my experience the topics introduced here are hard to digest and require proper context.

Maybe a compromise would be to grab these bits and rewrite them without introducing too much terminology in the abstract of the REP? @sloretz feel free to make a suggestion.


The following diagram shows the proposed methodology for benchmarking performance in ROS 2, which consists of 3 steps:

::


+--------------+
+----------------+ rebuild | |
| +----------> |
start +----------> 1. trace graph | | 2. benchmark +----------> 3. report
| | | |
+----+------^--^-+ | |
| | | +-------+------+
| | | |
+------+ | |
LTTng +--------------------+
re-instrument


1. instrument both the target ROS 2 abstraction/application using `LTTng <https://lttng.org/>`_. Refer to `ros2_tracing <https://github.com/ros2/ros2_tracing>`_ for tools, documentation and ROS 2 core layers tracepoints;
@ErikAtApex Jun 15, 2023

Suggested change
1. instrument both the target ROS 2 abstraction/application using `LTTng <https://lttng.org/>`_. Refer to `ros2_tracing <https://github.com/ros2/ros2_tracing>`_ for tools, documentation and ROS 2 core layers tracepoints;
1. instrument the target ROS 2 application using `LTTng <https://lttng.org/>`_. Refer to `ros2_tracing <https://github.com/ros2/ros2_tracing>`_ for tools, documentation and ROS 2 core layers tracepoints;

2. trace and benchmark the ROS 2 application;
3. create performance reports with the results of the benchmarking.
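
As a sketch of steps 1 and 2, a ROS 2 launch file can start an LTTng tracing session alongside the system under test using the ``Trace`` action provided by the ``tracetools_launch`` package of `ros2_tracing <https://github.com/ros2/ros2_tracing>`_. The package and executable names below are placeholders, and the exact ``Trace`` arguments should be checked against the ros2_tracing documentation for your ROS 2 distribution:

::

    from launch import LaunchDescription
    from launch_ros.actions import Node
    from tracetools_launch.action import Trace


    def generate_launch_description():
        return LaunchDescription([
            # Steps 1 and 2: start an LTTng userspace tracing session before the
            # graph under test, collecting the tracepoints already present in the
            # ROS 2 core layers.
            Trace(
                session_name='rep2014-benchmark',
                events_ust=['ros2:*'],
            ),
            # Placeholder for the actual node(s) under test.
            Node(
                package='my_package',
                executable='my_node',
                name='node_under_test',
            ),
        ])

Launching this file with ``ros2 launch`` produces a trace on disk that can then be analyzed offline to produce the performance report (step 3).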


Reference implementation and recommendations
============================================

The reader is referred to `ros2_tracing <https://github.com/ros2/ros2_tracing>`_ and `LTTng <https://lttng.org/>`_ to become familiar with the tools and the methodology of collecting and analyzing performance data. In addition, [3]_ and [4]_ present comprehensive descriptions of the `ros2_tracing <https://github.com/ros2/ros2_tracing>`_ tools and the `LTTng <https://lttng.org/>`_ infrastructure.
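
For the analysis step, the resulting trace can be read offline, for example with the babeltrace2 Python bindings (``bt2``) or with the ``tracetools_analysis`` utilities. The following minimal sketch (the trace path is a placeholder, and only event names and timestamps are printed) iterates over the recorded ROS 2 userspace events:

::

    import sys

    import bt2  # babeltrace2 Python bindings

    trace_path = sys.argv[1]  # e.g. the output directory of the LTTng session

    for msg in bt2.TraceCollectionMessageIterator(trace_path):
        # Only event messages carry an event; skip stream/packet boundary messages.
        if type(msg) is not bt2._EventMessageConst:
            continue
        event = msg.event
        if not event.name.startswith('ros2:'):
            continue
        timestamp_ns = msg.default_clock_snapshot.ns_from_origin
        print(f'{timestamp_ns} {event.name}')

From such per-event data, latency and throughput statistics can be computed and aggregated into the performance report.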

Reference implementations complying with the recommendations of this REP can be found in the literature for applications like perception and mapping [5]_, hardware acceleration [6]_ [7]_ or self-driving mobility [8]_. A particular example of interest for the reader is the instrumentation of the `image_pipeline <https://github.com/ros-perception/image_pipeline/tree/humble/>`_ ROS 2 package [12]_, which is a set of nodes for processing image data in ROS 2. The `image_pipeline <https://github.com/ros-perception/image_pipeline/tree/humble/>`_ package has been instrumented with LTTng probes available in the ROS 2 `Humble` release, which results in various perception Components (e.g. the `RectifyNode <https://github.com/ros-perception/image_pipeline/blob/ros2/image_proc/src/rectify.cpp#L82/>`_ *Component*) leveraging instrumentation which, if enabled, can help trace the computational graph information flow of a ROS 2 application using such Components. The results of benchmarking the performance of `image_pipeline <https://github.com/ros-perception/image_pipeline/tree/humble/>`_ are available in [13]_ and launch scripts to both trace and analyze perception graphs are available in [14]_.
Member

@vmayoral as I mentioned on Discourse, we could perhaps add a sentence to mention use-cases like heterogeneous systems and mention that ros2_tracing is probably not the only tool that would be used.

Contributor Author

I think we can relax the language here a bit and point to tracing in general, while mentioning ros2_tracing as one example that's already integrated in ROS 2.

Contributor

I suggest we make this section more explicit via a bulleted list. E.g. "Here are reference implementations for particular application domains. Developers in these domains are encouraged to format their results in a similar style."



References and Footnotes
========================

.. [1] Hennessy, J. L., & Patterson, D. A. (2011). Computer architecture: a quantitative approach.

.. [2] A. Pemmaiah, D. Pangercic, D. Aggarwal, K. Neumann, K. Marcey, "Performance Testing in ROS 2".
https://drive.google.com/file/d/15nX80RK6aS8abZvQAOnMNUEgh7px9V5S/view

.. [3] Bédard, Christophe, Ingo Lütkebohle, and Michel Dagenais. "ros2_tracing: Multipurpose Low-Overhead Framework for Real-Time Tracing of ROS 2." IEEE Robotics and Automation Letters 7.3 (2022): 6511-6518.

.. [4] Bédard, Christophe, et al. "Message Flow Analysis with Complex Causal Links for Distributed ROS 2 Systems." arXiv preprint arXiv:2204.10208 (2022).

.. [5] Lajoie, Pierre-Yves, Christophe Bédard, and Giovanni Beltrame. "Analyze, Debug, Optimize: Real-Time Tracing for Perception and Mapping Systems in ROS 2." arXiv preprint arXiv:2204.11778 (2022).

.. [6] Mayoral-Vilches, V., Neuman, S. M., Plancher, B., & Reddi, V. J. (2022). "RobotCore: An Open Architecture for Hardware Acceleration in ROS 2".
https://arxiv.org/pdf/2205.03929.pdf

.. [7] Mayoral-Vilches, V. (2021). "Kria Robotics Stack".
https://www.xilinx.com/content/dam/xilinx/support/documentation/white_papers/wp540-kria-robotics-stack.pdf

.. [8] Li, Zihang, Atsushi Hasegawa, and Takuya Azumi. "Autoware_Perf: A tracing and performance analysis framework for ROS 2 applications." Journal of Systems Architecture 123 (2022): 102341.

.. [9] European robotic framework for bipedal locomotion benchmarking
https://eurobench2020.eu/

.. [10] MLPerf™ inference benchmarks
https://github.com/mlcommons/inference

.. [11] MLCommons
https://mlcommons.org/en/

.. [12] image_pipeline ROS 2 package. An image processing pipeline for ROS. `Humble` branch.
https://github.com/ros-perception/image_pipeline/tree/humble

.. [13] Case study: accelerating ROS 2 perception
https://github.com/ros-acceleration/community/issues/20#issuecomment-1047570391

.. [14] acceleration_examples. ROS 2 package examples demonstrating the use of hardware acceleration.
https://github.com/ros-acceleration/acceleration_examples


Copyright
=========

This document is placed in the public domain or under the CC0-1.0-Universal license, whichever is more permissive.