[REP-2014] Benchmarking performance in ROS 2 #364
Can you make this "This Informational REP" to make it clear this is not a Standard REP?
Also, can you add "This REP then provides recommendations for tools to use when benchmarking ROS 2, and reference material for prior work done using those tools."
It's clearly specified at the top. See https://github.com/ros-infrastructure/rep/pull/364/files#diff-26ca1241a786a8a285a3dc326474f69a9b971c06eab2db73c0ddd53f8a74ae1cR5. Past work is detailed below.
@vmayoral REPs are usually written with one sentence per line. For example: https://raw.githubusercontent.com/ros-infrastructure/rep/master/rep-2000.rst
Maybe this section should come earlier in the document, next to the definition of benchmarking (around line 20).
Measurements don't have to come from tracing, right? The "black box performance testing" section above seems to say that.
The definition of tracing given above is so broad that it encompasses any kind of procedure "used to understand what goes on in a running software system". I haven't seen such a broad definition before; usually, tracing is defined as logging (partial) execution information while the system is running. In common usage, these probes try to be very low overhead and log only information available directly at the tracepoint. This narrower definition of tracing would not encompass all kinds of performance-data capture I could envision (for example, another common approach is to compute internal statistics and make them available in various ways).
Unless it is really intended to restrict data capture to tracing, which would need a justification, I would suggest using the more common, narrow definition of the term in the definition section, and then editing this paragraph to encompass diverse methods of capturing data.
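To make the distinction concrete, here is a minimal Python sketch (hypothetical names, not taken from the REP or from any tracing library) contrasting a low-overhead tracepoint, which records only information available directly at the call site for later analysis, with internal statistics that the system computes and exposes itself:

```python
import time

# (a) Tracing in the narrow sense: a low-overhead probe that records only
#     data available at the tracepoint, for offline analysis. In a real
#     tracer (e.g. LTTng as used by ros2_tracing) this would be a compiled-in
#     probe; a list append stands in for it here.
trace_buffer = []

def tracepoint(event_name, **fields):
    trace_buffer.append((time.monotonic_ns(), event_name, fields))

def publish(msg):
    tracepoint("publish_enter", size=len(msg))
    # ... actual publishing work ...
    tracepoint("publish_exit")

# (b) Internal statistics: the running system aggregates measurements itself
#     and makes them available through some interface (topic, log, query, ...).
class LatencyStats:
    def __init__(self):
        self.count = 0
        self.total_ns = 0

    def record(self, start_ns, end_ns):
        self.count += 1
        self.total_ns += end_ns - start_ns

    def mean_ms(self):
        return (self.total_ns / self.count) / 1e6 if self.count else 0.0
```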
@sloretz and @iluetkeb, see fa942f9. Let me know if that addresses your concerns or if you'd like further changes. Feel free to suggest them if appropriate.
I agree with @iluetkeb. I usually mention something like "fast/low-overhead, low-level logging ..."
@vmayoral I'd explicitly add "low-overhead, low-level" before "logging ..." just to make sure people don't get it mixed up with typical user-comprehensible string logging.
I would put the Deprecated comment at the beginning of the description to make it more visible.
This section seems to be the most important part of the REP. I'd recommend moving this to the top. I also recommend removing any definitions not used here.
I agree about the importance, but I struggle to see how this could help someone inexperienced in the topic. This paragraph leverages prior definitions, including what benchmarking is and what grey-box and non-functional approaches to it are. Without properly defining these things, understanding the section might be very hard.
We bumped into such a scenario at ros-navigation/navigation2#2788, where, after a few interactions, it became clear to me that even experienced ROS developers were not properly understanding the value of benchmarking with this approach, due to a lack of familiarity with it. Lots of unnecessary demands were made because the approach was simply not understood, and a much less rigorous approach was later taken to benchmark the nav stack. In my experience, the topics introduced here are hard to digest and require proper context.
Maybe a compromise would be to grab these bits and rewrite them without introducing too much terminology in the abstract of the REP? @sloretz feel free to make a suggestion.
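For readers new to the terminology, a small Python sketch (hypothetical, not taken from the REP) of how a black-box measurement differs from a grey-box one might provide some of that context:

```python
import time

def run_pipeline(data):
    """Stand-in for the system under test (e.g. a planning or perception stage)."""
    intermediate = [x * 2 for x in data]   # stage 1
    return sum(intermediate)               # stage 2

# Black-box: measure end-to-end behaviour from the outside only.
start = time.monotonic_ns()
run_pipeline(list(range(100_000)))
end = time.monotonic_ns()
print(f"black-box end-to-end: {(end - start) / 1e6:.3f} ms")

# Grey-box: instrument selected internal points (here, between stages)
# while still treating most of the implementation as opaque.
def run_pipeline_instrumented(data, probe):
    probe("stage1_enter")
    intermediate = [x * 2 for x in data]
    probe("stage1_exit")
    probe("stage2_enter")
    result = sum(intermediate)
    probe("stage2_exit")
    return result

events = []
run_pipeline_instrumented(list(range(100_000)),
                          lambda name: events.append((time.monotonic_ns(), name)))
print(events)
```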
@vmayoral as I mentioned on Discourse, we could perhaps add a sentence to mention use cases like heterogeneous systems, and mention that ros2_tracing is probably not the only tool that would be used.
I think we can relax the language here a bit and point to tracing in general, while mentioning ros2_tracing as one example that is already integrated into ROS 2.
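As one illustration of the "already integrated" path, tracing could be enabled from a launch file via the Trace action from tracetools_launch; this is only a sketch, and the parameter names below are assumed from the ros2_tracing documentation, so they should be verified against the targeted distribution:

```python
# Minimal launch-file sketch: start an LTTng tracing session alongside the
# nodes under test. Parameter names are assumptions to be checked against
# the ros2_tracing documentation for the ROS 2 distribution in use.
from launch import LaunchDescription
from launch_ros.actions import Node
from tracetools_launch.action import Trace


def generate_launch_description():
    return LaunchDescription([
        Trace(
            session_name='benchmark-example',  # assumed parameter name
            events_ust=['ros2:*'],             # assumed parameter name
        ),
        Node(package='demo_nodes_cpp', executable='talker'),
        Node(package='demo_nodes_cpp', executable='listener'),
    ])
```

The resulting trace could then be read offline (e.g. with babeltrace or tracetools_analysis) to extract the metrics of interest.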
I suggest we make this section more explicit via a bulleted list. E.g. "Here are reference implementations for particular application domains. Developers in these domains are encouraged to format their results in a similar style."