Added URDF/XACRO for the Zivid One+ 3D Camera #17
I am not sure we need an optical joint that is not the same as the base_link?

The optical joint's frame should ideally just be the same frame that the points in the point cloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

Then the only frame that is essential is that optical frame, and a hand-eye transform will be used to get the point cloud into a robot's frame.

I think it might also be useful to have a rough estimate of the projector coordinate system relative to the optical frame, like you have added (discussed in 624a977#r563312884).
joint != frame.
A frame is a coordinate frame. A joint is a connection between frames defining the transformation between them.
The measurement frame (`optical_frame`) is indeed defined by the frame in which the camera outputs the captures. This may or may not coincide with another frame, but it is definitely a distinct frame (even if just for semantics). The joint just connects the two links together.

Having a confirmation on the location of the `optical_frame` relative to the mounting hole (`base_link`) would be very helpful indeed. Our usage (attached to a robot manipulator) does show that this location is correct, or at least really close to the actual measurement frame. We often use this description "as is", without calibration, for some quick captures.

If looking at the camera in isolation, yes, but my intent behind making this package is to actually connect it to other hardware. Then the `base_link` is essential as well, even if only by convention, expectations, and ease of use. The `base_link` is located such that the geometry can easily be attached; it is the "starting point" of the geometry. In this case, I picked the center mounting hole, as I saw it as a convenient location at which to attach the camera to, for example, a robot or end-effector. All description packages should start with a `base_link`.

I would say that calibration is indeed needed for real-world applications, but it is not part of the scope of this package. Description packages are just there to give the ideal geometry and required frames of hardware. This can then be used for simulations or as a first best guess of your real-world counterpart. Typically a calibration will result in a new frame, for example `calibrated_optical_frame`, that is then separately attached to the description by the user.
Let's keep the `base_link` link and `optical` joint. I see your point on that being useful for the simulation and as a first best guess or starting point.

Yes, I agree, in a real-world application the hand-eye calibration will take over, to be able to know how the point cloud is related to the robot's base. And then the transformation between the `base_link` frame and the `optical_frame` is mostly useful for simulations and for verifying that the robot-camera calibration is sound.
Yes, I will get this information.
Ok, so the point cloud is given relative to a location that is the optical center at a certain temperature+aperture calibration point. So this will vary for each camera, even within the same model, for instance Zivid One+ M.

So I think we can communicate, through the naming of the joints and frames, that the transformation between the mounting hole and the camera's optical center at the given calibration point (a certain temperature+aperture) is an approximation.

And then we can use the fixed approximate values provided in the datasheet.
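One way that naming could look in the description (hypothetical names; the real `xyz`/`rpy` values would come from the datasheet, not the placeholders shown):

```xml
<!-- Name signals that this is the datasheet approximation of the optical
     center (valid at the factory calibration temperature/aperture), not a
     per-camera calibrated value. Names and values are illustrative only. -->
<joint name="base_link_to_approx_optical_frame" type="fixed">
  <parent link="base_link"/>
  <child link="approx_optical_frame"/>
  <origin xyz="0.0 0.0 0.0" rpy="0 0 0"/>  <!-- placeholder, use datasheet -->
</joint>
```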
Would the driver have a way of retrieving that information?
There's no requirement for the `xacro:macro` to contain that link. If the driver could publish it (as a TF frame), that would work just as well.
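If the driver can retrieve the per-camera offset, the transform it would publish could be sketched as below. The record is kept as a plain dict so the idea stands on its own; with ROS available it maps field-for-field onto a `geometry_msgs/TransformStamped` broadcast once via `tf2_ros.StaticTransformBroadcaster`. The frame names are assumptions, and the way the driver obtains the offset from the device is not shown (the Zivid API call for it is not established in this thread):

```python
def static_optical_transform(offset_xyz, offset_rpy=(0.0, 0.0, 0.0),
                             parent="zivid_base_link",
                             child="zivid_optical_frame"):
    """Describe the base_link -> optical_frame transform a driver could publish.

    offset_xyz / offset_rpy: the per-camera optical-center offset as the
    driver would retrieve it from the device (hypothetical; the actual API
    call is not shown). Frame names here are assumptions, not the package's
    actual names.
    """
    return {
        "parent_frame": parent,
        "child_frame": child,
        "translation_m": tuple(offset_xyz),
        "rotation_rpy": tuple(offset_rpy),
    }

# With ROS available, this record would be sent once as a static ("latched")
# transform via tf2_ros.StaticTransformBroadcaster, replacing the fixed
# datasheet joint in the description.
```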
Would also be interested in this, especially if it moves between usages (e.g. due to temperature differences).
(forgive me if this has been discussed before / is generally known about Zivid devices)
Unless the pointcloud / depth images are automatically transformed by the driver to have their origins at a fixed point (so the driver / embedded firmware compensates for the offsets/variation due to temperature/other factors), not having the precise location of the optical frame significantly complicates using Zivid cameras for really precise/accurate work.
Extrinsic calibrations could likely compensate for that offset (they would incorporate it into whatever transform they determine between the camera itself and the mounting link), but IIUC from the comments by @runenordmo, that would essentially only be the extrinsic calibration for one particular 'state' of the sensor.
If the camera itself already compensates for this, a static `link` in the URDF / `xacro:macro` would seem to suffice. If not, the driver would ideally publish the transform itself -- perhaps not continuously, but at least the one associated with a particular capture. The rest of the system could then match it based on time, from the `header.stamp` and the TF buffer.
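That time-based matching can be illustrated with a minimal stand-in for the TF buffer: given a capture's `header.stamp`, pick the published transform closest in time. In a real system, `tf2_ros.Buffer.lookup_transform(..., time)` performs this matching (with interpolation) for you; the sketch below only shows the nearest-stamp idea over a plain sorted list:

```python
from bisect import bisect_left

def transform_for_capture(capture_stamp, tf_buffer):
    """Pick the transform whose timestamp is closest to a capture's stamp.

    tf_buffer is a time-sorted list of (stamp, transform) pairs, standing in
    for a real tf2 buffer. Nearest-neighbour matching only; a real tf2
    lookup would interpolate between the surrounding transforms.
    """
    stamps = [s for s, _ in tf_buffer]
    i = bisect_left(stamps, capture_stamp)
    if i == 0:                      # capture precedes all published transforms
        return tf_buffer[0][1]
    if i == len(tf_buffer):        # capture follows all published transforms
        return tf_buffer[-1][1]
    before, after = tf_buffer[i - 1], tf_buffer[i]
    # Choose whichever published transform is nearer in time to the capture.
    if capture_stamp - before[0] <= after[0] - capture_stamp:
        return before[1]
    return after[1]
```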