Added URDF/XACRO for the Zivid One+ 3D Camera #17
Conversation
This looks nice. We do not have a contributor license agreement (CLA) in place for this repository. I will try to get one in place so we can process this PR. Thx
Sure, keep me posted! Just happy that this is useful to others as well.

The checks seem to fail, but looking at the log this is due to the Zivid driver, and as a result the nodelet not being able to load. Not sure how this is related to the added

Any progress on getting the CLA in place? Alternatively, would you consider this PR without the CLA?

Thanks for the reminder. I was in the process of implementing a CLA, but got stuck with other tasks. I will resurrect the efforts here.

We'll also look into the CI error.

Should we allocate some time to this in June perhaps, @nedrebo ?

I would very much like that. It is already on our list of candidates for short-term prioritization.

Friendly ping :)
Hi @dave992 Thanks for this PR, and sorry that it has taken so long for us to follow up on this. I have asked a colleague to follow up with you regarding your specific questions on the camera/projector angle and the location of the optical center and the projector center. He should reach out to you soon.

We agree that adding these URDF/XACRO/STL definitions for the Zivid cameras to this repo is a good idea, and it would be valuable for other users as well. In order to merge this PR so that others can use them, we think we would need these definitions/files for all the Zivid camera models: at least One+ Small and One+ Large in addition to One+ Medium, in the first round. It should also be expandable, so that we can eventually add Zivid Two as well. Currently the file is just named zivid_camera.xacro, so there should probably be one file per camera model, appropriately named, with the appropriate camera-specific angles and coordinates. Probably some XML could be shared, using a macro.

In addition to these changes, we would also need to do some testing on our side, to verify that this is working as expected, before we can take it in and officially support it. For the next months we are a bit too busy with the release of our next camera model, so we will unfortunately not be able to follow up on this for some time. We would like to keep this PR open so that others can use this until then.
zivid_description/CMakeLists.txt (outdated):

```cmake
project(zivid_description)
find_package(catkin REQUIRED COMPONENTS)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS})
```
Copy-pasta @dave992 ?
I'd also suggest to add install(..):

```diff
-include_directories(${catkin_INCLUDE_DIRS})
+install(DIRECTORY config launch meshes urdf
+  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
```
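Putting the quoted file and the install suggestion together: a pure description package has nothing to compile, so its CMakeLists.txt can stay very small. A sketch of what the resulting file might look like (the minimum CMake version and the directory list are assumptions based on the surrounding comments, not taken from the actual PR):

```cmake
cmake_minimum_required(VERSION 3.0.2)
project(zivid_description)

# A description-only package has no code to build; catkin is needed
# just for packaging, so no COMPONENTS and no include_directories().
find_package(catkin REQUIRED)
catkin_package()

# Install the resource directories so other packages can locate the
# xacro/mesh/launch files through the package share path.
install(DIRECTORY config launch meshes urdf
  DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
```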
If the frames are different across models, then I agree. The geometry itself looked identical (correct me if I am wrong here), hence why I only implemented one version for all Zivid One+ variants. The naming can indeed be changed to leave room for other models and variants in the future :). Let me know if I can do anything here.

@dave992, it would be nice if you could check in one of the samples that the pointcloud is located correctly relative to the frames you have added, given that you for instance
Updated minimum CMake version to match other Zivid packages. Improve readability and add newlines. Add install command.
@dave992 @runenordmo any progress? Very interested in this.

I do not know what the status of the CLA is. Other than that, not much has changed in the meantime, but I can have a look again at incorporating the outstanding points:

@aashish-tud, @dave992: Regarding the CLA, @nedrebo has let me know that the decision is that we do not need it for this BSD-3-Clause-licensed repository.
```diff
@@ -32,13 +32,13 @@
     <!-- Zivid Optical (Measurement) and Projector Joints -->
     <joint name="${prefix}optical_joint" type="fixed">
-      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*M_PI} 0 -${0.5*M_PI + 8.5/180*M_PI}"/>
+      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>
```
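For context, this change swaps the macro-defined `M_PI` property for xacro's built-in `pi` property, so the macro no longer needs to define the constant itself. A minimal sketch of the joint after the change (the parent/child link names are assumptions here, not copied from the PR):

```xml
<joint name="${prefix}optical_joint" type="fixed">
  <parent link="${prefix}base_link"/>
  <child link="${prefix}optical_frame"/>
  <!-- roll = -90 deg, pitch = 0, yaw = -(90 deg + 8.5 deg):
       the optical frame is tilted 8.5 degrees toward the projector. -->
  <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>
</joint>
```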
I am not sure we need an optical joint that is not the same as the base_link.

The optical joint's frame should ideally just be the same frame that the points in the pointcloud are given in, which is a fixed point in the camera - I can figure out exactly how it's specified. Then the only frame that is essential is that optical frame, and a hand-eye transform will be used to get the pointcloud into a robot's frame.

I think it might be useful to also have a rough estimate of the projector coordinate system relative to the optical frame, like you have added (discussed in 624a977#r563312884).
> The optical joint's frame should ideally just be the same frame as the points in the pointcloud is given in, which is a fixed point in the camera - I can figure out exactly how it's specified.

joint != frame. A frame is a coordinate frame; a joint is a connection between frames defining the transformation between them.

The measurement frame (`optical_frame`) is indeed defined by the frame in which the camera outputs the captures. This may or may not coincide with another frame, but it is definitely a distinct frame (even if just for semantics). The joint just connects the two links together.

Having confirmation of the location of the `optical_frame` relative to the mounting hole (`base_link`) would be very helpful indeed. Our usage (attached to a robot manipulator) does show that this location is correct, or at least really close to the actual measurement frame. We often use this description "as is", without calibration, for some quick captures.

> Then the only frame that is essential is that optical frame

If looking at the camera in isolation, yes, but my intent behind making this package is to actually connect it to other hardware. Then the `base_link` is essential as well, even if only by convention, expectations, and ease of use. The `base_link` is located such that the geometry can easily be attached; it is the "starting point" of the geometry. In this case, I picked the center mounting hole as I saw it as a convenient location at which to attach the camera to, for example, a robot or end-effector. All description packages should start with a `base_link`.

> and a hand-eye transform will be used to get the pointcloud a robot's frame

I would say that calibration is indeed needed for real-world applications, but it is not part of the scope of this package. Description packages are just there to give the ideal geometry and required frames of hardware. This can then be used for simulations, or as a first best guess for the real-world counterpart. Typically, calibration will result in a new frame, for example `calibrated_optical_frame`, that is then separately attached to the description by the user.
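To illustrate how a user could attach such a calibrated frame alongside the description (a minimal sketch assuming ROS 1 with `tf2_ros`; the frame names and pose values here are entirely hypothetical and would come from an actual hand-eye calibration):

```xml
<launch>
  <!-- Hypothetical: publish a hand-eye calibration result as a static TF,
       attaching a calibrated_optical_frame next to the nominal frames from
       the description. Args: x y z qx qy qz qw frame_id child_frame_id. -->
  <node pkg="tf2_ros" type="static_transform_publisher"
        name="zivid_calibration_broadcaster"
        args="0.0652 0.0618 0.0447 0 0 0 1 zivid_base_link zivid_calibrated_optical_frame"/>
</launch>
```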
Let's keep the `base_link` link and `optical` joint. I see your point on that being useful for the simulation and as a first best guess or starting point.

> Typically calibrations will result in a new frame, for example: calibrated_optical_frame, that is then separately attached to the description by the user.

Yes, I agree: in a real-world application the hand-eye calibration will take over, to be able to know how the point cloud is related to the robot's base. And then the transformation between the `base_link` frame and the `optical_frame` is mostly useful for simulations and for verifying that the robot-camera calibration is sound.
> Having a confirmation on the location of the optical_frame relative to the mounting hole (/base_link) would be very helpful indeed.

Yes, I will get this information.
Ok, so the point cloud is given relative to a location that is the optical center determined at a certain temperature+aperture calibration point. So this will vary for each camera, even within the same model, for instance Zivid One+ M.

So I think we can communicate, through the naming of the joints and frames, that the transformation between the mounting hole and the camera's optical center at the given calibration point (a certain temperature+aperture) is an approximation. And then we can use the fixed approximate values provided in the datasheet.
> So this will vary for each camera, even within the same model, for instance Zivid One+ M.

Would the driver have a way of retrieving that information? There's no requirement for the `xacro:macro` to contain that link. If the driver could publish it (as a TF frame), that would work just as well.
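One way the macro could accommodate that (a hypothetical sketch, not part of this PR; all names and values are illustrative): gate the static optical link/joint behind a flag, so that a driver-published TF frame can take their place when available.

```xml
<!-- Hypothetical: with static_optical_joint:=false, the optical frame is
     omitted from the URDF entirely and is expected to arrive via TF from
     the driver instead. (URDF must remain a single tree, so the orphan
     link is omitted together with its joint.) -->
<xacro:macro name="zivid_camera" params="prefix static_optical_joint:=true">
  <link name="${prefix}base_link"/>
  <xacro:if value="${static_optical_joint}">
    <link name="${prefix}optical_frame"/>
    <joint name="${prefix}optical_joint" type="fixed">
      <parent link="${prefix}base_link"/>
      <child link="${prefix}optical_frame"/>
      <origin xyz="0.065 0.062 0.0445" rpy="-${0.5*pi} 0 -${0.5*pi + 8.5/180*pi}"/>
    </joint>
  </xacro:if>
</xacro:macro>
```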
> So this will vary for each camera, even within the same model, for instance Zivid One+ M.
>
> Would the driver have a way of retrieving that information?

Would also be interested in this, especially if it moves between usages (e.g. due to temperature differences).
(forgive me if this has been discussed before / is generally known about Zivid devices)

Unless the pointcloud / depth images are automatically transformed by the driver to have their origins at a fixed point (so the driver / embedded firmware compensates for the offsets/variation due to temperature/other factors), not having the precise location of the optical frame significantly complicates using Zivid cameras for really precise/accurate work.

Extrinsic calibrations could likely compensate for that offset (they would incorporate it into whatever transform they determine between the camera itself and the mounting link), but IIUC from the comments by @runenordmo, that would essentially only be the extrinsic calibration for one particular 'state' of the sensor.

If the camera itself already compensates for this, a static `link` in the URDF/`xacro:macro` would seem to suffice. If not, the driver would ideally publish the transform itself -- perhaps not continuously, but at least the one associated with a particular capture. The rest of the system could then match it based on time, using the `header.stamp` and the TF buffer.
Ah great! The only outstanding comment would be supporting the different variants of the Zivid One+. Could you confirm if the
As the Zivid Two has launched, it is necessary to differentiate between the types.
Requested by Zivid as it might cause confusion
The macro now supports the S, M, and L types of the Zivid One+. Launch files to load and/or view the different variants have been included.
I've updated the XACRO macro to change the

I did not change the position of the frame; this is still obtained as described in the PR. The drawing you shared only shows the projector frame; the optical frame is the other lens opening, if I understand correctly.

Please let me know if additional changes are needed, or if this suits your needs.
A project I'm working on needs URDFs of the Zivid Two under ROS2.

I have added the URDF for the Zivid Two camera to the zivid_description package provided by @dave992. Is there any time estimate for when this will be merged, so I can contribute?
Hi, sorry for the late response on this PR. This PR, as well as the feedback/ideas in it, is something we will address the next time we do a round of improvements/extensions to our ROS wrapper. I don't have an exact timeline for when we will do this. For now, we would like to keep the PR open, so that others can find this more easily.

@iosb-ina-mr could you provide a link to your fork/branch here so that others can take a look at it if they need zivid_two URDF files? Thanks for your inputs and contributions on the Zivid ROS wrapper.

@apartridge You are welcome. The URDF of the Zivid Two can be found on the urdf branch of my fork:
Instead, the launch files and xacros accept a type argument.
I have added the Zivid Two (and Zivid Two Plus) descriptions to the

As the number of variants was rising, and therefore the number of launch files and xacro files, I instead used a

For the Zivid Two and Zivid Two Plus, these arguments are defined and passed to the underlying macro, but nothing is actually done with the information at the moment. I opted for this as I do not know what the differences between the variants are, or whether they have any actual influence on the geometry and URDF of these models.

@apartridge Can you tell me if there are actual differences relevant to the URDF for the Zivid Two and Zivid Two Plus series?
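The type-argument approach described here could look roughly like this (a hypothetical sketch; the file name, macro name, and default type are assumptions, not taken from the actual branch):

```xml
<!-- Hypothetical top-level file: select the camera variant via an
     argument instead of maintaining one launch/xacro file per model. -->
<robot name="zivid_two" xmlns:xacro="http://wiki.ros.org/xacro">
  <xacro:arg name="type" default="M70"/>
  <xacro:include filename="$(find zivid_description)/urdf/zivid_two.xacro"/>
  <!-- The macro can branch on ${type} once variant-specific angles or
       geometry differences are known. -->
  <xacro:zivid_two prefix="zivid_" type="$(arg type)"/>
</robot>
```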
If using the macro in other URDFs then the materials are included.
Hi @dave992, sorry for the late response and thanks for your contribution.

I can confirm that there is a small geometry change on the 2+ cameras compared to the 2 cameras (the front cover extends 1-2 mm further forward). The CAD files here should be correct: https://www.zivid.com/downloads.

In addition, there are also differences in the angles between the camera and the projector, as well as in the optical center of the camera, across these models. This can be seen in Figures 5/6/7 in the data sheets for the products (https://www.zivid.com/downloads). I do see that the data sheets for the 2+ cameras are missing the outgoing angle of the camera/projector, which is visible in the data sheets for the Zivid 2 M70 and L100 (in Figure 5). I will request for this information to be included in the data sheets. Note that the data sheets for the 2+ M60/L110 are still preliminary.

I am not sure exactly what information you need for the URDFs; is this information sufficient (if we get the angles as well)? The optical center will vary a bit between units of the same model, due to unit variations, but I think it should be good enough for visualization purposes. For more accurate results one would need to use hand-eye calibration.
Another friendly ping again. @apartridge: what would be the way forward? Is there anything needed to get this merged?

Hi @gavanderhoorn, sorry for the late response. We will aim to reserve some time within the next months to look at the PR, make any changes needed, and hopefully get it merged. As we have updated the driver to support ROS2, in order to merge this PR we will need to make it support ROS2 (I am unsure if there are changes in ROS2 related to this). We also need to make it work with all the Zivid 2 camera models, verify the CAD models, constants, etc.
I've created a description package for the Zivid One+ 3D Camera which could be useful for others using the Zivid camera. I already saw #16 mentioning that there was a need for this.
Origin of the meshes:

Location of the links/frames:

- `base_link` at the center of the tripod mount screw hole.
- `optical_frame` at the lens opening of the camera, angled 8.5 degrees toward the projector. This orientation and location were the result of a discussion with Zivid support on where the measurement frame is located with respect to the tripod mount screw hole. I made this description for the Zivid One+ M; the angle of the `optical_frame` could differ depending on the model. I have not yet validated this location using measurement data, so a confirmation of this would be great!
- `projector_frame` at the original origin of the visual geometry file, as the orientation of the origin seemed to indicate it was the projector origin, or really close to it. Some feedback on this location would be appreciated.

To view the URDF and TF frames, build the package and simply run:

```
roslaunch zivid_description test_zivid_camera.launch
```