Don't use the semantic sensor by default #116
Conversation
Also fix a spelling error: `get_perc_on_obj_semantic` had the parameter "sematic" [sic]. I changed it to "semantic" and updated uses.
Change default "semantics" to False in Multi-LM mount configs.
Update docstring
…cation_to_look_at
- Explicitly use semantic sensors for habitat_transform_test.py
- Drop more non-matching columns between parallel and non-parallel experiments for run_parallel_test.py, since not having a semantic sensor modifies reporting of stepwise_performance and stepwise_target_object.
docstrings
Don't use ycb data path for multi-object dataset args.
Add return values to get_good_view and get_good_view_with_patch_refinement so we can raise an assertion error if we start an episode off-object.
Prepare to pretrain new models.
…onty into semantic_sensor
Really nice work! Just added some comments which we can also discuss in the team meeting. It might be worth starting with my last comment (re. `semantic_mask` and `get_semantic_from_depth`).
```python
else:
    semantic_added = True
    semantic_mask = np.ones_like(depth_obs, dtype=int)
    semantic_mask[depth_obs >= 1] = 0
```
Should this be hard-coded based on 1? Would it be sufficient if we just initialize it as ones and then call `self.clip` instead?
I don't think we can initialize with ones and defer clipping to `self.clip` since this is only used with surface agents (i.e., when `self.depth_clip_sensors` is non-empty and the sensor index is in `self.depth_clip_sensors`). I hard-coded 1 since the `MissingToMaxDepth` transform applied before `DepthTo3DLocations` clips missing/off-object values to 1, so I figured that was, at least for now, a safe value to consider as the background off-object value.
Could you add a comment on this in the code? Something like:

```python
# MissingToMaxDepth will set observations that go into the empty void to 1
```
Done.
```python
default_on_surface_th: default threshold to use if no bimodal distribution
    is found
semantic_mask: binary mask indicating on-object locations
```
I think the purpose of `semantic_mask` and `get_semantic_from_depth` is still a bit confusing in terms of what they actually do vs. what's documented (a lot of which, I appreciate, was inherited before this PR). I think it would be clearer if we renamed `get_semantic_from_depth` to `get_surfaces_from_depth` and added a comment here noting that `semantic_mask` is based on a simpler depth heuristic. You could almost create a new, mini function for the lines below and name that function `get_semantic_from_depth`. This would then be used on line 322:

```python
semantic_mask = np.ones_like(depth_obs, dtype=int)
semantic_mask[depth_obs >= 1] = 0
agent_obs["semantic"] = semantic_mask
```

Then it would also hopefully be clearer that our final semantic map is typically determined by the element-wise product of these two.
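For reference, a minimal sketch of what that mini function could look like (assuming `depth_obs` is a NumPy depth image; `surface_mask` stands in for the output of the surface-detection step):

```python
import numpy as np

def get_semantic_from_depth(depth_obs: np.ndarray) -> np.ndarray:
    """Estimate a binary on-object mask from depth alone.

    MissingToMaxDepth sets missing/off-object depth values to 1, so any
    pixel with depth >= 1 counts as background.
    """
    semantic_mask = np.ones_like(depth_obs, dtype=int)
    semantic_mask[depth_obs >= 1] = 0
    return semantic_mask

# Toy values: the final semantic map is then the element-wise product of
# the depth-based mask and the surface mask.
depth_obs = np.array([[0.3, 0.4, 1.0], [0.5, 1.0, 1.0]])
surface_mask = np.array([[1, 0, 1], [1, 1, 0]])
semantic = get_semantic_from_depth(depth_obs) * surface_mask
```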
I've added a bunch of extra comments in `__call__` and `get_surface_from_depth` (which was `get_semantic_from_depth`) that I think help clarify what's going on with surface vs. semantic.
I kept the referenced code in `__call__` though. I'm cool with changing it around, but I left it because I didn't want to obscure that the observations dict was being added to (and that we'd possibly have to remove an item from it later).
Hey @nielsleadholm, thanks for the feedback! It needed fresh eyes. I've made a bunch of edits and added a fair number of comments. Let me know what you think.
Thanks for taking this incredibly deep dive and figuring out this jungle of complex entanglements and issues. I just left a few minor questions for my own understanding.
```python
# according to depth. However, the threshold we use to separate the two
# parts is dependent on distant vs surface agents. The parameter
# `default_on_surface_th` changes based on whether we are using a distant
# or surface agent.
```
Could we turn this big block into a docstring? Ideally a docstring on the function, so it also shows up in our API docs.
Good idea, got it.
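For illustration, the comment block recast as a docstring might look something like this (a sketch only; the signature shown is illustrative, not the actual one):

```python
def get_surface_from_depth(depth_obs, default_on_surface_th):
    """Separate the observed scene into two parts according to depth.

    The threshold used to separate the two parts depends on whether a
    distant or surface agent is in use; `default_on_surface_th` changes
    accordingly.
    """
```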
```python
else:
    semantic_mask = np.ones_like(depth_obs, dtype=int)
    semantic_mask[depth_obs >= 1] = 0
    agent_obs["semantic"] = semantic_mask
```
Do we have to add this mask to the `agent_obs` if we are passing it as a parameter to `get_surface_from_depth`? Could we just do `self.get_surface_from_depth(..., semantic_mask=estimated_semantic_mask)`?
There was a reason for it, but I went ahead and refactored the code such that we no longer modify the observations dict. I'm outlining the changes in a comment below.
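Roughly, the shape of that refactor (a sketch with stand-in names and toy data, not the actual code):

```python
import numpy as np

def get_surface_from_depth(depth_obs, semantic_mask):
    """Stand-in for the real method; only the parameter flow matters here."""
    return semantic_mask * (depth_obs < 1)

depth_obs = np.array([[0.3, 1.0], [0.5, 1.0]])
semantic_mask = np.ones_like(depth_obs, dtype=int)
semantic_mask[depth_obs >= 1] = 0

# Before: the mask was stashed in the observations dict and read back later,
# which meant the dict had to be cleaned up afterwards.
agent_obs = {"depth": depth_obs, "semantic": semantic_mask}
surface = get_surface_from_depth(agent_obs["depth"], agent_obs["semantic"])

# After: the mask is passed directly, leaving the observations dict untouched.
surface = get_surface_from_depth(depth_obs, semantic_mask)
```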
```python
# loaders only have one object in parallel experiments.
for col in ["time", "stepwise_performance", "stepwise_target_object"]:
    scsv.drop(columns=col, inplace=True)
    pcsv.drop(columns=col, inplace=True)
```
Why did this issue only come up now? How was this related to the semantic sensor?
This issue is due to the fact that `stepwise_performance` and `stepwise_target_object` are derived from the semantic map stored by `DepthTo3DLocations`. When the semantic sensor is in use, the semantic map contains any integer value/object ID, so `stepwise_performance` and `stepwise_target_object` were always "correct". However, without the semantic sensor, semantic maps are always binary, so the sensed object ID is always 1.

The reason this is different between parallel and serial runs is that in parallel runs there is always only one object in the data loader, so the mapping from object ID to object name is always 1 -> the_object. These columns are always fine in parallel runs, but they are now wrong in serial runs when we don't have the semantic sensor, and the values in the columns are always "mug" (or whatever the object is for object ID 1).
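To make that concrete, here's a toy example (made-up values, not from the codebase):

```python
import numpy as np

# Ground-truth semantic map: per-pixel object IDs (e.g., 3 = potted_meat_can).
gt_semantic = np.array([[0, 3, 3],
                        [0, 3, 0]])

# Depth-estimated semantic map: binary, so every on-object pixel reads as
# object ID 1 no matter which object it actually is.
estimated_semantic = (gt_semantic > 0).astype(int)
```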
So that's the short answer. I hope that makes sense.
Ah thanks, yes that makes sense.
Thanks for those clarifying comments Scott, this is great!
@nielsleadholm and @vkakerbeck Thanks for the review, great suggestions. I've made a few changes in response.

Other than that, I've improved documentation and added more type annotations. Though nothing in the code should have functionally changed since the last review (changes were mostly stylistic or documentation things), I also reran some benchmarks to make sure everything looks normal.
Great updates and refactors! Thanks :) Looks all good to me now. I just added one more minor comment but nothing blocking.
```python
semantic_mask=semantic_mask,
semantic_patch,
0.01,
default_on_surface_th,
```
Is there a reason you are not using keywords here anymore? I prefer it looking like `min_depth_range=0.01,` instead of just `0.01,` since it is more readable and less error prone.
No problem, I'll change it back. I agree it's more readable to use keywords here. I had switched it to the above to make the calling style uniform (within this class, at least).
Great stuff, thanks Scott, this looks very clean now!
Problem Statement
The environment data loader returns observations for each sensor, such as an RGBA image or depth image. By default, it also returns a semantic map for each sensor which indicates the object each pixel is "looking at"[^1]. However, these semantic maps contain a lot of information which is delivered to Monty "for free", and we would like to remove this dependence on privileged information by having Monty compute its own semantic maps from sensor data as its default behavior.
Most of the groundwork for this has already been laid. The `DepthTo3DLocations` transform currently has a method for estimating semantic maps using depth data. When `DepthTo3DLocations.use_semantic_sensor=False`, the ground-truth semantic map is not used during the normal course of an episode. However, it is still used when initializing an episode and after executing hypothesis-driven jumps. The goal is to replace usage of the ground-truth semantic map with the estimated semantic map in these contexts. We would also like to prevent the ground-truth semantic map from being returned by the environment data loader, to guarantee that Monty doesn't use it when we don't want it to.

Here's a quick overview of the settings that affect the usage of ground-truth semantic maps[^2]; a rough config sketch follows the list:
- The `MountConfig.semantics` value for a sensor determines whether the ground-truth semantic map is returned by the data loader. More specifically, it creates the item `observations[agent_id][sensor_module_id]["semantic"]`.
- `DepthTo3DLocations.use_semantic_sensor` determines whether the ground-truth semantic map is used by this transform.
- `InformedPolicy`'s `move_close_enough`, `orient_to_object`, and `find_location_to_look_at` methods currently use (and require) the existence of `observations[agent_id][sensor_module_id]["semantic"]`, which is controlled by the mount config only. Setting `DepthTo3DLocations.use_semantic_sensor=False` has no effect here.
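For illustration, the two switches sketched as plain dicts (the real `MountConfig` and `DepthTo3DLocations` arguments have more fields; names here follow the PR description):

```python
# Sketch only: the two independent settings described in the list above.
mount_config = {
    "semantics": False,  # data loader won't emit obs[agent][sensor]["semantic"]
}

depth_to_3d_kwargs = {
    "use_semantic_sensor": False,  # transform estimates semantics from depth
}
```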
Important: the estimated semantic map created by `DepthTo3DLocations` is binary (off- or on-object), whereas the ground-truth semantic maps are zero when off-object and a particular object ID otherwise (e.g., 3s where the pixel is on the `potted_meat_can`). The presence of object IDs in the semantic map is what the `InformedPolicy`'s movement/orientation methods need. While we can currently work around binary semantic maps for single-object episodes (see below), there's currently no way to avoid using ground-truth semantic maps for multi-object experiments.

Solutions
I've adapted the `InformedPolicy.move_close_enough`, `InformedPolicy.orient_to_object`, and `InformedPolicy.find_location_to_look_at` methods to use semantic maps embedded into the `observations` dict by `DepthTo3DLocations`. These methods have been augmented with `primary_target_id` and `multiple_objects_present` parameters. If `multiple_objects_present` is `False`, the nonzero elements of the estimated semantic map are temporarily replaced with the ID of the target object. The nice thing about this method is that the semantic data contained in `"semantic_3d"` reflects the choice of `DepthTo3DLocations.use_semantic_sensor`, and the modifications to the code have no effect when the semantic sensor is in use. These changes are specific to the distant agent.
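Roughly, the relabeling step could be sketched like this (hypothetical helper; the actual logic lives inside the adapted `InformedPolicy` methods):

```python
import numpy as np

def relabel_single_object_semantic(
    semantic: np.ndarray,
    primary_target_id: int,
    multiple_objects_present: bool,
) -> np.ndarray:
    """Hypothetical sketch: when only one object is present, substitute the
    target's object ID into the binary estimated semantic map."""
    if multiple_objects_present:
        # Without the ground-truth sensor we can't infer per-pixel object IDs.
        return semantic
    relabeled = semantic.copy()
    relabeled[relabeled > 0] = primary_target_id
    return relabeled
```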
Quite a few changes have also been made to `DepthTo3DLocations`. The basic reason here is that the ground-truth semantic map was used to zero out off-object regions on an estimated surface. I introduced a simple, temporary semantic mask from depth data that performs the same function. There was a decent amount of reorganization to get this to work, including changes to parameters and moving some behavior between functions. Side note: the surface estimation behavior of `DepthTo3DLocations` isn't super obvious to a code reader, and I'd be happy to contribute some documentation for it somewhere.

Beyond these changes, some tests were updated and some typos were fixed (`num_distractors` was often spelled `num_distactors`, and `semantic` was sometimes written as `sematic`). I have also added a pre-episode check that ensures we start an episode on the target object (with the exception of multi-object experiments).

Dataset configs were also changed so that they neither use semantic sensors in `DepthTo3DLocations` nor have ground-truth semantic data returned by Habitat. Multi-object dataset configs were also added and should be used when performing multi-object experiments. Benchmark pretraining and object recognition experiments reflect these modifications.

Finally, I re-pretrained all models (we're now up to `pretrained_ycb_v10`) and ran all benchmarks.
[^1]: Clarification for future readers: these maps are not used for object recognition -- they're used to select a location in space that Monty should move toward and orient its sensors at when initializing an episode.

[^2]: The user should be aware that certain combinations of these two attributes are invalid. For example, setting the `MountConfig.semantics` value for a sensor to `False` while `DepthTo3DLocations.use_semantic_sensor=True` will trigger an error. This is something we can warn against when we implement config checking.