Unique image kernel + working inference #196
Conversation
### Unique point per pixel image kernel ###

def calculate_latent_and_observed_correspondences(
I factored this out so we can use it to establish the same pixel<>point correspondences in inference as in the model.
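For reference, a minimal sketch of the shape I'd expect the factored-out helper to take, built only from names visible in this diff (`PixelsPointsAssociation`, `get_hypers`, `get_new_state`, `get_observed_rgbd`); the actual implementation may differ:

```python
def calculate_latent_and_observed_correspondences(trace):
    # Project the latent points into the image using the current pose...
    association = PixelsPointsAssociation.from_hyperparams_and_pose(
        get_hypers(trace), get_new_state(trace)["pose"]
    )
    # ...then pair each latent point with the observed RGBD at its pixel,
    # so the model and inference share identical correspondences.
    return association.get_point_rgbds(get_observed_rgbd(trace))
```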
observed_rgbds_per_point = PixelsPointsAssociation.from_hyperparams_and_pose(
    get_hypers(trace), get_new_state(trace)["pose"]
).get_point_rgbds(get_observed_rgbd(trace))
if inference_hyperparams.in_inference_only_assoc_one_point_per_pixel:
This gives a way to swap between doing pixel<>point association as before (the "else" branch) and using the z-buffering from the new unique image kernel.
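Schematically (the real branch bodies live in the inference code; this is just to show how the flag gates the two modes):

```python
if inference_hyperparams.in_inference_only_assoc_one_point_per_pixel:
    # New path: z-buffering from the unique image kernel, so each pixel is
    # associated with at most one (the nearest) latent point.
    ...
else:
    # Old path: the previous pixel<>point association, where several latent
    # points can project to the same observed pixel.
    ...
```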
@@ -123,19 +123,21 @@ def viz_trace(
     vertices = hyperparams["vertices"]
     b3d.rr_log_cloud(
-        vertices,
+        vertices[visibility_prob > 0.1],
Change the viz: don't display the colors of points that are invisible (these can get out of sync and be confusing -- though maybe for some debugging having a way to view them will end up being useful).
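As a standalone toy illustration of the masking (made-up data, not the trace's):

```python
import numpy as np

# Stand-ins for the trace's vertices and per-point visibility probabilities.
vertices = np.random.rand(100, 3)
visibility_prob = np.random.rand(100)

# Same boolean indexing as the diff: drop likely-invisible points before
# logging, so their possibly-stale colors never appear in the viz.
visible_vertices = vertices[visibility_prob > 0.1]
```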
@@ -49,7 +49,9 @@ def save_hyperparams(folder_name, hyperparams, inference_hyperparams):
         f.write(pprint.pformat(inference_hyperparams))


-def run_tracking(scene=None, object=None, save_rerun=False, max_n_frames=None):
+def run_tracking(
I updated this script so we can ask it to run in a mode where it has ground-truth pose access.
TODO in the future: when that option is selected, we should probably skip C2F and just do one step of gridding.
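The new parameter list is truncated in the diff above, so here is a purely hypothetical sketch of the ground-truth-pose mode (the `use_gt_pose` flag name is my invention, not from the PR):

```python
# Hypothetical signature -- the real parameter list is cut off in the diff.
def run_tracking(
    scene=None,
    object=None,
    save_rerun=False,
    max_n_frames=None,
    use_gt_pose=False,  # invented name: run with ground-truth pose access
):
    if use_gt_pose:
        # Per the TODO above: with the true pose available, we could skip
        # C2F and do a single gridding step around it.
        ...
```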
"color_kernel": transition_kernels.MixtureDriftKernel( | ||
[ | ||
transition_kernels.LaplaceNotTruncatedColorDriftKernel(scale=0.05), | ||
transition_kernels.RenormalizedLaplaceColorDriftKernel(scale=0.05), |
I switched to this just because I was wondering whether weird issues could arise from not having a normalized distribution; we can revert it if you'd like. It didn't visibly change performance in my experiments.
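For context on what the renormalization buys: a Laplace drift truncated to [0, 1] only integrates to 1 if you divide by the mass the untruncated density assigns to [0, 1]. Here's a minimal sketch of that logpdf (my own illustration, not the `b3d` kernel's implementation):

```python
import jax.numpy as jnp
from jax.scipy.stats import laplace

def renormalized_laplace_logpdf(new_color, prev_color, scale=0.05):
    # Mass the untruncated Laplace assigns to [0, 1]; subtracting its log
    # renormalizes the truncated density so it integrates to 1. The
    # "not truncated" variant omits this term and silently loses the mass
    # that falls outside the valid color range.
    mass_in_unit_interval = laplace.cdf(
        1.0, loc=prev_color, scale=scale
    ) - laplace.cdf(0.0, loc=prev_color, scale=scale)
    return laplace.logpdf(
        new_color, loc=prev_color, scale=scale
    ) - jnp.log(mass_in_unit_interval)
```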
Here are example runs on 15 frames of this scene, to show how inference is looking. It is currently running at about 16 s / iteration.
SCENE_49_OBJECT_INDEX_0.mp4
SCENE_49_OBJECT_INDEX_1.mp4
SCENE_49_OBJECT_INDEX_2.mp4
SCENE_49_OBJECT_INDEX_3.mp4