Ensuring correct flow computation and tweaking n_rgb #140
This is the same video after cropping out the flow (left) half, normalizing it, and putting it back next to the original RGB. You can see that the flow contains square artifacts, probably either because […]; as a result, the leg movement is not clear.

out.mp4

NOTE: This is how I normalized the flow so that its span becomes 0-255:
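The snippet itself did not survive the export; a minimal sketch of the kind of min-max rescaling described (assuming the flow arrives as a float NumPy array; the function name is my own, not from the thread) would be:

```python
import numpy as np

def normalize_to_uint8(flow: np.ndarray) -> np.ndarray:
    """Hypothetical min-max rescaling: stretch the flow so its span becomes 0-255."""
    flow = flow.astype(np.float32)
    span = flow.max() - flow.min()
    if span == 0:
        return np.zeros_like(flow, dtype=np.uint8)
    scaled = (flow - flow.min()) / span * 255.0
    return scaled.astype(np.uint8)
```

Note that this is exactly the per-video stretching that the later comment concludes is unnecessary (and the wrong way to normalize).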
This is the same experiment for a Proboscis Extension (PE) behavior:

PE.mp4
out.mp4
Also, please note that the artifacts in the flow are not a result of the .mp4 encoding, since they are also present in the .png I saved with no compression, like this:
Revisiting my question, I noticed by studying the code further that […]. This argument is […]. So, to sum up, there is no need to normalize the video the way I was doing it (it is also the wrong way to normalize it). Instead, the […]
I have made this commit so that the […]. I am training DEG by sequentially calling the two scripts in https://github.com/shaliulab/vsc-scripts/tree/91ea5e1acd1663e2b14f832f76941ad346717596/manual/deepethogram/train, like so: […]
To be more explicit, this is how I signal to the DEG config that the models should use a different number of flows: […]
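The override itself was lost in the export; a hypothetical sketch of what such an override could look like, assuming the project config exposes flow_generator.n_rgb and feature_extractor.n_flows keys (these names and the config path are assumptions, not copied from the thread), is:

```python
# Hypothetical sketch (not the author's actual snippet): bump the number of
# input frames per flow window in a DeepEthogram-style OmegaConf config.
from omegaconf import OmegaConf

cfg = OmegaConf.load("project_config.yaml")   # assumed config path
cfg.flow_generator.n_rgb = 21                 # flow generator sees 21 RGB frames per clip
cfg.feature_extractor.n_flows = 20            # keep the feature extractor consistent (n_rgb - 1 flows)
OmegaConf.save(cfg, "project_config.yaml")
```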
A discussion on whether […]
Regardless of what […]:

FlyHostel2_6X_2023-03-13_14-00-00_000205_001_flows.mp4
Dear deepethogram developers,

I am testing the performance of my flow generators to make sure that they detect the subtle fly behaviors I am interested in. For this I am interactively running deepethogram.flow_generator.inference.extract_movie to get a better intuition of what the flow generators do.

In my tests, I see that microbehaviors such as a leg twitch are not captured by the flow generator when using the default n_rgb=11. I was wondering whether, because my videos have a high fps (150), 11 frames are simply not enough to capture the overall behavior, i.e. within only 11 frames the animal is almost always completely static (see the quick arithmetic sketch after the video below). I am also worried that the flow computation seems to be there but is almost invisible, which suggests it may not be working 100%.

twitch.mp4
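To make that intuition concrete, here is the arithmetic (my own illustration, not from the original post; the 30 fps reference value is an assumption about a more typical frame rate, not something stated in the thread):

```python
# Quick sketch: how much real time an 11-frame window spans at different frame rates.
n_rgb = 11
for fps in (150, 30):  # 150 fps = my videos; 30 fps assumed as a more typical rate
    window_ms = (n_rgb - 1) / fps * 1000  # 10 frame-to-frame intervals per window
    print(f"{fps} fps -> {window_ms:.0f} ms per {n_rgb}-frame window")

# Prints roughly:
#   150 fps -> 67 ms per 11-frame window
#   30 fps -> 333 ms per 11-frame window
# Covering the same ~333 ms at 150 fps would need roughly 150 * 0.333 + 1 ≈ 51 frames.
```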
I have tried looking in the code and also in https://arxiv.org/pdf/1704.00389.pdf for what the impact on speed and accuracy of increasing n_rgb in the flow_generator training would be, but could not find an intuitive explanation beyond what I can guess from reading the paper and the linked preprint.

Is the flow computation shown in the video normal, or is there indeed a problem? If so, would increasing n_rgb help? Or maybe there is a better approach?

Thanks,
Antonio