Windows compat #6
Hi, thanks for your interest in our model. Sure, I will try to fix this error with split and join as soon as possible. Thanks for the feedback! Generally, we are not currently maintaining the code for Windows, due to the diverse Python packages that could be used on Windows. Sorry for the inconvenience. However, we do have DGP installed on NeuroCAAS, a platform that supports neuroscience data analysis with a simple drag-and-drop interface. This is far more convenient than installing the packages on your local machine.
Hey, I just fixed all the path issues. Please let me know if you still run into any errors.
Will do, thanks so much. We will also reach out and test the NeuroCAAS framework. Much appreciated.
I can confirm that DGP works on Windows now. Is there an assumption that all training data has the animal present in it? A missing animal with no labels seems to cause training to fail.
Hi, can you be more concrete about what you mean by “a missing animal and no labels”? If the animal has never appeared in the training videos, or a specific body part has never appeared, then it is hard to get accurate predictions on the test videos. Also, how does DLC perform? DGP should improve upon DLC.
Hmm, that was unclear. I have a DeepLabCut model trained on roughly 1,600 labelled images drawn from a dataset of ~200 hours of footage. Because of the dataset's size, it contains noise: images missing animals, or images with human hands and no animals. Filtering the dataset would take a prohibitively long time for one person.
Because of how I selected the images for training, a number of images did not have the animal of interest in them, so I did not label those images, but they stayed in the dataset. Unfortunately, DeepLabCut does not make it easy to remove data from a dataset during labelling, which is when you find the noise, so it is easier to be careful when adding data and to leave in whatever noise slips through. I have also found it helpful to keep some noise examples in the dataset: they reduce labelling error when noise is present, and they do not seem to hurt DeepLabCut's performance at labelling animals.
When DeepGraphPose initializes the ResNet, it seems to do so iteratively for each data folder, which is absurdly slow; in my case there are around 300 folders. It gets hung up and fails whenever a folder without labelled data is found.
I get that the beauty of DeepGraphPose requires labels to be present, and that there should always be an animal in the video; having noise will blow up the model and probably make it useless. Currently, I am finding and removing all the folders that contain noise from my DeepLabCut dataset. It seems like it would be reasonable to simply skip a folder during DeepGraphPose training when it raises an AssertionError, i.e., wrap the check in a try statement and use a boolean flag to record whether the error occurred.
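The skip-on-error idea could be sketched roughly like this. This is a minimal sketch, not DGP's actual API: `collect_trainable_folders` and `validate_folder` are hypothetical names, and the check for a `CollectedData*` label file is an assumption based on DLC's usual `labeled-data` folder layout.

```python
import os

def validate_folder(folder):
    # Hypothetical stand-in for DGP's per-folder check, which asserts labels exist.
    # DLC normally stores labels in a CollectedData_<scorer> file per video folder.
    assert any(f.startswith("CollectedData") for f in os.listdir(folder)), \
        f"no label file in {folder}"

def collect_trainable_folders(data_folders):
    """Keep only folders that pass validation; skip any that raise AssertionError."""
    usable = []
    for folder in data_folders:
        try:
            validate_folder(folder)
            usable.append(folder)
        except AssertionError:
            print(f"Skipping {folder}: no usable labels found")
    return usable
```

The try/except keeps one unlabelled folder from aborting the whole run, while the printed message still surfaces which folders were dropped.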
Hi @wweertman, thanks for this feedback. We will look into these two issues: 1. the slow initialization; 2. the assertion error. I will let you know as soon as we finish. Sorry for these errors.
Beautiful! I have also been getting the assertion error from videos that only have one or two missing body parts, i.e., occluded body parts.
I have also gotten assertion errors from frames with all body parts labeled.
So here is another weird one that I don't totally get.
The video is short, only two seconds, but it is not a symbolic link (which is what DLC creates by default). I think the error occurs because one of the labelled images is the final frame of the video; the label was extracted with DeepLabCut's k-means tool, so it just happened to be the final frame. It might be helpful to have a fairly detailed section in the README covering all of DeepGraphPose's assumptions about the structure of the data it receives from DeepLabCut, so that when we build our datasets with the DeepLabCut tools we have a guide for keeping them compatible with DeepGraphPose.
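A quick pre-flight check along these lines could flag final-frame labels before training. The function name and the `margin` parameter are hypothetical (the thread does not say how far from the end a label must be); `n_frames` would in practice come from the video, e.g. via OpenCV's `CAP_PROP_FRAME_COUNT`.

```python
def labels_too_close_to_end(frame_nums, n_frames, margin=1):
    """Return labelled frame indices within `margin` frames of the video's end.

    frame_nums: indices of labelled frames (0-based)
    n_frames:   total number of frames in the video
    """
    return [f for f in frame_nums if f >= n_frames - margin]
```

For a 2-second, 60-frame clip with a label on the last frame, `labels_too_close_to_end([0, 30, 59], 60)` returns `[59]`, flagging exactly the situation described above.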
On the note above, the reason for the short videos is this: we only want to track the animals when they are in a specific region of our flume. We have tried training a DeepLabCut model on a selection of all the data (animals on the walls, on rock, overexposed, etc.), and it performed worse on the data from the region we are interested in. To overcome this, we grab 'trajectories' whenever an animal crosses the area of the flume we care about, which results in many short videos. If, say, we knew that DeepGraphPose required videos of at least length X, with labelled frames no closer than N frames to the final frame of the video, we could take that into account when creating our DeepLabCut models.
Hi @wweertman, I fixed the assertion error for videos with no labels. For the short-video issue, the error occurs in the deeplabcut package. It seems that the path os.path.join(self.cfg.project_path, im_file) doesn't exist. Is that true? Can you print out os.path.join(self.cfg.project_path, im_file) before line 165 in https://github.com/paninski-lab/deepgraphpose/blob/main/src/DeepLabCut/deeplabcut/pose_estimation_tensorflow/dataset/pose_defaultdataset.py to see whether the error is due to a missing file? Best,
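The suggested debug print, pulled out as a standalone sketch: the project path and image file below are made-up stand-ins for `self.cfg.project_path` and `im_file`, which are only available inside the dataset class.

```python
import os

# Hypothetical stand-ins for self.cfg.project_path and im_file
project_path = "/home/user/flume-project-2021-01-01"
im_file = os.path.join("labeled-data", "vid1", "img0042.png")

full_path = os.path.join(project_path, im_file)
# Print the resolved path and whether the file actually exists on disk
print(full_path, "exists:", os.path.exists(full_path))
```

If the printed path looks mangled (e.g. mixed `/` and `\` separators on Windows) or `exists: False`, that points at the missing-file cause suggested above.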
Lines 158–218. Thanks!
Oh, I mean: does the file at the path os.path.join(self.cfg.project_path, im_file) exist? If not, then the missing file is likely the cause of the error.
Hi! I have been trying to run this on Windows and realized there is a bunch of code that operates on Unix file separators, e.g. line 597 in dataset.py:
def extract_frame_num(img_path):
    return int(img_path.rsplit('/', 1)[-1][3:].split('.')[0])
Could we get a simple fix (e.g. os.path.split / os.path.join) to make the file separators platform-independent?
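A platform-independent version along the lines requested might look like this. It assumes DLC's usual `img<NUM>.<ext>` frame-naming convention (which is what the `[3:]` slice in the original relies on).

```python
import os

def extract_frame_num(img_path):
    # os.path.basename splits on the native separator, so this handles
    # paths built with os.path.join on both Windows and Unix
    fname = os.path.basename(img_path)     # e.g. "img0042.png"
    stem, _ext = os.path.splitext(fname)   # "img0042"
    return int(stem[3:])                   # strip the "img" prefix
```

`extract_frame_num(os.path.join('labeled-data', 'vid1', 'img0042.png'))` then returns 42 regardless of platform.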