This repository has been archived by the owner on Jun 5, 2024. It is now read-only.

Efficiency #30

Closed
farecomhalima opened this issue Jan 7, 2021 · 7 comments

Comments

@farecomhalima

Hi, I have a question about your code: can we use cv2.dnn.readNetFromTensorflow(model, pbtxt) to read the model with its weights?
I cannot find the .pbtxt file. Do you have an idea how I can generate it from the .config file?

@JunweiLiang
Owner

Why do you need to load the TensorFlow model with OpenCV? A .pbtxt file only exists for models in the variable format, but most of our models are frozen graphs. Here, the object_v2 model should be in the variable format and should have a .pbtxt file.

@farecomhalima
Author

I'm new to this domain. I think that using cv2.dnn.readNetFromTensorflow can make inference faster. Do you agree? If not, what do you suggest to improve inference speed (other than TensorRT)?

@JunweiLiang
Owner

JunweiLiang commented Jan 7, 2021

The main contribution of this repo is an efficient framework for object detection and tracking inference. Check out the latest updates here and the full speed experiments here. We can finish processing a 5-minute video in 1 minute 45 seconds on a ~1080 Ti GPU (1280x720 input).
To improve speed, we basically went from variable model -> frozen graph -> pruned model -> multi-image batching -> multi-threading.
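The multi-image batch and multi-thread steps of that progression can be sketched with a toy producer/consumer pipeline; all names here are illustrative, not the repo's actual API:

```python
# Toy sketch of multi-image batching + multi-threading: a loader thread
# fills a queue with frames while the main thread pulls fixed-size batches
# for "inference" (here just collected, standing in for a batched forward pass).
import queue
import threading

BATCH_SIZE = 4
SENTINEL = None  # signals end of the frame stream

def loader(frames, q):
    # Producer: decode/preprocess frames and hand them to the consumer.
    for f in frames:
        q.put(f)
    q.put(SENTINEL)

def run_batched(frames):
    q = queue.Queue(maxsize=16)
    t = threading.Thread(target=loader, args=(frames, q))
    t.start()
    batches, batch = [], []
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        batch.append(item)
        if len(batch) == BATCH_SIZE:
            batches.append(batch)  # one batched "forward pass"
            batch = []
    if batch:
        batches.append(batch)  # leftover partial batch
    t.join()
    return batches

print(len(run_batched(list(range(10)))))  # 10 frames -> 3 batches (4+4+2)
```

Batching amortizes per-call overhead across several frames, and the loader thread overlaps I/O and preprocessing with GPU inference.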

@farecomhalima
Author

farecomhalima commented Jan 7, 2021

Thank you for your answer. One last question: does this GitHub repo include a script for fine-tuning / transfer learning? I would like to improve the accuracy of the bike class and add a new class (electric scooter).

@JunweiLiang
Owner

Sure. Please read through the README. There are training and testing instructions here. You can load the ActEV model and then fine-tune it on a new dataset.

@JunweiLiang JunweiLiang changed the title dnn tensorflow Efficiency Jan 10, 2021
@JunweiLiang JunweiLiang pinned this issue Jan 10, 2021
@farecomhalima
Author

farecomhalima commented Jan 11, 2021

Hi, just one more question: what do you think about model distillation to get a fast and accurate model (for Faster R-CNN)?

@JunweiLiang
Owner

Yes! Supposing you could distill knowledge from a ResNet-101-backbone FPN into a ResNet-50 or even a ResNet-34, you would get a 30%~50% speed improvement.
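A toy sketch of the distillation idea (Hinton-style soft targets), using the standard library only; the logits below are made-up numbers, not real detector outputs:

```python
# Toy knowledge-distillation loss: temperature-softened teacher probabilities
# supervise the student via cross-entropy between the two distributions.
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # Cross-entropy H(p, q) between softened teacher p and student q;
    # minimized exactly when the student matches the teacher distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [8.0, 2.0, 1.0]   # e.g. ResNet-101 FPN head outputs (made up)
student = [6.0, 2.5, 1.5]   # smaller ResNet-50/34 student (made up)
loss = distillation_loss(teacher, student)
```

In practice this soft-target term is usually combined with the regular hard-label loss, and for Faster R-CNN it would be applied to the classification heads (and often the box regression outputs as well).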
