Video Server for EdgeRealtimeVideoAnalytics #9

Open
Akhtar303 opened this issue Sep 3, 2019 · 8 comments

Comments

@Akhtar303

Thanks for your great implementation of EdgeRealtimeVideoAnalytics.
I have a question: which video server did you use for this project's pipeline?
i.e. a video server like WebRTC or FFmpeg.

Thanks

@itamarhaber
Collaborator

Hello @Akhtar303

This project uses a simple video server implemented in Python - it simply reads and sends the JPGs from a Redis stream: https://github.com/RedisGears/EdgeRealtimeVideoAnalytics/blob/master/app/server.py
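Conceptually, such a server boils down to wrapping JPEG frames in a multipart MJPEG response. The sketch below illustrates the idea with a hypothetical helper and dummy frames; it is not the project's actual server.py, where the frames come from a Redis stream:

```python
# Minimal sketch of an MJPEG-style frame wrapper, similar in spirit to
# what app/server.py does. The frame source here is any iterable of
# JPEG byte strings; the helper name and boundary are made up.
BOUNDARY = b'frame'

def mjpeg_chunks(frames):
    """Wrap raw JPEG byte strings in multipart/x-mixed-replace chunks."""
    for jpeg in frames:
        yield (b'--' + BOUNDARY + b'\r\n'
               b'Content-Type: image/jpeg\r\n'
               b'Content-Length: ' + str(len(jpeg)).encode() + b'\r\n\r\n'
               + jpeg + b'\r\n')

# Example with dummy "frames" (not real JPEG payloads):
chunks = list(mjpeg_chunks([b'\xff\xd8fake1\xff\xd9', b'\xff\xd8fake2\xff\xd9']))
```

A browser pointed at such an endpoint renders the chunks as a continuously updating image, which is what makes this approach so simple for a demo.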

@Akhtar303
Author

Akhtar303 commented Sep 11, 2019

@itamarhaber Thanks for the immediate reply.
Can I use this pipeline for multiple cameras and multiple deep learning models in an optimized way? That is, will multiple cameras and multiple models affect the performance (e.g. delay/latency) of the video feed shown on the web pages?
Would you recommend this pipeline for my use case (multiple cameras and multiple models)?
Thanks

@itamarhaber
Collaborator

Yes, this pipeline can be used for multiple models and input sources (cameras). That said, the server's resources will determine the impact on performance and it is definitely a possibility that loading it too much will result in increased latencies and dropped frames.

@Akhtar303
Author

@itamarhaber Thanks for the immediate reply.
How can I change capture.py and the other files to support multiple cameras and multiple models in the best way, with only small modifications to the code?
Any suggestions for achieving this with this pipeline would be appreciated.
Thanks

@itamarhaber
Collaborator

It should be fairly straightforward from what I remember - look for instances of the 'camera:0' literal and parametrize them. Feel free to hit us with questions if you run into any issues.
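The suggested parametrization might look something like the following sketch. The helper and the key layout around it are illustrative (the `:in_fps`/`:prf_*` suffixes mirror names seen elsewhere in the project), not the actual code change:

```python
# Hypothetical helper sketching how the hard-coded 'camera:0' literal
# could be parametrized so the same capture/gear code serves several
# cameras. Key suffixes mirror the project's naming (':in_fps', etc.).
def camera_keys(camera_id: int) -> dict:
    prefix = f'camera:{camera_id}'
    return {
        'stream': prefix,                                 # raw frames stream
        'in_fps': f'{prefix}:in_fps',                     # input frame rate series
        'out_fps': f'{prefix}:out_fps',                   # output frame rate series
        'profile': lambda step: f'{prefix}:prf_{step}',   # per-step timing series
    }

keys = camera_keys(3)
```

Each capture process would then be started with its own camera id, and every Redis key it touches derived from that id instead of the literal string.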

@milanlanlan

Hello @itamarhaber
I have a similar question. I am trying to use this pipeline with multiple cameras and multiple DL models to support my research on video analytics. For example, I want to replace YOLO with other DL models and observe their effect on performance (accuracy, latency, and so on).
What is the best way to change the code for this? I think I need to change the inputs and outputs of the selected DL models, like what you do in yolo_box.py. How can I learn the details of the RedisAI API in Python? Is there documentation for developers?
It seems hard if developers have to change the code for every DL model. Is there a better way to achieve this goal?
Please correct me if I have misunderstood anything.
Thanks.

@itamarhaber
Collaborator

Hello @milanlanlan

I think I need to change the input&output of selected DL models

Indeed - AFAIK every DL model has its own quirks so the input needs to be prepared accordingly and the outputs post-processed as needed.
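One common pattern for managing this (a sketch, not this project's code; all names here are hypothetical) is to register per-model preprocess/postprocess callables and dispatch on the model key:

```python
# Hypothetical registry tying each model key to its own input
# preparation and output post-processing, since every DL model
# expects different tensor shapes and emits different outputs.
PROCESSORS = {}

def register(model_key, preprocess, postprocess):
    PROCESSORS[model_key] = (preprocess, postprocess)

def run(model_key, frame, model_runner):
    """model_runner stands in for the actual inference call
    (e.g. a RedisAI model run); here it is just a callable."""
    pre, post = PROCESSORS[model_key]
    return post(model_runner(pre(frame)))

# Toy example: a "model" that is the identity function, with a
# preprocess that doubles the input and a postprocess that adds one.
register('yolo:v3', lambda f: f * 2, lambda out: out + 1)
result = run('yolo:v3', 10, lambda t: t)  # → 21
```

Swapping models then means registering a new (preprocess, postprocess) pair rather than editing the pipeline itself.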

How can I know the details of the API about redisAI in python?

RedisAI has a pythonic client if you want to use it directly from a script (https://github.com/RedisAI/redisai-py). This demo uses RedisGears, which has built-in integration with RedisAI, so every RedisAI command is exposed as a Gears method. The RedisAI documentation is at https://oss.redislabs.com/redisai/.

It seems not easy if developers change the code for every DL model

I don't see how that can be circumvented - this isn't because of a RedisAI limitation but rather due
to the variance in different DL models' inputs and outputs. RedisAI is actually built for serving multiple models, each stored as a key, against any tensors. It is up to the developer to tie them together.

I hope this makes sense - let me know if not :)

/cc @K-Jo @lantiga

@Akhtar303
Author

Hello @itamarhaber
I am working on a multi-camera, multi-model pipeline. I am facing a problem: when I run top.py to see the performance of the pipeline, it prints lines like this:

ts: 1569392325 in_fps: N/A out_fps: N/A prf_read: 12.4 prf_resize: 5.6 prf_model: 456.7 prf_script: 32.9 prf_boxes: 3.1 prf_store: 0.3 prf_total: 600.6 camera_id:camera:0

The issue is that it never prints in_fps and out_fps. I think the problem is in gear.py:

execute('TS.ADD', x[3]+':out_fps', 1, 'RESET', 1)
execute('TS.ADD', x[3]+':prf_{}'.format(name), ts, current)

i.e.

execute('TS.ADD', 'camera:0:prf_{}'.format(name), ts, current)

In top.py, this line:

p.execute_command('TS.RANGE', f'{args.video}:{m}', now - 2, now - 1)

always returns an empty list for in_fps and out_fps, while all the other values are non-empty. I think the issue is related to the timestamp.
Thanks
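The suspicion can be illustrated with the query-window arithmetic (a sketch with made-up sample values, mimicking TS.RANGE in plain Python): a sample stored with a constant timestamp of 1, as in the `TS.ADD ... ':out_fps', 1` call quoted above, can never fall inside a window anchored at the current time.

```python
# Illustration of the suspected timestamp mismatch: top.py queries the
# window [now - 2, now - 1], so a sample stored with a literal
# timestamp of 1 never falls inside it, while samples stored with the
# current timestamp do. Timestamps here are plain integers.
def ts_range(samples, start, end):
    """Mimic TS.RANGE: return samples whose timestamp is in [start, end]."""
    return [(t, v) for t, v in samples if start <= t <= end]

now = 1569392325
prf_samples = [(now - 1, 456.7)]   # written with the current timestamp
fps_samples = [(1, 1)]             # written with a literal timestamp of 1

in_window_prf = ts_range(prf_samples, now - 2, now - 1)   # non-empty
in_window_fps = ts_range(fps_samples, now - 2, now - 1)   # empty
```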
