
VIDEX

Video Indexing Tool with Object and Outlier Detection


VIDEX is a novel video indexing tool designed to streamline the surveillance video review process. More specifically, VIDEX facilitates automatic object detection and outlier detection, enabling rapid summarization and easy access to critical events within the footage. By cataloging information about detected objects and anomalies in an indexed database, VIDEX ensures efficient retrieval and analysis of the video frames that require attention for review. This significantly cuts down the time and resources needed for a thorough examination of surveillance footage. VIDEX is developed by HAIL Lab at Handong University.


Overview

  • Initial Screen (default and white mode)

  • Object Detection Result Screen

  • Outlier Detection Result Screen

System Design

  • Structure of the VIDEX interface (M-V-VM pattern)

    The VIDEX interface is built with C# WPF following the M-V-VM (Model-View-ViewModel) pattern. The Model stores and imports data and declares the classes for the data kept in the database. The View renders the screen the user sees and handles user input; VIDEX includes a SettingView, where users import videos and configure options, plus an ObjectDetectionView and an AnomalyDetectionView, which display the object detection and anomaly detection results. The ViewModel listens for events raised in the View and runs the business logic appropriate to those events. Most of VIDEX's major work happens in the ViewModel: the data needed for the logic is retrieved from the Model and pushed to the View. A minimal ViewModel sketch appears after this list.


  • Multi-Thread Pipeline

    VIDEX processes video data efficiently through a multi-threaded pipeline.

    For object detection, the full set of video frames is divided by the number of threads allocated to object detection. Each thread receives its own share of the divided frames and can work independently because no frames are shared between threads. Each thread runs a YOLOv5 model, invoked through ONNX (Open Neural Network eXchange), on its frames and stores the class, frame number, bounding-box coordinates, and size of every detected object in the database. At the same time, this information is read back from the database and the view is updated.

    For anomaly detection, one thread is allocated per detection method so the methods run in parallel. The input video is first split into segments, which serve as the detection units, and spatio-temporal feature embeddings are obtained with a 3D-CNN (also invoked through ONNX) pre-trained on large-scale action recognition data. Each thread then separates anomalous embeddings from the full set of embeddings using its assigned non-parametric outlier detection method. In this way, VIDEX keeps both stages parallel and gains a corresponding speed-up. A simplified sketch of the thread allocation for both stages appears after this list.


  • Dataflow Diagram

    The figure above is a dataflow diagram of VIDEX. Following the flow shows how the functions of VIDEX fit together.
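The M-V-VM split described above can be illustrated with a short sketch. This is a minimal illustration, not VIDEX's actual code: the names `DetectedObject`, `ObjectDetectionViewModel`, `Results`, and `Status` are assumed for the example. It only shows how a Model record and an `INotifyPropertyChanged` ViewModel connect database-bound data to the View.

```csharp
// Minimal M-V-VM sketch (illustrative names, not VIDEX's actual classes).
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Model: one detected object, as it would be stored in the database.
public class DetectedObject
{
    public int Frame { get; set; }
    public string Label { get; set; } = "";
    public double X { get; set; }
    public double Y { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

// ViewModel: exposes Model data to the View through data binding and
// raises PropertyChanged so the View refreshes automatically.
public class ObjectDetectionViewModel : INotifyPropertyChanged
{
    public ObservableCollection<DetectedObject> Results { get; } = new();

    private string _status = "Idle";
    public string Status
    {
        get => _status;
        set { _status = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler? PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string? name = null)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```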
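The multi-thread pipeline can be sketched in the same hedged way. The delegates `runModelOnFrame` and `detectors` below are placeholders for the ONNX model calls and database writes; the sketch only demonstrates the frame partitioning used for object detection and the one-thread-per-method parallelism used for anomaly detection.

```csharp
// Simplified sketch of the multi-thread pipeline; the delegates stand in for
// the YOLOv5/3D-CNN ONNX calls and database writes used by VIDEX.
using System;
using System.Linq;
using System.Threading.Tasks;

public static class Pipeline
{
    // Object detection: split the frame range evenly across the workers.
    // Each worker processes its own disjoint slice, so no frames are shared.
    public static void DetectObjects(int totalFrames, int threadCount,
                                     Action<int> runModelOnFrame)
    {
        int chunk = (totalFrames + threadCount - 1) / threadCount;
        var tasks = Enumerable.Range(0, threadCount).Select(t => Task.Run(() =>
        {
            int start = t * chunk;
            int end = Math.Min(start + chunk, totalFrames);
            for (int frame = start; frame < end; frame++)
                runModelOnFrame(frame); // detect, then store results in the DB
        }));
        Task.WaitAll(tasks.ToArray());
    }

    // Anomaly detection: one worker per outlier-detection method, all scanning
    // the same segment embeddings produced by the pre-trained 3D-CNN.
    public static void DetectAnomalies(float[][] segmentEmbeddings,
                                       Action<float[][]>[] detectors)
    {
        var tasks = detectors.Select(d => Task.Run(() => d(segmentEmbeddings)));
        Task.WaitAll(tasks.ToArray());
    }
}
```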


Additional Functions

  • Object Statistics Using OxyChart UI

  • Object Filter On/Off