update projects
yurimjeon1892 committed Jan 3, 2024
1 parent d6211fb commit f791270
Showing 4 changed files with 27 additions and 21 deletions.
19 changes: 12 additions & 7 deletions _posts/2020-06-01-project-lidar-object-detection.md
@@ -22,19 +22,24 @@ excerpt: LiDAR sensors are resilient to light and weather variations, accurately
<br>


LiDAR sensors are resilient to light and weather variations, accurately measuring positions, which is vital for object detection algorithms to provide precise distance information about surrounding objects. Thus, the use of lidar sensors is essential for autonomous driving. This project focuses on developing a deep learning-based object detection algorithm using LiDAR for autonomous vehicles driving in urban environments. As the project lead, I participated in the entire project, from designing the deep learning model to implementing the source code for execution in a ROS environment.
LiDAR sensors are essential for autonomous driving, as they are resilient to light and weather changes and provide accurate location information. This project focuses on developing a deep learning-based object detection algorithm using LiDAR for autonomous vehicles driving in urban environments.

As the project lead, I was involved in the entire project, from designing the deep learning model to implementing the source code for execution in a ROS environment.

Project objectives:
* The final development outcome should be provided in the form of executable source code for the ROS environment.
* The developed object detection algorithm should meet the following performance criteria:
  * Execution time on NVIDIA GeForce GTX 1080 Ti or RTX 2080 machines with an Intel Core i7 should be 50 ms or less (see the timing sketch after this list).
  * The mean average precision (mAP) difference compared to the state-of-the-art (SOTA) object detection algorithm should be 5% or less.
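
For context on the 50 ms budget, the sketch below shows one way to measure per-frame GPU inference latency in PyTorch. It is a minimal illustration, not the project's benchmarking code; `detector` and the dummy input are placeholder names.

```python
import time
import torch

def measure_latency_ms(model, example_input, warmup=10, iters=100):
    """Average per-frame GPU inference time in milliseconds."""
    model.eval().cuda()
    example_input = example_input.cuda()
    with torch.no_grad():
        for _ in range(warmup):          # exclude CUDA initialisation cost
            model(example_input)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(example_input)
        torch.cuda.synchronize()         # wait for queued kernels to finish
    return (time.perf_counter() - start) / iters * 1000.0

# Usage with placeholder names: measure_latency_ms(detector, dummy_features) <= 50.0
```

Synchronizing before and after the timed loop matters because CUDA kernels execute asynchronously; without it the measured time would understate the true latency.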

My contributions to the project:
* I implemented the deep learning algorithm using Python with PyTorch and later converted it to C++ with ROS.
* To improve processing speed, I implemented both pre-processing and post-processing stages of the object detection algorithm in C++.
* The deep learning model was converted to ONNX format for enhanced compatibility. In cases where ONNX conversion was not possible, I used CUDA for improved computation speed.

As a result, the project team achieved the following project objectives:
My contributions to the project include:

* Designing the deep learning algorithm using Python with PyTorch and later converting it to C++ with ROS.
* Developing both pre-processing and post-processing stages of the object detection algorithm in C++ to improve processing speed.
* Converting the deep learning model to ONNX format for enhanced compatibility (see the export sketch after this list). In cases where ONNX conversion was not possible, I utilized CUDA for improved computation speed.
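
To make the ONNX bullet concrete, here is a minimal export sketch. The stand-in network, tensor shape, file name, and output name are illustrative assumptions; the post does not describe the project's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the trained detection head; not the real network.
model = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.Conv2d(32, 14, 1))
model.eval()

dummy_input = torch.randn(1, 64, 496, 432)    # e.g. a pillar-style feature map

torch.onnx.export(
    model,
    dummy_input,
    "lidar_detector.onnx",                    # illustrative file name
    input_names=["features"],
    output_names=["preds"],
    opset_version=11,
    dynamic_axes={"features": {0: "batch"}},  # allow variable batch size
)
```

The exported graph can then be loaded from a C++/ROS node through a runtime such as ONNX Runtime or TensorRT.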

As a result, we achieved the following project objectives:

* Execution time within 40 ms on the testbed.
* mAP difference of 5% or less when compared to the state-of-the-art PointPillars.
* mAP difference of 5% or less compared to the state-of-the-art PointPillars.
9 changes: 6 additions & 3 deletions _posts/2021-02-01-project-driving-intelligence.md
@@ -2,14 +2,17 @@
layout: post
title: Research on human-level driving intelligence for autonomous driving of unmanned vehicles
categories: Project
excerpt: During urban driving, drivers encounter a vast amount of information, including other vehicles, pedestrians, cyclists, traffic lights, and traffic signs. Human drivers selectively focus on and concentrate on essential information for driving, allowing them to make fast decisions. This project aims to develop driving intelligence that emulates the efficient selection and concentration observed in human perception systems, allowing for effective decision-making.
excerpt: While driving, humans encounter abundant information, selectively focus on crucial details, and make rapid decisions. This project aims to develop driving intelligence that replicates the efficient selection-and-concentration process observed in human perception systems.
# excerpt_separator: <!--more-->
---

**Government project, Seoul National University**


During urban driving, drivers encounter a vast amount of information, including other vehicles, pedestrians, cyclists, traffic lights, and traffic signs. Human drivers selectively focus on and concentrate on essential information for driving, allowing them to make fast decisions. This project aims to develop driving intelligence that emulates the efficient selection and concentration observed in human perception systems, allowing for effective decision-making.
While driving, humans encounter abundant information, selectively focus on crucial details, and make rapid decisions. This project aims to develop driving intelligence that replicates the efficient selection-and-concentration process observed in human perception systems.


The project team incorporated the attention mechanism from the field of deep learning into driving intelligence. The attention mechanism highlights important parts of the data and omits less relevant portions, allowing for high-precision perception with relatively fewer computations. As a result of research and development, the project team developed an efficient perception system that can process a large amount of data collected from urban driving, enabling fast computation and accurate perception.
We applied the attention mechanism to driving intelligence, emphasizing crucial information and disregarding less important details. This allows for high-precision perception with fewer computations.
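
As a rough illustration of this selection-and-concentration idea (not the project's actual network), scaled dot-product attention weights a large set of perception features so that the relevant ones dominate the result:

```python
import torch
import torch.nn.functional as F

def attend(query, keys, values):
    """Scaled dot-product attention: weight `values` by relevance to `query`."""
    d = query.size(-1)
    scores = query @ keys.transpose(-2, -1) / d ** 0.5  # similarity of query to each key
    weights = F.softmax(scores, dim=-1)                  # emphasis on relevant elements
    return weights @ values                              # weighted summary of the scene

# e.g. one query over 1000 candidate object features of dimension 128:
q = torch.randn(1, 1, 128)
kv = torch.randn(1, 1000, 128)
summary = attend(q, kv, kv)   # shape (1, 1, 128)
```

Because less relevant elements receive near-zero weights, downstream modules can work with a compact summary rather than the full scene.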


The developed perception system processes real-time 2D and 3D data in urban environments, delivering high-accuracy perception results for safe autonomous driving.
8 changes: 3 additions & 5 deletions _posts/2022-11-01-project-auto-label.md
@@ -2,14 +2,12 @@
layout: post
title: Development of automatic labeling tool for autonomous driving dataset generation
categories: Project
excerpt: Creating high-quality large-scale datasets is a essential element in artificial intelligence research. To minimize the labor involved in dataset creation and enhance dataset quality, this project aimed to develop an automatic labeling tool that effectively reduces the human resources cost used for dataset creation.
excerpt: Creating high-quality, large-scale datasets is crucial in artificial intelligence research. This project aims to develop an automatic labeling system to reduce human resource costs in dataset creation and improve dataset quality.
# excerpt_separator: <!--more-->
---

**Thordrive**

Creating high-quality, large-scale datasets is crucial in artificial intelligence research. This project aims to develop an automatic labeling system to reduce human resource costs in dataset creation and improve dataset quality.

Creating high-quality large-scale datasets is a essential element in artificial intelligence research. To minimize the labor involved in dataset creation and enhance dataset quality, this project aims to develop an automatic labeling tool that effectively reduces the human resources cost used for dataset creation.


In my role as a deep learning engineer on the project team, I was responsible for the design and development of the deep learning engine for multi-sensor object detection. This engine integrates into the automatic labeling tool to detect objects in various environments, assigning object class, size, position, rotation, and tracking IDs. As a result, the automatic labeling tool was able to reduce human labor requirements for dataset creation by up to 30%. This achievement not only streamlines the data labeling workflow but also contributes to the scalability of autonomous driving research and development.
As a deep learning engineer, I designed and developed a deep learning engine for multi-sensor object detection. This engine predicts object class, position, scale, rotation, and track IDs from the collected raw data and generates high-quality annotations for autonomous driving research. Our automatic labeling system can reduce human resource costs for dataset creation by up to 30%.
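
As an illustration only, an auto-generated label carrying the fields mentioned above might be serialized along these lines; the schema and field names are assumptions, not the tool's actual format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AutoLabel:
    """One auto-generated 3D annotation; field names are illustrative."""
    track_id: int
    category: str       # e.g. "car", "pedestrian", "cyclist"
    position: tuple     # (x, y, z) in the sensor frame, metres
    scale: tuple        # (length, width, height), metres
    yaw: float          # rotation around the vertical axis, radians
    score: float        # detector confidence, used to flag labels for human review

label = AutoLabel(track_id=17, category="car",
                  position=(12.3, -1.8, 0.9), scale=(4.5, 1.9, 1.6),
                  yaw=0.12, score=0.87)
print(json.dumps(asdict(label)))   # one record of the exported annotation file
```
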
12 changes: 6 additions & 6 deletions _posts/2023-02-01-project-off-road-perception.md
@@ -2,7 +2,7 @@
layout: post
title: Development of perception system for unmanned vehicles in off-road scenarios
categories: Project
excerpt: Autonomous driving on structured urban roads has been widely researched to date. However, unstructured off-road environments present new challenges in autonomous driving research. This project aims to develop a perception system for unmanned robots in off-road scenarios.
excerpt: Unstructured off-road environments pose new challenges for autonomous driving. This project focuses on developing a perception system for unmanned vehicles in off-road scenarios.
---

**Government project, Seoul National University**
@@ -11,11 +11,11 @@ excerpt: Autonomous driving on structured urban roads has been widely researched
<img src="{{ "/assets/off-road.png" | relative_url }}">
</figure>

Autonomous driving on structured urban roads has been widely researched to date. However, unstructured off-road environments present new challenges in autonomous driving research. This project aims to develop a perception system for unmanned robots in off-road scenarios.
Unstructured off-road environments pose new challenges for autonomous driving. This project focuses on developing a perception system for unmanned vehicles in off-road scenarios.

From a perception perspective, off-road environments exhibit the following characteristics:
Off-road environments have the following characteristics:

* Ambiguous definition of traversable space: In off-road, the driving intelligence must comprehensively consider the given spatial and visual data to distinguish traversable spaces.
* Environmental variations with seasons: Even in the same area, the environment can look entirely different in lush summer and snowy winter.
* Ambiguous definition of traversable space: In off-road scenarios, driving intelligence must consider spatial and visual data comprehensively to distinguish traversable spaces.
* Environmental changes according to seasons: Even in the same area, the environment can look entirely different in the dense foliage of summer compared to the snow-covered winter.

Taking these features into account, the project team developed a perception system that demonstrates robust performance in off-road driving.
We successfully developed a perception system that supports safe driving in off-road environments and completed unmanned exploration experiments in mountainous terrain.
