diff --git a/index.html b/index.html
index 9d4d581048..51212d3f11 100644
--- a/index.html
+++ b/index.html
@@ -10,6 +10,13 @@
+
+
+
@@ -20,11 +27,11 @@

Jiazhao Zhang | 张嘉曌

-
- I am a Ph.D. student at the Center on Frontiers of Computing Studies , Peking University, advised by Prof. He Wang since 2022. Before this, I obtained my M.S. degree and B.Eng. degree from NUDT and Shandong University, respectively. During my master's studies, I was fortunate to be supervised by Prof. Kai Xu and had the opportunity to work closely with Prof. Chenyang Zhu.
+
+ I am a Ph.D. student at the Center on Frontiers of Computing Studies at Peking University, where I have been advised by Prof. He Wang since 2022. Prior to this, I earned my M.S. degree from the National University of Defense Technology (NUDT) under the supervision of Prof. Kai Xu. I received my B.Eng. degree from Shandong University.

- I'm interested in indoor scene reconstruction, understanding and interaction. More specifically, I work on building robust and practical systems for home robots.
+ My research goal is to develop intelligent and practical robots that enhance people's daily lives. My current research focuses on building intelligent navigation robots based on vision-language models. I am also interested in scene reconstruction and understanding, including techniques such as SLAM and segmentation.

@@ -94,13 +101,11 @@
Paper
/
- Code
- /
Webpage

- NaVid makes the first endeavour to showcase the capability of VLMs to achieve state-of-the-art level navigation performance without any maps, odometer and depth inputs. Following human instruction, NaVid only requires an on-the-fly video stream from a monocular RGB camera equipped on the robot to output the next-step action.
+ NaVid makes the first endeavour to showcase the capability of VLMs to achieve state-of-the-art navigation performance without any maps, odometry, or depth inputs. Following a human instruction, NaVid requires only an on-the-fly video stream from a monocular RGB camera mounted on the robot to output the next-step action.

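As a rough sketch of the interface this blurb describes, the hypothetical Python snippet below shows a policy that consumes only a language instruction and the accumulated monocular RGB frames and returns the next-step action; all class, function, and action names are illustrative placeholders, not NaVid's actual code.

# Hypothetical sketch of the interface described above: a policy that maps a
# language instruction plus the on-the-fly monocular RGB video stream to the
# next-step action, with no map, odometry, or depth input. Placeholder names.
from dataclasses import dataclass, field
from typing import Any, List

Action = str  # e.g. "FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"

@dataclass
class VideoLanguageNavigator:
    instruction: str
    frames: List[Any] = field(default_factory=list)  # RGB frames observed so far

    def step(self, new_rgb_frame: Any) -> Action:
        """Append the newest frame and query the VLM for the next action."""
        self.frames.append(new_rgb_frame)
        return self._query_vlm()

    def _query_vlm(self) -> Action:
        # Placeholder: a real system would encode the frames and instruction,
        # run the vision-language model, and decode one of the discrete actions.
        return "FORWARD"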
@@ -426,7 +431,7 @@
Paper
/
- Code & Data
+ Contact me for code access

To achieve efficient random optimization in the 18D state space of IMU tracking, we propose to identify the active subspace and sample particles from it.
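As a rough illustration of this idea, the minimal Python sketch below estimates an active subspace from sampled gradients and then draws candidate particles only along those directions; the cost function, subspace dimension, and sampling parameters are placeholder assumptions rather than the paper's actual implementation.

# Minimal sketch of active-subspace particle sampling (placeholder assumptions,
# not the paper's implementation). `cost` stands in for the real IMU-tracking
# objective over an 18-D state.
import numpy as np

def estimate_active_subspace(cost, x, n_grads=64, k=4, eps=1e-3):
    """Estimate a k-dimensional active subspace from finite-difference
    gradients of `cost` sampled around the current state x."""
    d = x.size
    grads = np.zeros((n_grads, d))
    for i in range(n_grads):
        x0 = x + 0.01 * np.random.randn(d)      # perturb around the current state
        f0 = cost(x0)
        for j in range(d):                      # forward-difference gradient
            step = np.zeros(d)
            step[j] = eps
            grads[i, j] = (cost(x0 + step) - f0) / eps
    # Right singular vectors of the stacked gradients span the directions
    # along which the cost varies most: the active subspace.
    _, _, vt = np.linalg.svd(grads, full_matrices=False)
    return vt[:k].T                             # (d, k) orthonormal basis

def sample_particles(cost, x, basis, n_particles=256, sigma=0.05):
    """Draw particles only along the active directions; keep the best one."""
    coeffs = sigma * np.random.randn(n_particles, basis.shape[1])
    candidates = np.vstack([x, x + coeffs @ basis.T])  # include x so cost never worsens
    costs = np.array([cost(c) for c in candidates])
    return candidates[np.argmin(costs)]

if __name__ == "__main__":
    # Toy quadratic objective in place of the real 18-D IMU-tracking cost.
    target = np.random.default_rng(0).normal(size=18)
    cost = lambda x: float(np.sum((x - target) ** 2))
    x = np.zeros(18)
    for _ in range(10):
        basis = estimate_active_subspace(cost, x)
        x = sample_particles(cost, x, basis)
    print("final cost:", cost(x))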