
Commit

update ego-exo4d
coolbay committed Feb 27, 2024
1 parent 6dee917 commit eca1f86
Showing 2 changed files with 6 additions and 6 deletions.
content/publication/Ego-exo4D/cite.bib (6 changes: 3 additions & 3 deletions)
@@ -1,6 +1,6 @@
-@article{grauman2023egoexo4d,
+@article{grauman2024egoexo4d,
 title={Ego-Exo4D: Understanding Skilled Human Activity from First-and Third-Person Perspectives},
 author={Grauman, Kristen and Westbury, Andrew and Torresani, Lorenzo and Kitani, Kris and Malik, Jitendra and Afouras, Triantafyllos and Ashutosh, Kumar and Baiyya, Vijay and Bansal, Siddhant and Boote, Bikram and others},
-journal={arXiv preprint arXiv:2311.18259},
-year={2023}
+booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+year={2024}
 }
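
Side note, not part of this commit: the updated entry keeps the @article type while switching to a booktitle field, which mixes entry kinds; standard BibTeX uses @inproceedings for conference papers. A minimal sketch of the conventional form, with the author list abbreviated:

    @inproceedings{grauman2024egoexo4d,
      title     = {Ego-Exo4D: Understanding Skilled Human Activity from First-and Third-Person Perspectives},
      author    = {Grauman, Kristen and Westbury, Andrew and others},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year      = {2024}
    }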
content/publication/Ego-exo4D/index.md (6 changes: 3 additions & 3 deletions)
@@ -2,7 +2,7 @@
 title: "Ego-Exo4D: Understanding Skilled Human Activity from First-and Third-Person Perspectives"

 publication_types:
-- "3"
+- "1"
 authors:
 - admin
 - with other 100 authors
@@ -91,8 +91,8 @@ authors:
 # - Lorenzo Torresani+
 # - Mingfei Yan+
 # - Jitendra Malik
-publication: arxiv 2023
-publication_short: arxiv 2023
+publication: IEEE Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024
+publication_short: CVPR 2024
 abstract: "We present Ego-Exo4D, a diverse, large-scale multimodal multiview video dataset and benchmark challenge. Ego-Exo4D centers around simultaneously-captured egocentric and exocentric video of skilled human activities (e.g., sports, music, dance, bike repair). More than 800 participants from 13 cities worldwide performed these activities in 131 different natural scene contexts, yielding long-form captures from 1 to 42 minutes each and 1,422 hours of video combined. The multimodal nature of the dataset is unprecedented: the video is accompanied by multichannel audio, eye gaze, 3D point clouds, camera poses, IMU, and multiple paired language descriptions -- including a novel `expert commentary' done by coaches and teachers and tailored to the skilled-activity domain. To push the frontier of first-person video understanding of skilled human activity, we also present a suite of benchmark tasks and their annotations, including fine-grained activity understanding, proficiency estimation, cross-view translation, and 3D hand/body pose. All resources will be open sourced to fuel new research in the community."
 draft: false
 featured: false
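
For context, the numeric publication_types codes follow the Hugo Academic / Wowchemy front-matter scheme, where (as assumed here) "3" marks a preprint and "1" a conference paper, so the change above reclassifies the entry to match the CVPR acceptance. A minimal sketch of the resulting front-matter fields:

    publication_types:
    - "1"   # conference paper ("3" = preprint in the Wowchemy scheme)
    publication: IEEE Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024
    publication_short: CVPR 2024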
