
Commit

add dr2net
coolbay committed Jan 9, 2024
1 parent 31c4a2f commit 5911182
Showing 5 changed files with 54 additions and 3 deletions.
2 changes: 1 addition & 1 deletion content/authors/admin/_index.md
@@ -32,7 +32,7 @@ education:
- course: Research Intern
institution: National Institute of Informatics (NII), Tokyo, Japan
year: 2016
-  - course: CSC Joint Ph.D. student
+  - course: Joint Ph.D. student
institution: University of Washington (UW), Seattle, USA
year: 2012
- course: B.Eng. in Software Engineering
7 changes: 7 additions & 0 deletions content/publication/Dr2Net/cite.bib
@@ -0,0 +1,7 @@
@inproceedings{zhao2024dr2net,
title={{Dr2Net}: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning},
author={Zhao, Chen and Liu, Shuming and Mangalam, Karttikeya and Qian, Guocheng and Zohra, Fatimah and Alghannam, Abdulmohsen and Malik, Jitendra and Ghanem, Bernard},
booktitle={arXiv:2401.04105v1},
year={2024}
}

Binary file added content/publication/Dr2Net/featured.png
44 changes: 44 additions & 0 deletions content/publication/Dr2Net/index.md
@@ -0,0 +1,44 @@
---
title: "Dr2Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning"
publication_types:
- "2"
authors:
- admin
- Shuming Liu
- Karttikeya Mangalam
- Guocheng Qian
- Fatimah Zohra
- Abdulmohsen Alghannam
- Jitendra Malik
- Bernard Ghanem
publication: arXiv:2401.04105
publication_short: arXiv 2024
abstract: "Large pretrained models are increasingly crucial in modern computer vision tasks. These models are typically used in downstream tasks by end-to-end finetuning, which is highly memory-intensive for tasks with high-resolution data, e.g., video understanding, small object detection, and point cloud analysis. In this paper, we propose Dynamic Reversible Dual-Residual Networks, or Dr2Net, a novel family of network architectures that acts as a surrogate network to finetune a pretrained model with substantially reduced memory consumption. Dr2Net contains two types of residual connections, one maintaining the residual structure in the pretrained models, and the other making the network reversible. Due to its reversibility, intermediate activations, which can be reconstructed from output, are cleared from memory during training. We use two coefficients on either type of residual connections respectively, and introduce a dynamic training strategy that seamlessly transitions the pretrained model to a reversible network with much higher numerical precision. We evaluate Dr2Net on various pretrained models and various tasks, and show that it can reach comparable performance to conventional finetuning but with significantly less memory usage."

draft: false
featured: true
tags:
- Deep learning
- Computer vision
- Reversible networks
- Memory-efficient finetuning
slides: ""
url_pdf: https://arxiv.org/pdf/2401.04105.pdf
image:
caption: ""
focal_point: ""
preview_only: false
filename: featured.png
url_dataset: ""
url_project: ""
url_source: ""
url_video: ""
author_notes: []
doi: ""
projects: []
date: 2024-01-04T00:00:02.020Z
url_slides: ""
publishDate: 2024
url_poster: ""
url_code: ""
---
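The abstract in the index.md above hinges on one mechanism: a reversible coupling whose inputs can be recovered exactly from its outputs, so intermediate activations need not be stored during finetuning. The following minimal Python sketch illustrates that idea only; the functions `f`, `g` and the coefficients `alpha`, `beta` are hypothetical stand-ins, not the paper's actual modules or values.

```python
# Illustrative sketch of a reversible dual-residual coupling, in the spirit
# of the Dr2Net abstract. Not the paper's implementation: f, g, alpha, beta
# are placeholder choices for demonstration.
import math

def forward(x1, x2, f, g, alpha, beta):
    # Two residual paths, each scaled by its own coefficient.
    y1 = beta * x1 + alpha * f(x2)
    y2 = beta * x2 + alpha * g(y1)
    return y1, y2

def inverse(y1, y2, f, g, alpha, beta):
    # Inputs are recovered exactly from the outputs, so intermediate
    # activations can be cleared from memory and rebuilt on demand.
    x2 = (y2 - alpha * g(y1)) / beta
    x1 = (y1 - alpha * f(x2)) / beta
    return x1, x2

if __name__ == "__main__":
    f, g = math.tanh, math.sin        # arbitrary example sub-modules
    y1, y2 = forward(0.7, -1.3, f, g, alpha=0.1, beta=1.0)
    x1, x2 = inverse(y1, y2, f, g, alpha=0.1, beta=1.0)
    print(abs(x1 - 0.7) < 1e-9, abs(x2 + 1.3) < 1e-9)
```

The round trip is exact up to floating-point error regardless of what `f` and `g` compute, which is what lets a reversible surrogate trade recomputation for activation memory.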
4 changes: 2 additions & 2 deletions content/publication/LAE/index.md
@@ -36,9 +36,9 @@ url_video: ""
author_notes: []
doi: ""
projects: []
-date: 2023-07-16T00:00:02.020Z
+date: 2021-01-16T00:00:02.020Z
url_slides: ""
-publishDate: 2023
+publishDate: 2021
url_poster: ""
url_code:
---
