ResidualViT for Efficient
Temporally Dense Video Encoding


1 King Abdullah University of Science and Technology
2 Czech Institute of Informatics, Robotics and Cybernetics at the Czech Technical University in Prague
3 Adobe Research

ICCV 2025 - Highlight Paper

Abstract

Several video understanding tasks, such as natural language temporal video grounding, temporal activity localization, and audio description generation, require "temporally dense" reasoning over frames sampled at high temporal resolution. However, computing frame-level features for these tasks is computationally expensive given the temporal resolution requirements. In this paper, we make three contributions to reduce the cost of computing features for temporally dense tasks. First, we introduce a vision transformer (ViT) architecture, dubbed ResidualViT, that leverages the large temporal redundancy in videos to efficiently compute temporally dense frame-level features. Our architecture incorporates (i) learnable residual connections that ensure temporal consistency across consecutive frames and (ii) a token reduction module that enhances processing speed by selectively discarding temporally redundant information while reusing weights of a pretrained foundation model. Second, we propose a lightweight distillation strategy to approximate the frame-level features of the original foundation model. Finally, we evaluate our approach across four tasks and five datasets, in both zero-shot and fully supervised settings, demonstrating significant reductions in computational cost (up to \( 60\% \)) and improvements in inference speed (up to \( {\sim}2.5\times \) faster), all while closely approximating the accuracy of the original foundation model.
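The distillation strategy mentioned above amounts to aligning each ResidualViT frame feature with the corresponding feature of the frozen foundation model. Below is a minimal PyTorch sketch of such an objective; the cosine-distance form of the loss is an illustrative assumption, not necessarily the exact loss used in the paper.

import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat):
    """student_feat, teacher_feat: (B, D) frame-level features; the teacher is frozen."""
    student = F.normalize(student_feat, dim=-1)
    teacher = F.normalize(teacher_feat.detach(), dim=-1)
    return (1.0 - (student * teacher).sum(dim=-1)).mean()  # mean cosine distance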


Efficient Temporally Dense Video Encoding

(a) Naively encoding videos incurs a high computational cost when computing frame-level features for temporally dense tasks. (b) Our efficient interleaved approach significantly reduces this cost, enabling efficient temporally dense feature extraction. (c) On benchmark datasets, ResidualViT reduces the computational cost by an average of \( 56\% \) compared to the CLIP encoder while maintaining nearly identical accuracy across multiple downstream tasks: Natural Language Temporal Video Grounding (NLTVG), Temporal Activity Localization (TAL), Audio Description generation (AD), and Action Recognition (AR).
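To see where the savings come from, consider a back-of-envelope cost model for the interleaved scheme in (b): one full-cost frame is followed by N cheaply encoded frames. The values of N and the per-frame cost fraction of ResidualViT below are illustrative assumptions, chosen only to show that the average reduction lands in the regime reported above.

def avg_relative_cost(n_cheap, cheap_frac):
    """Average per-frame cost relative to running the full ViT on every frame."""
    return (1.0 + n_cheap * cheap_frac) / (n_cheap + 1)

# Illustrative example: 3 cheap frames per full-cost frame, each at ~25% of the
# full-frame cost, yields an average reduction of roughly 56%.
print(f"{1 - avg_relative_cost(3, 0.25):.0%} average cost reduction")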


ResidualViT Architecture

(a) Video frames are processed via two visual encoders \(\mathcal{E}_{\mathcal{V}}\) and \( \mathcal{E}_{\mathcal{S}} \) in an interleaved manner. For each frame encoded via the ViT \( \mathcal{E}_{\mathcal{V}} \), N subsequent frames are encoded using our lightweight ResidualViT \( \mathcal{E}_{\mathcal{S}} \), significantly reducing the computational cost. (b) ResidualViT incorporates a token reduction module \( \mathcal{R} \) to reduce computation and the residual tokenizer \( \mathcal{A} \) to ensure temporal consistency by propagating information from preceding frames.
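The interleaved encoding loop can be sketched in PyTorch as follows. All module names, and the way information from the preceding frame is propagated through the residual tokenizer, are illustrative placeholders rather than the released implementation.

import torch
import torch.nn as nn

class InterleavedEncoder(nn.Module):
    """Sketch of interleaved encoding: one full ViT frame, then N cheap frames."""
    def __init__(self, full_vit, light_vit, token_reduction, residual_tokenizer, n=3):
        super().__init__()
        self.full_vit = full_vit                      # E_V: pretrained ViT (e.g., CLIP image encoder)
        self.light_vit = light_vit                    # E_S: ResidualViT reusing the pretrained weights
        self.token_reduction = token_reduction        # R: keeps only a subset of patch tokens
        self.residual_tokenizer = residual_tokenizer  # A: learnable residual from the preceding frame
        self.n = n                                    # N cheap frames per full-cost frame

    def forward(self, frames):
        # frames: (T, C, H, W), sampled at high temporal resolution
        feats, prev = [], None
        for t in range(frames.shape[0]):
            frame = frames[t : t + 1]
            if t % (self.n + 1) == 0:
                feat = self.full_vit(frame)               # anchor frame: full-cost encoding
            else:
                tokens = self.token_reduction(frame)      # discard temporally redundant tokens
                residual = self.residual_tokenizer(prev)  # propagate preceding-frame information
                feat = self.light_vit(tokens, residual)   # cheap encoding
            feats.append(feat)
            prev = feat
        return torch.cat(feats, dim=0)                    # (T, D) frame-level features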


Encoding Time Efficiency

Varying the batch size, we compare the runtime of a standard ViT (\( \textbf{blue} \)) against our ResidualViT (\( \textbf{orange} \)). Our approach is \( {\sim}2.5\times \) faster than the standard ViT. Moreover, within the same time budget (i.e., 10 seconds), we can fit \( {\sim}2.5\times \) more samples in a batch without running into out-of-memory issues.
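A minimal timing harness for reproducing a wall-clock comparison of this kind is sketched below; the encoder construction, batch sizes, and input resolution are placeholder assumptions.

import time
import torch

@torch.no_grad()
def time_encoder(encoder, batch_size, n_iters=10, frame_shape=(3, 224, 224), device="cuda"):
    """Average forward-pass time of `encoder` over a batch of random frames."""
    frames = torch.randn(batch_size, *frame_shape, device=device)
    encoder = encoder.to(device).eval()
    encoder(frames)                  # warm-up pass
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_iters):
        encoder(frames)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_iters

# Hypothetical usage, sweeping the batch size for both encoders:
# for bs in (32, 64, 128, 256):
#     print(bs, time_encoder(standard_vit, bs), time_encoder(residual_vit, bs))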


BibTeX

@inproceedings{soldan2025residualvit,
  title={ResidualViT for Efficient Temporally Dense Video Encoding},
  author={Soldan, Mattia and Caba Heilbron, Fabian and Ghanem, Bernard and Sivic, Josef and Russell, Bryan},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}

Acknowledgments

This work was supported by King Abdullah University of Science and Technology (KAUST), Center of Excellence for Generative AI under award number 5940. We also thank Adobe Research for their support and collaboration.

Special thanks to all collaborators and contributors who provided feedback and insight during the development of this work.