New paper: Transferable-guided Attention Is All You Need for Video Domain Adaptation

We are delighted to announce that our paper, “Transferable-guided Attention Is All You Need for Video Domain Adaptation”, has been accepted for publication at the prestigious IEEE/CVF Winter Conference on Applications of Computer Vision (WACV’25) [link], taking place in Tucson, AZ, from February 28 to March 4, 2025.

Paper Highlights:

  • Tackling Video UDA Challenges: This work addresses the unique complexities of Unsupervised Domain Adaptation (UDA) in videos, a significantly underexplored area compared to image-based UDA.
  • Novel Attention Mechanism: The proposed Transferable-guided Attention (TransferAttn) framework incorporates a Domain Transferable-guided Attention Block (DTAB) into a Vision Transformer (ViT) to enhance spatio-temporal transferability.
  • State-of-the-Art Validation: Extensive experiments on datasets like UCF-HMDB, Kinetics-Gameplay, and Kinetics-NEC Drone demonstrate the framework’s superior performance across different backbones, including ResNet101, I3D, and STAM.

This research pioneers the integration of advanced ViT architectures with UDA for video action recognition, offering new possibilities for adapting models to diverse video datasets with minimal annotation.
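At a high level, the idea highlighted above is to slot a domain-transfer-oriented attention block into a standard ViT encoder stack. The sketch below is purely illustrative and is not the paper's DTAB: it shows only the generic pattern of a residual self-attention block inside an encoder stack, and every name, shape, and placement choice here is an assumption for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over a sequence of frame tokens.

    x: (T, d) array of token embeddings; Wq/Wk/Wv: (d, d) projections.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled dot-product
    return softmax(scores) @ v

# Hypothetical encoder stack: plain attention blocks with residual
# connections. In the paper, one such slot would host the DTAB; the
# actual block design and placement are described in the paper itself.
rng = np.random.default_rng(0)
T, d = 5, 8                      # frame tokens x embedding dim (illustrative)
x = rng.standard_normal((T, d))
for _ in range(3):               # three stacked blocks
    W = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
    x = x + self_attention(x, *W)  # residual connection preserves shape
print(x.shape)
```

The residual form means any drop-in block (like a transfer-guided one) keeps the token sequence's shape, which is what makes this kind of architectural insertion straightforward.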
