| | |
|---|---|
| Issue | MATEC Web Conf., Volume 413, 2025 |
| Conference | International Conference on Measurement, AI, Quality and Sustainability (MAIQS 2025) |
| Article Number | 06003 |
| Number of page(s) | 5 |
| Section | Artificial Intelligence in Societies |
| DOI | https://doi.org/10.1051/matecconf/202541306003 |
| Published online | 01 October 2025 |
Visual transformer with depthwise separable convolution projections for video-based human action recognition
1 Department of Computer Science, Brunel University of London, Kingston Lane, Uxbridge, United Kingdom
2 Henan Key Laboratory on Public Opinion Intelligent Analysis, Zhongyuan University of Technology, Zhengzhou, China
Abstract
Human action recognition is the task of recognizing human actions in videos using algorithms. Transformer-based approaches have attracted growing attention in recent years. However, transformer networks often converge slowly and require large amounts of training data, because they cannot prioritize information from neighboring pixels. To address these issues, we propose a novel network architecture that combines a depthwise separable convolution layer with transformer modules. The proposed network was evaluated on the medium-sized benchmark dataset UCF101, and the results demonstrate that it converges quickly during training and achieves performance competitive with state-of-the-art pure transformer networks while using approximately 7.4 million fewer parameters.
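The parameter saving behind this design can be seen from a simple count: a standard convolution learns one k×k filter per (input channel, output channel) pair, whereas a depthwise separable convolution factors this into one k×k filter per input channel followed by a 1×1 pointwise convolution that mixes channels. The sketch below is an illustration only, not the authors' code; the kernel size and channel width (3×3, 768 channels, a common transformer embedding dimension) are assumed values for demonstration, not figures from the paper.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a depthwise separable convolution (bias omitted):
    one k x k filter per input channel, then a 1 x 1 pointwise conv."""
    depthwise = c_in * k * k    # per-channel spatial filtering
    pointwise = c_in * c_out    # 1 x 1 conv mixing channels
    return depthwise + pointwise

# Assumed illustrative shapes: 3 x 3 kernel, 768 input/output channels.
std = standard_conv_params(768, 768, 3)        # 5,308,416 weights
sep = depthwise_separable_params(768, 768, 3)  # 596,736 weights
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For these assumed shapes the separable form needs roughly 9× fewer weights than the standard convolution, which is the kind of saving that makes it attractive as a projection layer inside a transformer block.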
© The Authors, published by EDP Sciences, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

