Open Access

| | |
|---|---|
| Issue | MATEC Web Conf. Volume 419, 2026: International Conference on Mechanical and Materials Engineering (ICMME 2025) |
| Article Number | 01019 |
| Number of page(s) | 10 |
| DOI | https://doi.org/10.1051/matecconf/202641901019 |
| Published online | 18 March 2026 |

