MATEC Web Conf.
Volume 232, 2018
2018 2nd International Conference on Electronic Information Technology and Computer Engineering (EITCE 2018)
Number of page(s): 7
Section: Network Security System, Neural Network and Data Information
Published online: 19 November 2018
- Krizhevsky, Alex, I. Sutskever, and G. E. Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems, 1097-1105 (2012)
- Girshick, Ross, et al. “Region-based Convolutional Networks for Accurate Object Detection and Segmentation.” IEEE Transactions on Pattern Analysis & Machine Intelligence 38.1: 142-158 (2015)
- Devlin, Jacob, et al. “Language Models for Image Captioning: The Quirks and What Works.” arXiv preprint (2015)
- Fang, H., et al. “From Captions to Visual Concepts and Back.” IEEE Conference on Computer Vision and Pattern Recognition, 1473-1482 (2015)
- Cho, Kyunghyun, et al. “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation.” Conference on Empirical Methods in Natural Language Processing (2014)
- Hochreiter, Sepp, and J. Schmidhuber. “Long Short-Term Memory.” Neural Computation 9.8: 1735-1780 (1997)
- Karpathy, Andrej, and F. F. Li. “Deep Visual-Semantic Alignments for Generating Image Descriptions.” IEEE Conference on Computer Vision and Pattern Recognition, 3128-3137 (2015)
- Sermanet, Pierre, et al. “OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks.” arXiv preprint (2013)
- Sundermeyer, M., et al. “Comparison of Feedforward and Recurrent Neural Network Language Models.” IEEE International Conference on Acoustics, Speech and Signal Processing, 8430-8434 (2013)
- Simonyan, Karen, and A. Zisserman. “Very Deep Convolutional Networks for Large-Scale Image Recognition.” arXiv preprint (2014)
- Szegedy, Christian, et al. “Going Deeper with Convolutions.” IEEE Conference on Computer Vision and Pattern Recognition, 1-9 (2015)
- He, Kaiming, et al. “Deep Residual Learning for Image Recognition.” IEEE Conference on Computer Vision and Pattern Recognition, 770-778 (2016)
- Mao, Junhua, et al. “Explain Images with Multimodal Recurrent Neural Networks.” arXiv preprint (2014)
- Vinyals, Oriol, et al. “Show and Tell: A Neural Image Caption Generator.” IEEE Conference on Computer Vision and Pattern Recognition, 3156-3164 (2015)
- Xu, Kelvin, et al. “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.” International Conference on Machine Learning, 2048-2057 (2015)
- Papineni, K., et al. “BLEU: a Method for Automatic Evaluation of Machine Translation.” (2001)
- Banerjee, Satanjeev, and A. Lavie. “METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments.” ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (2005)
- Lin, Chin-Yew. “ROUGE: A Package for Automatic Evaluation of Summaries.” Text Summarization Branches Out, ACL Workshop (2004)
- Vedantam, Ramakrishna, C. L. Zitnick, and D. Parikh. “CIDEr: Consensus-based Image Description Evaluation.” IEEE Conference on Computer Vision and Pattern Recognition, 4566-4575 (2015)
- Anderson, Peter, et al. “SPICE: Semantic Propositional Image Caption Evaluation.” European Conference on Computer Vision, 382-398 (2016)
- Ranzato, Marc’Aurelio, et al. “Sequence Level Training with Recurrent Neural Networks.” arXiv preprint (2015)
- Kalchbrenner, Nal, E. Grefenstette, and P. Blunsom. “A Convolutional Neural Network for Modelling Sentences.” arXiv preprint (2014)
- Aneja, Jyoti, A. Deshpande, and A. Schwing. “Convolutional Image Captioning.” arXiv preprint (2017)
- Gu, Jiuxiang, et al. “Stack-Captioning: Coarse-to-Fine Learning for Image Captioning.” AAAI Conference on Artificial Intelligence (2018)