Open Access
MATEC Web Conf.
Volume 277, 2019
2018 International Joint Conference on Metallurgical and Materials Engineering (JCMME 2018)
Article Number 02001
Number of page(s) 7
Section Data and Signal Processing
DOI https://doi.org/10.1051/matecconf/201927702001
Published online 02 April 2019
  1. Fukushima K. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position[J]. Biological Cybernetics, 1980, 36(4):193-202.
  2. LeCun Y, Boser B, Denker J S, et al. Backpropagation applied to handwritten zip code recognition[J]. Neural Computation, 1989, 1(4):541-551.
  3. Cireşan D, Meier U, Schmidhuber J. Multi-column deep neural networks for image classification[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2012:3642-3649.
  4. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
  5. Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems 25. Curran Associates Inc., 2012:1097-1105.
  6. Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]// European Conference on Computer Vision (ECCV). Springer, Cham, 2014:818-833.
  7. Mikolov T. Statistical language models based on neural networks[D]. PhD thesis, Brno University of Technology, 2012.
  8. Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[C]// Advances in Neural Information Processing Systems 27. 2014:3104-3112.
  9. Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.
  10. Mao J, Xu W, Yang Y, et al. Explain images with multimodal recurrent neural networks[J]. arXiv preprint arXiv:1410.1090, 2014.
  11. Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6):1137-1149.
  12. Visin F, Kastner K, Cho K, et al. ReNet: A recurrent neural network based alternative to convolutional networks[J]. arXiv preprint arXiv:1505.00393, 2015.
  13. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016:770-778.
  14. Krizhevsky A. Learning multiple layers of features from tiny images[R]. Technical report, University of Toronto, 2009.
  15. Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780.
  16. Cho K, van Merriënboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[J]. arXiv preprint arXiv:1406.1078, 2014.
  17. Le Q V, Jaitly N, Hinton G E. A simple way to initialize recurrent networks of rectified linear units[J]. arXiv preprint arXiv:1504.00941, 2015.
  18. Shi X, Chen Z, Wang H, et al. Convolutional LSTM network: A machine learning approach for precipitation nowcasting[C]// Advances in Neural Information Processing Systems 28. 2015:802-810.
  19. LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11):2278-2324.
  20. Hochreiter S. Recurrent neural net learning and vanishing gradient[J]. 1998.
  21. Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
  22. Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]// International Conference on Machine Learning (ICML). 2015:448-456.
