Open Access
MATEC Web Conf.
Volume 277, 2019
2018 International Joint Conference on Metallurgical and Materials Engineering (JCMME 2018)
Article Number 02028
Number of page(s) 8
Section Data and Signal Processing
Published online 02 April 2019
  1. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
  2. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
  3. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  4. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017.
  5. Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In CVPR, 2009.
  6. Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
  7. Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, pages 396-404, 1990.
  8. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In CVPR, 2016.
  9. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In ECCV, 2016.
  10. R. Girshick. Fast R-CNN. In ICCV, 2015.
  11. S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
  12. T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
  13. Nancy Kanwisher and Jon Driver. Objects, attributes, and visual attention: Which, what, and where. Current Directions in Psychological Science, 1992.
  14. Anne M. Treisman and Garry Gelade. A feature-integration theory of attention. Cognitive Psychology, 12(1):97-136, 1980.
  15. Vittorio Ferrari and Andrew Zisserman. Learning visual attributes. In NIPS, 2008.
  16. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2008 (VOC2008) Results, 2008.
  17. Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
  18. Gang Wang and David Forsyth. Joint learning of visual attributes, object classes and visual saliency. In ICCV, 2009.
  19. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  20. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  21. François Chollet et al. Keras, 2015.