Open Access

MATEC Web Conf., Volume 277, 2019
2018 International Joint Conference on Metallurgical and Materials Engineering (JCMME 2018)

Article Number: 02034
Number of page(s): 14
Section: Data and Signal Processing
DOI: https://doi.org/10.1051/matecconf/201927702034
Published online: 02 April 2019
- Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," in CVPR, 2017.
- S. Laraba, M. Brahimi, J. Tilmanne, and T. Dutoit, "3D skeleton-based action recognition by representing motion capture sequences as 2D-RGB images," Computer Animation and Virtual Worlds, vol. 28, no. 3-4, 2017.
- A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, "NTU RGB+D: A large scale dataset for 3D human activity analysis," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
- Y. Du, Y. Fu, and L. Wang, "Skeleton based action recognition with convolutional neural network," in Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on, pp. 579-583, IEEE, 2015.
- F. Ofli, R. Chaudhry, G. Kurillo, R. Vidal, and R. Bajcsy, "Berkeley MHAD: A comprehensive multimodal human action database," in Applications of Computer Vision (WACV), 2013 IEEE Workshop on, pp. 53-60, IEEE, 2013.
- Z. Ding, P. Wang, P. O. Ogunbona, and W. Li, "Investigation of different skeleton features for CNN-based 3D action recognition," in Multimedia & Expo Workshops (ICMEW), 2017 IEEE International Conference on, pp. 617-622, IEEE, 2017.
- Q. Ke, M. Bennamoun, S. An, F. Sohel, and F. Boussaid, "A new representation of skeleton sequences for 3D action recognition," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4570-4579, IEEE, 2017.
- K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras, "Two-person interaction detection using body-pose features and multiple instance learning," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pp. 28-35, IEEE, 2012.
- "CMU dataset." http://mocap.cs.cmu.edu/. Accessed 02-2018.
- P. Wang, Z. Li, Y. Hou, and W. Li, "Action recognition based on joint trajectory maps using convolutional neural networks," in Proceedings of the 2016 ACM on Multimedia Conference, pp. 102-106, ACM, 2016.
- "MSRC-12 Kinect gesture dataset." https://www.microsoft.com/en-us/download/details.aspx?id=52283. Accessed 02-2018.
- V. Bloom, D. Makris, and V. Argyriou, "G3D: A gaming action dataset and real time action recognition evaluation framework," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pp. 7-12, IEEE, 2012.
- C. Li, S. Sun, X. Min, W. Lin, B. Nie, and X. Zhang, "End-to-end learning of deep convolutional neural network for 3D human action recognition," in Multimedia & Expo Workshops (ICMEW), 2017 IEEE International Conference on, pp. 609-612, IEEE, 2017.
- B. Li, Y. Dai, X. Cheng, H. Chen, Y. Lin, and M. He, "Skeleton based action recognition using translation-scale invariant image mapping and multi-scale deep CNN," in Multimedia & Expo Workshops (ICMEW), 2017 IEEE International Conference on, pp. 601-604, IEEE, 2017.
- C. Li, Q. Zhong, D. Xie, and S. Pu, "Skeleton-based action recognition with convolutional neural networks," in Multimedia & Expo Workshops (ICMEW), 2017 IEEE International Conference on, pp. 597-600, IEEE, 2017.
- F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size," arXiv preprint arXiv:1602.07360, 2016.
- G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, p. 3, 2017.
- "Skeleton returned by OpenPose." https://arvrjourney.com/human-pose-estimation-using-openpose-with-tensorflow-part-2-e78ab9104fc8. Accessed 02-2018.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
- K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
- K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in European Conference on Computer Vision, pp. 630-645, Springer, 2016.
- C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, et al., "Going deeper with convolutions," in CVPR, 2015.
- C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016.
- C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in AAAI, vol. 4, p. 12, 2017.
- K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
- M. Oquab, L. Bottou, I. Laptev, and J. Sivic, "Learning and transferring mid-level image representations using convolutional neural networks," in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 1717-1724, IEEE, 2014.
- H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285-1298, 2016.
- J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?," in Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.
- P. Zhang, C. Lan, J. Xing, W. Zeng, J. Xue, and N. Zheng, "View adaptive neural networks for high performance skeleton-based human action recognition," arXiv preprint arXiv:1804.07453, 2018.
- H. Liu, J. Tu, and M. Liu, "Two-stream 3D convolutional neural network for skeleton-based action recognition," arXiv preprint arXiv:1705.08106, 2017.
- C. Li, P. Wang, S. Wang, Y. Hou, and W. Li, "Skeleton-based action recognition using LSTM and CNN," in Multimedia & Expo Workshops (ICMEW), 2017 IEEE International Conference on, pp. 585-590, IEEE, 2017.
- M. Zolfaghari, G. L. Oliveira, N. Sedaghat, and T. Brox, "Chained multi-stream networks exploiting pose, motion, and appearance for action classification and detection," in Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2923-2932, IEEE, 2017.
- R. Zhao, H. Ali, and P. van der Smagt, "Two-stream RNN/CNN for action recognition in 3D videos," arXiv preprint arXiv:1703.09783, 2017.