Open Access
MATEC Web Conf. Volume 175, 2018
2018 International Forum on Construction, Aviation and Environmental Engineering-Internet of Things (IFCAE-IOT 2018)
Article Number: 03055
Number of page(s): 5
Section: Computer Simulation and Design
DOI: https://doi.org/10.1051/matecconf/201817503055
Published online: 02 July 2018