Open Access
MATEC Web Conf. Volume 277, 2019
2018 International Joint Conference on Metallurgical and Materials Engineering (JCMME 2018)
Article Number | 02029
Number of page(s) | 11
Section | Data and Signal Processing
DOI | https://doi.org/10.1051/matecconf/201927702029
Published online | 02 April 2019
- Shahrokni A, Vacchetti L, Lepetit V and Fua P 2002 Polyhedral object detection and pose estimation for augmented reality applications Proc. of Computer Animation (Lausanne: Geneva) pp 65-9 [Google Scholar]
- Mousavian A, Anguelov D, Flynn J and Košecká J 2017 3d bounding box estimation using deep learning and geometry IEEE Conf. on Computer Vision and Pattern Recognition (Fairfax: Honolulu) pp 5632-40 [Google Scholar]
- Collet A, Berenson D, Srinivasa SS and Ferguson D 2009 Object recognition and full pose registration from a single image for robotic manipulation IEEE Int. Conf. on Robotics and Automation (Pittsburgh: Kobe) pp 48-55 [Google Scholar]
- Collet A, Martinez M and Srinivasa SS 2010 The moped framework: object recognition and pose estimation for manipulation IEEE Int. Conf. on Robotics and Automation (Pittsburgh: Anchorage) pp 1284-306 [Google Scholar]
- Lim JJ, Pirsiavash H and Torralba A 2013 Parsing ikea objects: fine pose estimation IEEE Int. Conf. on Computer Vision (Cambridge: Sydney) pp 2992-9 [Google Scholar]
- Zhu M, Derpanis KG, Yang Y, Brahmbhatt S, Zhang M, Phillips C, Lecce M and Daniilidis K 2014 Single image 3d object detection and pose estimation for grasping IEEE Int. Conf. on Robotics and Automation (Philadelphia: Hong Kong) pp 3936-43 [Google Scholar]
- Payet N and Todorovic S 2011 From contours to 3d object detection and pose estimation IEEE Int. Conf. on Computer Vision (Corvallis: Barcelona) pp 983-90 [Google Scholar]
- Holzer S, Hinterstoisser S, Ilic S and Navab N 2009 Distance transform templates for object detection and pose estimation IEEE Conf. on Computer Vision and Pattern Recognition (Garching: Miami) pp 1177-84 [Google Scholar]
- Canny J 1986 A computational approach to edge detection IEEE Trans. on Pattern Analysis and Machine Intelligence 8 679-98 [CrossRef] [Google Scholar]
- Hinterstoisser S, Benhimane S and Navab N 2007 N3m: natural 3d markers for real-time object detection and pose estimation IEEE Int. Conf. on Computer Vision (Garching: Rio de Janeiro) pp 1-7 [Google Scholar]
- Zhang J, Zhou SK, McMillan L and Comaniciu D 2007 Joint real-time object detection and pose estimation using probabilistic boosting network IEEE Conf. on Computer Vision and Pattern Recognition (Chapel Hill: Minneapolis) pp 1-8 [Google Scholar]
- Kayanuma M and Hagiwara M 1999 A new method to detect object and estimate the position and the orientation from an image using a 3-d model having feature points IEEE Int. Conf. on Systems, Man, and Cybernetics (Yokohama: Tokyo) pp 931-6 [Google Scholar]
- Kehl W, Manhardt F, Tombari F, Ilic S and Navab N 2017 Ssd-6d: making rgb-based 3d detection and 6d pose estimation great again IEEE Int. Conf. on Computer Vision (Munich: Venice) pp 1530-8 [Google Scholar]
- Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY and Berg AC 2016 Ssd: single shot multibox detector European Conf. on Computer Vision (Chapel Hill: Amsterdam) pp 21-37 [Google Scholar]
- Xiang Y, Schmidt T, Narayanan V and Fox D 2017 Posecnn: a convolutional neural network for 6d object pose estimation in cluttered scenes Preprint https://arxiv.org/abs/1711.00199 [Google Scholar]
- Do TT, Cai M, Pham T and Reid I 2018 Deep-6dpose: recovering 6d object pose from a single rgb image Preprint arXiv:1802.10367 [Google Scholar]
- He K, Gkioxari G, Dollár P and Girshick R 2017 Mask r-cnn IEEE Int. Conf. on Computer Vision (Menlo Park: Venice) pp 2980-8 [Google Scholar]
- Poirson P, Ammirato P, Fu CY, Liu W, Kosecka J and Berg AC 2016 Fast single shot detection and pose estimation 4th Int. Conf. on 3D Vision (Chapel Hill: Stanford) pp 676-84 [Google Scholar]
- Rad M and Lepetit V 2017 Bb8: a scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth IEEE Int. Conf. on Computer Vision (Graz: Venice) pp 3848-56 [Google Scholar]
- Brachmann E, Michel F, Krull A, Ying Yang M and Gumhold S 2016 Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image IEEE Conf. on Computer Vision and Pattern Recognition (Dresden: Las Vegas) pp 3364-72 [Google Scholar]
- Shotton J, Glocker B, Zach C, Izadi S, Criminisi A and Fitzgibbon A 2013 Scene coordinate regression forests for camera relocalization in rgb-d images IEEE Conf. on Computer Vision and Pattern Recognition (Cambridge: Portland) pp 2930-7 [Google Scholar]
- Hinterstoisser S, Lepetit V, Ilic S, Holzer S, Bradski G, Konolige K and Navab N 2012 Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes Asian Conf. on Computer Vision (Munich: Daejeon) pp 548-62 [Google Scholar]
- Tejani A, Tang D, Kouskouridas R and Kim TK 2014 Latent-class hough forests for 3d object detection and pose estimation European Conf. on Computer Vision (London: Zurich) pp 462-77 [Google Scholar]
- Hodan T, Haluza P, Obdržálek Š, Matas J, Lourakis M and Zabulis X 2017 T-less: An rgb-d dataset for 6d pose estimation of texture-less objects IEEE Winter Conf. on Applications of Computer Vision (Prague: Santa Rosa) pp 880-8 [Google Scholar]
- Brachmann E, Krull A, Michel F, Gumhold S, Shotton J and Rother C 2014 Learning 6d object pose estimation using 3d object coordinates European Conf. on Computer Vision (Dresden: Zurich) pp 536-551 [Google Scholar]
- Xiang Y, Mottaghi R and Savarese S 2014 Beyond pascal: a benchmark for 3d object detection in the wild IEEE Winter Conf. on Applications of Computer Vision (Ann Arbor: Steamboat Springs) pp 75-82 [CrossRef] [Google Scholar]