Open Access
MATEC Web Conf., Volume 406 (2024)
2024 RAPDASA-RobMech-PRASA-AMI Conference: Unlocking Advanced Manufacturing – The 25th Annual International RAPDASA Conference, joined by RobMech, PRASA and AMI, hosted by Stellenbosch University and Nelson Mandela University
Article Number: 04007
Number of pages: 15
Section: Robotics and Mechatronics
DOI: https://doi.org/10.1051/matecconf/202440604007
Published online: 9 December 2024
  1. M. Posa, C. Cantu, and R. Tedrake, A direct method for trajectory optimization of rigid bodies through contact, Int. J. Robot. Res., 33, pp. 69–81, (2014).
  2. M. Bloesch, M. Hutter, M.A. Hoepflinger, S. Leutenegger, C. Gehring, C.D. Remy, and R. Siegwart, State estimation for legged robots – consistent fusion of leg kinematics and IMU, Robotics, 17, pp. 17–24, (2013).
  3. T. Erez, K. Lowrey, Y. Tassa, V. Kumar, S. Kolev, and E. Todorov, An integrated system for real-time model predictive control of humanoid robots, in 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, pp. 292–299, (2013).
  4. J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke, Sim-to-Real: Learning Agile Locomotion For Quadruped Robots, (2018). Accessed: Apr. 10, 2024. [Online]. Available: http://arxiv.org/abs/1804.10332
  5. J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter, Learning agile and dynamic motor skills for legged robots, Sci. Robot., 4, (2019).
  6. T. Haarnoja, S. Ha, A. Zhou, J. Tan, G. Tucker, and S. Levine, Learning to Walk via Deep Reinforcement Learning, (2019). Accessed: Feb. 21, 2024. [Online]. Available: http://arxiv.org/abs/1812.11103
  7. J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter, Learning quadrupedal locomotion over challenging terrain, Sci. Robot., 5, (2020).
  8. H. Duan, J. Dao, K. Green, T. Apgar, A. Fern, and J. Hurst, Learning Task Space Actions for Bipedal Locomotion, in 2021 IEEE International Conference on Robotics and Automation, ICRA, pp. 1276–1282, (2021).
  9. A. Kumar, N. Paul, and S. N. Omkar, Bipedal Walking Robot using Deep Deterministic Policy Gradient, (2018). Accessed: Feb. 21, 2024. [Online]. Available: http://arxiv.org/abs/1807.05924
  10. Z. Fu, A. Kumar, J. Malik, and D. Pathak, Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots, (2021). Accessed: Oct. 23, 2023. [Online]. Available: http://arxiv.org/abs/2111.01674
  11. G. Bellegarda, Y. Chen, Z. Liu, and Q. Nguyen, Robust High-Speed Running for Quadruped Robots via Deep Reinforcement Learning, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 10364–10370, (2022).
  12. W. Zhao, J.P. Queralta, and T. Westerlund, Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey, in 2020 IEEE Symposium Series on Computational Intelligence, SSCI, pp. 737–744, (2020).
  13. W. Zhu and A. Rosendo, PSTO: Learning Energy-Efficient Locomotion for Quadruped Robots, Machines, 10, p. 185, (2022).
  14. A. Kumar, Z. Fu, D. Pathak, and J. Malik, RMA: Rapid Motor Adaptation for Legged Robots, (2021). Accessed: Apr. 30, 2024. [Online]. Available: http://arxiv.org/abs/2107.04034
  15. Z. Zhuang, Z. Fu, J. Wang, C. Atkeson, S. Schwertfeger, C. Finn, and H. Zhao, Robot Parkour Learning, (2023). Accessed: Oct. 23, 2023. [Online]. Available: http://arxiv.org/abs/2309.05665
  16. T. He, C. Zhang, W. Xiao, G. He, C. Liu, and G. Shi, Agile But Safe: Learning Collision-Free High-Speed Legged Locomotion, (2024). Accessed: Apr. 30, 2024. [Online]. Available: http://arxiv.org/abs/2401.17583
  17. P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger, Deep Reinforcement Learning That Matters, in Proceedings of the AAAI Conference on Artificial Intelligence, 32, (2018).
  18. H. Zhang, L. He, and D. Wang, Deep reinforcement learning for real-world quadrupedal locomotion: a comprehensive review, Intell. Robot., 2, pp. 275–297, (2022).
  19. P. Varin, L. Grossman, and S. Kuindersma, A comparison of action spaces for learning manipulation tasks, in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, (2019).
  20. X. B. Peng and M. Van De Panne, Learning locomotion skills using DeepRL: does the choice of action space matter?, in Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, pp. 1–13, (2017).
  21. G. Bellegarda, C. Nguyen, and Q. Nguyen, Robust Quadruped Jumping via Deep Reinforcement Learning, (2023). Accessed: Feb. 21, 2024. [Online]. Available: http://arxiv.org/abs/2011.07089
  22. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, OpenAI Gym, (2016). Accessed: Jan. 21, 2024. [Online]. Available: http://arxiv.org/abs/1606.01540
  23. R. Martín-Martín, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg, Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks, in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 1010–1017, (2019).
  24. S. Ha, P. Xu, Z. Tan, S. Levine, and J. Tan, Learning to Walk in the Real World with Minimal Human Effort, (2020). Accessed: May 06, 2024. [Online]. Available: http://arxiv.org/abs/2002.08550
  25. M. Hutter et al., ANYmal – a highly mobile and dynamic quadrupedal robot, in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 38–44, (2016).
  26. G. Kenneally, A. De, and D. E. Koditschek, Design Principles for a Family of Direct-Drive Legged Robots, IEEE Robot. Autom. Lett., 1, pp. 900–907, (2016).
  27. L. Han, Q. Zhu, J. Sheng, C. Zhang, T. Li, Y. Zhang, H. Zhang, Y. Liu, C. Zhou, R. Zhao, and J. Li, Lifelike Agility and Play on Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models, (2023). Accessed: May 06, 2024. [Online]. Available: http://arxiv.org/abs/2308.15143
  28. Unitree, A1 – highly integrated, pushing limits, (2016–2024). Accessed: Mar. 08, 2024. [Online]. Available: https://m.unitree.com
  29. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, Adaptive Computation and Machine Learning. Cambridge, MA: MIT Press, (1998).
  30. J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, Proximal Policy Optimization Algorithms, (2017). Accessed: Feb. 05, 2024. [Online]. Available: http://arxiv.org/abs/1707.06347
  31. J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel, High-Dimensional Continuous Control Using Generalized Advantage Estimation, (2018). Accessed: Feb. 05, 2024. [Online]. Available: http://arxiv.org/abs/1506.02438
  32. E. Coumans and Y. Bai, PyBullet, a Python module for physics simulation for games, robotics and machine learning, (2016–2024). [Online]. Available: http://pybullet.org
  33. A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann, Stable-Baselines3: Reliable Reinforcement Learning Implementations, J. Mach. Learn. Res., 22, pp. 1–8, (2021).
  34. T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama, Optuna: A next-generation hyperparameter optimization framework, (2019). [Online]. Available: http://arxiv.org/abs/1907.10902
  35. Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, Benchmarking Deep Reinforcement Learning for Continuous Control, (2016). Accessed: May 03, 2024. [Online]. Available: http://arxiv.org/abs/1604.06778
