Peer-Reviewed Publications



Preprints


Neural Contractive Dynamical Systems
Beik-Mohammadi, H.; Hauberg, S.; Arvanitidis, G.; Figueroa, N.; Neumann, G.; Rozo, L.
2024. arXiv. doi:10.48550/arXiv.2401.09352
PointMapPolicy: Structured Point Cloud Processing for Multi-Modal Imitation Learning
Jia, X.; Wang, Q.; Wang, A.; Wang, H. A.; Gyenes, B.; Gospodinov, E.; Jiang, X.; Li, G.; Zhou, H.; Liao, W.; Huang, X.; Beck, M.; Reuss, M.; Lioutikov, R.; Neumann, G.
2025. arXiv. doi:10.48550/arXiv.2510.20406
DIME: Diffusion-Based Maximum Entropy Reinforcement Learning
Celik, O.; Li, Z.; Blessing, D.; Li, G.; Palenicek, D.; Peters, J.; Chalvatzaki, G.; Neumann, G.
2025. arXiv. doi:10.48550/arXiv.2502.02316
Scaffolding Dexterous Manipulation with Vision-Language Models
de Bakker, V.; Hejna, J.; Lum, T. G. W.; Celik, O.; Taranovic, A.; Blessing, D.; Neumann, G.; Bohg, J.; Sadigh, D.
2025. arXiv. doi:10.48550/arXiv.2506.19212
Learning Boltzmann Generators via Constrained Mass Transport
von Klitzing, C.; Blessing, D.; Schopmans, H.; Friederich, P.; Neumann, G.
2025. arXiv. doi:10.48550/arXiv.2510.18460
Registered and Segmented Deformable Object Reconstruction from a Single View Point Cloud
Henrich, P.; Gyenes, B.; Scheikl, P. M.; Neumann, G.; Mathis-Ullrich, F.
2023. arXiv. doi:10.48550/arXiv.2311.07357
Swarm Reinforcement Learning For Adaptive Mesh Refinement
Freymuth, N.; Dahlinger, P.; Würth, T.; Reisch, S.; Kärger, L.; Neumann, G.
2023. arXiv. doi:10.48550/arXiv.2304.00818
Information Maximizing Curriculum: A Curriculum-Based Approach for Training Mixtures of Experts
Blessing, D.; Celik, O.; Jia, X.; Reuss, M.; Li, M. X.; Lioutikov, R.; Neumann, G.
2023. arXiv. doi:10.48550/arXiv.2303.15349
Curriculum-Based Imitation of Versatile Skills
Li, M. X.; Celik, O.; Becker, P.; Blessing, D.; Lioutikov, R.; Neumann, G.
2023. arXiv. doi:10.48550/arXiv.2304.05171
What Matters For Meta-Learning Vision Regression Tasks?
Gao, N.; Ziesche, H.; Ngo, A. V.; Volpp, M.; Neumann, G.
2022. doi:10.5445/IR/1000143728
Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios
Shaj Kumar, V.; Büchler, D.; Sonker, R.; Becker, P.; Neumann, G.
2022. doi:10.5445/IR/1000143406
A Study on Dense and Sparse (Visual) Rewards in Robot Policy Learning
Mohtasib, A.; Neumann, G.; Cuayáhuitl, H.
2021
Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty
Ranjbar, A.; Vien, N. A.; Ziesche, H.; Boedecker, J.; Neumann, G.
2021. doi:10.5445/IR/1000137510
Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning
Shaj, V.; Becker, P.; Büchler, D.; Pandya, H.; Duijkeren, N. van; Taylor, C. J.; Hanheide, M.; Neumann, G.
2020. doi:10.5445/IR/1000125269
Agricultural Robotics: The Future of Robotic Agriculture
Duckett, T.; Pearson, S.; Blackmore, S.; Grieve, B.; Chen, W.-H.; Cielniak, G.; Cleaversmith, J.; Dai, J.; Davis, S.; Fox, C.; From, P.; Georgilas, I.; Gill, R.; Gould, I.; Hanheide, M.; Iida, F.; Mihalyova, L.; Nefti-Meziani, S.; Neumann, G.; Paoletti, P.; Pridmore, T.; Ross, D.; Smith, M.; Stoelen, M.; Swainson, M.; Wane, S.; Wilson, P.; Wright, I.; Yang, G.-Z.
2018. UK-RAS Network