I'm a researcher working on making Reinforcement Learning (RL) more reliable and ready for real-world use.
While RL has shown impressive results in games and simulations, applying it safely and effectively outside the lab is still a major challenge. My work focuses on building algorithms that help RL systems learn in ways that are not just powerful, but also safe, stable, and aligned with real-world goals and constraints. This includes areas like safety-critical decision-making, learning from imperfect or limited data (offline RL), and developing new ways to guide exploration and generalization. Ultimately, I’m interested in closing the gap between what RL can do in theory and what it needs to do in practice.
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Nikola Milosevic, Johannes Müller, Nico Scherf
International Conference on Machine Learning (ICML) 2025 Spotlight
Reinforcement Learning (RL) agents can solve diverse tasks but often exhibit unsafe behavior. Constrained Markov Decision Processes (CMDPs) address this by enforcing safety constraints, yet existing methods either sacrifice reward maximization or allow unsafe training. We introduce Constrained Trust Region Policy Optimization (C-TRPO), which reshapes the policy space geometry to ensure trust regions contain only safe policies, guaranteeing constraint satisfaction throughout training. We analyze its theoretical properties and connections to TRPO, Natural Policy Gradient (NPG), and Constrained Policy Optimization (CPO). Experiments show that C-TRPO reduces constraint violations while maintaining competitive returns.
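For context, here is a schematic of the constrained trust-region step that this line of work builds on, written in standard CMDP/TRPO notation; this is only a sketch, and the specific divergence that C-TRPO uses to reshape the trust region is defined in the paper.

% Sketch only: the generic constrained trust-region update for a CMDP.
% C-TRPO's idea is to replace the usual KL-based trust region D with a
% divergence that blows up at the constraint boundary, so every policy
% inside the trust region is safe (see the paper for the construction).
\begin{align*}
\pi_{k+1} = \arg\max_{\pi} \quad & \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\big[ A^{R}_{\pi_k}(s,a) \big] \\
\text{s.t.} \quad & J_{C}(\pi) \le d, \\
& D(\pi_k, \pi) \le \delta.
\end{align*}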