Open Access | Published by De Gruyter, September 18, 2019

Deep Reinforcement Learning for the Navigation of Neurovascular Catheters

  • Tobias Behr, Tim Philipp Pusch, Marius Siegfarth, Dominik Hüsener, Tobias Mörschel and Lennart Karstensen

Abstract

Endovascular catheters are necessary for state-of-the-art treatments of life-threatening and time-critical diseases such as strokes and heart attacks. Navigating them through the vascular tree is a highly challenging task. We present our preliminary results for the autonomous control of a guidewire through a vessel phantom with the help of Deep Reinforcement Learning. We trained Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) agents on a simulated vessel phantom and evaluated the training performance. We also investigated the effect of two enhancements, Hindsight Experience Replay (HER) and Human Demonstration (HD), on the training speed of our agents. The results show that the agents are capable of learning to navigate a guidewire from a random start point in the vessel phantom to a random goal, with an average success rate of 86.5% for DQN and 89.6% for DDPG. The use of HER and HD significantly increases the training speed. The results are promising, and future research should address more complex vessel phantoms and the use of a combination of guidewire and catheter.
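To illustrate the kind of goal-conditioned training the abstract refers to, the following is a minimal, hypothetical sketch of a DQN agent with Hindsight Experience Replay on a toy 1-D environment. The environment, network size, and hyperparameters are assumptions made for demonstration only; they are not the authors' vessel-phantom simulation or implementation.

```python
# Illustrative sketch only: goal-conditioned DQN with Hindsight Experience Replay (HER)
# on a toy 1-D line. All names and hyperparameters here are assumptions for demonstration.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_POS, N_ACTIONS, GAMMA, EPS = 20, 2, 0.98, 0.2  # positions, {left, right}, discount, exploration

def step(pos, action):
    """Move one position left or right, clipped to the ends of the line."""
    return max(0, min(N_POS - 1, pos + (1 if action == 1 else -1)))

def encode(pos, goal):
    """Goal-conditioned input: one-hot state concatenated with one-hot goal."""
    x = torch.zeros(2 * N_POS)
    x[pos], x[N_POS + goal] = 1.0, 1.0
    return x

q_net = nn.Sequential(nn.Linear(2 * N_POS, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

for episode in range(300):
    pos, goal = random.randrange(N_POS), random.randrange(N_POS)  # random start and goal
    trajectory = []
    for _ in range(N_POS):
        if random.random() < EPS:
            action = random.randrange(N_ACTIONS)          # explore
        else:
            action = q_net(encode(pos, goal)).argmax().item()  # exploit
        nxt = step(pos, action)
        trajectory.append((pos, action, nxt))
        pos = nxt
        if pos == goal:
            break

    # HER: store each transition with the original goal and, additionally, with the
    # finally reached position relabelled as the goal, so failed episodes still teach.
    final = trajectory[-1][2]
    for s, a, s2 in trajectory:
        for g in (goal, final):
            r = 0.0 if s2 == g else -1.0
            replay.append((s, a, r, s2, g, s2 == g))

    # One-step TD update on a random minibatch from the replay buffer.
    if len(replay) >= 128:
        batch = random.sample(replay, 128)
        states = torch.stack([encode(s, g) for s, a, r, s2, g, d in batch])
        next_states = torch.stack([encode(s2, g) for s, a, r, s2, g, d in batch])
        actions = torch.tensor([a for s, a, r, s2, g, d in batch])
        rewards = torch.tensor([r for s, a, r, s2, g, d in batch])
        done = torch.tensor([float(d) for s, a, r, s2, g, d in batch])
        q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rewards + GAMMA * (1.0 - done) * q_net(next_states).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The HER relabelling is what makes sparse-reward goal reaching tractable here: even an episode that never reaches the sampled goal produces useful learning signal for the goal it did reach.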

Published Online: 2019-09-18
Published in Print: 2019-09-01

© 2019 by Walter de Gruyter Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
