
Elia Lombardo, MSc

November 21, 2024

Video Transcript


Hello everyone. My name is Elia Lombardo and I'm a PhD student at the Adaptive Radiation Therapy Lab at the Department of Radiation Oncology of the LMU University Hospital in Munich. I would like to give you an overview of our latest paper published in the Red Journal, "Patient-Specific Deep Learning Tracking Framework for Real-Time 2D Target Localization in MRI-Guided Radiotherapy."

Intra-fractional motion can lead to increased dose to healthy tissues and less conformal tumor dose. MRI-linacs offer state-of-the-art real-time motion management by imaging the moving tumor in real time with 2D cine MRI and performing gated beam delivery in combination with breath-holds, as shown in this cine MRI acquired at our institution. Research is being performed towards MLC-tracking in free-breathing as a more efficient yet dosimetrically equivalent alternative to gating: in MLC-tracking, the leaves would follow the moving tumor in real time without beam interruptions. Precise real-time target localization, that is, finding the tumor in the current frame, is critical regardless of the adaptation strategy. However, it is more challenging in free-breathing, as the tumor needs to be found in every breathing state.

AI offers great potential for motion management, and in particular for target localization, thanks to its fast inference speed and excellent pattern recognition capabilities. In this study, using 1.4 million unlabeled cine MRI frames and 7.5 thousand labeled frames, we could show how two AI models achieved accurate and complementary performance while outperforming several non-AI baselines. In particular, we implemented a deformable image registration transformer (TransMorph) and an auto-segmentation convolutional neural network (SegResNet).

In the following slides, I'm going to show some results for selected testing patients. In general, we found the TransMorph image registration model to perform slightly better than the auto-segmentation SegResNet for all cases but the following liver case, which presented very large motion and very high contrast. On the other hand, for cases such as this cardiac patient, with not particularly large motion and very low contrast, the image registration model performed best. One could imagine the clinical team selecting the better of the two models on a case-by-case basis.

In conclusion, both AI models accurately localized treatment targets with inference times below 50 milliseconds, which would allow deployment in a real-time scenario. A drawback of the two presented models is that they require patient-specific training prior to the start of treatment, which prolongs the MRI-guided radiotherapy workflow by a few minutes.

As a next step, we are currently organizing an international grand challenge called TrackRAD2025. In the challenge, we are going to provide 2D cine MRI data from hundreds of patients treated at MRI-linacs with different field strengths, so that participants can investigate whether, for instance, foundational AI models could also achieve accurate target localization without the need for patient-specific training. The training and testing sets will be released in spring and summer 2025, respectively, and the challenge will be presented at the MICCAI 2025 conference in South Korea. The challenge will remain open for submissions after the conference as well. Thank you for your attention.
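To make the real-time requirement more concrete, here is a minimal, self-contained Python sketch (not the authors' code) of how a patient-specific 2D segmentation model could drive gated beam delivery within the roughly 50-millisecond per-frame budget mentioned above. The tiny network, the synthetic cine frame, the planned centroid, and the gating radius below are all hypothetical placeholders chosen for illustration.

```python
# Sketch only: a stand-in segmentation model localizes the target on one 2D cine MRI
# frame, and the beam is gated on the centroid position and the measured latency.

import time
import numpy as np
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder for a patient-specific segmentation network (e.g., a SegResNet-style CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def localize(model: nn.Module, frame: np.ndarray) -> np.ndarray:
    """Predict a target mask on a single 2D cine frame and return its centroid (row, col)."""
    with torch.no_grad():
        x = torch.from_numpy(frame).float()[None, None]        # shape (1, 1, H, W)
        mask = (torch.sigmoid(model(x))[0, 0] > 0.5).numpy()
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()]) if ys.size else np.array([np.nan, np.nan])

model = TinySegNet().eval()
frame = np.random.rand(256, 256).astype(np.float32)            # synthetic cine MRI frame
planned = np.array([128.0, 128.0])                             # hypothetical planned centroid
gate_radius = 5.0                                              # hypothetical gating boundary (pixels)

t0 = time.perf_counter()
centroid = localize(model, frame)
latency_ms = (time.perf_counter() - t0) * 1e3

# Keep the beam on only if the target lies inside the gating boundary and the
# localization finished within the real-time budget (~50 ms per frame).
beam_on = (np.isfinite(centroid).all()
           and np.linalg.norm(centroid - planned) <= gate_radius
           and latency_ms < 50.0)
print(f"latency = {latency_ms:.1f} ms, centroid = {centroid}, beam_on = {beam_on}")
```

A registration-based model such as TransMorph would typically produce the same kind of per-frame target position by warping a reference contour with the predicted deformation field; either output could feed the gating (or MLC-tracking) decision shown above.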


