MedAI Session 23: Multimodal medical research of vision and language | Jean-Benoit Delbrouck

Title: Multimodal medical research at the intersection of vision and language

Speaker: Jean-Benoit Delbrouck

Abstract: Inspired by traditional machine learning on natural images and texts, new multimodal medical tasks are emerging. From Medical Visual Question Answering to Radiology Report Generation or Summarization using X-rays, we investigate how multimodal architectures and multimodal pre-training can help improve results.

Speaker Bio: Jean-Benoit holds a PhD in engineering science from Polytechnic Mons in Belgium and is now a postdoctoral scholar at the Department of Biomedical Data Science. His doctoral thesis focused on multimodal learning on natural images and texts. His postdoctoral research focuses on applying new (or proven) methods to multimodal medical tasks at the intersection of vision and language.

------

The MedAI Group Exchange Sessions are a platform where we can critically examine key topics in AI and medicine, generate fresh ideas and discussion around their intersection and most