Keyword search (4,163 papers available)

"Harirpoush A" Authored Publications:

1. Towards user-centered interactive medical image segmentation in VR with an assistive AI agent
   Authors: Spiegler P; Harirpoush A; Xiao Y
   PubMed ID: 41509996
   Dept: ENCS

2. Virtual reality-based preoperative planning for optimized trocar placement in thoracic surgery: A preliminary study
   Authors: Harirpoush A; Rakovich G; Kersten-Oertel M; Xiao Y
   PubMed ID: 39720764
   Dept: ENCS


Title: Towards user-centered interactive medical image segmentation in VR with an assistive AI agent
Authors: Spiegler P; Harirpoush A; Xiao Y
Link: https://pubmed.ncbi.nlm.nih.gov/41509996/
DOI: 10.1007/s10055-025-01284-0
Publication: Virtual reality
Keywords: AI agent; Attention switching; Clinical decision support; Eye tracking; Foundation model; Human-in-the-loop; Medical image segmentation; Medical visualization; Virtual reality
PMID: 41509996 Category: Date Added: 2026-01-09
Dept Affiliation: ENCS
1 Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.

Description:

Crucial for disease analysis and surgical planning, manual segmentation of volumetric medical scans (e.g., MRI, CT) is laborious, error-prone, and challenging to master, while fully automatic algorithms can benefit from user feedback. Therefore, combining the complementary power of the latest radiological AI foundation models with the intuitive data interaction of virtual reality (VR), we propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. Through speech-based interaction, the agent helps users understand radiological features, locate clinical targets, and generate segmentation masks that can be refined with just a few point prompts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. Furthermore, to determine the optimal interaction paradigm under near-far attention switching for refining segmentation masks in an immersive, human-in-the-loop workflow, we compare VR controller pointing, head pointing, and eye tracking as input modes. A user study demonstrated a high usability score (SUS = 90.0 ± 9.0) and low overall task load, as well as strong support for the proposed VR system's guidance, training potential, and integration of AI into radiological segmentation tasks.

BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026