Keyword search (4,163 papers available)

"Virtual reality" Keyword-tagged Publications:

1. Cross-modal synchrony between music and visual motion modulates vection, urge to move, and comfort in VR. Van Kerrebroeck B; Spiech C; Penhune V; Wanderley MM. PMID: 41867666 [PSYCHOLOGY]
2. Towards user-centered interactive medical image segmentation in VR with an assistive AI agent. Spiegler P; Harirpoush A; Xiao Y. PMID: 41509996 [ENCS]
3. Exploring interaction paradigms for segmenting medical images in virtual reality. Jones Z; Drouin S; Kersten-Oertel M. PMID: 40402355 [ENCS]
4. iSurgARy: A mobile augmented reality solution for ventriculostomy in resource-limited settings. Asadi Z; Castillo JP; Asadi M; Sinclair DS; Kersten-Oertel M. PMID: 39816703 [ENCS]
5. PreVISE: an efficient virtual reality system for SEEG surgical planning. Spiegler P; Abdelsalam H; Hellum O; Hadjinicolaou A; Weil AG; Xiao Y. PMID: 39735694 [ENCS]
6. Virtual reality-based preoperative planning for optimized trocar placement in thoracic surgery: A preliminary study. Harirpoush A; Rakovich G; Kersten-Oertel M; Xiao Y. PMID: 39720764 [ENCS]
7. A usability analysis of augmented reality and haptics for surgical planning. Kazemipour N; Hooshiar A; Kersten-Oertel M. PMID: 38942947 [ENCS]
8. Virtual and Augmented Reality in Ventriculostomy: A Systematic Review. Alizadeh M; Xiao Y; Kersten-Oertel M. PMID: 38823448 [ENCS]
9. Exploring the challenges of avoiding collisions with virtual pedestrians using a dual-task paradigm in individuals with chronic moderate to severe traumatic brain injury. de Aquino Costa Sousa T; Gagnon IJ; Li KZH; McFadyen BJ; Lamontagne A. PMID: 38755606 [PERFORM]
10. Effects of color cues on eye-hand coordination training with a mirror drawing task in virtual environment. Alrubaye Z; Hudhud Mughrabi M; Manav B; Batmaz AU. PMID: 38288362 [ENCS]
11. At-home computerized executive-function training to improve cognition and mobility in normal-hearing adults and older hearing aid users: a multi-centre, single-blinded randomized controlled trial. Downey R; Gagné N; Mohanathas N; Campos JL; Pichora-Fuller KM; Bherer L; Lussier M; Phillips NA; Wittich W; St-Onge N; Gagné JP; Li K. PMID: 37864139 [PERFORM]
12. Digital Game Interventions for Youth Mental Health Services (Gaming My Way to Recovery): Protocol for a Scoping Review. Ferrari M; McIlwaine SV; Reynolds JA; Archie S; Boydell K; Lal S; Shah JL; Henderson J; Alvarez-Jimenez M; Andersson N; Boruff J; Nielsen RKL; Iyer SN. PMID: 32579117 [CONCORDIA]
13. Effects of Age on Dual-Task Walking While Listening. Nieborowska V. PMID: 30239280 [PERFORM]


Title: Towards user-centered interactive medical image segmentation in VR with an assistive AI agent
Authors: Spiegler P; Harirpoush A; Xiao Y
Link: https://pubmed.ncbi.nlm.nih.gov/41509996/
DOI: 10.1007/s10055-025-01284-0
Publication: Virtual Reality
Keywords: AI agent; Attention switching; Clinical decision support; Eye tracking; Foundation model; Human-in-the-loop; Medical image segmentation; Medical visualization; Virtual reality
PMID: 41509996
Date Added: 2026-01-09
Dept Affiliation: ENCS
Affiliation: Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec, Canada.

Description:

Manual segmentation of volumetric medical scans (e.g., MRI, CT) is crucial in disease analysis and surgical planning, yet it is laborious, error-prone, and challenging to master, while fully automatic algorithms can benefit from user feedback. Combining the complementary strengths of the latest radiological AI foundation models with the intuitive data interaction of virtual reality (VR), we propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. Through speech-based interaction, the agent helps users understand radiological features, locate clinical targets, and generate segmentation masks that can be refined with just a few point prompts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. Furthermore, to determine the optimal interaction paradigm under near-far attention switching when refining segmentation masks in an immersive, human-in-the-loop workflow, we compared VR controller pointing, head pointing, and eye tracking as input modes. A user study demonstrated high usability (SUS = 90.0 ± 9.0), low overall task load, and strong support for the proposed VR system's guidance, training potential, and integration of AI into radiological segmentation tasks.





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026