Keyword search (4,163 papers available)

"Matinfar S" Authored Publications:

1. From tissue to sound: A new paradigm for medical sonic interaction design
   Authors: Matinfar S; Dehghani S; Salehi M; Sommersperger M; Navab N; Faridpooya K; Fairhurst M; Navab N
   PubMed ID: 40222195
   Dept Affiliation: CONCORDIA

2. Sonification as a reliable alternative to conventional visual surgical navigation
   Authors: Matinfar S; Salehi M; Suter D; Seibold M; Dehghani S; Navab N; Wanivenhaus F; Fürnstahl P; Farshad M; Navab N
   PubMed ID: 37045878
   Dept Affiliation: ENCS


Title: From tissue to sound: A new paradigm for medical sonic interaction design
Authors: Matinfar S; Dehghani S; Salehi M; Sommersperger M; Navab N; Faridpooya K; Fairhurst M; Navab N
Link: https://pubmed.ncbi.nlm.nih.gov/40222195/
DOI: 10.1016/j.media.2025.103571
Publication: Medical image analysis
Keywords: Auditory feedback; Augmented reality; Medical imaging; Mixed reality; Model-based sonification; Physical modeling synthesis; Sonification
PMID: 40222195
Category:
Date Added: 2025-04-14
Dept Affiliation: CONCORDIA
1 Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany. Electronic address: sasan.matinfar@tum.de.
2 Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany.
3 Topological Media Lab, Concordia University, Montreal, Canada.
4 Rotterdam Eye Hospital, Rotterdam, The Netherlands.
5 Centre for Tactile Internet with Human-in-the-Loop, Technical University of Dresden, Dresden, Germany.

Description:

Medical imaging maps tissue characteristics into image intensity values, enhancing human perception. However, comprehending this data, especially in high-stakes scenarios such as surgery, is prone to errors. Additionally, current multimodal methods do not fully leverage this valuable data in their design. We introduce "From Tissue to Sound," a new paradigm for medical sonic interaction design. This paradigm establishes a comprehensive framework for mapping tissue characteristics to auditory displays, providing dynamic and intuitive access to medical images that complement visual data, thereby enhancing multimodal perception. "From Tissue to Sound" provides an advanced and adaptable framework for the interactive sonification of multimodal medical imaging data. This framework employs a physics-based sound model composed of a network of multiple oscillators, whose mechanical properties, such as friction and stiffness, are defined by tissue characteristics extracted from imaging data. This approach enables the representation of anatomical structures and the creation of unique acoustic profiles in response to excitations of the sound model. This method allows users to explore data at a fundamental level, identifying tissue characteristics ranging from rigid to soft, dense to sparse, and structured to scattered. It facilitates intuitive discovery of both general and detailed patterns with minimal preprocessing. Unlike conventional methods that transform low-dimensional data into global sound features through a parametric approach, this method utilizes model-based unsupervised mapping between data and an anatomical sound model, enabling high-dimensional data processing. The versatility of this method is demonstrated through feasibility experiments confirming the generation of perceptually discernible acoustic signals. Furthermore, we present a novel application developed based on this framework for retinal surgery.
This new paradigm opens up possibilities for designing multisensory applications for multimodal imaging data. It also facilitates the creation of interactive sonification models with various auditory causality approaches, enhancing both directness and richness.
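The core idea described above — oscillators whose mechanical parameters are derived from tissue intensities, excited to produce characteristic acoustic responses — can be illustrated with a minimal sketch. This is not the authors' implementation; the intensity-to-stiffness and intensity-to-damping mappings, the chain topology, and all numeric ranges are assumptions chosen only to show the principle that different tissue values yield audibly different signals.

```python
import math

def tissue_sonification(intensities, sample_rate=44100, duration=0.05,
                        coupling=0.05):
    """Toy model-based sonification: each normalized tissue intensity
    (0..1) parameterizes one oscillator in a coupled chain.

    Assumed mappings (illustrative only):
      - higher intensity -> stiffer spring -> higher pitch
      - lower intensity  -> more damping   -> faster decay
    An impulse excites node 0; the mean displacement is the audio signal.
    """
    n = len(intensities)
    # Stiffness chosen so resonant frequency spans roughly 200-2000 Hz.
    stiffness = [(2.0 * math.pi * (200.0 + 1800.0 * v)) ** 2
                 for v in intensities]
    damping = [5.0 + 50.0 * (1.0 - v) for v in intensities]
    x = [0.0] * n        # displacements
    vel = [0.0] * n      # velocities
    vel[0] = 1.0         # impulse excitation at the first node
    dt = 1.0 / sample_rate
    samples = []
    for _ in range(int(duration * sample_rate)):
        # Semi-implicit Euler: update all velocities from current
        # displacements, then advance displacements.
        for i in range(n):
            f = -stiffness[i] * x[i] - damping[i] * vel[i]
            if i > 0:                     # coupling to left neighbour
                f += coupling * stiffness[i] * (x[i - 1] - x[i])
            if i < n - 1:                 # coupling to right neighbour
                f += coupling * stiffness[i] * (x[i + 1] - x[i])
            vel[i] += f * dt              # unit mass assumed
        for i in range(n):
            x[i] += vel[i] * dt
        samples.append(sum(x) / n)
    return samples
```

Because the mapping is model-based rather than parametric, the acoustic signature emerges from the physics: a region of high-intensity "rigid" tissue rings at a higher frequency than a "soft" low-intensity region, without any hand-designed pitch curve.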





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026