Keyword search (4,163 papers available)

"Poullis C" Authored Publications:

1. From data to action in flood forecasting leveraging graph neural networks and digital twin visualization
   Authors: Roudbari NS; Punekar SR; Patterson Z; Eicker U; Poullis C
   PubMed ID: 39127785 | Dept Affiliation: ENCS
2. Transductive meta-learning with enhanced feature ensemble for few-shot semantic segmentation
   Authors: Karimi A; Poullis C
   PubMed ID: 38369571 | Dept Affiliation: ENCS
3. Author Correction: Motion estimation for large displacements and deformations
   Authors: Chen Q; Poullis C
   PubMed ID: 36517657 | Dept Affiliation: CONCORDIA
4. Motion estimation for large displacements and deformations
   Authors: Chen Q; Poullis C
   PubMed ID: 36385172 | Dept Affiliation: CONCORDIA


Title: Motion estimation for large displacements and deformations
Authors: Chen Q; Poullis C
Link: https://pubmed.ncbi.nlm.nih.gov/36385172/
DOI: 10.1038/s41598-022-21987-7
Publication: Scientific Reports
Keywords:
PMID: 36385172 | Category: | Date Added: 2022-11-17
Dept Affiliation: CONCORDIA
Affiliations:
1 Immersive and Creative Technologies Lab, Concordia University, Montreal, QC, Canada.
2 Immersive and Creative Technologies Lab, Concordia University, Montreal, QC, Canada. charalambos@poullis.org.

Description:

Large displacement optical flow is an integral part of many computer vision tasks. Variational optical flow techniques based on a coarse-to-fine scheme interpolate sparse matches and locally optimize an energy model conditioned on colour, gradient and smoothness, making them sensitive to noise in the sparse matches, deformations, and arbitrarily large displacements. This paper addresses this problem and presents HybridFlow, a variational motion estimation framework for large displacements and deformations. A multi-scale hybrid matching approach is performed on the image pairs. Coarse-scale clusters formed by classifying pixels according to their feature descriptors are matched using the clusters' context descriptors. We apply a multi-scale graph matching on the finer-scale superpixels contained within each matched pair of coarse-scale clusters. Small clusters that cannot be further subdivided are matched using localized feature matching. Together, these initial matches form the flow, which is propagated by an edge-preserving interpolation and variational refinement. Our approach does not require training and is robust to substantial displacements and rigid and non-rigid transformations due to motion in the scene, making it ideal for large-scale imagery such as aerial imagery. More notably, HybridFlow works on directed graphs of arbitrary topology representing perceptual groups, which improves motion estimation in the presence of significant deformations. We demonstrate HybridFlow's superior performance to state-of-the-art variational techniques on two benchmark datasets and report comparable results with state-of-the-art deep-learning-based techniques.
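The pipeline in the abstract — sparse matches found at a coarse scale, then propagated into a dense flow field by interpolation — can be illustrated with a minimal sketch. This is not the authors' HybridFlow implementation: it replaces descriptor clustering and graph matching with simple SSD block matching, and replaces edge-preserving interpolation with plain Gaussian-weighted interpolation; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def sparse_matches(img1, img2, patch=8, search=4):
    """Coarse-scale matching: for each patch centre in img1, find the
    best-matching patch in img2 within a small search window (SSD).
    Stand-in for HybridFlow's cluster/graph matching stage."""
    h, w = img1.shape
    half = patch // 2
    pts, flows = [], []
    for y in range(patch, h - patch, patch):
        for x in range(patch, w - patch, patch):
            ref = img1[y - half:y + half, x - half:x + half]
            best, best_d = (0.0, 0.0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if not (half <= yy < h - half and half <= xx < w - half):
                        continue
                    cand = img2[yy - half:yy + half, xx - half:xx + half]
                    d = np.sum((ref - cand) ** 2)
                    if d < best_d:
                        best_d, best = d, (float(dx), float(dy))
            pts.append((x, y))
            flows.append(best)
    return np.array(pts, float), np.array(flows, float)

def dense_flow(pts, flows, shape, sigma=16.0):
    """Propagate sparse matches to a dense field with Gaussian-weighted
    interpolation (stand-in for the edge-preserving interpolation and
    variational refinement steps)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    wgt = np.exp(-d2 / (2.0 * sigma ** 2))
    wgt /= wgt.sum(axis=1, keepdims=True)
    return (wgt @ flows).reshape(h, w, 2)

# Toy example: img2 is img1 shifted 3 px to the right, so the true
# flow is (3, 0) everywhere the shift is visible.
rng = np.random.default_rng(0)
img1 = rng.random((48, 48))
img2 = np.roll(img1, 3, axis=1)
pts, fl = sparse_matches(img1, img2)
flow = dense_flow(pts, fl, img1.shape)
```

Even this crude version recovers the dominant (3, 0) translation; what the paper adds on top is robustness to large, spatially varying displacements and non-rigid deformation, where uniform block matching and isotropic interpolation break down.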





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026