Keyword search (4,163 papers available)

"neural network" Keyword-tagged Publications:

#. Title | Authors | PubMed ID | Dept
1. Tuning Deep Learning for Predicting Aluminum Prices Under Different Sampling: Bayesian Optimization Versus Random Search | Alicia Estefania Antonio Figueroa | 41751647 | CONCORDIA
2. Distinguishing Between Healthy and Unhealthy Newborns Based on Acoustic Features and Deep Learning Neural Networks Tuned by Bayesian Optimization and Random Search Algorithm | Lahmiri S; Tadj C; Gargour C | 41294952 | ENCS
3. Efficient neural encoding as revealed by bilingualism | Moore C; Donhauser PW; Klein D; Byers-Heinlein K | 40828024 | PSYCHOLOGY
4. Personalizing brain stimulation: continual learning for sleep spindle detection | Sobral M; Jourde HR; Marjani Bajestani SE; Coffey EBJ; Beltrame G | 40609549 | PSYCHOLOGY
5. PARPAL: PARalog Protein Redistribution using Abundance and Localization in Yeast Database | Greco BM; Zapata G; Dandage R; Papkov M; Pereira V; Lefebvre F; Bourque G; Parts L; Kuzmin E | 40580499 | BIOLOGY
6. Distributed adaptive sliding mode control with deep recurrent neural network for cooperative robotic system in automated fiber placement | Zhu N; Xie WF | 40436653 | ENCS
7. Parallel boosting neural network with mutual information for day-ahead solar irradiance forecasting | Ahmed U; Mahmood A; Khan AR; Kuhlmann L; Alimgeer KS; Razzaq S; Aziz I; Hammad A | 40185800 | PHYSICS
8. Large language models deconstruct the clinical intuition behind diagnosing autism | Stanley J; Rabot E; Reddy S; Belilovsky E; Mottron L; Bzdok D | 40147442 | ENCS
9. CACTUS: An open dataset and framework for automated Cardiac Assessment and Classification of Ultrasound images using deep transfer learning | Elmekki H; Alagha A; Sami H; Spilkin A; Zanuttini AM; Zakeri E; Bentahar J; Kadem L; Xie WF; Pibarot P; Mizouni R; Otrok H; Singh S; Mourad A | 40107020 | ENCS
10. MuscleMap: An Open-Source, Community-Supported Consortium for Whole-Body Quantitative MRI of Muscle | McKay MJ; Weber KA; Wesselink EO; Smith ZA; Abbott R; Anderson DB; Ashton-James CE; Atyeo J; Beach AJ; Burns J; Clarke S; Collins NJ; Coppieters MW; Cornwall J; Crawford RJ; De Martino E; Dunn AG; Eyles JP; Feng HJ; Fortin M; Franettovich Smith MM; Galloway G; Gandomkar Z; Glastras S; Henderson LA; Hides JA; Hiller CE; Hilmer SN; Hoggarth MA; Kim B; Lal N; LaPorta L; Magnussen JS; Maloney S; March L; Nackley AG; O'Leary SP; Peolsson A; Perraton Z; Pool-Goudzwaard AL; Schnitzler M; Seitz AL; Semciw AI; Sheard PW; Smith AC; Snodgrass SJ; Sullivan J; Tran V; Valentin S; Walton DM; Wishart LR; Elliott JM | 39590726 | HKAP
11. Ion channel classification through machine learning and protein language model embeddings | Ghazikhani H; Butler G | 39572876 | ENCS
12. A protocol for trustworthy EEG decoding with neural networks | Borra D; Magosso E; Ravanelli M | 39549492 | ENCS
13. Position-based visual servoing of a 6-RSS parallel robot using adaptive sliding mode control | Zhu N; Xie WF; Shen H | 39492316 | ENCS
14. Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks | Adcock B; Brugiapaglia S; Dexter N; Moraga S | 39454372 | MATHSTATS
15. Deep neural network-based robotic visual servoing for satellite target tracking | Ghiasvand S; Xie WF; Mohebbi A | 39440297 | ENCS
16. Generalization limits of Graph Neural Networks in identity effects learning | D'Inverno GA; Brugiapaglia S; Ravanelli M | 39426036 | ENCS
17. Modelling reindeer rut activity using on-animal acoustic recorders and machine learning | Boucher AJ; Weladji RB; Holand Ø; Kumpula J | 38932958 | BIOLOGY
18. The immunomodulatory effect of oral NaHCO3 is mediated by the splenic nerve: multivariate impact revealed by artificial neural networks | Alvarez MR; Alkaissi H; Rieger AM; Esber GR; Acosta ME; Stephenson SI; Maurice AV; Valencia LMR; Roman CA; Alarcon JM | 38549144 | CSBN
19. Enhanced identification of membrane transport proteins: a hybrid approach combining ProtBERT-BFD and convolutional neural networks | Ghazikhani H; Butler G | 37497772 | ENCS
20. Compatible-domain Transfer Learning for Breast Cancer Classification with Limited Annotated Data | Shamshiri MA; Krzyzak A; Kowal M; Korbicz J | 36758326 | ENCS
21. Neural correlates of recall and extinction in a rat model of appetitive Pavlovian conditioning | Brown A; Villaruel FR; Chaudhri N | 36496079 | PSYCHOLOGY
22. Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach | Pan J; Huang J; Cheng G; Zeng Y | 36375347 | ENCS
23. Sentiment Classification Method Based on Blending of Emoticons and Short Texts | Zou H; Xiang K | 35327909 | ENCS
24. Analysis of input set characteristics and variances on k-fold cross validation for a Recurrent Neural Network model on waste disposal rate estimation | Vu HL; Ng KTW; Richter A; An C | 35287077 | ENCS
25. Comparative Evaluation of Artificial Neural Networks and Data Analysis in Predicting Liposome Size in a Periodic Disturbance Micromixer | Ocampo I; López RR; Camacho-León S; Nerguizian V; Stiharu I | 34683215 | ENCS
26. Corrigendum: Deep Learning-Based Haptic Guidance for Surgical Skills Transfer | Fekri P; Dargahi J; Zadeh M | 34026860 | ENCS
27. X-Vectors: New Quantitative Biomarkers for Early Parkinson's Disease Detection From Speech | Jeancolas L; Petrovska-Delacrétaz D; Mangone G; Benkelfat BE; Corvol JC; Vidailhet M; Lehéricy S; Benali H | 33679361 | PERFORM
28. Deep Learning-Based Haptic Guidance for Surgical Skills Transfer | Fekri P; Dargahi J; Zadeh M | 33553246 | ENCS

 

Title: Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks
Authors: Adcock B; Brugiapaglia S; Dexter N; Moraga S
Link: https://pubmed.ncbi.nlm.nih.gov/39454372/
DOI: 10.1016/j.neunet.2024.106761
Publication: Neural Networks: The Official Journal of the International Neural Network Society
Keywords: Banach spaces; Deep learning; Deep neural networks; High-dimensional approximation; Uncertainty quantification
PMID: 39454372 | Date Added: 2024-10-26
Dept Affiliation: MATHSTATS
Affiliations:
1. Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby, BC, Canada, V5A 1S6. Electronic address: ben_adcock@sfu.ca.
2. Department of Mathematics and Statistics, Concordia University, J.W. McConnell Building, 1400 De Maisonneuve Blvd. W., Montréal, QC, Canada, H3G 1M8. Electronic address: simone.brugiapaglia@concordia.ca.
3. Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL, 32306-4120, USA. Electronic address: nick.dexter@fsu.edu.
4. Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby, BC, Canada, V5A 1S6. Electronic address: smoragas@sfu.ca.

Description:

The past decade has seen increasing interest in applying Deep Learning (DL) to Computational Science and Engineering (CSE). Driven by impressive results in applications such as computer vision, Uncertainty Quantification (UQ), genetics, simulations and image processing, DL is increasingly supplanting classical algorithms, and seems poised to revolutionize scientific computing. However, DL is not yet well-understood from the standpoint of numerical analysis. Little is known about the efficiency and reliability of DL from the perspectives of stability, robustness, accuracy, and, crucially, sample complexity. For example, approximating solutions to parametric PDEs is a key task in UQ for CSE. Yet, training data for such problems is often scarce and corrupted by errors. Moreover, the target function, while often smooth, is a potentially infinite-dimensional function taking values in the PDE solution space, which is generally an infinite-dimensional Banach space. This paper provides arguments for Deep Neural Network (DNN) approximation of such functions, with both known and unknown parametric dependence, that overcome the curse of dimensionality. We establish practical existence theorems that describe classes of DNNs with dimension-independent architecture widths and depths, and training procedures based on minimizing a (regularized) ℓ2-loss which achieve near-optimal algebraic rates of convergence in terms of the amount of training data m. These results involve key extensions of compressed sensing for recovering Banach-valued vectors and polynomial emulation with DNNs. When approximating solutions of parametric PDEs, our results account for all sources of error, i.e., sampling, optimization, approximation and physical discretization, and allow for training high-fidelity DNN approximations from coarse-grained sample data. Our theoretical results fall into the category of non-intrusive methods, providing a theoretical alternative to classical methods for high-dimensional approximation.
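The training procedure the abstract refers to, minimizing a regularized ℓ2-loss over network parameters to approximate a smooth high-dimensional function from m samples, can be illustrated in miniature. The sketch below is a toy, not the paper's method: the target function, dimensions, and regularization weight are invented for illustration, the output is scalar rather than Banach-valued, and the hidden weights are fixed at random so that only the output layer is trained, which makes the regularized ℓ2 minimizer a closed-form ridge-regression solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth, high-dimensional target on [-1, 1]^d (a stand-in
# for a parametric-PDE quantity of interest; scalar-valued for simplicity).
d, m = 4, 200
f = lambda Y: np.exp(-np.sum(Y**2, axis=1))

Y = rng.uniform(-1, 1, size=(m, d))   # m training samples
u = f(Y)

# One-hidden-layer network with fixed random hidden weights; training the
# output layer w amounts to minimizing the regularized l2-loss
#   (1/m) * ||H w - u||^2 + lam * ||w||^2,
# whose minimizer solves (H^T H / m + lam I) w = H^T u / m.
n_hidden, lam = 100, 1e-4
W = rng.normal(0.0, 1.0, size=(d, n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)

H = np.tanh(Y @ W + b)                # hidden-layer features
w = np.linalg.solve(H.T @ H / m + lam * np.eye(n_hidden), H.T @ u / m)

# Measure generalization error on fresh samples.
Yt = rng.uniform(-1, 1, size=(1000, d))
rms = np.sqrt(np.mean((np.tanh(Yt @ W + b) @ w - f(Yt)) ** 2))
print(f"RMS test error: {rms:.4f}")
```

Freezing the hidden layer is a deliberate simplification: it keeps the ℓ2 minimization convex and exactly solvable, whereas the paper analyzes training of full DNNs and proves algebraic convergence rates in m for the Banach-valued setting.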





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026