Keyword search (4,163 papers available)
"Neural networks" Keyword-tagged Publications:
| # | Title | Authors | PubMed ID | Dept |
|---|---|---|---|---|
| 1 | Tuning Deep Learning for Predicting Aluminum Prices Under Different Sampling: Bayesian Optimization Versus Random Search | Alicia Estefania Antonio Figueroa | 41751647 | CONCORDIA |
| 2 | Distinguishing Between Healthy and Unhealthy Newborns Based on Acoustic Features and Deep Learning Neural Networks Tuned by Bayesian Optimization and Random Search Algorithm | Lahmiri S; Tadj C; Gargour C | 41294952 | ENCS |
| 3 | Efficient neural encoding as revealed by bilingualism | Moore C; Donhauser PW; Klein D; Byers-Heinlein K | 40828024 | PSYCHOLOGY |
| 4 | Personalizing brain stimulation: continual learning for sleep spindle detection | Sobral M; Jourde HR; Marjani Bajestani SE; Coffey EBJ; Beltrame G | 40609549 | PSYCHOLOGY |
| 5 | Parallel boosting neural network with mutual information for day-ahead solar irradiance forecasting | Ahmed U; Mahmood A; Khan AR; Kuhlmann L; Alimgeer KS; Razzaq S; Aziz I; Hammad A | 40185800 | PHYSICS |
| 6 | Large language models deconstruct the clinical intuition behind diagnosing autism | Stanley J; Rabot E; Reddy S; Belilovsky E; Mottron L; Bzdok D | 40147442 | ENCS |
| 7 | MuscleMap: An Open-Source, Community-Supported Consortium for Whole-Body Quantitative MRI of Muscle | McKay MJ; Weber KA; Wesselink EO; Smith ZA; Abbott R; Anderson DB; Ashton-James CE; Atyeo J; Beach AJ; Burns J; Clarke S; Collins NJ; Coppieters MW; Cornwall J; Crawford RJ; De Martino E; Dunn AG; Eyles JP; Feng HJ; Fortin M; Franettovich Smith MM; Galloway G; Gandomkar Z; Glastras S; Henderson LA; Hides JA; Hiller CE; Hilmer SN; Hoggarth MA; Kim B; Lal N; LaPorta L; Magnussen JS; Maloney S; March L; Nackley AG; O'Leary SP; Peolsson A; Perraton Z; Pool-Goudzwaard AL; Schnitzler M; Seitz AL; Semciw AI; Sheard PW; Smith AC; Snodgrass SJ; Sullivan J; Tran V; Valentin S; Walton DM; Wishart LR; Elliott JM | 39590726 | HKAP |
| 8 | A protocol for trustworthy EEG decoding with neural networks | Borra D; Magosso E; Ravanelli M | 39549492 | ENCS |
| 9 | Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks | Adcock B; Brugiapaglia S; Dexter N; Moraga S | 39454372 | MATHSTATS |
| 10 | Deep neural network-based robotic visual servoing for satellite target tracking | Ghiasvand S; Xie WF; Mohebbi A | 39440297 | ENCS |
| 11 | Generalization limits of Graph Neural Networks in identity effects learning | D'Inverno GA; Brugiapaglia S; Ravanelli M | 39426036 | ENCS |
| 12 | The immunomodulatory effect of oral NaHCO3 is mediated by the splenic nerve: multivariate impact revealed by artificial neural networks | Alvarez MR; Alkaissi H; Rieger AM; Esber GR; Acosta ME; Stephenson SI; Maurice AV; Valencia LMR; Roman CA; Alarcon JM | 38549144 | CSBN |
| 13 | Reinforcement learning for automatic quadrilateral mesh generation: A soft actor-critic approach | Pan J; Huang J; Cheng G; Zeng Y | 36375347 | ENCS |
| 14 | Comparative Evaluation of Artificial Neural Networks and Data Analysis in Predicting Liposome Size in a Periodic Disturbance Micromixer | Ocampo I; López RR; Camacho-León S; Nerguizian V; Stiharu I | 34683215 | ENCS |
| 15 | X-Vectors: New Quantitative Biomarkers for Early Parkinson's Disease Detection From Speech | Jeancolas L; Petrovska-Delacrétaz D; Mangone G; Benkelfat BE; Corvol JC; Vidailhet M; Lehéricy S; Benali H | 33679361 | PERFORM |
| Title: | Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks |
|---|---|
| Authors: | Adcock B, Brugiapaglia S, Dexter N, Moraga S |
| Link: | https://pubmed.ncbi.nlm.nih.gov/39454372/ |
| DOI: | 10.1016/j.neunet.2024.106761 |
| Publication: | Neural Networks: The Official Journal of the International Neural Network Society |
| Keywords: | Banach spaces; Deep learning; Deep neural networks; High-dimensional approximation; Uncertainty quantification |
| PMID: | 39454372 |
| Category: | |
| Date Added: | 2024-10-26 |
| Dept Affiliation: | MATHSTATS |

Affiliations:

1. Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby BC, Canada, V5A 1S6. Electronic address: ben_adcock@sfu.ca.
2. Department of Mathematics and Statistics, Concordia University, J.W. McConnell Building, 1400 De Maisonneuve Blvd. W., Montréal, QC, Canada, H3G 1M8. Electronic address: simone.brugiapaglia@concordia.ca.
3. Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL, 32306-4120, USA. Electronic address: nick.dexter@fsu.edu.
4. Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby BC, Canada, V5A 1S6. Electronic address: smoragas@sfu.ca.

Description:
The past decade has seen increasing interest in applying Deep Learning (DL) to Computational Science and Engineering (CSE). Driven by impressive results in applications such as computer vision, Uncertainty Quantification (UQ), genetics, simulations and image processing, DL is increasingly supplanting classical algorithms, and seems poised to revolutionize scientific computing. However, DL is not yet well-understood from the standpoint of numerical analysis. Little is known about the efficiency and reliability of DL from the perspectives of stability, robustness, accuracy, and, crucially, sample complexity. For example, approximating solutions to parametric PDEs is a key task in UQ for CSE. Yet, training data for such problems is often scarce and corrupted by errors. Moreover, the target function, while often smooth, is a potentially infinite-dimensional function taking values in the PDE solution space, which is generally an infinite-dimensional Banach space. This paper provides arguments for Deep Neural Network (DNN) approximation of such functions, with both known and unknown parametric dependence, that overcome the curse of dimensionality. We establish practical existence theorems that describe classes of DNNs with dimension-independent architecture widths and depths, and training procedures based on minimizing a (regularized) ℓ²-loss which achieve near-optimal algebraic rates of convergence in terms of the amount of training data m. These results involve key extensions of compressed sensing for recovering Banach-valued vectors and polynomial emulation with DNNs. When approximating solutions of parametric PDEs, our results account for all sources of error, i.e., sampling, optimization, approximation and physical discretization, and allow for training high-fidelity DNN approximations from coarse-grained sample data. Our theoretical results fall into the category of non-intrusive methods, providing a theoretical alternative to classical methods for high-dimensional approximation.
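The training procedure the abstract describes, fitting a DNN to m samples of a smooth, high-dimensional map by minimizing a regularized ℓ²-loss, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the toy target function, network width and depth, regularization weight, and sample count are all illustrative assumptions standing in for a discretized parametric-PDE solution map.

```python
# Minimal sketch (assumptions throughout): approximate a smooth map
# y -> u(y) from m samples, where u takes values in a K-dimensional
# discretization of the (Banach) solution space, by minimizing a
# regularized l2-loss over a fully connected DNN.
import torch
import torch.nn as nn

d, K, m = 16, 64, 500  # parameter dim, output dim, sample count (all illustrative)

def target(y: torch.Tensor) -> torch.Tensor:
    # Hypothetical smooth surrogate for a parametric-PDE solution map.
    basis = torch.linspace(1.0, 2.0, K)
    return torch.sin(y.sum(dim=1, keepdim=True) * basis)

y_train = 2 * torch.rand(m, d) - 1   # parameters drawn from [-1, 1]^d
u_train = target(y_train)            # solution snapshots (training data)

model = nn.Sequential(               # width/depth chosen for illustration only
    nn.Linear(d, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, K),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-6                           # regularization weight (assumption)

for epoch in range(2000):
    opt.zero_grad()
    residual = model(y_train) - u_train
    # Regularized l2-loss: mean-squared data misfit plus a weight penalty.
    penalty = sum(p.pow(2).sum() for p in model.parameters())
    loss = residual.pow(2).mean() + lam * penalty
    loss.backward()
    opt.step()
```

The sketch only mirrors the shape of the training objective; the paper's guarantees additionally depend on the sampling distribution, the compressed-sensing-style analysis, and architecture classes specified in the theorems.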



