Keyword search (4,163 papers available)

"Yang J" Authored Publications:

Title / Authors / PubMed ID / Dept

1. Pedestrian detection in aerial image based on convolutional neural network with attention mechanism and multi-scale prediction
   Authors: Yang J; Shen J; Wang S
   PMID: 41387459 | Dept: ENCS

2. Technical recommendations for analyzing oxylipins by liquid chromatography-mass spectrometry
   Authors: Schebb NH; Kampschulte N; Hagn G; Plitzko K; Meckelmann SW; Ghosh S; Joshi R; Kuligowski J; Vuckovic D; Botana MT; Sánchez-Illana Á; Zandkarimi F; Das A; Yang J; Schmidt L; Checa A; Roche HM; Armando AM; Edin ML; Lih FB; Aristizabal-Henao JJ; Miyamoto S; Giuffrida F; Moussaieff A; Domingues R; Rothe M; Hinz C; Das US; Rund KM; Taha AY; Hofstetter RK; Werner M; Werz O; Kahnt AS; Bertrand-Michel J; Le Faouder P; Gurke R; Thomas D; Torta F; Milic I; Dias IHK; Spickett CM; Biagini D; Lomonaco T; Idborg H; Liu J
   PMID: 40392938 | Dept: CHEMBIOCHEM

3. Semantically-Enhanced Feature Extraction with CLIP and Transformer Networks for Driver Fatigue Detection
   Authors: Gao Z; Chen X; Xu J; Yu R; Zhang H; Yang J
   PMID: 39771685 | Dept: ENCS

4. Identifying personalized barriers for hypertension self-management from TASKS framework
   Authors: Yang J; Zeng Y; Yang L; Khan N; Singh S; Walker RL; Eastwood R; Quan H
   PMID: 39143621 | Dept: ENCS

5. Deep model integrated with data correlation analysis for multiple intermittent faults diagnosis
   Authors: Yang J; Xie G; Yang Y; Zhang Y; Liu W
   PMID: 31174854 | Dept: ENCS


Title: Semantically-Enhanced Feature Extraction with CLIP and Transformer Networks for Driver Fatigue Detection
Authors: Gao Z; Chen X; Xu J; Yu R; Zhang H; Yang J
Link: https://pubmed.ncbi.nlm.nih.gov/39771685/
DOI: 10.3390/s24247948
Publication: Sensors (Basel, Switzerland)
Keywords: CLIP pre-trained model; Transformer; fatigue detection; instance normalization; semantic analysis
PMID: 39771685
Category:
Date Added: 2025-01-08
Dept Affiliation: ENCS
1 School of Computer Science and Technology, Tongji University, Shanghai 201804, China.
2 Department of Computer Science, City University of Hong Kong, Hong Kong 999077, China.
3 Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Shanghai 201804, China.
4 College of Transportation Engineering, Tongji University, Shanghai 201804, China.
5 Zhejiang Fengxing Huiyun Technology Co., Ltd., Hangzhou 311107, China.
6 Department of Computer Science and Software Engineering, Concordia University, Montreal, QC H3G 1M8, Canada.

Description:

Drowsy driving is a leading cause of commercial vehicle traffic crashes. The prevailing approach is to train fatigue detection models with deep neural networks on driver video data, but coarse, incomplete high-level feature extraction and suboptimal network architectures remain open challenges. This paper pioneers the use of the CLIP (Contrastive Language-Image Pre-training) model for fatigue detection and, by pairing it with a Transformer architecture, extracts sophisticated, long-term temporal features from video sequences, enabling more nuanced and accurate fatigue analysis. The proposed CT-Net (CLIP-Transformer Network) achieves an AUC (Area Under the Curve) of 0.892, a 36% accuracy improvement over the prevalent end-to-end CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory) model, reaching state-of-the-art performance. Experiments show that the CLIP pre-trained model extracts facial and behavioral features from driver video frames more accurately, improving the model's AUC by 7% over an ImageNet pre-trained backbone. Moreover, compared with LSTM, the Transformer more flexibly captures long-term dependencies among temporal features, further raising the AUC by 4%.
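The architecture the abstract describes — per-frame features from a pre-trained CLIP image encoder, pooled over time by a Transformer encoder, then classified — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class name `CTNetSketch`, the layer sizes, and the stand-in linear projection (used here in place of loading a real frozen CLIP backbone) are all assumptions.

```python
# Hypothetical sketch of the CT-Net idea: per-frame CLIP embeddings are
# modeled temporally by a Transformer encoder, then pooled and classified
# as fatigued vs. alert. The frozen CLIP image encoder is replaced by a
# stand-in linear layer since loading the real model is out of scope here.
import torch
import torch.nn as nn

class CTNetSketch(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=2, n_classes=2):
        super().__init__()
        # Stand-in for a projection on top of frozen CLIP frame features.
        self.frame_proj = nn.Linear(feat_dim, feat_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, seq_len, feat_dim) per-frame embeddings
        x = self.frame_proj(frame_feats)
        x = self.temporal(x)             # long-range temporal dependencies
        return self.head(x.mean(dim=1))  # pool over time, then classify

# 4 clips, 16 frames each, 512-dim per-frame features (CLIP ViT-B-like)
video = torch.randn(4, 16, 512)
logits = CTNetSketch()(video)
print(list(logits.shape))  # [4, 2]
```

The self-attention layers here are what the abstract credits for flexibly capturing long-term dependencies across frames, in contrast to the sequential recurrence of an LSTM.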





BookR developed by Sriram Narayanan
for the Concordia University School of Health
Copyright © 2011-2026