TY - JOUR
T1 - Neuromorphic Driver Monitoring Systems
T2 - A Computationally Efficient Proof-of-Concept for Driver Distraction Detection
AU - Shariff, Waseem
AU - Dilmaghani, Mehdi Sefidgar
AU - Kielty, Paul
AU - Lemley, Joe
AU - Farooq, Muhammad Ali
AU - Khan, Faisal
AU - Corcoran, Peter
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2023
Y1 - 2023
N2 - Driver Monitoring Systems (DMS) represent a promising approach for enhancing driver safety within vehicular technologies. This research explores the integration of neuromorphic event camera technology into DMS, offering faster and more localized detection of changes due to motion or lighting in an imaged scene. When applied to the observation of a human subject, an event camera provides a new level of sensing capability over conventional imaging systems. The study focuses on the application of DMS by incorporating event cameras, augmented by submanifold sparse neural network (SSNN) models to reduce computational complexity. To validate the effectiveness of the proposed machine learning pipeline built on event data, we adopt driver distraction as a critical use case. The SSNN model is trained on synthetic event data generated from the publicly available Drive&Act and Driver Monitoring Dataset (DMD) using a video-to-event conversion algorithm (V2E). The proposed approach yields performance comparable with state-of-the-art approaches, achieving an accuracy of 86.25% on the Drive&Act dataset and 80% on the comprehensive DMD dataset while significantly reducing computational complexity. In addition, to demonstrate the generalization of our results, the network is also evaluated on a locally acquired event dataset gathered from a commercially available neuromorphic event sensor.
AB - Driver Monitoring Systems (DMS) represent a promising approach for enhancing driver safety within vehicular technologies. This research explores the integration of neuromorphic event camera technology into DMS, offering faster and more localized detection of changes due to motion or lighting in an imaged scene. When applied to the observation of a human subject, an event camera provides a new level of sensing capability over conventional imaging systems. The study focuses on the application of DMS by incorporating event cameras, augmented by submanifold sparse neural network (SSNN) models to reduce computational complexity. To validate the effectiveness of the proposed machine learning pipeline built on event data, we adopt driver distraction as a critical use case. The SSNN model is trained on synthetic event data generated from the publicly available Drive&Act and Driver Monitoring Dataset (DMD) using a video-to-event conversion algorithm (V2E). The proposed approach yields performance comparable with state-of-the-art approaches, achieving an accuracy of 86.25% on the Drive&Act dataset and 80% on the comprehensive DMD dataset while significantly reducing computational complexity. In addition, to demonstrate the generalization of our results, the network is also evaluated on a locally acquired event dataset gathered from a commercially available neuromorphic event sensor.
KW - Distraction recognition
KW - computational complexity
KW - driver monitoring system (DMS)
KW - event-based vision
KW - neuromorphic sensing
KW - submanifold convolutions
UR - https://www.scopus.com/pages/publications/85174825932
U2 - 10.1109/OJVT.2023.3325656
DO - 10.1109/OJVT.2023.3325656
M3 - Article
SN - 2644-1330
VL - 4
SP - 836
EP - 848
JO - IEEE Open Journal of Vehicular Technology
JF - IEEE Open Journal of Vehicular Technology
ER -