Transfer learning of temporal information for driver action classification

Research output: Contribution to conference › Paper › peer-review

22 Citations (Scopus)

Abstract

Correct classification of image data can depend on features learned across multiple sequential frames. We focus on the problem of learning actions from video data, with an emphasis on driver behavior monitoring. An insufficient quantity of high-quality labeled data is a major problem in machine learning research, especially when deep neural networks are used. Although some sufficiently large, general-purpose image databases exist for action recognition, most of these are limited to single frames. Such data forces the action recognition task to be performed without temporal information (information from the previous and next frames of a video sequence). In this paper, we show that temporal information is useful for accurate classification of video, and that the temporal information in the lower layers of a convolutional neural network can successfully be transferred from one network to another to greatly improve performance on the driver behavior monitoring task.
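The transfer described in the abstract — reusing the lower layers of a network trained on a large general-purpose corpus to initialize a network for the driver-monitoring task — can be illustrated with a minimal sketch. The layer names, shapes, and the choice of which layers to freeze below are illustrative assumptions, not details from the paper; weights are represented as plain NumPy arrays rather than a full CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(layer_shapes):
    """Randomly initialise one weight array per named layer."""
    return {name: rng.standard_normal(shape) for name, shape in layer_shapes.items()}

# Source network: assumed to be trained on a large, general action-recognition corpus.
source = init_params({"conv1": (16, 3, 3, 3), "conv2": (32, 16, 3, 3), "fc": (10, 128)})

# Target network: identical lower-layer architecture, but a new task-specific
# head sized for the (hypothetical) number of driver-behavior classes.
target = init_params({"conv1": (16, 3, 3, 3), "conv2": (32, 16, 3, 3), "fc": (5, 128)})

def transfer_lower_layers(src, dst, layers=("conv1", "conv2")):
    """Copy the named lower layers from src into dst and return the set of
    frozen layer names; the head is left to be trained from scratch."""
    frozen = set()
    for name in layers:
        dst[name] = src[name].copy()
        frozen.add(name)
    return frozen

frozen = transfer_lower_layers(source, target)

# During fine-tuning, only the non-frozen layers would receive gradient updates.
trainable = [name for name in target if name not in frozen]
```

In a real framework the same idea is typically expressed by loading a pretrained state dict and setting `requires_grad = False` on the transferred parameters; the sketch only shows the weight-copying and freezing logic.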

Original language: English
Pages: 123-128
Number of pages: 6
Publication status: Published - 2017
Event: 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017 - Fort Wayne, United States
Duration: 28 Apr 2017 - 29 Apr 2017

Conference

Conference: 28th Modern Artificial Intelligence and Cognitive Science Conference, MAICS 2017
Country/Territory: United States
City: Fort Wayne
Period: 28/04/17 - 29/04/17

Keywords

  • Action Recognition
  • Deep Learning
  • Transfer Learning
