A deep learning model for ergonomics risk assessment and sports and health monitoring in self-occluded images

Amirhossein Aghamohammadi, Seyed Aliasghar Beheshti Shirazi, Seyed Yashar Banihashem, Saman Shishechi, Ramin Ranjbarzadeh, Saeid Jafarzadeh Ghoushchi, Malika Bendechache

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

Ergonomic assessment and sports and health monitoring play a crucial role in sustainable development across many areas, such as product architecture, design, health and safety, and workplace design. Recently, visual ergonomic assessment has been widely employed for skeleton analysis of human joints, localizing and classifying body postures to address the risk of musculoskeletal disorders. Moreover, monitoring players during a sports activity helps analyze their actions and maximize body performance. However, body posture identification remains limited when joints are self-occluded. In this study, a visual ergonomic assessment technique employing a multi-frame, multi-path convolutional neural network (CNN) is presented to assess ergonomic risks under both occlusion-free and self-occlusion conditions. Our model has four inputs that accept four sequential frames, which compensates for missing joints, and it classifies the input into one of four risk categories. Our pipeline was evaluated on a 5-minute video (300 s, roughly 9000 frames at 30 fps) and showed that our architecture achieves competitive results (recall = 0.8925, precision = 0.8743, F-score = 0.8837).
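The multi-frame, multi-path idea described in the abstract can be sketched as follows: four sequential frames each pass through a parallel convolutional path, the resulting features are fused, and a classifier outputs one of four risk categories. This is a minimal illustrative sketch only; layer sizes, the fusion strategy (concatenation), and all hyperparameters are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a multi-frame, multi-path CNN: four sequential
# frames -> four parallel conv paths -> fused features -> 4-way risk class.
# All layer sizes and the concatenation fusion are illustrative assumptions.
import torch
import torch.nn as nn


class MultiPathRiskCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # One small convolutional path per input frame (4 paths total).
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (B, 32, 1, 1) per path
            )
            for _ in range(4)
        ])
        # Fuse the four 32-dim feature vectors and classify.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(4 * 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 4, 3, H, W) -- four sequential RGB frames.
        feats = [path(frames[:, i]) for i, path in enumerate(self.paths)]
        fused = torch.cat(feats, dim=1)   # (B, 4*32, 1, 1)
        return self.classifier(fused)     # logits over 4 risk categories


model = MultiPathRiskCNN()
logits = model(torch.randn(2, 4, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```

Feeding several consecutive frames lets the network see a joint in at least one frame even when it is occluded in others, which is the intuition behind the paper's handling of self-occlusion.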

Original language: English
Pages (from-to): 1161-1173
Number of pages: 13
Journal: Signal, Image and Video Processing
Volume: 18
Issue number: 2
DOIs
Publication status: Published - Mar 2024

Keywords

  • Action detection
  • Convolutional neural network
  • Deep learning
  • Ergonomic assessment
  • Occlusion
