Improving the Performance and Explainability of Indoor Human Activity Recognition in the Internet of Things Environment


Cengiz A. B., Birant K. U., Cengiz M., Birant D., Baysari K.

SYMMETRY-BASEL, vol. 14, no. 10, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 14 Issue: 10
  • Publication Date: 2022
  • DOI: 10.3390/sym14102022
  • Journal Name: SYMMETRY-BASEL
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Communication Abstracts, INSPEC, Metadex, zbMATH, Directory of Open Access Journals, Civil Engineering Abstracts
  • Keywords: machine learning, image classification, human activity recognition, convolutional neural networks, Internet of Things, CLASSIFIERS, ENSEMBLE
  • Affiliated with Dokuz Eylül University: Yes

Abstract

Traditional indoor human activity recognition (HAR) has been defined as a time-series classification problem and requires feature extraction. Current indoor HAR systems still lack transparent, interpretable, and explainable approaches that can generate human-understandable information. This paper proposes a new approach, called Human Activity Recognition on Signal Images (HARSI), which reformulates the HAR problem as an image classification problem to improve both explainability and recognition accuracy. The proposed HARSI method collects sensor data from the Internet of Things (IoT) environment and transforms the raw signal data into visually interpretable images, taking advantage of the strengths of convolutional neural networks (CNNs) in handling image data. This study focuses on the recognition of symmetric human activities: walking, jogging, moving downstairs, moving upstairs, standing, and sitting. Experiments carried out on a real-world dataset showed that the proposed HARSI model achieved a significant improvement (13.72%) over traditional machine learning models. The results also showed that our method (98%) outperformed state-of-the-art methods (90.94%) in terms of classification accuracy.
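The abstract does not specify how raw signals are encoded as images, so the Python sketch below only illustrates the general signal-to-image-then-CNN idea it describes. Everything here is an assumption for illustration, not the paper's actual method: the window length, the 64x64 image size, the tiling-based encoding in signal_to_image, and the SmallCNN architecture are all hypothetical choices.

import numpy as np
import torch
import torch.nn as nn

ACTIVITIES = ["walking", "jogging", "downstairs", "upstairs", "standing", "sitting"]

def signal_to_image(window: np.ndarray, size: int = 64) -> np.ndarray:
    """Turn a (T, 3) accelerometer window into a (size, size) grayscale image.

    Illustrative encoding (an assumption, not the paper's): each axis is
    min-max normalized to [0, 1], resampled to `size` samples, and the
    three rows are tiled vertically to fill the image.
    """
    T, axes = window.shape
    rows = []
    for a in range(axes):
        sig = window[:, a]
        sig = (sig - sig.min()) / (sig.max() - sig.min() + 1e-8)   # normalize
        sig = np.interp(np.linspace(0, T - 1, size), np.arange(T), sig)
        rows.append(sig)
    stacked = np.stack(rows)                                       # (3, size)
    reps = int(np.ceil(size / axes))
    return np.tile(stacked, (reps, 1))[:size, :]                   # (size, size)

class SmallCNN(nn.Module):
    """A compact CNN classifier over the generated signal images."""
    def __init__(self, n_classes: int = len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )
    def forward(self, x):
        return self.head(self.features(x))

if __name__ == "__main__":
    window = np.random.randn(128, 3)           # stand-in for one sensor window
    img = signal_to_image(window)              # (64, 64) grayscale image
    batch = torch.tensor(img, dtype=torch.float32)[None, None]  # (1, 1, 64, 64)
    logits = SmallCNN()(batch)
    print(ACTIVITIES[int(logits.argmax())])

A key property of this formulation, as the abstract argues, is that the intermediate images are human-inspectable, so a practitioner can visually relate a prediction to the signal pattern that produced it.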