Published in: German Journal of Exercise and Sport Research 2/2022

06.05.2022 | Brief Communication

Validation of human activity recognition using a convolutional neural network on accelerometer and gyroscope data

Authors: Eni Hysenllari, Jörg Ottenbacher, Darren McLennan


Abstract

Background

Human activity recognition (HAR) is the task of classifying sequences of data recorded by wearable sensors into known, well-defined classes of physical activity. Activity recognition offers considerable societal benefits, especially in real-life, human-centric applications such as healthcare and care of the elderly. Training a convolutional neural network on raw acceleration and angular velocity signals has shown high recognition accuracy. This article reports the quality of activity recognition obtained with a convolutional neural network applied to acceleration and angular velocity data recorded at different sensor locations.

Methods

Thirty-five volunteers from two studies (16 women and 19 men) with an average age of 28.54 years wore Move4/EcgMove4 sensors at 6 different body positions (ankle, thigh, hip, wrist, upper arm, chest) while performing typical activities (sitting, standing, lying, walking, jogging, cycling). We then used these data to evaluate a two-dimensional convolutional neural network (2D-CNN) that takes 3D acceleration and 3D angular velocity signals as inputs to recognize human activity. We measure the network's performance using accuracy and Cohen's κ.
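The article does not include its preprocessing code; as a minimal sketch of how a continuous 6-axis stream (3D acceleration plus 3D angular velocity) is typically segmented into the fixed-size 2D inputs a 2D-CNN expects, the following assumes a hypothetical window length and step size (both parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def make_windows(signal, win_len, step):
    """Slice a (n_samples, 6) accel+gyro stream into overlapping windows.

    Returns an array of shape (n_windows, win_len, 6); each window acts
    as one 2D input "image" (time x channel) for the network.
    """
    starts = range(0, signal.shape[0] - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# Illustrative use: 100 samples of a 6-channel signal, 20-sample windows,
# 50% overlap (step of 10 samples).
stream = np.arange(600, dtype=float).reshape(100, 6)
windows = make_windows(stream, win_len=20, step=10)
```

Each window would then be labeled with the activity performed during that interval before training.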

Results

Depending on the sensor location, the accuracy of the network ranges from 96.57% (ankle) to 99.28% (thigh), and Cohen's κ ranges from 0.96 (ankle) to 0.99 (thigh).
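Cohen's κ, the chance-corrected agreement measure reported above, can be computed from true and predicted labels as follows (a standard definition, not code from the paper):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Chance-corrected agreement between true and predicted class labels."""
    # Build the confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    p_o = np.trace(cm) / n                            # observed agreement (accuracy)
    p_e = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)
```

A κ of 0.96-0.99, as reported here, indicates near-perfect agreement beyond what class frequencies alone would produce.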

Conclusions

The 2D-CNN showed excellent performance on human activity recognition. Using raw signals may enable real-time, on-device (at the edge) activity recognition even in small devices with low computational power and little storage.
Metadata
Title
Validation of human activity recognition using a convolutional neural network on accelerometer and gyroscope data
Authors
Eni Hysenllari
Jörg Ottenbacher
Darren McLennan
Publication date
06.05.2022
Publisher
Springer Berlin Heidelberg
Published in
German Journal of Exercise and Sport Research / Issue 2/2022
Print ISSN: 2509-3142
Electronic ISSN: 2509-3150
DOI
https://doi.org/10.1007/s12662-022-00817-y
