Introduction
Stroke affects approximately 17 million people worldwide every year [1]. Stroke survivors are mostly affected by mobility impairments caused by ataxia or hemiplegia, consequences of lesions in the motor cortex. Recovery of motor function requires intensive physical rehabilitation, which must be tailored to the patient for better efficacy. Currently, therapeutic decisions are usually based on clinical assessments of motor function using functional tests such as the Berg Balance Scale (BBS) for balance assessment [2] or the Timed Up and Go (TUG) for gait and balance evaluation [3], or on patient reports including quality-of-life (QoL) questionnaires such as the generic Stroke Impact Scale [4] or the Stroke-Specific Quality of Life scale [5]. Although useful and widely used in clinical practice, these evaluations have recognized limitations. QoL scores may be biased by the subjective interpretation of questions and by the patient's state of mind. Clinical functional tests are performed only in a hospital setting and may not reflect patients' actual motor performance in everyday life. For example, a patient may show good balance and a stable gait during the clinical exam and report good mobility in daily life, yet in reality avoid long-distance walking or climbing stairs. Activity monitoring in everyday life is therefore expected to provide a more comprehensive assessment of the physical functioning and QoL of post-stroke patients.
The unobtrusive monitoring of basic daily activities in the real-life environment has been extensively investigated over the last decade, along with the development and spread of wearable technologies. Daily activities were successfully monitored using multiple inertial sensors (accelerometers and gyroscopes) placed at key body locations in patients with chronic pain [6] or stroke [7, 8]. Sensors placed on the trunk were used to detect lying and walking periods, and to characterize postural transfers such as sit-to-stand and stand-to-sit (STS) transitions [9], which are relevant for functional recovery assessment after stroke [10, 11]. Inertial sensors (accelerometers) on the thigh allowed sitting to be distinguished from standing, while sensors on the shank/foot (gyroscopes) were used for detailed evaluation of the gait pattern. However, placing multiple sensors on the patient's body may cause discomfort and hence hinder their ability and willingness to perform normal daily activities. Given this limitation, a number of studies were dedicated to the development of activity monitors using a single-sensor configuration [12-14]. Although current activity monitors are accurate in recognizing dynamic activities (walking and running), their ability to classify static postures (standing vs. sitting, or sitting vs. lying) remains limited. For instance, an accelerometer placed on the thigh cannot accurately distinguish sitting from lying [15, 16]. Furthermore, a trunk-located inertial sensor can distinguish various basic activities (lying, sitting, standing, and walking), but with limited performance due to the variability of movement patterns across activities and patients [13, 17]. A possible solution to increase the performance of these algorithms is to use an additional sensor modality such as barometric pressure (BP). BP provides an estimate of the sensor's absolute altitude, which is particularly useful for distinguishing transitions between activities involving altitude/body-elevation changes (e.g., up/down level walking, stair climbing, STS transitions). This approach can also detect additional activities, for example patients' mobility while climbing stairs, which is a relevant outcome for post-stroke recovery [18]. Lester et al. [19] and Moncada-Torres et al. [20] proposed activity recognition algorithms including a BP-based stair-climbing detection algorithm, but the results were validated only on healthy controls.
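The altitude estimate mentioned above can be derived from barometric pressure via the international standard atmosphere model. The following is a minimal sketch of this standard approximation; it is not necessarily the exact conversion used by the cited systems:

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Approximate altitude (m) from barometric pressure (hPa) using the
    international standard atmosphere model; p0_hpa is sea-level pressure."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

# The relative altitude between two time points (the quantity useful for
# stair/elevator detection) is the difference of two such estimates:
# near sea level, a drop of ~0.35 hPa corresponds to roughly 3 m of ascent.
delta_alt = pressure_to_altitude(1012.9) - pressure_to_altitude(1013.25)
```

In practice only relative altitude matters for activity recognition, so the exact sea-level reference pressure is not critical.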
In addition to the sensor configuration (number and placement), the methodological approach used to recognize/classify activities from raw sensor data is crucial. The most common approach is epoch-based classification [21]: sensor data are split into fixed-length epochs, and machine learning techniques are applied to extracted features to classify each epoch into an activity. Another approach is event-based classification, which consists of the detection and classification of key events such as postural transitions and the start/end of walking and lying periods. Following this approach, Salarian et al. [22] incorporated postural-transition-specific knowledge into a fuzzy-logic-based activity recognition algorithm to improve classification performance. Because this algorithm relies only on information from a single inertial sensor fixed on the trunk, the accuracy of STS classification is limited.
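To make the contrast with the event-based approach concrete, a minimal sketch of epoch-based classification follows; the epoch length and feature set are illustrative choices, not those of the paper:

```python
import numpy as np

def split_into_epochs(signal, fs_hz, epoch_s=5.0):
    """Split a 1-D sensor stream into non-overlapping fixed-length epochs."""
    n = int(fs_hz * epoch_s)
    n_epochs = len(signal) // n
    return signal[:n_epochs * n].reshape(n_epochs, n)

def epoch_features(epoch):
    """Simple time-domain features typically fed to a classifier."""
    return [float(np.mean(epoch)), float(np.std(epoch)), float(np.ptp(epoch))]

# Each feature vector would then be classified into an activity
# (e.g. by a decision tree), independently of its neighboring epochs.
acc = np.random.default_rng(0).normal(size=3000)  # 30 s of data at 100 Hz
X = [epoch_features(e) for e in split_into_epochs(acc, fs_hz=100)]
```

The independence of epochs is precisely what the event-based approach gives up in exchange for exploiting transition-specific knowledge.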
The present work is based on the following hypotheses: (1) changes in trunk elevation during STS postural transitions, inclined walking and stair climbing can be detected by a multimodal sensor combining inertial (accelerometers, gyroscopes) and BP sensing; (2) this information can be used to devise an improved event-based activity classification algorithm. We propose a wearable activity monitoring system based on a single trunk-worn multimodal sensor (Inertial Measurement Unit, IMU, and BP) and a fuzzy-logic-based activity classifier that exploits the fused information from both sensors. The classifier accounts for behavioral constraints and, in addition, estimates body elevation (flat, up and down) during standing and locomotion.
Discussion
This study presents a new activity recognition algorithm able not only to recognize the basic daily-life activities (lying, sitting, standing, walking) but also to distinguish body elevation using barometric pressure: up and down in the elevator for standing, and up and down the stairs for walking. Daily activities were recognized by a double-stage hierarchical fuzzy-logic inference system. While the first stage processed events such as the start/end of lying or walking periods and detected postural transitions, the second stage improved activity recognition by providing a simple way to integrate the subject's typical behavior and biomechanical constraints. Five algorithms were benchmarked on a dataset containing daily-living activities from 12 patients suffering from post-stroke mobility impairments. The validation was performed using conventional classification metrics, i.e., SEN, SPE, PPV and F-score, estimated for each activity and overall for the ensemble of activities.
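For reference, the per-class metrics used in this validation can be computed from a confusion matrix as follows; this is a generic sketch with an assumed row/column convention, not code from the paper:

```python
import numpy as np

def per_class_metrics(cm):
    """SEN, SPE, PPV and F-score per class from a square confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    sen = tp / (tp + fn)          # sensitivity (recall)
    spe = tn / (tn + fp)          # specificity
    ppv = tp / (tp + fp)          # positive predictive value (precision)
    f1 = 2 * ppv * sen / (ppv + sen)
    return sen, spe, ppv, f1
```

The overall correct classification rate (CCR) is simply `np.trace(cm) / cm.sum()`.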
The results presented in this study demonstrate the efficiency of the event-driven algorithms featuring the BP sensor. This is essentially because the event-driven architecture of H-FIS and FIS-IMUBP makes it possible to leverage the full potential of the barometric pressure at the time of a postural transition, i.e., the change in body altitude. Furthermore, the H-FIS results were statistically compared with the other methods across patient-specific datasets. A statistically significant difference (p < 0.05) was always found between the H-FIS approach and the inertial-based approaches, highlighting an improvement in recognition across all patients (N = 12). Even if the difference between H-FIS and the evaluated state-of-the-art algorithms (FIS-IMU and EPOCH) in terms of overall CCR may appear minor, it results in superior performance in the classification of standing activity (a minimum 9.0 % increase of H-FIS over these algorithms). Better performance for standing classification (distinction from sitting) was one of the main objectives of the present study, since this information is important for the clinical assessment of patients' recovery (standing is a 'dynamic' activity related to physical capacity [40]).
The dataset used in this study was composed of daily activities performed at the clinic under naturalistic conditions. Ganea et al. [13] highlighted a drop in recognition performance when algorithms were applied to data collected in a "real" daily-life context, mostly due to a decreased ability to recognize STS transitions. The addition of the BP sensor in the present study allowed us to overcome this limitation: pressure-based STS recognition is less prone to pathology-related changes in trunk movement patterns.
The ability to negotiate stairs is an important component of stroke patients' physical recovery process [41]. Body elevation was therefore computed to distinguish different ambulatory strategies: taking the stairs or the elevator as opposed to (flat) level walking or standing. The CCR of the three considered approaches (H-FIS, H-FISnoFIT, EPOCH) was above 96.8 % for all algorithms, owing to the high F-score (>98.4 %) in the Flat class, where most of the instances were located. This class imbalance, with many more samples in the Flat class, yielded a very high CCR despite moderate classification performance in the other classes. Nonetheless, H-FIS outperformed the benchmarked algorithms in terms of F-score for all the other classes. The difference between H-FIS and H-FISnoFIT in terms of CCR can essentially be explained by the improvement in F-score for non-level activity detection (72.6 % for H-FIS vs. 64.5 % for H-FISnoFIT). This was mostly due to the narrowing of the non-level activity duration using the sine-fitting functions, reflected in the increase in PPV (66.8 % for H-FIS vs. 38.5 % for H-FISnoFIT). The exact start time of the elevator movement was difficult to track using the video recording, which may explain the few seconds wrongly classified as Flat during the Elevator Up/Down activities.
Similarly, the slow dynamics of trunk movement made annotation around activity transitions difficult, which may have worsened the results. Furthermore, each period containing more than three consecutive steps was annotated as walking. However, the walking detection algorithm was initially developed for healthy/fit elderly subjects [27] without mobility impairments. When applied to mobility-impaired stroke patients, the algorithm might classify a slow walking period as standing (F-score of 70 % for H-FIS). These factors may have adversely affected the results; nonetheless, an F-score greater than 90 % was obtained for walking. Another limitation, occurring during slow-motion periods within walking, is the lack of sensitivity in distinguishing stair climbing from level walking. This was essentially because a patient climbing the stairs might stall for a few seconds, which would end the current walking session and start a new standing session followed by a new walking episode. These shortened periods may not reach the required amplitude threshold ∆Alt_level and hence not be classified as a climbing activity. A solution could be either to use different thresholds for the climbing activities or to group sequences of consecutive standing and walking activities.
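The grouping solution suggested above could take the following form. This is a hypothetical sketch only: the segment representation, labels and pause threshold are illustrative assumptions, not the paper's implementation:

```python
def merge_ambulatory_bouts(segments, max_pause_s=10.0):
    """Merge consecutive walking bouts separated by short standing pauses,
    so that altitude change accumulates over the whole stair ascent and can
    reach the climbing threshold. `segments` is a list of
    (label, duration_s, delta_alt_m) tuples in chronological order."""
    merged = []
    for label, dur, d_alt in segments:
        prev = merged[-1] if merged else None
        absorb = (prev is not None and prev[0] == "walking"
                  and (label == "walking"
                       or (label == "standing" and dur <= max_pause_s)))
        if absorb:
            _, prev_dur, prev_alt = merged.pop()
            merged.append(("walking", prev_dur + dur, prev_alt + d_alt))
        else:
            merged.append((label, dur, d_alt))
    return merged
```

A stair ascent interrupted by a 4 s pause would then be treated as one 18 s walking bout whose total altitude change is compared against the threshold.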
The benefit of applying behavior-inspired constraints was observed by comparing H-FIS with Event-FIS in terms of CCR (90.4 % for H-FIS vs. 81.9 % for Event-FIS). This difference largely stems from the rule correcting very long (∆T_standing = 2 min) standing postures (rule e). Removing this rule from the fuzzy rule set (listed in Table 3) led to a 7.2 % decrease in the CCR of H-FIS. A similar constraint was applied by Salarian et al. (FIS-IMU) [22] to improve recognition. This threshold (∆T_standing) can be fine-tuned to different pathologies based on the analysis of behavioral data collected in free-living environments [42]. Furthermore, the behavioral rule (rule d) allows only a short sitting activity to be reclassified as standing when a large and sudden change of altitude is detected. This timespan limit prevents specific situations, such as sitting in a car moving on a mountain road, from being wrongly classified as standing. It also blocks interference from daily changes in atmospheric pressure, given their very slow dynamics.
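As an illustration of how a rule-d-like correction could be expressed in crisp form (the thresholds below are hypothetical placeholders; the actual limits belong to the paper's fuzzy rule set):

```python
def apply_rule_d(label, duration_s, abs_delta_alt_m,
                 max_sit_s=30.0, min_alt_m=1.5):
    """Sketch of a rule-d-like correction (thresholds are assumptions):
    only a *short* sitting bout with a large, sudden altitude change
    (e.g. a missed elevator ride) is relabelled as standing. Long sitting
    bouts are left untouched, which keeps a car ride on a mountain road
    classified as sitting despite its altitude change."""
    if label == "sitting" and duration_s <= max_sit_s \
            and abs_delta_alt_m >= min_alt_m:
        return "standing"
    return label
```

In the actual classifier the same constraint is encoded as a fuzzy rule, so the transition between the two outcomes is gradual rather than a hard cut-off.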
In this study, we selected a decision tree as the epoch-based machine learning algorithm because of its descriptive nature. However, we also tested various machine learning algorithms using the Weka software [43] on the same feature-reduced dataset, with the same leave-one-patient-out cross-validation procedure. They all resulted in an overall activity recognition performance similar to EPOCH, i.e., a CCR smaller than 87.1 % (Decision Table: 82.5 %; Naïve Bayes: 81.6 %; Random Forest, #Trees = 10: 87.1 %; K-Nearest Neighbors, K = 10: 85.6 %), confirming the advantage of the event-driven algorithmic approach.
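The leave-one-patient-out procedure used here generalizes as follows (a generic sketch, equivalent in spirit to scikit-learn's `LeaveOneGroupOut`):

```python
def leave_one_patient_out(patient_ids):
    """Yield (train_idx, test_idx) pairs, holding out all of one patient's
    samples per fold so that no patient appears in both sets. This avoids
    the optimistic bias of mixing one subject's data across train and test."""
    for held_out in sorted(set(patient_ids)):
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        yield train, test

# One fold per patient: 3 patients -> 3 folds.
folds = list(leave_one_patient_out([1, 1, 2, 2, 2, 3]))
```

With 12 patients this yields 12 folds, each testing on a subject entirely unseen during training.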
The goal of this study was not to optimize the fuzzy rules and operators of the H-FIS classifier, but to introduce a methodological approach that incorporates BP sensing alongside inertial measurements as a way to improve activity classification. The fuzzy rules were therefore hand-engineered in this investigation. However, a global optimization algorithm or any hybrid fuzzy system with adaptation could replace each of the fuzzy blocks to improve performance. Furthermore, fusing the epoch-based algorithm with H-FIS could also improve the performance of the presented system. For instance, an epoch-based algorithm could split a prolonged activity into multiple epochs and then infer the activity by combining the results across epochs using a meta-classifier such as plurality voting [44].
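The plurality-voting fusion suggested above reduces to a few lines; this sketch assumes a first-seen tie-breaking policy, which the cited meta-classifier does not necessarily use:

```python
from collections import Counter

def plurality_vote(epoch_labels):
    """Fuse per-epoch predictions for one prolonged activity: the most
    frequent label wins (the first-seen label wins ties, since Counter
    preserves insertion order for equal counts)."""
    return Counter(epoch_labels).most_common(1)[0][0]
```

Applied over, say, one minute of epochs, isolated misclassifications within an otherwise homogeneous activity are voted away.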
Splitting the activity classifier into three blocks (event processing, behavioral constraints, and body elevation recognition) provides great modularity: each block can be tuned to the pathology under study.
The impact of study design on the development and evaluation of an activity-type classifier is a topic that has been repeatedly addressed in recent years [13, 17, 45]. These studies showed that data collected in a protocol involving scripted activities under confined laboratory conditions may not reflect real-life situations. This may be particularly critical for activity-type classifiers based on machine learning approaches (discrepancy between features extracted from 'lab' and 'real-life' data). In this study, we tried to minimize this issue, first by designing a measurement protocol as similar as possible to the real-life context, i.e., various self-paced activities performed in an extended physical space (different locations in the hospital area). Second, an 'expert-based' activity classifier was designed based on biomechanical models/constraints and behavioral rules; this approach is expected to be robust across contexts since the biomechanical/behavioral rules still hold. Although it remains to be proven, we expect that the performance of the proposed algorithm will not change significantly with data collected from patients in home environments.
This study has, however, a few limitations, mainly related to the data available for validation. One limitation is the non-uniform number of data samples across activities. The number of samples in static activities (Sit) and at Flat body elevation (i.e., more flat walking than up/down stairs) was greater, due to the reduced physical capacity of patients, fatigue, and fall-risk concerns (4 of 12 patients needed walking assistance). However, the collected data correspond to a real-life context, both in terms of protocol design (different self-paced activities in an extended area of the hospital) and patients' clinical condition. Another limitation was the small sample size, which may have led to an under-powered statistical analysis. An extension of this work could thus be to validate this approach on a larger number of stroke patients or on other patient populations with mobility restrictions, such as patients suffering from Parkinson's disease or chronic pain.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All the authors were involved in the design and conception of the study. Furthermore, FM and RRG were involved in the data collection. FM, AA, and API were involved in the data processing. All the authors reviewed and approved the content of this publication.