Abstract: In recent years, deep learning methods have performed well in human activity recognition. They train and classify on time-series data from wearable sensors such as gyroscopes and accelerometers after preprocessing and data-level fusion. To address the limitations of data-level fusion when recognizing activities from multiple sensors, this paper proposes a feature-level fusion method combining LSTM and CNN. The method feeds each sensor's data independently through an LSTM layer and then a convolutional layer for feature extraction, and then concatenates the features of all sensors for activity classification. The average F1 scores of this method on the three public datasets UCI-HAR, PAMAP2, and OPPORTUNITY are 96.06%, 96.17%, and 94.44%, respectively. Experimental results show that the method proposed in this paper achieves better accuracy in multi-sensor recognition of human movements.
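The per-sensor LSTM-then-CNN branches followed by feature concatenation described above can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: all layer sizes (hidden width, number of convolutional filters, kernel size) and the single-LSTM-layer/single-conv-block structure are assumptions for clarity.

```python
import torch
import torch.nn as nn

class FusionHAR(nn.Module):
    """Feature-level fusion sketch: each sensor stream has its own LSTM
    followed by a 1-D convolutional block; the per-sensor feature vectors
    are concatenated before the final classifier. Sizes are illustrative."""
    def __init__(self, n_sensors=2, n_channels=3, n_classes=6,
                 hidden=64, conv_out=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.ModuleDict({
                "lstm": nn.LSTM(n_channels, hidden, batch_first=True),
                "conv": nn.Sequential(
                    nn.Conv1d(hidden, conv_out, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveMaxPool1d(1),  # pool over time -> fixed-size feature
                ),
            })
            for _ in range(n_sensors)
        )
        # Feature-level fusion: classifier sees the concatenated features.
        self.classifier = nn.Linear(n_sensors * conv_out, n_classes)

    def forward(self, streams):
        # streams: list of (batch, time, channels) tensors, one per sensor
        feats = []
        for branch, x in zip(self.branches, streams):
            h, _ = branch["lstm"](x)           # (batch, time, hidden)
            h = h.transpose(1, 2)              # (batch, hidden, time) for Conv1d
            feats.append(branch["conv"](h).squeeze(-1))  # (batch, conv_out)
        return self.classifier(torch.cat(feats, dim=1))

# Example: two sensors (e.g. accelerometer and gyroscope), 3 axes each,
# windows of 128 samples, batch of 4.
model = FusionHAR(n_sensors=2, n_channels=3, n_classes=6)
logits = model([torch.randn(4, 128, 3), torch.randn(4, 128, 3)])
```

The key design point is that fusion happens after feature extraction (concatenating learned per-sensor features) rather than before it (stacking raw sensor channels into one input), which is the contrast the abstract draws with data-level fusion.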