Abstract: The convolutional operation is constrained by fixed traversal rules, which limits the extraction of feature information from individual skeletal nodes and prevents effective fusion of feature information between adjacent nodes, resulting in limited expressive power. To address this issue, a gesture recognition neural network based on a Feature Displacement Module is proposed. The network adopts the architecture of a conventional spatiotemporal graph convolutional network and replaces the standard spatiotemporal convolution module with the Feature Displacement Module to fuse feature information between adjacent nodes. By reordering channels through displacement, the module extracts global feature information from the skeletal nodes, enabling efficient and accurate classification of gestures. The Feature Displacement Module is validated on the public DHG-14/28 and FPHA datasets, achieving classification accuracies of 95.11%, 93.01%, and 92.67% on the 14-class, 28-class, and FPHA gesture recognition tasks, respectively. The experimental results demonstrate that the proposed model mines global feature information more effectively and achieves excellent performance on common gesture recognition datasets.
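The abstract does not specify how the channel displacement is implemented; a minimal sketch of the kind of operation it describes, assuming a Shift-GCN-style scheme in which contiguous channel groups of each joint are replaced by the corresponding channels of its neighboring joints (the function name, the per-joint feature layout, and the toy skeleton are all illustrative assumptions, not the paper's actual design), might look like:

```python
import numpy as np

def feature_displacement(x, neighbors):
    """Displace channel groups between adjacent skeletal joints.

    x: (V, C) array of per-joint features (V joints, C channels).
    neighbors: hypothetical skeleton graph, one neighbor list per joint.
    For joint v, the channels are split into len(neighbors[v]) contiguous
    groups, and group g is overwritten with the same channels taken from
    the g-th neighbor, so each joint's feature vector mixes information
    from all of its adjacent joints without an explicit convolution.
    """
    V, C = x.shape
    out = x.copy()
    for v in range(V):
        nbrs = neighbors[v]
        if not nbrs:
            continue  # isolated joint keeps its own features
        groups = np.array_split(np.arange(C), len(nbrs))
        for g, cols in enumerate(groups):
            out[v, cols] = x[nbrs[g], cols]
    return out

# Toy 3-joint chain 0-1-2 with 4 channels per joint.
x = np.arange(12, dtype=float).reshape(3, 4)
neighbors = [[1], [0, 2], [1]]
y = feature_displacement(x, neighbors)
# Joint 1 now holds channels 0-1 from joint 0 and channels 2-3 from joint 2.
```

A stack of such displacement steps interleaved with pointwise (1x1) transforms lets information propagate beyond immediate neighbors, which is one way the "global feature information" mentioned in the abstract could be accumulated.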