Abstract
Human activity recognition (HAR) is the process of interpreting human motion from sensor data. HAR has many human-centric applications, notably as an assistive service in eldercare and healthcare. However, because raw sensor data are noisy, traditional approaches require domain analysis and signal processing to extract features before fitting machine learning models. Recent advances in deep learning make it possible to learn these features automatically rather than handcrafting them, and techniques such as CNNs and RNNs are now used extensively in this area. In this paper, we present a branched CNN-LSTM architecture for human activity recognition that yields state-of-the-art results. Experiments on the SHOAIB and UCI HAR datasets show that our model outperforms traditional approaches.