Sign Language Detection using ACTION RECOGNITION with Python | LSTM Deep Learning Model

Want to take your sign language model a little further? In this video, you'll learn how to use action detection to do exactly that! You'll use a keypoint detection model to build a sequence of keypoints, which can then be passed to an action detection model to decode sign language. As part of the model-building process you'll use TensorFlow and Keras to build a deep neural network with LSTM layers that handles the sequence of keypoints (a sketch of the pipeline follows the chapter list below).

In this video you'll learn how to:
1. Extract MediaPipe Holistic keypoints
2. Build a sign language model using an action detection model powered by LSTM layers
3. Predict sign language in real time from video sequences

Get the code:

Chapters
0:00 - Start
0:38 - Gameplan
1:38 - How it Works
2:13 - Tutorial Start
3:53 - 1. Install and Import Dependencies
8:17 - 2. Detect Face, Hand and Pose Landmarks
40:29 - 3. Extract Keypoints
57:35 - 4. Setup Folders for Data Collection
1:06:00 -
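Here is a minimal sketch of the keypoint-extraction step described above, assuming MediaPipe's legacy `mp.solutions.holistic` API. The landmark counts (33 pose, 468 face, 21 per hand) come from MediaPipe's model outputs; zero-padding frames where a face or hand isn't detected is one reasonable fallback, not necessarily the only one:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic  # MediaPipe Holistic solution (legacy API)

def extract_keypoints(results):
    """Flatten all detected landmarks into a single 1662-value vector.

    Pose: 33 landmarks x (x, y, z, visibility) = 132
    Face: 468 landmarks x (x, y, z)            = 1404
    Each hand: 21 landmarks x (x, y, z)        = 63
    """
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

# Grab one webcam frame and run Holistic on it.
cap = cv2.VideoCapture(0)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    ret, frame = cap.read()
    if ret:
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)
        print(keypoints.shape)  # (1662,) per frame
cap.release()
```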
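And a hedged sketch of what the LSTM-based action detection model could look like in TensorFlow/Keras. The 30-frame sequence length, the 1662-value keypoint vectors, and the example sign classes are illustrative assumptions, not specifics from the description; stacking LSTM layers lets the network model how keypoints move across a sequence rather than classifying single frames:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed setup: 30-frame sequences of 1662 keypoint values per frame,
# and three hypothetical example signs (replace with your collected classes).
SEQUENCE_LENGTH, NUM_KEYPOINTS = 30, 1662
actions = ['hello', 'thanks', 'iloveyou']

model = Sequential([
    # Stacked LSTMs consume the keypoint sequence; the last LSTM returns
    # only its final hidden state, which the dense head classifies.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQUENCE_LENGTH, NUM_KEYPOINTS)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(len(actions), activation='softmax'),  # one probability per sign
])
model.compile(optimizer='Adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# Real-time prediction runs on a sliding window of the last 30 frames of
# extracted keypoints; random data stands in for a real window here.
window = np.random.rand(1, SEQUENCE_LENGTH, NUM_KEYPOINTS).astype('float32')
probs = model.predict(window)
print(actions[int(np.argmax(probs))])
```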