Language barriers are very much still a real thing. We can take baby steps to help close that gap. Speech-to-text and translators have made it a heap easier. But what about those who don't speak or can't hear? Well…you can use Tensorflow Object Detection and Python to help close that gap, and in this video you'll learn how to take the first steps towards doing just that by building an end-to-end custom object detection model that translates sign language in real time.

In this video you'll learn how to:
1. Collect images for deep learning using your webcam and OpenCV
2. Label images for sign language detection using LabelImg
3. Set up the Tensorflow Object Detection pipeline configuration
4. Use transfer learning to train a deep learning model
5. Detect sign language in real time using OpenCV

Get the training template here: https://github.com/nicknochnack/RealTimeObjectDetection

Other Links Mentioned in the Video
Face Mask Detection Video: https://youtu.be/IOI0o3Cxv9Q
LabelImg: https://github.com/tzutalin/labelImg
Installing the Tensorflow Object Detection API: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html

Oh, and don't forget to connect with me!
LinkedIn: https://www.linkedin.com/in/nicholasrenotte
Facebook: https://www.facebook.com/nickrenotte/
GitHub: https://github.com/nicknochnack

Happy coding!
Nick

P.s. Let me know how you go and drop a comment if you need a hand!
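If you want a head start on step 1, here's a minimal sketch of collecting labelled webcam images with OpenCV. The sign labels, image counts, and folder layout below are placeholders I've chosen for illustration — check the training template repo for the exact setup used in the video.

```python
# Sketch of step 1: collecting labelled images from a webcam with OpenCV.
# Labels and folder layout are illustrative placeholders, not the video's exact values.
import os
import time
import uuid

LABELS = ["hello", "thanks", "yes"]  # example signs, purely illustrative
IMAGES_PATH = os.path.join("Tensorflow", "workspace", "images", "collectedimages")

def image_path(label: str) -> str:
    """Build a unique file path for one captured image of the given sign."""
    return os.path.join(IMAGES_PATH, label, f"{label}.{uuid.uuid1()}.jpg")

def collect(num_images: int = 5) -> None:
    """Capture num_images frames per label from the default webcam."""
    import cv2  # imported here so the path helper works even without OpenCV installed
    for label in LABELS:
        os.makedirs(os.path.join(IMAGES_PATH, label), exist_ok=True)
        cap = cv2.VideoCapture(0)          # default webcam
        print(f"Collecting images for {label} - strike the pose!")
        time.sleep(3)                      # time to get the sign ready
        for _ in range(num_images):
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(image_path(label), frame)
            time.sleep(2)                  # pause between shots so you can vary the pose
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    collect()
```

Varying your hand position and lighting between shots gives the model more diverse training data, which pays off later in the transfer-learning step.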