An iOS app that can detect human emotions, objects, and a lot more, built using the Core ML image-detection API.

## Inspiration
Inspired by blind people who use echolocation to "see" the world around them, we developed an interface that lets a blind person hear what is around them, feel more connected to everyday life, and enjoy its small experiences.

## What it does
The interface is connected to a camera (which could later be integrated into cameras on pen tips) that records real-time video of events, parses it into individual frames, analyzes them piece by piece, and finally dictates what it sees through a text-to-speech interface. A sketch of this pipeline is included after this section.

## How we built it
To build the interface, we used Apple's machine learning APIs (Core ML), which ship with pre-trained models that can be used out of the box. While experimenting, however, we learned that real-time video and images contain a lot of noise, and that training a model robust enough for practical use would take a long time. We therefore created a small dataset of real-time images (taken with phone cameras) and further trained the available model to a reasonable degree of accuracy; a training sketch also follows this section. With more time, this approach can be extended to larger and more diverse datasets, achieving its intended purpose.

## Challenges we ran into
As previously mentioned, we learned that the time it takes to train [More]
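
Below is a minimal Swift sketch of the recognition-and-dictation pipeline described under "What it does": a Vision request runs a Core ML classifier on a camera frame and an `AVSpeechSynthesizer` speaks the top result. The model class (`MobileNetV2`) and names like `SceneNarrator` are illustrative assumptions, not the project's actual identifiers.

```swift
import Vision
import CoreML
import AVFoundation

// Sketch of the frame -> classification -> speech pipeline.
// Assumes an Xcode-generated model class (MobileNetV2 here as a stand-in).
final class SceneNarrator {
    private let synthesizer = AVSpeechSynthesizer()

    private lazy var request: VNCoreMLRequest? = {
        guard let mlModel = try? MobileNetV2(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: mlModel) else { return nil }
        return VNCoreMLRequest(model: visionModel) { [weak self] request, _ in
            // Take the highest-confidence label and dictate it.
            guard let top = (request.results as? [VNClassificationObservation])?.first,
                  top.confidence > 0.5 else { return }
            self?.speak(top.identifier)
        }
    }()

    // Classify a single video frame (a CVPixelBuffer from the camera feed).
    func describe(frame pixelBuffer: CVPixelBuffer) {
        guard let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }

    // Speak the recognized label through the text-to-speech interface.
    private func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synthesizer.speak(utterance)
    }
}
```

In practice, `describe(frame:)` would be called from the camera's capture-output callback for each (or every nth) frame.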
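
And a rough sketch of how a small set of real-world photos can be used to further train an image classifier with Create ML on macOS, as described under "How we built it". It assumes the photos are organized into one folder per label; the paths and output model name are placeholders.

```swift
import CreateML
import Foundation

// Placeholder paths: each directory contains one subfolder per label.
let trainingDir = URL(fileURLWithPath: "/path/to/training-images")
let evaluationDir = URL(fileURLWithPath: "/path/to/test-images")

// Train via Create ML's transfer learning on top of a built-in feature extractor.
// (MLImageClassifier.ModelParameters can be supplied to tune iterations or add
// image augmentation, which helps with noisy real-time camera frames.)
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Check accuracy on held-out images, then export a Core ML model for the app.
let metrics = classifier.evaluation(on: .labeledDirectories(at: evaluationDir))
print("Evaluation error: \(metrics.classificationError)")
try classifier.write(to: URL(fileURLWithPath: "/path/to/SceneClassifier.mlmodel"))
```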