Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content.

Chances are you’ve seen a deepfake; Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of these computer-generated replications. A deepfake is a video or audio clip in which deep learning models generate footage of people saying and doing things that never actually happened. A good deepfake can chip away at our ability to discern fact from fiction, testing whether seeing really is believing.

The "deep" part of a deepfake often relies on a specific machine learning technique called a generative adversarial network, or GAN. Two models compete, each trying to outsmart the other: one, the generator, tries to create a convincing image of a face, while the other, the discriminator, tries to detect whether that image is fake. The end result can be a convincingly realistic forgery (a toy sketch of this setup appears at the end of this description).

Deepfakes first started to pop up in 2017, after a Reddit user posted videos that spliced famous actresses into pornography. Today these videos still predominantly target women, but the net has widened to include politicians saying and doing things that never happened. In June 2019, the House Intelligence Committee held an open hearing to address the national security challenge presented by deepfakes.
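For readers curious what that generator-versus-discriminator game looks like in practice, below is a minimal, hypothetical sketch of a GAN training loop. It assumes Python with PyTorch and uses toy 2-D points as a stand-in for face images; it only illustrates the adversarial setup the description mentions, not how production deepfake tools are actually built.

```python
# Toy GAN: a generator learns to mimic a simple "real" data distribution
# while a discriminator learns to tell real samples from generated ones.
# Illustrative only -- real deepfake models work on images/video and are far larger.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8   # size of the random noise fed to the generator
DATA_DIM = 2     # "real" data: 2-D points standing in for face images

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),        # logit: higher means "looks real"
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def real_batch(n=64):
    # Stand-in for genuine footage: Gaussian samples centred at (3, 3).
    return torch.randn(n, DATA_DIM) + 3.0

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1)) +
              loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call fakes "real".
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

# After training, generated points should cluster near (3, 3): the generator
# has learned to imitate the "real" distribution well enough to fool its critic.
print(generator(torch.randn(5, LATENT_DIM)))
```

The key design choice is the competing objectives: the discriminator is rewarded for separating real from fake, while the generator is rewarded for producing samples the discriminator mislabels as real, which is the adversarial dynamic that lets a well-trained GAN produce convincing output.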