Artificial Intelligence and Advancing Synthetic Media (Deep Fakes)

[Image: 3D rendered depiction of a digital avatar]

Artificial intelligence (AI) is one of the most consequential technologies in development today. As we have discussed in previous articles on AI regulation and the Internet of Behaviors (IoB), AI carries enormous potential, both helpful and dangerous. Most modern AI systems are built on a technique called deep learning: the model takes numerical values as input, passes them through hidden layers of weighted calculations, and returns numerical outputs. In a process called training, the model repeatedly adjusts those hidden weights to reduce the error in its outputs, gradually improving its predictions.
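To make that loop concrete, here is a minimal sketch in Python with NumPy of a tiny network with one hidden layer trained by gradient descent. The task, layer sizes, and learning rate are all illustrative assumptions for this sketch, not taken from any particular system.

```python
import numpy as np

# A one-hidden-layer network trained by gradient descent.
# Toy task (assumed for illustration): learn XOR of two inputs.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for the hidden and output layers.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: inputs -> hidden activations -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error signal for each layer (squared-error loss,
    # up to a constant factor absorbed into the learning rate).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # "Adjust the hidden weights": nudge each weight against its gradient.
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # outputs should move toward [0, 1, 1, 0] as training progresses
```

Real deep learning systems work the same way, just with millions or billions of weights, many more layers, and images or audio in place of these four toy rows.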

Recently, there’s been a lot of discussion around an application of deep learning called synthetic media, better known as deep fakes. A deep learning system trains on videos or pictures of a person’s face and mannerisms until it can convincingly superimpose that face onto another person’s body within a video. The same technique can synthesize a voice that imitates a specific person. This technology has a lot of potential to be dangerous and is, at the very least, morally suspect. In the future, misinformation could easily be spread by faking a politician’s speech; even private citizens could be at risk.
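One common way such systems are built, sketched below in Python with PyTorch, is an autoencoder with a shared encoder and one decoder per identity: both decoders learn from the same compressed face representation, so feeding person B’s frame through person A’s decoder produces A’s face with B’s pose and expression. The layer sizes and names here are illustrative assumptions, not a description of any specific deep fake tool.

```python
import torch
import torch.nn as nn

# Shared encoder: compresses a 64x64 face image into a feature map.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

# Per-identity decoder: reconstructs a face from the shared features.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's face
decoder_b = Decoder()  # trained only on person B's face
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to rebuild its own person's face
    from the shared encoding."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The swap: encode a frame of person B, decode with person A's decoder.
with torch.no_grad():
    frame_b = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
    fake = decoder_a(encoder(frame_b))
```

Because the encoder must serve both identities, it learns to capture pose, expression, and lighting rather than identity itself, which is what makes the swap possible.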

In a recent episode of 60 Minutes, journalist Bill Whitaker highlighted the risks of deep fakes, volunteering his own likeness to show just how natural the end product can be. Whitaker also discussed the technology as a vector of misinformation; it holds the potential to push our already polarized society over the edge. These concerns are not just science fiction: a recent Forbes article reported that criminals used deep fake technology to impersonate a bank director’s voice and steal over $35 million.

Deep fakes already pose serious threats to companies and governments, and as the technology improves, its realism, and with it its threat, will only grow. Within a few years it may be impossible to distinguish genuine film or audio from deep fake media, allowing misinformation to spread like wildfire on social media. As we move forward, it may be critical to regulate deep fake development, even at some potential cost to individual freedoms.

Deep fakes are becoming more and more realistic, a trend that threatens the private sector, government systems, and society at large. Deep learning is potentially one of the most valuable technologies to date, with promising medical and informational applications, but as with any new technology, in the wrong hands it can be used nefariously. In the coming years, it may be necessary to control deep fake development to preserve society’s trust in virtual media as a whole. For now, the best precaution the public can take is simply to know that deep fakes exist, to question what they see, and not to take it at face value.
