Machine Learning Bias in Application

Machine learning is powerful because models can be taught to respond to data they have never seen before. Through a process of statistical analysis called “training,” these models pick up on patterns in data sets and apply those patterns to completely new examples.

A famous example of machine learning in action is its application to the EMNIST database. The EMNIST database is a collection of human-drawn characters, including the digits 0-9, that has been widely used to train image classification programs.

Essentially, a computer can learn how to recognize and classify imperfect, handwritten numbers from this database. After such a model has been effectively trained, the program should be able to classify a fresh example it has never seen before.
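As a rough illustration, here is a minimal sketch of what training such a digit classifier can look like in PyTorch, using torchvision’s EMNIST digits split. The network architecture, batch size, and learning rate are illustrative assumptions, not a prescribed recipe.

```python
# Minimal sketch: training a digit classifier on EMNIST with PyTorch.
# Assumes torch and torchvision are installed; hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load the "digits" split of EMNIST (0-9), downloading it if needed.
train_set = datasets.EMNIST(root="data", split="digits", train=True,
                            download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A small feed-forward network: 28x28 pixels in, 10 class scores out.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training epoch: adjust the weights to better fit the examples.
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

After a few epochs of this loop, the same model can be asked to classify handwritten digits it has never seen before.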

This is what makes AI and machine learning so powerful and likely to revolutionize technology as we know it. However, this same strength is potentially dangerous in large-scale applications.

Trained on Facebook posts and comments, Meta’s new AI chatbot “can convincingly mimic how humans speak on the internet,” according to CNN. However, as we all know by now, some people on Facebook can say some questionable things.

Not only did the chatbot recite many conspiracy theories propagated on the platform, but it also attacked the company that created it. Vice reports the bot even parroted “since deleting Facebook my life has been much better.”

In other examples, racial biases present in datasets are amplified through the training process and skew the model’s predictions.

Fundamentally, AI’s greatest strength leads to inherent unpredictability in how it might respond to data. Whether from a marketing standpoint or an ethical one, it’s necessary to spend time understanding the potential biases of a dataset before using it to train any model.
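One simple starting point for that kind of audit is checking how groups and labels are represented in the data before any training happens. Here is a minimal sketch using pandas; the file name and the “group” and “label” columns are hypothetical placeholders for whatever your dataset actually contains.

```python
# Minimal sketch of a pre-training dataset audit with pandas.
# The file name and the "group" / "label" columns are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")

# How balanced are the demographic groups the model will learn from?
group_counts = df["group"].value_counts(normalize=True)
print("Group representation:\n", group_counts)

# Does the label distribution differ sharply across groups?
label_by_group = df.groupby("group")["label"].value_counts(normalize=True)
print("Label distribution per group:\n", label_by_group)

# Flag any group making up less than 5% of the data (threshold is arbitrary).
underrepresented = group_counts[group_counts < 0.05]
if not underrepresented.empty:
    print("Warning: underrepresented groups:", list(underrepresented.index))
```

A check like this won’t catch every bias, but it surfaces obvious imbalances before the training process has a chance to amplify them.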

 

Artificial Intelligence and Advancing Synthetic Media (Deep Fakes)

[Image: 3D rendered depiction of a digital avatar]

Artificial Intelligence

Artificial intelligence (AI) is one of the most consequential technologies currently being developed. As we have discussed in previous articles regarding Artificial Intelligence regulation and the Internet of Behaviors (IoB), AI has a lot of potential, both helpful and dangerous. Modern AI systems primarily run on a process called deep learning. A deep learning model takes several numerical values as input, passes them through hidden layers of weighted calculations, and returns numerical outputs. In a process called training, the model repeatedly adjusts those hidden weights to reduce the error in its outputs, gradually improving its predictions.
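To make that concrete, here is a minimal sketch of the core loop in NumPy: a single layer of weights maps inputs to outputs, and repeated small weight updates shrink the error. The numbers, shapes, and learning rate are illustrative assumptions, not details from the article.

```python
# Minimal sketch of deep learning's core loop: weighted analysis plus
# iterative weight updates. All values here are illustrative.
import numpy as np

x = np.array([0.5, -1.0, 2.0])     # numerical inputs
target = np.array([1.0, 0.0])      # desired numerical outputs
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))        # the weights the model will learn

for step in range(200):
    y = W @ x                      # weighted analysis: inputs -> outputs
    error = y - target             # how far off the outputs are
    W -= 0.1 * np.outer(error, x)  # training step: nudge weights to cut error

print(np.round(W @ x, 3))          # outputs now approximate the target
```

Real deep learning stacks many such layers with nonlinearities between them, but the principle is the same: guess, measure the error, and adjust the weights.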

Deep Fakes

Recently, there’s been a lot of discussion around an application of deep learning called synthetic media, better known as deep fakes. A highly complex deep learning system trains itself on videos or pictures of a person’s face and mannerisms, learning to essentially photoshop that person’s face onto another’s body within a video. The same technique can synthesize a voice and imitate a specific person. This technology has significant potential for harm and is, at the very least, morally suspect. In the future, misinformation could easily be spread by faking a politician’s speech, and even private citizens could be at risk.

The Risks

In a recent episode of 60 Minutes, journalist Bill Whitaker highlighted the risks of deep fakes. He volunteered himself to show just how natural the end product can be. Whitaker also discussed the potential applications mentioned above as a vector of misinformation. Machine learning holds the potential to push our already polarized society over the edge. These concerns are not just science fiction, either: a recent Forbes article reported that criminals used deep fake technology to impersonate a bank director’s voice and steal over $35 million.

The Spread of Misinformation

Deep fakes are already posing serious threats to companies and governments. As the technology improves, its realism, and hence its threat, will only increase. Within a few years it may be impossible to distinguish genuine film or audio from deep fake media, which could cause misinformation to spread like wildfire on social media. As we move forward, it may be critical to regulate deep fake development, even if doing so potentially infringes on individual freedoms.

The Future

Deep fakes are becoming more and more realistic, a trend that threatens the private sector, government systems, and even our society. Although deep learning is potentially one of the most valuable technologies to date, there is also cause for concern. Deep learning could do real good in medical and informational applications, but, as with any new technology, in the wrong hands it could be used nefariously. In the coming years, it may be necessary to regulate deep fake development to preserve our society’s trust in virtual media as a whole. The best precaution the public can take is simply knowing that deep fakes exist, questioning what they see, and not taking it at face value.
