Advances in Artificial Intelligence (AI) are driving discussions of technology’s threat to humankind into the mainstream. Science fiction classics like The Terminator and 2001: A Space Odyssey depict rogue, fully sentient AI turning against their inventors. The question on everyone’s mind, though, is whether these sci-fi classics foreshadow real scientific possibilities. Recently, the European Union proposed AI regulation aimed at reducing the likelihood of such outcomes. The bill would require developers and users of “high-risk” AI systems to keep detailed records, increase internal transparency, and comply with data safety regulations. The question is whether regulation like this is necessary, and whether anything similar will be implemented in the US.
The risks of AI have been widely discussed: excessive job automation concerns some, while the potential threats to human freedom and safety pique the interest of others. It is worth keeping in mind that even the most mundane project can run into trouble when no time is taken to weigh its intended purpose against its actual impact. We will continue to see jobs automated, but the drastic, sci-fi-esque future of an AI takeover remains a product of imagination. As we progress toward a future where more jobs are automated, we may reach a point where labor is less of an individual necessity.
However, because AI technology is advancing so quickly, we should be considering the ramifications of this potentially dangerous technology now. As mentioned in our article about AI bias, machine learning carries other risks that need to be managed: because AI is such an impactful technology, even minor biases can have massive real-world consequences. And if AI does advance to a stage where it becomes a genuine threat, we need regulations and safety measures already in place to meet that risk.
The consensus is that AI needs to be regulated, but also explored and tested. As one of the most promising and revolutionary technologies, it could be harmful to heavily hamper its development, even in exchange for regulations that could prove beneficial. The challenge becomes planning for what cannot yet be imagined. The US is currently discussing what such regulation might look like; lawmakers will need to strike a cohesive balance between safety and innovation that will best serve the country into the future.