Experts: US needs to take steps to regulate rapidly evolving AI

How worried should Americans be about A.I. and its future potential?

Artificial intelligence is evolving at a dizzying speed. There's been a lot of attention in recent months on the dangers and risks of A.I. systems, especially since the release of OpenAI's ChatGPT program last fall. How worried should Americans be about A.I. and its future potential?

The nonprofit Center for AI Safety posted a statement this month signed by industry leaders, researchers and even celebrities warning that A.I. could one day pose as deadly a risk to humanity as nuclear war and pandemics, eventually leading to global annihilation.

The one-sentence statement said, "mitigating the risks of AI should be a global priority." While experts in the field also said A.I. is a long way from such extreme science-fiction scenarios, they want Congress to regulate it now, before any major mishaps occur.

The government understands the risks. The Biden administration is trying to get out in front of them and plans to issue guidance on A.I. in the next few months. And Congress has been holding a series of hearings on A.I. in recent weeks, although some have argued it's too late to get a head start on the technology now; the proverbial cat is already out of the bag.

Vice President Kamala Harris met with the leaders of four tech companies recently, including Google and Microsoft, which both have their own A.I. systems. Microsoft released a new version of Bing, the first search engine powered by artificial intelligence, and Google has launched its Bard chatbot. 

"AI is one of today's most powerful technologies, with the potential to improve people's lives and tackle some of society's biggest challenges," Harris said in a statement after the meeting in May. 

"At the same time, AI has the potential to dramatically increase threats to safety and security, infringe (on) civil rights and privacy, and erode public trust and faith in democracy," she added. 


There's a sense that the technology could do real harm because it can create fake videos, pictures, songs and artwork. It can write books and term papers. And it could supercharge misinformation and disinformation campaigns by nefarious actors on a level never seen before. 

Kristian Hammond, a professor of computer science at Northwestern University and director of the Center for Advancing Safety and Machine Intelligence, called the idea that A.I. could one day wipe out humanity "shockingly overly dramatic" and a "distraction" from the real issue.

Hammond said the fact that there's no regulation or oversight of the technology at this point is a "real present harm." 

"There's not a lot in the way of thought and regulation around an entire suite of harms, which we know flow from bias systems, systems that are sort of addictive in nature, systems that cause depression, that we understand really well and no one's moving," he said in an interview with Scripps News Live. 

"We actually need to take steps to make A.I. safe. That's it," Hammond said. "We need to make it safe." 

Hammond explained that the critical task is making A.I. systems safe in their day-to-day operation, rather than worrying about extinction.

"Tech executives are looking for regulation but they won't actually do the things they need to do right now that they should be regulated to do." 

Hammond said the companies building these systems are not transparent; they won't reveal how the systems are trained or explain how they decide what they don't want people to see. He also cited concerns about bias in the systems.

"The thing I worry about is us getting stuck in the world of Skynet," Hammond said, referring to the A.I. system in the "Terminator" movie franchise that tries to wipe out humanity. 

"Right now we're in a situation where we're being distracted by the long-term and we're actually not taking action on the short and it's time to actually push really hard to say, no we actually want to have regulations around systems that can cause depression, systems that can cause online addictions, systems that will convince us to do things we don't want to do, systems that are biased. We need regulations over those and we need them today. I'm less worried about actually creating Skynet," Hammond said.