Artificial Intelligence

Biden administration calls for more consumer protections in AI tools

The Commerce Department wants feedback and recommendations for making AI tools safer for consumers.

Text from the ChatGPT website
Richard Drew / AP

The Biden administration wants to do more to make sure AI tools like the ChatGPT text generator are safe for consumers.

The Commerce Department says it will spend the next two months collecting feedback and recommendations on how to protect consumers, who are gaining ever-wider access to AI tools that generate text, images and video.

"There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly," said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

AI programs like ChatGPT are already gaining and refining abilities faster than the government can keep pace.

In 2022, for example, the White House proposed an AI Bill of Rights to protect consumers from discrimination and data leaks. It would give consumers access to humans in the loop who can explain how AI systems work, or take over for them if they’re not working properly.

But that measure doesn’t have the legal teeth to back up its goals, and it came out before the newest AI tools hit their stride and entered the mainstream. President Joe Biden last week said that tech companies need to make sure their products are safe to use before the public gets access, and signaled to Congress that he wanted to see new legislation to help protect children and regulate data collection as AI gains ground.

Canada is investigating the company behind ChatGPT after it received a privacy complaint about the AI-powered chatbot.

Italy, meanwhile, blocked ChatGPT outright last week while it investigates whether the tool ran afoul of the European Union's strict data privacy laws. Other data regulators in the EU say they are investigating potential risks as well.

San Francisco-based OpenAI, which develops ChatGPT, says it is addressing Italy's concerns and continually works to scrub personal information from its products.