

While AI went mainstream in 2023, so did efforts to regulate its use

This year, artificial intelligence demonstrated exciting leaps in capability and prompted new calls from governments to be careful about how it is used.

Artificial intelligence moved further into the mainstream than ever in 2023, demonstrating unprecedented capabilities while also showing that it can be unreliable and even dangerous.

This was the year ChatGPT came on the scene, prompting fascination, a scramble by businesses to put it to work, and a look under the hood at what is often a messy, unreliable and inaccurate system.

"2023 is, in history, hopefully going to be remembered for the profound changes of (AI) technology as well as the public awakening," AI researcher Fei-Fei Li told the Associated Press. "It also shows how messy this technology is."

She said people wrestled to understand "what this is, how to use it, what’s the impact — all the good, the bad and the ugly."

Some U.S. school districts banned ChatGPT this year over safety and plagiarism concerns.

A school in the U.K., meanwhile, appointed an AI chatbot "principal headteacher" to advise human staff and help draft policies and communications.

States advanced laws regulating the use of AI-generated imagery in political advertising and campaigns, and the Biden administration called on tech companies to put in new guardrails around the use of AI.

Tech companies like Meta said they would hold political advertisers to stricter rules about AI use starting in 2024, which is a U.S. presidential election year.

The FBI told reporters in July that it was concerned not just about the role of AI in generating deceptive advertising and social media posts, but also about its potential to help malicious actors develop chemical or biological agents, or even explosives.

And negotiators in the E.U. reached a deal on the world's first comprehensive AI rules, set to take full effect in 2025, which could serve as a blueprint for regulators elsewhere in the world.