Lawmakers in the House of Representatives on Thursday introduced a bill that would require disclaimers to be attached to AI-generated content online.
The bill, which has support from both Democrats and Republicans, would require AI-generated content to be marked with digital signatures in its metadata. AI content on platforms like YouTube or TikTok would also have to carry visible disclaimers that users would recognize.
“We've seen so many examples already, whether it's voice manipulation or a video deepfake. I think the American people deserve to know whether something is a deepfake or not,” said Democratic Rep. Anna Eshoo of California, one of the bill's sponsors.
Florida Republican Rep. Neal Dunn, also a sponsor, said a rule requiring disclaimers on AI content would be a “simple safeguard” for all of the audiences that AI-generated material reaches.
The Federal Trade Commission would implement the final rule, and violators could face civil lawsuits.
The new legislation joins other efforts by lawmakers and tech companies worldwide to rein in the new wave of AI-generated content.
A group of large tech and social media companies has already signed on to voluntary measures to manage AI content. In July 2023, companies including Amazon, Google, Meta, Microsoft and OpenAI agreed to guidelines set by the Biden administration that include third-party cybersecurity testing, information sharing with researchers and regulators, and consumer disclosure practices like those Congress is now weighing.
Later that year, Google and Meta announced new rules requiring disclosure labels on political ads that include AI-generated elements.
President Biden has signed an executive order regulating the use of AI by federal agencies.
And EU regulators have already adopted comprehensive rules for the use of AI, due to take full effect in 2025. Broadly, the rules will restrict or ban AI practices that pose risks to the public, hold AI developers to specific safety obligations and establish an EU governing body to carry out continued oversight.