
FBI worried about AI and disinformation ahead of the 2024 election

The FBI is closely monitoring the role that artificial intelligence may play in the 2024 presidential election.

After months of warnings from tech executives about the dangers of artificial intelligence, the Federal Bureau of Investigation has a new list of concerns.

The agency's biggest fears are not only about what the technology does but also about who is using it.

During a rare background briefing call with reporters, a senior FBI official, who acknowledged that the bureau has not done significant outreach on the topic of AI, described a concerning situation, or a "threat landscape," as the FBI calls it.

He said that China is looking to steal U.S. AI technology and data, then use it not just to advance its own AI programs but to influence Americans.

He also said that the FBI is closely monitoring the role that AI may play in the 2024 election and is concerned about the spread of disinformation and deepfake videos.

He said that criminals and terrorists are seeking to use AI to simplify the production of dangerous chemical and biological substances and to increase their potency.


Scripps News asked about explosives, and the official said that a variety of criminal and national security actors, from violent extremists to traditional terrorists, are using AI to try to devise ways to create different types of explosives.

He said, "There have been people who have successfully elicited recipes or instructions for creating explosives."

He also said that AI is a force multiplier for crafting phishing emails and for carrying out other cyberattacks. He said the FBI has found malware-infected, AI-generated websites designed to target users, including sites with more than a million followers.

The bottom line, the FBI says, is that many of these threats now require fewer people, less expertise, and less time, which means a much lower barrier to entry.

Furthermore, the FBI is spending some of its time working on determining which online content is synthetically generated by AI, partnering with both private companies and academia. But as the official said, the technology is advancing quickly, and it is hard to stay on top of.