
How artificial intelligence is helping to protect your children online

It's not something parents have to pay for. Rather, it's a service that platforms use.


Monitoring what your child sees online can be difficult, but a new generation of artificial intelligence is helping protect your kids from things like hate speech, extremism, and even grooming.

One of Dave Matli’s biggest concerns as a father is protecting his children from online threats. That’s why he says he started working for Spectrum Labs, a software developer that uses artificial intelligence for content moderation.

“Things like child grooming and hate speech and like radicalization of people, bullying, threats," Matli said. "They use machine learning to get better and better at detecting some of that stuff and then removing it before you ever see it in your own phone and device.”

He says the AI pulls data points from a user's profile, such as how long they've been on the platform, the ages of the two people talking, and the topic of their conversation. It's nothing parents pay for; rather, it's a service that platforms use. Current platforms using the AI include the entertainment site Fandom, children's app maker Kinzoo, and the dating app Grindr.
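To make that concrete, here is a minimal sketch of how profile and conversation signals like those could feed a single risk score. Spectrum Labs has not published its models, so every feature name, label, and threshold below is hypothetical:

```python
# Hypothetical sketch of the kinds of signals described above feeding a
# moderation risk score. Spectrum Labs' actual models and features are not
# public; every name and threshold here is illustrative only.
from dataclasses import dataclass


@dataclass
class ConversationSignals:
    account_age_days: int      # how long the user has been on the platform
    user_age: int              # stated or estimated age of the first user
    peer_age: int              # stated or estimated age of the other user
    topic: str                 # coarse topic label for the conversation


def grooming_risk_score(s: ConversationSignals) -> float:
    """Toy heuristic combining the data points mentioned in the article
    into a single 0-1 risk score."""
    score = 0.0
    if s.account_age_days < 7:                 # brand-new accounts are riskier
        score += 0.3
    if abs(s.user_age - s.peer_age) > 10 and min(s.user_age, s.peer_age) < 16:
        score += 0.5                           # large age gap with a minor
    if s.topic in {"gifts", "secrecy", "meeting offline"}:
        score += 0.2                           # topics often flagged in grooming
    return min(score, 1.0)


if __name__ == "__main__":
    signals = ConversationSignals(account_age_days=3, user_age=35,
                                  peer_age=13, topic="secrecy")
    print(f"risk score: {grooming_risk_score(signals):.2f}")  # -> 1.00
```

In practice, a production system would learn such weights from labeled data with machine learning rather than hand-coding them.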


Matthew Soeth has worked in the tech and social media industry for more than a decade, most recently on the Global Trust and Safety team at TikTok. He now leads safety for Spectrum Labs. He says this AI stands out because it isn't just picking out keywords; it's looking at the whole context.

“Example: if you use a laughing face emoji, there's a good chance you're probably over 40," Soeth said. "But there's different emojis that if you're under 30 or if you're under 18, you use a different set of emojis to indicate, 'Oh, I think that's really funny.' And so looking at those kind of context clues, we can start to identify when someone is being truthful about their age.”
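As a rough illustration of that kind of context clue, the sketch below compares a user's emoji choices against invented generational preferences. The cohort mappings are made up for illustration and are not Spectrum Labs' actual signal:

```python
# Illustrative sketch of the emoji-based age signal Soeth describes: compare
# a user's emoji choices against rough generational preferences. The cohort
# mappings below are invented for illustration only.
from collections import Counter

# Hypothetical "that's funny" emoji preferences by age cohort.
COHORT_EMOJI = {
    "over_40": {"\N{FACE WITH TEARS OF JOY}"},              # 😂
    "under_30": {"\N{SKULL}", "\N{LOUDLY CRYING FACE}"},    # 💀 😭
}


def likely_cohort(messages: list[str]) -> str:
    """Return the cohort whose emoji set best matches the user's messages."""
    counts = Counter()
    for msg in messages:
        for cohort, emojis in COHORT_EMOJI.items():
            counts[cohort] += sum(msg.count(e) for e in emojis)
    if not counts or counts.most_common(1)[0][1] == 0:
        return "unknown"
    return counts.most_common(1)[0][0]


if __name__ == "__main__":
    print(likely_cohort(["that's hilarious \N{SKULL}\N{SKULL}",
                         "\N{LOUDLY CRYING FACE} stop"]))  # -> under_30
```

A signal like this would be one of many combined with profile data before a platform acted on a suspected false age.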

Both Soeth and Matli say that, industry-wide, about 80% of the problems are caused by 5% of users. So, if you can identify and remove that small group, it protects millions of users.
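A quick back-of-the-envelope calculation shows why that concentration matters. The user and violation counts here are assumed purely for illustration:

```python
# Back-of-the-envelope illustration of the 80/5 figure quoted above: if 5% of
# users generate 80% of violations, removing them cuts most of the harm.
# All absolute numbers are assumed for illustration only.
total_users = 1_000_000
total_violations = 100_000

bad_actors = int(total_users * 0.05)                      # 50,000 accounts
bad_actor_violations = int(total_violations * 0.80)       # 80,000 violations

remaining = total_violations - bad_actor_violations
print(f"Removing {bad_actors:,} accounts ({bad_actors / total_users:.0%}) "
      f"eliminates {bad_actor_violations:,} of {total_violations:,} "
      f"violations, leaving {remaining:,} ({remaining / total_violations:.0%}).")
```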

Matli says if the AI recognizes something dangerous, it will notify the police.

"Especially when it comes to child safety," Matli said. "When we detect like signs of child endangerment or what's called CSAM, which is child sexual abuse material, by law, you have to report it to police. But we already have that built in any way to do that."


Because this AI content moderation is a service for platforms, not parents, you wouldn't even know it's working; it all happens behind the scenes.

Platforms may be forced to take protective measures like these as the U.S. pursues more legal steps to protect children online. Soeth says the U.S. faces increasing pressure to pass legislation following Europe's Digital Services Act and Australia's Online Safety Act.

“There's some talk at the national level. Like the Kids Online Safety Act is really looking at how do we better kind of implement or hold platforms accountable to make sure that they're doing what they say they're doing,” Soeth said.