We're Failing Kids Online. Here's Why.
- Sarah DeGue
- Mar 5
- 5 min read
Part 1 of a 3-Part Series on Protecting Kids in Digital Spaces
A quick note if we're just meeting: Welcome! You're just in time.

Yesterday, I launched EVOLVE Violence Prevention after months of planning and creating. I am sleep-deprived, coffee-fueled, and... INCREDIBLY EXCITED! I built EVOLVE to help organizations translate violence prevention evidence into strategies that work, and I hope it will help you — the people doing the real work. More on that at www.evolveprevention.com — but first, this.

Last week, I had the privilege of speaking (along with Senator Blumenthal and a fantastic panel) at the launch of KidSafeHQ, a new initiative from The Rowan Center that provides parents, educators, and communities with one accessible, trustworthy place to find resources for keeping kids safe online. If you haven't seen it yet, check it out at kidsafehq.com. It's fantastic.
It was the kind of event that reminds you why this work matters — and also how much more work remains to keep our kids safe from harm online.
That's what this series is for. Despite enormous amounts of attention, energy, and genuine concern (from parents, advocates, and some policymakers, though notably not from the tech companies), we are still failing kids online. We're failing because we keep misreading the problem. Here's what I mean.
We're failing because we're solving the wrong problem.
The thing that matters most — and that I think gets missed most often — is this: the internet is not a cause of violence against children. It's just a new — and much more effective — tactic.
The people who groom children, exploit them sexually, bully them into crisis, or pull them toward extremist ideologies are not a new species of offender created by social media or by spending 15 hours a day online (though neither helps!). In most cases, they are people who were already at risk of causing harm — and who found the internet to be a faster, cheaper, easier, more anonymous, and vastly higher-reach way to do it.
That distinction matters because it changes where we look for solutions. When we treat the internet as the cause, the answer is restriction — block the apps, filter the content, limit the screen time. And while those things have their place (and parents should use them, or at least try), they protect the child in front of you from harms that already exist. They do nothing to reduce the number of people who want to cause that harm or their ability to do it to someone else.
If we want to actually move the needle, we have to go further upstream: to the perpetrators, the platforms, the communities, and the conditions that create risk in the first place.
We're failing because the threat has outpaced our response.
The harms themselves — exploitation, bullying, manipulation, radicalization — are not new. What has changed is their scale and speed, and our prevention systems have not kept pace.
Reports of child sexual abuse material to the National Center for Missing & Exploited Children increased 35% during the COVID-19 pandemic alone, reaching 29.4 million reports in a single year. Sextortion — where a perpetrator obtains an intimate image and uses it as leverage for money or more images — has become one of the fastest-growing forms of exploitation, increasingly targeting teenage boys in a pattern that is relatively new and deeply alarming. One person, operating from anywhere in the world, can now reach thousands of children with minimal effort and very little risk of being caught.
And then there's AI. We are now dealing with tools that can generate convincing fake images of real children, automate grooming conversations at scale, and make perpetrators dramatically harder to identify and trace. The threat hasn't just grown — it has gotten smarter and faster. What we're doing to prevent it largely hasn't.
We're failing because we're ignoring a key driver of risk.
The U.S. Surgeon General's 2023 advisory documented an epidemic of loneliness and social disconnection among American youth. That is a mental health crisis, yes. It is also a child safety crisis. And we are not treating it like one.
Lonely kids are more vulnerable kids. Young people without strong offline connections are more likely to seek belonging online, spend more time there, and miss the warning signs of manipulation — because the attention feels good and connection feels scarce. They're more likely to disclose personal information and engage with people they don't know. That vulnerability is real.
But loneliness doesn't just create victims. It creates perpetrators, too. Extremist recruiters aren't primarily selling an ideology — they're selling significance, belonging, and purpose to young people who feel they have none. The same unmet needs that make a child vulnerable to a groomer make a teenager vulnerable to radicalization.
It's the same dynamic, from two different directions.
We're failing because we've separated online and offline violence.
Our systems treat online harm and offline harm as separate problems requiring separate solutions — separate programs, separate funding streams, separate conversations. The research says that's wrong.
Cyberbullying escalates into in-person violence. Online grooming leads to contact sexual abuse (remember that? It still happens). Radicalization online produces real-world violence. And the risk factors driving all of it — loneliness, unhealthy relationship norms, poor social-emotional skills, lack of trusted adults — are the same ones we've been studying in the context of offline violence for decades. The perpetrators aren't different, either: in many cases, someone at risk of committing sexual violence offline is the same person who may use the internet to find victims, distribute abuse material, or groom children remotely.
This means we are leaving enormous leverage on the table. Everything we know about preventing sexual violence, reducing youth aggression, and building healthy relationships offline also reduces the pool of people likely to harm children online. Programs that address these root causes protect children in both worlds — but only if we stop treating online safety as a separate lane.
We're failing because we've put the burden in the wrong place.
Even if we get all of the above right — even if we invest seriously in upstream prevention, address loneliness, and integrate online and offline safety — we will still face a problem we cannot prevention-program our way out of. Perpetrators don't just come from our communities. They come from everywhere in the world. A child in any American city can be targeted by someone operating from another country entirely, well beyond the reach of any school curriculum or community initiative we design.
This is the hard limit of prevention — and it's the most important argument for why platform accountability and policy solutions aren't optional extras. They are the only tools with anything close to global reach. No family, no school, no community can out-parent a transnational criminal network. Asking them to is not a strategy. It's an abdication of responsibility by the systems and institutions that actually have the power to act at scale.
We need prevention and we need platform accountability and we need policy — because no single approach is sufficient on its own. In Part 2, I'll get into what the evidence shows about what prevention can actually accomplish. And in Part 3, I'll lay out what each sector — parents, schools, tech companies, and governments — needs to do differently.
Sarah DeGue, PhD is the founder of EVOLVE Violence Prevention.
EVOLVE helps organizations translate violence prevention evidence into strategies that work in the real world. www.evolveprevention.com
Want to receive the next article in your inbox? Get our newsletter, The Prevention Pulse, and never miss a post. Plus, more tools, tips, resources, and news you can use.