Texas passed it. Utah passed it. Florida passed it. By the end of 2025, more than ten states had enacted laws requiring social media platforms to verify users' ages and obtain parental consent before allowing children under 16 to create accounts. The political coalition behind these laws is unusually broad, drawing support from conservative parent groups focused on traditional values and progressive child development advocates focused on mental health research. When those two camps agree on something, it moves.
The problem is that the laws are moving faster than the infrastructure needed to enforce them.
Age verification online is genuinely hard. The approaches that exist are imperfect by design. Document upload verification, where users submit a photo of an ID, is invasive, creates data security risks, and excludes anyone without ready access to a government-issued ID. Age estimation AI, which uses facial recognition to guess a user's age from a selfie, is inaccurate enough to be essentially useless as a standalone method and raises obvious civil liberties concerns. Third-party verification services, which ping credit bureaus or other databases to confirm age, rely on data that many minors' families don't have or that can't reliably be matched to the minor being checked. None of these solutions works well. All of them are being tried.
Social media platforms have no financial incentive to exclude young users, which is why none of them is scrambling to solve this problem. Quite the opposite. Teen and young adult users are among the most valuable demographics on attention-driven platforms. They generate enormous engagement data, they are highly influenced by what they see, and they grow into adult consumers who often remain on the platforms they used as teenagers. A teenage user base is a future adult user base. The platforms understand this calculation precisely.
Meta has implemented some age-verification features in states where the laws require it, primarily relying on self-declaration with additional verification prompts. The honest assessment of that approach is that it doesn't work particularly well. A 14-year-old who wants an account and is told to enter a birthdate will enter a birthdate that makes them appear old enough. The verification layer that's supposed to catch this isn't sufficient to stop a motivated teenager.
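To see why pure self-declaration fails, consider what such a gate actually checks. The sketch below is illustrative only, not Meta's actual implementation: the function name, threshold, and logic are assumptions for demonstration. The gate computes an age from whatever birthdate the user types, so any teenager who types an earlier year sails through.

```python
from datetime import date

MIN_AGE = 16  # threshold used by several of the state laws discussed above


def self_declared_age_check(birthdate: date, today: date) -> bool:
    """Naive age gate: trusts whatever birthdate the user enters.

    Subtracts one if this year's birthday hasn't happened yet,
    then compares the result against the legal threshold.
    """
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= MIN_AGE


# A 14-year-old entering their real birthdate is blocked...
print(self_declared_age_check(date(2011, 6, 1), date(2025, 6, 2)))   # False

# ...and the same 14-year-old entering a fabricated one is not.
print(self_declared_age_check(date(2005, 6, 1), date(2025, 6, 2)))   # True
```

The check is internally correct; the failure is that its input is unverified. That gap is exactly what the additional verification prompts are supposed to close, and so far they don't.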
TikTok's situation is more complicated given the ongoing legal and ownership questions around the platform. But the age verification compliance issue applies regardless of who owns the app. The user base includes a significant percentage of teens, the algorithms know this and optimize content accordingly, and the current compliance mechanisms are not meaningfully reducing that percentage.
The constitutional dimension of these laws is unresolved. Tech companies have challenged several state laws on First Amendment grounds, arguing that age-gating social media restricts expression for both minors and the adults who might want to communicate with them. Courts have issued mixed rulings. The Supreme Court has not yet definitively addressed whether and how states can restrict minors' access to social platforms. That ruling, whenever it comes, will determine whether this patchwork of state laws survives or gets rolled back.
The mental health research driving these laws has grown significantly since 2020. The general finding is that heavy social media use among teenagers, particularly girls, correlates with higher rates of anxiety, depression, and disrupted sleep. The correlation is documented well enough to take seriously. The causation is still debated among researchers. But even if the relationship isn't as clean as advocates claim, the correlation is concerning enough that doing nothing seems indefensible.
For parents, the practical reality is that the laws as currently written are not actually preventing kids from accessing social media. They're adding friction, which may reduce some casual use, but they're not creating meaningful barriers for motivated teenagers. What the laws are doing is establishing a legal framework that may be refined as both the technology and the constitutional boundaries become clearer.
The deeper question underneath all of this is cultural rather than technical or legal. Who is responsible for what children encounter online? Parents, platforms, or governments? The answer is probably all three, but the burden isn't currently shared equally. These laws are an attempt to shift more of it toward the platforms, and the fight over that shift is just beginning.