
New reports link Meta and ‘momfluencers’ in perpetuating child exploitation online


Two new investigations out this week cast a harsh light on parent-run child influencer accounts, alleging that Meta's content monetization tools and subscription models are providing a breeding ground for child sexual exploitation online.

According to an exclusive from the Wall Street Journal, Meta safety staffers alerted the company to adult account owners using Facebook and Instagram’s paid subscription tools to profit from exploitative content featuring their own children. Internal reports document hundreds of what they define as “parent-managed minor accounts” selling exclusive content via Instagram’s subscriptions. The content frequently featured young children in bikinis and leotards and promised videos of children stretching or dancing, the Wall Street Journal reported, and parent-owners often encouraged sexual banter and interactions with followers.

Safety staff recommended banning accounts dedicated to child models, or requiring that child-focused accounts be registered and monitored. The company instead chose to rely on an automated system designed to detect and ban suspected predators before they could subscribe, according to the Wall Street Journal report. Employees said the technology wasn't reliable and that the bans could be easily evaded.

Simultaneously, the New York Times released a report on the lucrative business of mom-run Instagram accounts, which similarly found accounts selling exclusive photos of and chat sessions with their children. According to the Times, more suggestive posts received more likes; male subscribers flattered, bullied, and even blackmailed the families to get “racier” images; and some of the most active followers had prior convictions for sex crimes. Child influencer accounts reported earning hundreds of thousands of dollars from monthly subscriptions and follower interactions.

The Times’ investigation also documented high numbers of adult male accounts interacting with minor creators. Among the most popular influencers, men made up 75 to 90 percent of followers, and millions of male “connections” were found among the child accounts analyzed.

As Meta spokesperson Andy Stone explained to the New York Times, “We prevent accounts exhibiting potentially suspicious behavior from using our monetization tools, and we plan to limit such accounts from accessing subscription content.” Stone told the Wall Street Journal that the automatic system was instituted as part of “ongoing safety work.”

The platform’s moderation policies have done little to curb these accounts and their dubious business models: banned accounts return to the platform, explicitly sexual searches and usernames slip past detection systems, and Meta content spreads to offsite forums for child predators, according to the Wall Street Journal report.

Last year, Meta launched a new verification and subscription feature and expanded monetization tools for creators, including bonuses for popular reels and photos and new gifting options. Meta has periodically tweaked its content monetization avenues, including pausing Reels Play, a creator tool that enabled users to cash in on Reels videos once they had reached a certain number of views.

Meta has been under fire before for its reluctance to stop harmful content across its platforms. Amid ongoing federal investigations into social media’s negative impact on children, the company has been sued multiple times for its alleged role in child harm. A December lawsuit accused the company of creating a “marketplace for predators.” Last June, the platform established a child safety task force. A 2020 internal Meta investigation documented 500,000 child Instagram accounts having daily “inappropriate” interactions.

It’s not the only social media company accused of doing little to stop child sexual abuse materials. In November 2022, a Forbes investigation found that private TikTok accounts were sharing child sexual abuse materials and targeting minor users despite the platform’s “zero tolerance” policy.

According to Instagram’s content monetization policies: “All content on Instagram must comply with our Terms of Use and Community Guidelines. These are our high-level rules against sexual, violent, profane or hateful content. However, content appropriate for Instagram in general is not necessarily appropriate for monetization.” The policy does not specifically address accounts featuring minors, although Meta has issued a separate set of policies prohibiting child exploitation more broadly.

Both investigations come amid rising calls online to halt the spread of child sexual abuse material via so-called child modeling accounts and even more mundane pages fronted by child “influencers.” Online activists — including a network of TikTok accounts like child safety activist @mom.uncharted — have documented a rise in such accounts across the platform and other social media sites, and have even tracked down members of the predominantly male followings to confront them about their behavior. Call-outs of the parents behind the accounts have prompted other family vloggers to remove content featuring their children, pushing back against the profitability of “sharenting.” Meanwhile, states are still debating the rights and regulation of child influencers in a multibillion-dollar industry.

But while parents, activists, and political representatives call for both legislative and cultural action, the lack of regulation, legal uncertainty about the types of content being posted, and general moderation loopholes seem to have enabled these accounts to proliferate across platforms.
