Instagram boss Adam Mosseri reportedly blocked or weakened efforts by employees to implement youth safety features – even as parent company Meta faced mounting legal scrutiny over concerns that its popular social media apps were harming young users.
Mosseri – whose name appears frequently in a sweeping lawsuit filed by 33 states accusing Meta of loading its apps with addictive features that hurt youth mental health – reportedly ignored “pressure from employees” to install some proposed safety features as default settings for Instagram users, according to The Information.
Meta-owned Instagram and Facebook have come under fire from critics who allege their use has fueled a slew of alarming trends among youth, including increased depression, anxiety, insomnia, body image issues and eating disorders.
Nevertheless, Instagram brass rejected a push by members of the company’s “well-being team” to include app features that would encourage users not to compare themselves to others, the report said, citing three former employees with knowledge of the details.
The feature wasn’t implemented despite Mosseri’s own admission in an internal email that he saw “social comparison” as the “existential question Instagram faces” and that “social comparison is to Instagram [what] election interference is to Facebook,” according to the states’ lawsuit.
Additionally, a Mosseri-backed feature to address the “social comparison” problem by hiding “like” counts on Instagram was ultimately “watered down” into an optional setting that users could manually enable, the report said.
Internally, some company employees reportedly feared that the like-hiding tool would hurt engagement on the app – and therefore cut into advertising revenue.
While some sources praised Mosseri’s commitment to promoting youth safety, others told The Information that Instagram has a pattern of making such features optional rather than automatically implementing them.
A Meta spokesperson didn’t respond specifically to questions about why the company rejected proposals for tools to counter social comparison.
“We can’t know what prompts any given individual to compare themselves to others, so we give people tools to decide for themselves what they do and don’t want to see on Instagram,” a Meta spokesperson told the outlet.
Meta didn’t immediately respond to requests for comment from The Post.
Elsewhere, Mosseri allegedly opposed use of a tool that would have automatically blocked offensive words in direct message requests because he “thought it might stop legitimate messages getting through,” The Information reported, citing two former employees.
Ultimately, Instagram approved an optional “filter” feature in 2021 that allowed users to block a list of offensive words curated by the company or to compile their own list of offensive phrases and emojis they wanted to block.
The move reportedly rankled safety staffers, including ex-Meta engineer Arturo Béjar, who felt people of color should not be forced to confront the offensive words themselves in order to block them. In November, Béjar testified before a Senate panel regarding harmful content on Instagram.
“I went back to Instagram with the hope that Adam would be proactive about these issues and I had no evidence of that in the two years I was there,” Béjar, who had initially left Meta in 2015 and returned to a role on the safety team in 2019, told the outlet.
Meta pushed back on the report, noting that Instagram has introduced a series of default safety features for its teen users, such as blocking adults 19 and older from sending direct messages to teen accounts that don’t follow them.
For example, Meta said its tool hiding offensive phrases and emojis, called “Hidden Words,” will be enabled by default for teens starting in 2024. The company said it has made more than 20 policy announcements about teen safety since Mosseri took over Instagram in 2018.
Mosseri also responded, writing that further investments in platform safety “will make our business stronger.”
“If teens come to Instagram and feel bullied, get unwanted advances, or see content that upsets them, they’ll leave and go to one of our competitors,” Mosseri said. “I know how important this work is, and that my leadership will be defined by how much progress we make on it. I’m committed to continuing to do more.”
Mosseri was one of several Meta executives to draw scrutiny as part of a sweeping lawsuit filed in October by a coalition of 33 state attorneys general.
The suit alleged in part that Meta’s millions of underage Instagram users were an “open secret” at the company.
The suit includes an internal chat from November 2021 in which Mosseri seemingly acknowledged the app’s problem with underage users, writing, “tweens want access to Instagram, and they lie about their age to get it now.”
A month later, Mosseri testified to the Senate that children under age 13 were “not permitted on Instagram.” He also told lawmakers that he viewed youth online safety as “critically important.”
Aside from the states’ legal challenge, Meta faces another lawsuit from the state of New Mexico alleging it failed to protect young users from sexual predators and bombarded them with adult sexual content.