Meta has received more than 1.1 million reports of users under 13 on its Instagram platform since early 2019, but has deactivated “only a fraction” of those accounts, according to a newly unsealed legal complaint filed against the company by the attorneys general of 33 states.
Instead, the social media giant “continued to routinely collect” children’s personal information, such as their locations and email addresses, without parental permission, in violation of a federal law protecting children’s privacy, according to the court filing. If the states prove the allegations, Meta could face civil penalties of hundreds of millions of dollars or more.
“Within the company, Meta’s actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed,” the complaint states, “and zealously protected from disclosure to the public.”
The privacy claims are part of a larger federal lawsuit filed last month by California, Colorado and 31 other states in the U.S. District Court for the Northern District of California. The lawsuit accuses Meta of unfairly ensnaring young people on its Instagram and Facebook platforms while concealing internal studies showing harm to users. And it seeks to force Meta to stop using certain features that the states say have harmed young users.
However, much of the evidence cited by the states was blacked out by redactions in the original filing.
Now the complaint, unsealed Wednesday evening, provides new details from the states’ lawsuit. Using excerpts from internal emails, employee chats and company presentations, the complaint alleges that Instagram “coveted and pursued” underage users for years, even as the company “failed to comply” with children’s privacy law.
The unsealed filing said Meta “continually failed” to prioritize effective age-verification systems, instead using approaches that allowed users under 13 to lie about their age to set up Instagram accounts. It also accused Meta executives of publicly stating in congressional testimony that the company’s age-checking process was effective and that the company removed underage accounts when it learned of them, even as the executives knew there were millions of underage users on Instagram.
“Tweens want access to Instagram, and they’re lying about their age to get it now,” Adam Mosseri, the head of Instagram, said in an internal company chat in November 2021, according to court documents.
In testimony to the Senate the following month, Mr. Mosseri said: “If a child is under 13, they are not allowed on Instagram.”
In a statement Saturday, Meta said it has spent a decade making online experiences safe and age-appropriate for teens and that the states’ complaint “misrepresents our work through selective citations and carefully selected documents.”
The statement also noted that Instagram’s terms of service prohibit users under the age of 13, at least in the United States. And it said the company had “measures in place to remove these accounts when we identify them.”
The company added that verifying people’s ages is a “complex” challenge for online services, especially with younger users who may not have school IDs or driver’s licenses. Meta said it would like to see federal legislation requiring “app stores to obtain parental approval whenever youths under 16 download apps,” rather than having young people or their parents supply personal details such as birth dates to many different apps.
The privacy charges in the case center on a 1998 federal law, the Children’s Online Privacy Protection Act. That law requires online services with content aimed at children to obtain verifiable permission from a parent before collecting personal details — such as names, email addresses or selfies — from users under 13. Fines for violating the law can run to more than $50,000 per violation.
The lawsuit argues that Meta chose not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to ensure continued growth.
According to the complaint unsealed Wednesday, Meta had ample evidence of underage users. For instance, an internal company chart displayed in the unsealed material showed how Meta tracked the percentage of 11- and 12-year-olds who used Instagram daily, the complaint said.
Meta also knew of accounts belonging to specific underage Instagram users through the company’s reporting channels. But it “automatically” ignored certain reports of users under 13 and allowed them to continue using their accounts, the complaint said, as long as the accounts did not contain a user biography or photos.
In one case in 2019, Meta employees discussed in emails why the company had not deleted four accounts belonging to a 12-year-old, despite requests and “complaints from the girl’s mother stating her daughter was 12 years old,” the complaint said. The employees concluded that the accounts were “ignored” in part because Meta representatives “couldn’t tell for sure the user was underage,” according to the filing.
This isn’t the first time the social media giant has faced allegations over its privacy practices. In 2019, the company agreed to pay a record $5 billion and to alter its data practices to settle Federal Trade Commission charges that it had deceived users about their ability to control their privacy.
It may be easier for the states to pursue Meta over children’s privacy violations than to prove that the company fostered compulsive social media use — a relatively new phenomenon — among young people. Since 2019, the FTC has successfully brought similar children’s privacy complaints against tech giants including Google and its YouTube platform, Amazon, Microsoft and Epic Games, the maker of Fortnite.