How Meta’s Own Internal Research Led to Massive Court Defeats

[Image: From Facebook to the Metaverse: Meta's Journey. Credit: TechGolly]

Key Points:

  • Juries in two separate trials recently ruled against Meta after reviewing millions of internal company documents and emails.
  • Former executive Brian Boland testified that Meta intentionally hid research showing its platforms directly harmed teenagers.
  • Whistleblower Frances Haugen leaked thousands of internal files in 2021, after which CEO Mark Zuckerberg began shutting down internal research teams.
  • Artificial intelligence companies now face the same billion-dollar dilemma: publish their safety data or hide it.

More than a decade ago, Meta hired social science researchers to study how its platforms affected daily users. Executives at the social media giant wanted to show they cared about both the benefits and the risks of their growing networks. Today, that very research has turned into a massive legal nightmare. The recent court cases show that knowing about a serious problem and choosing to hide it can cost a company dearly.

Juries in two separate trials in Los Angeles and New Mexico recently delivered punishing verdicts against Meta. Brian Boland, a former Facebook executive, testified in both courtrooms, telling the juries that Meta’s own internal documents flatly contradicted the public image the company tried to project. After reviewing the evidence, the juries found that Meta failed to police its platforms, putting millions of children in direct danger. Google also lost in the Los Angeles trial over its YouTube platform, and both tech giants plan to appeal the decisions.

The trouble for Meta started back in 2021, when former employee Frances Haugen blew the whistle and leaked thousands of internal documents to the press. Her files showed that Meta knew its products harmed young people but chose to ignore the data. After that public relations disaster, CEO Mark Zuckerberg began shutting down internal research teams, cutting funding for studies that might one day look bad for the company in a courtroom.

During the recent trials, lawyers presented juries with millions of corporate documents, including executive emails, PowerPoint presentations, and hidden research data. One internal survey revealed that a strikingly high percentage of teenage girls received unwanted sexual messages on Instagram. Another buried study showed that people who stopped using Facebook experienced a 40% drop in depression and anxiety. Meta executives saw all of this data but chose to halt the research rather than fix the underlying platform problems.

Meta’s defense lawyers did their best to push back against the evidence. They told the court that the plaintiffs had taken the research entirely out of context and argued that the five-year-old data did not reflect how the company operates today. Boland, however, noted that the juries heard a fair presentation of the facts from both sides. After reading the actual internal emails and memos, both juries delivered clear verdicts against the tech giant.

Lisa Strohman, a psychologist and attorney who consulted on the New Mexico lawsuit, shared her thoughts on the corporate mindset inside Meta. She believes tech executives arrogantly thought they could control their researchers forever. She pointed out that executives failed to recognize that researchers also go home to their own families every night. The company simply could not buy the silence of parents who genuinely care about protecting children.

The fallout from these trials extends far beyond social media platforms. The technology industry now aggressively pushes artificial intelligence into the consumer market, and companies like OpenAI and Anthropic have hired their own researchers to study how AI affects daily users. These firms now face the same billion-dollar question: do they publish their findings and risk future lawsuits, or do they suppress the data to protect their massive profits?

Kate Blocker, a research director at the Institute of Digital Media and Child Development, worries deeply about this trend. She notes that many tech companies now view any ongoing safety research as a major financial liability. Following the historic 2021 leaks, many platforms even shut down the software tools that independent researchers used to study their sites. Blocker warns that AI companies currently pour their money into building faster models rather than testing them for long-term safety.

Industry experts urge the artificial intelligence sector not to repeat Meta’s costly mistakes. Right now, the general public has almost no visibility into what these new companies know about their own chatbots. Tech companies need to establish clear systems of transparency immediately. If they refuse to share their internal safety data, they will inevitably face the same billion-dollar lawsuits that just crushed Meta in court.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He served as Editor-in-Chief of a world-leading professional research magazine. Rasel Hossain supports the team as Managing Editor. The team incorporates technologists, researchers, and technology writers with substantial expertise in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.