Social Media and LGBTQ+ Youth

Compiled by Kelly Stonelake

Last updated 01/30/26

Contact Kelly: kellystonelake.com

Subscribe to Kelly’s newsletter: Overturned by Kelly Stonelake

Table of Contents:

Academic Research

Government Reports

NGO Research

Investigative Journalism

LGBTQ+ Policy Rollbacks & Content Suppression by Platform

  • Meta (Facebook, Instagram, Threads)
  • YouTube
  • X (Twitter)
  • TikTok

Myths vs. Reality: LGBTQ+ Teens and Social Media

Opinions:

  • How Big Tech Uses Queer Kids as Shields
  • Social Media Companies’ Worst Argument

Scope & Methodology

This page compiles peer-reviewed research, government data, NGO surveys, investigative reporting, and opinion pieces focused specifically on LGBTQ+ youth and social media. Sources were included if they document social media’s impact on LGBTQ+ youth (both positive and negative), including effects on mental health, exposure to harassment or discrimination, or platform policies affecting LGBTQ+ safety and expression.

Academic Research

A 2022 systematic review in the Journal of Medical Internet Research (Berger et al.) analyzing 26 studies with 14,112 participants found that:

  • While social media can provide crucial community support for LGBTQ+ youth, heavy use was associated with increased loneliness and emotional sensitivity.
  • Mental health concerns were directly attributed to "discrimination, victimization, and policies that did not accommodate changed identities."
  • This review concluded that social media can offer identity support but also that discrimination and victimization online are linked to negative mental health outcomes (depression, loneliness, sensitivity).

Research published in JAMA Network Open (Coyne et al., 2023) studying 1,231 youth found that:

  • Transgender and gender nonbinary (TGNB) youth reported significantly higher depression scores (mean 2.80 vs. 2.22 for females and 2.07 for males), and 25-32% of TGNB youth have attempted suicide.
  • The study revealed a paradox relevant to regulatory considerations: while social media breaks helped cisgender youth, they were associated with increased depression for TGNB youth (B=1.03; P=.02), suggesting these populations uniquely depend on online community connection unavailable offline.
  • This evidence supports legislation that limits minors’ access to addictive endless feeds and predatory product features without preventing access to desired content and communities. That can be achieved either by restricting those specific features for minors, or by limiting minors’ access to social media more broadly while defining “social media” narrowly to cover only sites with addictive algorithmic feeds known to harm youth mental health, preserving community groups and sites that lack these features.

A 2026 study in the Journal of Adolescent Health (Delgado-Ron et al.) found social media abstention would reduce disordered eating scores by 16.3% for sexual minority girls and 11.6% for transgender/gender-expansive youth. Research indicates up to 48% of Canadian transgender adolescents report disordered eating behaviors.

A systematic literature review on cyberbullying (Abreu & Kenny, 2018) found:

  • Victimization rates among LGBTQ+ youth range from 10.5% to 71.3% across studies, and victimization is more strongly correlated with suicidal ideation, depression, and body image issues than it is for heterosexual/cisgender counterparts.
  • While not explicitly about addictive feeds, these data suggest additional benefits of limiting notifications during school hours and overnight for LGBTQ+ youth.
  • While other research (e.g., Berger et al.) establishes that online communities are important for LGBTQ+ kids, this review suggests these same youth are also more vulnerable to online bullying.

Details from that review:

  • Bauman and Baldasare (2015): When compared to their heterosexual and cisgender counterparts, LGBT respondents reported higher rates of unwanted contact online (t = 3.49, df = 91.98, p = .001, η2 = .01).
  • Blais et al. (2013): (1) 28% to 48.95% of students reported cyberbullying (study does not distinguish between sexual minority youth and others). (2) Rates of prejudice based on sexual orientation: 1.67–2% for heterosexual participants and 32.02–64.42% for sexual minority youth. (3) Rates of prejudice based on gender non-conformity: 5.29–6.47% for heterosexual participants and 25.66–60.49% for sexual minority youth.
  • Blumenfeld and Cooper (2010): Rates of cyberbullying of LGBT vs. non-LGBT participants were not measured.
  • Bouris et al. (2016): Cyberbullying based on sexual orientation: 16.81% for sexual minority and 11.03% for heterosexual participants.
  • Cénat et al. (2015): 28% for gay/lesbian, 32.9% for bisexual, and 24% for questioning participants vs. 21.4% for heterosexual participants.
  • Cooper and Blumenfeld (2012): Rates of “frequently” experiencing cyberbullying: 22.7%–32.8% for LGBT participants vs. 10%–28.3% for LGBT allies.
  • Duong and Bradshaw (2014): 9.7% experienced cyberbullying and 10.1% experienced both cyberbullying and traditional bullying.
  • GLSEN et al. (2013): (1) In the past year: 42% harassed online, 19% cyberbullied via phone call, 27% harassed via text message. (2) One in four (24%) said they had been bullied online because of their sexual orientation or gender expression. (3) 30% experienced bullying due to their sexual orientation or gender expression via text message or online while at home. (4) 32% said they had been sexually harassed online. (5) 25% had been sexually harassed via text message in the past year. (6) 30% experienced sexual harassment online. (7) 20% experienced sexual harassment via text message.
  • Guasp (2012): 23% experienced cyberbullying.
  • Hillier et al. (2010): Approximately 25% of males, 18% of females, and 27% of gender-questioning youth.
  • Hinduja and Patchin (2012): (1) LGBT students reported experiencing more cyberbullying over their lifetime than their heterosexual counterparts (36.4% vs. 20.1%). (2) LGBT students were more likely to report being the victim of cyberbullying in the previous 30 days than their heterosexual counterparts (17.3% vs. 6.8%). (3) Non-heterosexual females experienced more cyberbullying than their heterosexual counterparts (38.3% vs. 24.6%). (4) Non-heterosexual males experienced more cyberbullying than their heterosexual counterparts (30.4% vs. 15.7%).
  • Kosciw et al. (2012): 55% of LGBTQ youth experienced cyberbullying in the past year.
  • Kosciw et al. (2016): 48.6% of LGBTQ youth experienced cyberbullying in the past year; 15% experienced it often or frequently.
  • Mace et al. (2016): Measured perceived social support in a heterosexual and non-heterosexual university sample; no information on cyberbullying prevalence was reported.
  • Priebe and Svedin (2012): (1) Non-heterosexual male students reported experiencing more cyberbullying than their heterosexual male counterparts (10.4%–23.0% vs. 2.0%–16.8%). (2) Non-heterosexual female students reported experiencing more cyberbullying than their heterosexual female counterparts (3.3%–23.2% vs. 1.5%–16.1%).
  • Ramsey et al. (2016): Sexual minority participants reported significantly higher levels of recent cyber victimization than heterosexual participants (M = 1.07 vs. M = 1.02).
  • Rice et al. (2015): Sexual-minority students were more likely to report cyberbullying victimization than their heterosexual counterparts.
  • Robinson and Espelage (2011): (1) LGBTQ students reported experiencing more cyberbullying, approximately 14.8% more than heterosexual students. (2) Bisexual students reported more cyberbullying than heterosexual and LGBTQ students, approximately 25.5% more than heterosexual and 10.7% more than LGBTQ students.
  • Schneider et al. (2015): Sexual minority youth reported experiencing more cyberbullying than their heterosexual counterparts in 2006 (28.6% vs. 13.6%), 2008 (32.8% vs. 14.3%), 2010 (34.6% vs. 18.6%), and 2012 (31.5% vs. 20.3%).
  • Schneider et al. (2012): (1) Sexual minority youth reported experiencing more cyberbullying than their heterosexual counterparts (33.1% vs. 14.5%). (2) Sexual minority youth reported experiencing more combined school bullying and cyberbullying than their heterosexual counterparts (22.7% vs. 8.5%).
  • Sinclair et al. (2012): Reported on the correlates of cyberbullying with academic, substance use, and mental health problems; no prevalence of cyberbullying was provided.
  • Sterzing et al. (2017): Cisgender sexual minority males: 37.2%; cisgender sexual minority females: 35.6%; transgender males: 51.4%; transgender females: 71.3%; genderqueer assigned male at birth: 43.8%; genderqueer assigned female at birth: 44.8%.
  • Stoll and Block (2015): Non-heterosexual students experienced more than half an additional instance of cyberbullying compared with their heterosexual peers.
  • Taylor et al. (2011): LGBTQ youth reported more lies and rumors spread about them via text messaging and the Internet than their non-LGBTQ counterparts (27.7% vs. 5.7%).
  • Varjas et al. (2013): Qualitative study; no prevalence reported.
  • Walker (2015): (1) Non-heterosexual participants experienced more cyberbullying than their heterosexual counterparts (22.9% vs. 9.5%). (2) Percentages for specific forms of cyberbullying ranged from 0.0% to 29.9% for heterosexual participants and 5.7% to 43.2% for non-heterosexual participants.
  • Wensley and Campbell (2012): (1) Non-heterosexual participants experienced more cyberbullying than their heterosexual counterparts (10.8% vs. 15.4%). (2) Non-heterosexual males experienced more cyberbullying than their male heterosexual counterparts (11.1% vs. 35.3%). (3) Non-heterosexual females experienced more cyberbullying than their female heterosexual counterparts (10.5% vs. 11%).

Government Reports

The US Surgeon General's 2023 Advisory on Social Media and Youth Mental Health explicitly recognizes that "adolescent girls and transgender youth are disproportionately impacted by online harassment and abuse."

The CDC's Youth Risk Behavior Survey (2023) provides the most robust national data on LGBTQ+ youth disparities.

  • More than 60% of LGBTQ+ students experienced persistent sadness or hopelessness—compared to approximately 40% of all high school students.
  • 20% of LGBTQ+ students attempted suicide in the past year.
  • LGBTQ+ students were more likely to experience every form of violence measured, including electronic bullying, and were twice as likely to use illicit drugs.
  • The CDC explicitly identifies social media as a factor affecting mental health "particularly among girls, LGBTQ+ students, and students from marginalized racial and ethnic groups."

The FTC's September 2024 report "A Look Behind the Screens" found social media companies engaged in "vast surveillance" with "inadequate safeguards for kids and teens." The report noted companies "claim no children on platforms" as an apparent attempt to avoid COPPA liability, while teen accounts received no additional privacy protections beyond adult accounts. FTC Chair Lina Khan stated: "Several firms' failure to adequately protect kids and teens online is especially troubling."

NGO Research

The Trevor Project's 2024 National Survey (N=18,663 LGBTQ+ young people ages 13-24) provides the most comprehensive data on this population:

  • 39% seriously considered suicide in the past year
  • 46% of transgender/nonbinary youth seriously considered suicide
  • 12% attempted suicide (14% trans/nonbinary vs. 7% cisgender)
  • 35% ages 13-17 experienced cyberbullying
  • Bullied youth showed 3x the rate of suicide attempts compared to non-bullied peers

GLAAD's Social Media Safety Index evaluated all major platforms and found universal failures. In 2025, every platform received failing scores: TikTok (56/100), Facebook (45), Instagram (45), YouTube (41), Threads (40), and X/Twitter (30).

  • More details on GLAAD’s individual platform scores here.

The Human Rights Campaign's 2023 Youth Report (N=12,615) found:

  • 96% of LGBTQ+ youth have been exposed to offensive anti-LGBTQ+ content online.
  • 49% of trans and nonbinary youth experienced cyberbullying based on gender identity.
  • 66% don't believe platforms would take action on reports of harassment.

The Center for Countering Digital Hate documented 989,547 tweets in a 7-month period using slurs like "groomer" and "predator" against LGBTQ+ people, with the top 500 most-viewed tweets garnering over 72 million views.

Amnesty International's February 2023 investigation with GLAAD and HRC found 60% of LGBTQ+ organizations reported hateful speech increased on Twitter under Musk, and 88% who reported abuse said Twitter took no action. 30% experienced increased offline violence including protests, threats, and harassment since October 2022.

Thorn’s How LGBTQ+ Youth Are Navigating Exploration and Risks of Sexual Exploitation Online found that:

  • LGBTQ+ teens reported a greater reliance on online communities and spaces.
  • LGBTQ+ teens reported higher rates of experiences involving nudes and online sexual interactions.
  • Compared to other teens, cisgender non-hetero male teens reported higher rates of risky encounters and of attempting to handle unsafe situations alone.
  • LGBTQ+ minors are also three times more likely to experience unwanted and risky online interactions.

Investigative Journalism

ABC News reported that 75% of LGBTQ+ users who experienced harassment said it occurred on Facebook.

Platform-by-Platform: LGBTQ+ Policy Rollbacks & Content Suppression (2025)

It’s worth noting that much of the research and data cited above was collected before 2025. Since January 2025, every major social media platform has either rolled back LGBTQ+ protections, censored LGBTQ+ content, or both. These changes appear coordinated with the Trump administration's anti-trans executive orders and align with Project 2025's explicit instruction to delete "sexual orientation and gender identity" from official documents. GLAAD's 2025 Social Media Safety Index gave failing scores to all six platforms evaluated.

Meta (Facebook, Instagram, Threads)

GLAAD Safety Score: 45/100 (Facebook/Instagram), 40/100 (Threads)

Policy Rollbacks (January 7, 2025)

Hate Speech Now Explicitly Permitted:

Meta's updated "Hateful Conduct" policy now states: "We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality."

Additional policy changes include:

  • Users can advocate for exclusion of LGBTQ+ people from military, law enforcement, teaching jobs, bathrooms, and sports: "We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs."
  • Removed prohibition on comparing people to "household objects," "filth," or "feces" based on protected characteristics
  • Removed prohibition on referring to trans/nonbinary people as "it"
  • Reclassified anti-trans slur "tr*nny" as "non-violating" in internal enforcement guidelines
  • Adopted anti-LGBTQ rhetoric in its own policy language ("transgenderism," "homosexuality")

Structural Changes:

  • Ended third-party fact-checking program (replaced with X-style Community Notes)
  • Terminated all DEI programs and dedicated DEI teams: "The legal and policy landscape surrounding diversity, equity and inclusion efforts in the United States is changing."
  • Ended diverse supplier initiatives and minority hiring goals
  • Relocated Trust & Safety team from California to Texas
  • New leadership:
    • In Jan 2025, Dustin Carmack (former Project 2025 and Ron DeSantis staffer) hired as Director of Public Policy; Dana White (UFC CEO, Trump ally) added to the board
    • In Jan 2026, former Trump advisor Dina Powell McCormick brought on as President and Vice Chair
    • In Jan 2026, former Trump deputy U.S. trade representative Curtis Joseph Mahoney brought on as Chief Legal Officer

Product Changes:

  • Deleted trans and nonbinary Pride themes from Messenger app (originally launched for Pride Month 2021-2022): "Meta deleted nonbinary and trans themes for its Messenger app this week, around the same time that the company announced it would change its rules to allow users to declare that LGBTQ+ people are 'mentally ill.'"
  • Removed tampons from men's restrooms at Meta offices

Content Censorship (October-December 2025)

"Meta has removed or restricted dozens of accounts belonging to abortion access providers, queer groups and reproductive health organisations in recent weeks in what campaigners describe as one of the biggest waves of censorship on its platforms in years."

"Within this last year, especially since the new US presidency, we have seen a definite increase in accounts being taken down — not only in the US, but also worldwide as a ripple effect." Source: Martha Dimitratou, Executive Director of Repro Uncensored, quoted in The Guardian

The Electronic Frontier Foundation has likewise documented "algorithmic silencing" of LGBTQ+ content. Source: Paige Collings, EFF, quoted in Bay Area Reporter, October 2025, as cited in LGBTQ Nation

Impact Data (GLAAD/UltraViolet "Make Meta Safe" Survey, September 2025):

Survey of 7,000+ active users from 86 countries found:

  • 75% of LGBTQ respondents said harmful content increased since January policy changes
  • 77% feel less safe expressing themselves on Meta platforms
  • 66% witnessed harmful content in their feeds
  • 25%+ have been directly targeted with hate or harassment
  • "Violence against me has skyrocketed since January. I live in daily fear." — Survey respondent

Oversight Board Ruling (April 2025)

  • Allowed two posts intentionally misgendering transgender women to remain on platform
  • Called on Meta to remove "transgenderism" from its policy language: "To ensure Meta's content policies are framed neutrally and in line with international human rights standards, Meta should remove the term 'transgenderism' from the Hateful Conduct policy and corresponding implementation guidance."
  • Washington Post reported that top Meta executives told the Oversight Board the ruling should be "treated carefully... given the fraught political debate"

Human Rights Campaign Analysis

"By focusing only on content that violates legal standards, Meta effectively gives users across all its platforms carte-blanche to spread the harassment, hate speech, and disinformation that may fall just shy of illegality, but still cause irreparable harm to the safety of LGBTQ+ people." Source: Human Rights Campaign, January 27, 2025,

YouTube (Google)

GLAAD Safety Score: 41/100

Policy Rollbacks (January 29 - February 6, 2025)

Removed from Hate Speech Policy:

"In a deeply concerning update to YouTube's 'Hate Speech' policy, the company removed 'gender identity and expression' from its list of protected characteristic groups, which suggests that the platform is no longer protecting transgender, nonbinary, and gender-nonconforming people from hate and discrimination according to its Community Guidelines." Source: GLAAD Social Media Safety Index 2025

Additional removed content:

  • Line stating "[Protected group status] is just a form of mental illness that needs to be cured" removed from examples of prohibited hate speech
  • Now only lists "sex, gender, or sexual orientation" — no explicit trans/nonbinary protection

YouTube's Response:

"A YouTube spokesperson said that the removal of 'gender identity and expression' from the hate speech policy was part of regular copy edits to the website, and that the enforcement of the policy hasn't changed." Source: User Mag, April 3, 2025

GLAAD's Response:

"YouTube quietly removing 'gender identity and expression' from its list of protected groups is a major radical shift away from best practices in the field of trust and safety and content moderation. Like Meta's recent dangerous rollbacks of hate speech protections for transgender and nonbinary people, the removal of these specific words appears to be a responsive alignment with the anti-LGBTQ agenda of Project 2025, which calls for targeting 'woke culture warriors … start[ing] with deleting the terms sexual orientation and gender identity.'" Source: GLAAD spokesperson, quoted in The Advocate, April 3, 2025

Ongoing Issues

Enforcement Failures:

  • No policy prohibiting targeted misgendering and deadnaming
  • LGBTQ+ creators more likely to be demonetized
  • Little transparency on wrongful demonetization of LGBTQ+ content

X (Formerly Twitter)

GLAAD Safety Score: 30/100 (Lowest of all platforms)

Policy Rollbacks Under Musk (2022-2025)

April 2023:

Quietly removed ban on targeted misgendering and deadnaming from Hateful Conduct Policy. Musk personally confirmed: "It is definitely allowed... Whether or not you agree with using someone's preferred pronouns, not doing so is at most rude and certainly breaks no laws." Source: NBC News, June 2, 2023

March 2024:

Briefly appeared to reinstate misgendering protections, then immediately reversed after complaints from Libs of TikTok's Chaya Raichik. New policy: Will only act "in jurisdictions where local laws explicitly mandate it."

"X's commitment to act only when the law requires that it do so is an insidious sleight of hand." Source: Belle Torek, Human Rights Campaign, quoted in The Advocate, March 5, 2024

Structural Changes:

  • Disbanded Twitter Trust & Safety Council (of which GLAAD was an organizational member)
  • Reinstated previously banned accounts including neo-Nazis and anti-LGBTQ hate accounts
  • Ended free access to API for outside researchers, preventing independent hate speech analysis: "Elon has shut down essentially all good faith efforts to measure pretty much anything about Twitter." Source: Jeremy Blackburn, Binghamton University, quoted in NBC News, October 27, 2023
  • Musk personally amplifies anti-trans content through likes, replies, and shares

Documented Harm Increases

From Center for Countering Digital Hate (March 2023):

  • 989,547 tweets using "groomer" slur against LGBTQ+ people in 7-month period
  • 72+ million views on top 500 most-viewed anti-LGBTQ tweets
  • "Groomer" rhetoric increased 119% after Musk acquisition
  • Retweets from prominent anti-LGBTQ accounts increased 1,200%
  • Twitter failed to act on 99% of reported hateful content

Media Matters/GLAAD (December 2022):

"Anti-LGBTQ accounts that saw substantial increases in both retweets of and mentions in tweets with the slur included Tim Pool, Jack Posobiec, Jake Shield, Gays Against Groomers, Blaire White, Allie Beth Stuckey, Andy Ngo, Seth Dillon, and Mike Cernovich. Collectively, these 9 accounts saw an over 1,200% increase in retweets of tweets with the slur." Source: Media Matters for America

PLOS One Study (February 2025):

"We find that the increase in hate speech just before Musk bought X persisted until at least May of 2023, with the weekly rate of hate speech being approximately 50% higher than the months preceding his purchase... The increase is seen across multiple dimensions of hate, including racism, homophobia, and transphobia." Source: PLOS One, February 12, 2025

Amnesty International/GLAAD/HRC Survey (February 2023):

  • 60% of LGBTQ+ organizations reported increase in hateful speech under Musk
  • 88% who reported abuse said Twitter took no action
  • 30% experienced increased offline violence (protests, threats, harassment, violence)
  • 100% of respondents encountered hateful/abusive speech
  • 0% reported a decrease in abuse

Source: Amnesty International, February 2023

Platform Exodus

Multiple LGBTQ+ organizations have deactivated accounts: "LGBT Life Center has made the decision to join a growing chorus of LGBTQ+ organizations that are deactivating their Twitter accounts due to the proliferation of hate speech and the decision by Elon Musk to roll back policies protecting trans individuals from dehumanizing attacks." Source: LGBT Life Center, April 2023

TikTok

GLAAD Safety Score: 56/100 (Highest, still failing)

Policy Position

  • Only platform evaluated that prohibits both misgendering/deadnaming AND "conversion therapy" content
  • Highest score primarily due to policy existence, not enforcement

Documented Censorship

Shadow-Banning of LGBTQ+ Content:

"A report by the Australian Strategic Policy Institute (ASPI) think-tank said many LGBT hashtags were 'shadow-banned' in Bosnia, Jordan and Russia."

2019 Admission:

"In December 2019, TikTok admitted that it aimed to 'reduce bullying' in the comments of videos by artificially reducing the viral potential of videos its algorithm identified as being made by LGBTQ+ people." Source: Wikipedia, citing The Guardian

Academic Research:

"LGBTQ+ users feel 'unfairly censored' while being 'pigeon-holed' into normative queer identities... The belief that the platform censors content based on social identity is referred to as The Identity Strainer Theory." Source: Simpson and Semaan (2021), Karizat et al. (2021), as cited in academic study

Internal Documents (Kentucky Litigation, October 2024)

  • Platform quantified addiction thresholds: after "260 videos (approximately 35 minutes), an average user is likely to become addicted"
  • Acknowledged "compulsive usage correlates with a slew of negative mental health effects"
  • Time-limit tools designed for "improving public trust via media coverage" rather than actual protection — reduced use by only 1.5 minutes
  • Project manager stated: "Our goal is not to reduce the time spent"

Myths vs. Reality: LGBTQ+ Teens and Social Media

Each myth below is pushed by Big Tech and its allies; each reality is supported by the evidence compiled above.

Myth: “Social media is uniquely protective for LGBTQ+ kids.”
Reality: Social media can provide community, but heavy use is associated with increased depression, loneliness, emotional sensitivity, eating disorders, and suicide risk, especially for transgender and gender-expansive youth. Support and harm coexist, and the harms scale with exposure and addictive design.

Myth: “Any regulation will cut LGBTQ+ kids off from lifesaving support.”
Reality: Most proposed laws (including KOSA) target addictive design features and duty-of-care failures, not LGBTQ+ content or peer support. Limiting endless feeds, nighttime notifications, and algorithmic amplification does not equal banning queer communities.

Myth: “LGBTQ+ kids need these platforms more than other kids.”
Reality: The evidence supports LGBTQ+ kids’ need for community, and proposed legislation like KOSA’s duty of care or addictive-feed restrictions does not take platforms away from kids. Because LGBTQ+ youth are disproportionately harmed on these platforms (higher rates of cyberbullying, sexual harassment, misgendering, and exposure to hate), it’s even more important that they use platforms held to a duty of care or otherwise limited in their most predatory and addictive features. Youth social media delays and bans, such as Australia’s and France’s, do not limit kids’ access to content via online support groups, community sites, or direct messaging with friends on non-exploitative platforms.

Myth: “Platforms already protect LGBTQ+ users.”
Reality: Every major platform received a failing score from GLAAD. Since 2025, platforms have bent the knee to Trump and Project 2025. They’ve rolled back LGBTQ+ protections, permitted misgendering and dehumanization, and censored queer content, while allowing hate to spread.

Myth: “The real threat to LGBTQ+ kids is government overreach.”
Reality: Government overreach is a real threat to LGBTQ+ kids, which is why proposed legislation that does not dictate the content they can access is essential. The dominant, documented threat is corporate overreach: surveillance of minors, data collection in violation of COPPA, algorithmic amplification of harassment, failure to act on abuse reports, and design choices optimized for engagement over safety.

Myth: “Harassment is just bad actors, not the product.”
Reality: Internal research, NGO audits, and investigative reporting show harassment is structural, amplified by ranking systems, recommendations, and virality mechanics that platforms control and refuse to meaningfully change.

Myth: “LGBTQ+ advocacy groups oppose regulation.”
Reality: This claim selectively amplifies a few organizations, usually funded directly by Big Tech companies, while ignoring overwhelming evidence from LGBTQ+-serving organizations, researchers, clinicians, and youth themselves documenting harm and calling for accountability.

Myth: “Kids can choose to disengage if it’s harmful.”
Reality: Platforms are explicitly designed to defeat disengagement. Internal documents quantify addiction thresholds and show safety tools are often performative, not protective. Children cannot meaningfully consent to manipulative systems.

Myth: “Content moderation is the main issue.”
Reality: Addictive feeds, surveillance, recommendation engines, and notification loops are the primary drivers of distress and compulsive use, regardless of content type.

Myth: “We must choose between safety and LGBTQ+ visibility.”
Reality: This is a false choice. Safer design and real enforcement increase LGBTQ+ kids’ ability to participate without being targeted, harassed, or driven into crisis.

Opinions

How Big Tech Uses Queer Kids as Shields: Beyond the False Choice Between Online Safety and Expression for LGBTQ+ Youth

Lennon Torres and Kelly Stonelake

Link to Recorded Conversation on Overturned by Kelly Stonelake


Kelly:

“This tension sits at the heart of one of Big Tech’s most effective deflection strategies: positioning child safety legislation as a threat to queer community and expression. It’s a framing that exploits legitimate fears to protect trillion-dollar business models.”

“Tech companies fund many of the groups opposing KOSA, and their lobbyists constantly cite LGBTQ+ concerns in their talking points. There's evidence that Big Tech has spent over $50 million lobbying against KOSA. Some argue they're exploiting LGBTQ+ fears to protect their profits. I got a taste of this when working at Meta - when I was first pitched the idea of leading go-to-market for Meta Horizon Worlds, the "hero story" was about the "isolated gay kid in Kansas" who could finally find community. When in reality, as I discovered once on the job, there was rampant bullying, harassment, and the implication of parental controls where they didn't exist.”

Lennon:

“I think the administration that is in place right now is going to continue to harm LGBTQ+ people no matter what we do with Big Tech regulation... But at the same time, I totally hear the narrative that you don’t want to give the government any more power. But when I look at a design bill like KOSA, I don’t see it giving the government power. I see it holding the Big Tech companies accountable for the wrongdoing and the ability for a government, just or unjust, to take action against those companies to make their products safer for kids.”

“Big Tech knows that they can sell people who are struggling, who need community, who need connection. Oh, well, we’ve got it. We’ve got it and we’re going to give it to you with sugar before and sugar after so it tastes really good and you’re going to want to come back for more.”

“Watch what people do. That’s how they feel. Watch what people do, not what they say. People can say a lot of crap.”

“If these people cared about kids, their actions would be so different. And that’s all I want people to pay attention to.”

Social Media Companies’ Worst Argument

Jonathan Haidt, Lennon Torres, Zach Rausch

“Tech lobbyists have gone further, deploying the dual argument that social media is especially beneficial to teens from historically marginalized communities, and therefore nearly any regulation would harm them. Through their funding and, at times, their own statements, many leaders in Silicon Valley have used these claims as part of their efforts to oppose a pair of bills—now before Congress—aimed at strengthening online protections for minors, referred to collectively as the Kids Online Safety and Privacy Act. (KOSPA combines the Kids Online Safety Act, widely known as KOSA, and the Children and Teens’ Online Privacy Protection Act.)

The talking point plays into a long-running strand of progressive thought that sees digital technology as a means of empowering disadvantaged groups. The early internet did in fact help many Black, low-income, and LGBTQ+ Americans—among others—find resources and community. And even today, surveys from organizations like Hopelab and Common Sense Media find that LGBTQ+ teens report experiencing more benefits from social media than non-LGBTQ+ teens.

That’s a good reason to be careful about imposing new regulation. But the wholesale opposition to legislation ignores strong evidence that social media also disproportionately harms young people in those same communities.”

“As it turns out, the adolescents being harmed the most by social media are those from historically disadvantaged groups. Recent surveys have found that LGBTQ+ adolescents are much more likely than their peers to say that social media has a negative impact on their health and that using it less would improve their lives. Compared with non-LGBTQ+ teens, nearly twice as many LGBTQ+ teens reported that they would be better off without TikTok and Instagram. Nearly three times as many said the same for Snapchat.

Youth from marginalized groups have good reason to feel this way. LGBTQ+ teens are significantly more likely to experience cyberbullying, online sexual predation, and a range of other online harms, including disrupted sleep and fragmented attention, compared with their peers. LGBTQ+ minors are also three times more likely to experience unwanted and risky online interactions.”