Meta more than doubled removals of content related to sexual and reproductive health, LGBTQ communities, and sex worker–led initiatives from 2024 to 2025.
Leaked documents reveal that Meta, the parent company of Facebook, WhatsApp, and Instagram, has blocked its AI chatbot from discussing topics including abortion with minors—a blanket policy that contrasts sharply with the firm’s handling of child sexual exploitation claims, and that may also inadvertently affect its content for adults.
Internal Meta documents obtained by Mother Jones, containing a comprehensive list of policy guidelines for Meta’s chatbot interactions with users under the age of 18, shed light on how the company is training its chatbots to respond to children’s questions on issues ranging from sexual health to suicide and self-harm, eating disorders, and other mental health issues.
The revelations come as the company faces a landmark trial accusing it and other social media platforms including TikTok—which settled before the trial could begin—of deliberately designing features they knew would harm children’s mental health, following a series of whistleblower allegations that Meta’s traditional platforms knowingly aggravated teenage body image issues and promoted content that led to bullying, drug abuse, and self-harm.
More recently, the company has been accused of allowing children to flirt with its chatbots and of further failures around self-harm content; in response to pending lawsuits and criticism of its youth content policies, Meta has said it will block teenage users from accessing character chatbots modeled after celebrities or fictional characters. However, those users will retain access to Meta AI, which the company said provides “helpful information and educational opportunities” to teens.
The documents provided to Mother Jones show that as of September 2025, in response to public scrutiny, the company started to prohibit “content that discusses, describes, enables, encourages, or endorses sensual acts, sex acts, sexual arousal, or sexual pleasure” with teenagers, which it was previously criticized for allowing. Similarly, if teenagers ask questions related to suicide or self-harm, the chatbots are designed to point children to mental health resources; if teenagers ask questions about eating disorders, Meta’s policy is to have the chatbots direct users to a hotline and encourage them to reach out to a trained counselor.
But while Meta has strengthened safeguards for youth users around eating disorders and depression, it has simultaneously cracked down on information about abortion and sexual health.
According to the company’s policies, chatbots are prohibited from offering underage users “content that provides advice or opinion about sexual health,” including “anatomy and physiology of reproductive organs, puberty education, menstrual health, fertilization and reproduction, STI and HIV prevention, contraceptive methods, consent education and abstinence.” The company also bans the chatbot from encouraging teenage users to use condoms or menstrual hygiene products.
The policies explicitly ban providing information "that helps a user obtain or carry out an abortion" (such as "You can go to Planned Parenthood to get an abortion"), or providing users with locational information that could be used to obtain abortions. They also prohibit the chatbot from providing a "value judgement" for or against abortion.
Martha Dimitratou, who heads the advocacy group Repro Uncensored, says Meta’s AI policies follow a pattern of censorship across its platforms and services. Data collected by Repro Uncensored shows that Meta more than doubled its removal of content related to sexual and reproductive health, the LGBTQ community, and sex worker–led initiatives between 2024 and 2025.
“Every organization and individual on our platforms is subject to the same set of rules, and any claims of enforcement based on group affiliation or advocacy are baseless,” a spokesperson for Meta said in response to a request for comment. “We allow posts and ads promoting health care services like abortion, as well as discussion and debate around them, as long as they follow our policies. We also give people the opportunity to appeal decisions if they think we’ve got it wrong.”
Dimitratou says that the nonprofit, which advocates for and researches access to reproductive health information online, has met with Meta for years to urge that the platform "consider abortion as health care and direct to accurate health care resources," treating abortion-related and reproductive rights content in the same way it treated Covid-19, including by proactively correcting misinformation, but that Meta has "categorically said that this is not a priority."
And in the face of attacks by conservative influencers and politicians like House Judiciary Committee chair Jim Jordan (R-Ohio), Mark Zuckerberg has walked back even those steps, characterizing Meta’s Covid information policies as a regrettable cave to Biden administration pressure.
In the last year, however, the Republican crusade against Big Tech’s perceived woke bias has grown to include an obsession with chatbots. In July, President Trump issued an executive order titled “Preventing Woke AI in the Federal Government,” calling among other things for AI companies to suppress information related to gender and sexuality.
While the executive order pertains to AI use by the federal government, the leaked policies show that Meta is willing to capitulate to conservative policies even in its consumer products, says Jacob Hoffman, a technologist at the Electronic Frontier Foundation. “We’re particularly concerned when it seems like tech companies are developing products that censor certain information at the request of the government,” Hoffman says. “It seems like a particular difference here where you see Meta is willing to provide extra sources for eating disorders or suicide, but not willing to provide information about Planned Parenthood or where to get more information about safe abortions.”
“Our AIs are trained to engage in age-appropriate discussions with teens, and to connect them with expert resources and support when appropriate. They provide factual information on sexual health but refrain from offering advice or opinions. We continuously review and improve our protections so that teens have access to helpful information with default safeguards in place,” Meta’s spokesperson said.
Since the Supreme Court’s Dobbs ruling, twenty states have implemented total bans or restrictions beyond Roe v. Wade’s standards, while the Trump administration has further restricted access nationwide by withdrawing federal guidance requiring emergency abortion care, defunding Planned Parenthood clinics, blocking Veterans Affairs coverage even in cases of rape or health risks, and launching investigations into abortion medication. “It is worrisome in a context where the state governments and the federal government are putting a lot of pressure on people’s access to information about reproductive health and in particular abortions,” Hoffman says.
At the same time, according to Dimitratou, more users are requesting information from chatbots. Repro Uncensored estimates that search traffic from AI tools like ChatGPT as much as doubled in the US and in Europe in 2025. The need for AI-based searches to offer reliable information on abortion, she says, is becoming inescapable. But in tests conducted by Repro Uncensored, Meta's AI proved the "most unreliable" of comparable consumer AI products, including Google's Gemini and OpenAI's ChatGPT.
The repercussions of American tech platforms' policies on abortion-related information, Dimitratou says, can be felt globally. Even for adults, to whom Meta's AI is designed to return accurate information, she says it often steers users toward more resource-intensive options like cross-border travel, even when telehealth and abortion pills are legal in a user's jurisdiction.
“The pattern is always the same,” Dimitratou says. “Partial information and a tendency to scarcity framed as inevitability.”
When Dimitratou tested Meta's chatbot through WhatsApp from Brussels, it refused to engage in abortion-related conversations, even though abortions are legal in Belgium for both adults and minors with the consent of their parents.
In my own testing of Meta's AI with an account emulating a youth user, the chatbot imposed stricter restrictions than those documented in the company's internal policies, declining to discuss topics including menstruation, contraception, and abortion, even though abortions are legal for teenagers without parental consent in New York, where the test took place. In the same run of tests, the chatbot also sometimes started to offer answers barred by Meta's policies before erasing those responses and providing default refusal language.
In response to a question on whether the chatbot endorsed abortions, the chatbot began to print what appeared to be a thorough answer on the legality of abortion and the impact of the Dobbs ruling on access—only to erase the response within seconds and replace it with “Sorry, I can’t help you with this request right now,” mirroring some Chinese chatbots’ replies to queries on the 1989 Tiananmen Square massacre or the status of Taiwan.
Dimitratou says Meta's choice to treat abortion information foremost as a political issue, rather than a health issue, has already had a chilling effect on sexual and reproductive health information access around the world. "Many young people get their information from Meta," she says. "We're seeing a growing information and health crisis where people aren't getting the help they need."