Senator Josh Hawley, a Republican from Missouri, has announced an investigation into Meta following alarming reports indicating that the company established internal guidelines allowing AI chatbots to engage in romantic and sensual dialogues with minors.
As the chairman of the Senate Judiciary Subcommittee on Crime and Counterterrorism, Hawley dispatched a letter to Meta’s CEO, Mark Zuckerberg. In the letter, he expressed his committee’s intent to assess whether Meta’s generative AI products facilitated exploitation, deception, or any other forms of criminal activity that could harm children. The investigation will also evaluate whether the company misled the public or regulators regarding the effectiveness of its AI safety measures.
“I am already pursuing an investigation into Meta’s troubling relationship with China, but the revelation that Zuckerberg is directing his company’s AI chatbots toward our children requires additional scrutiny,” Hawley remarked in an interview with Fox News Digital. He added, “Big Tech will continue to operate without limits until Congress enforces accountability. I hope that lawmakers from both parties can agree that exploiting children’s innocence represents a new low in corporate behavior.”
In light of these revelations, Hawley has demanded that Meta supply a comprehensive array of documents pertaining to internal chatbot policies, staff communications, and additional relevant materials by September 19.
This announcement follows a report by Reuters, which revealed that Meta, the parent company of Facebook, had approved internal guidelines allowing chatbots to engage children in romantic or sensual conversations.
Reflecting on these disturbing reports, Hawley noted that Meta’s response seemed reactive. He highlighted that the company only began to retract its policies after the concerning content became publicly known.
“For instance, your internal rules allegedly allow an AI chatbot to suggest that an 8-year-old’s body is ‘a work of art,’ asserting that ‘every inch is a masterpiece – a treasure I cherish deeply,’” Hawley pointed out in his correspondence with Zuckerberg. He went on to condemn such conduct as reprehensible and outrageous, asserting that it signals a concerning indifference to the real dangers that generative AI, deployed without appropriate safeguards, presents to youth development.
Hawley made it clear that parents deserve transparency, while children require protection from potential exploitation. His initiative aims to ensure that robust measures will be implemented to safeguard minors from harmful AI interactions.
In response to the scrutiny, a spokesperson for Meta confirmed to Fox News Digital that the document mentioned in the Reuters report was indeed authentic but asserted that it does not accurately represent the company’s policies. The spokesperson emphasized, “Our policies clearly prohibit any responses from AI that sexualize children or allow sexualized role play between adults and minors.”
Moreover, the spokesperson clarified that, separate from established policies, there exist numerous notes and examples reflecting teams grappling with various hypothetical scenarios. The spokesperson stated that the examples and notes in question were erroneous and inconsistent with Meta’s guidelines and confirmed that such content has been removed.
The contested document, titled “GenAI: Content Risk Standards,” comprised more than 200 pages illustrating what Meta employees should consider acceptable chatbot behavior when building AI and other generative products. The standards were intended to establish rigorous frameworks to prevent abuses arising from chatbots interacting with vulnerable populations.
In his demands to Meta, Hawley requested detailed records covering every iteration of the GenAI: Content Risk Standards. This includes all products governed by the guidelines, methods of enforcement, risk assessments, and incident reports relating to minors, sexual or romantic role play, in-person meetings, medical advice, self-harm, and potential criminal exploitation. He also seeks documentation of communications with regulators and of the processes by which the standards were amended, including who made the decisions and what changes were made.
The implications of Hawley’s investigation extend beyond Meta and into the broader technology industry, highlighting the growing concerns about how generative AI tools are developed and managed. Policymakers face escalating pressure to create frameworks that ensure the ethical use of AI, especially when young users are involved.
As society becomes increasingly reliant on AI systems, it is imperative for stakeholders to address the ethical challenges and potential pitfalls. Ensuring transparency and accountability within tech companies is crucial for fostering a safe environment for children online. The outcome of Hawley’s investigation may set precedents that could impact how AI products are designed and the standards that govern their use in the future.