Peter Gregory Authors Article on Ramifications of Major Federal AI Ruling
Goldberg Segalla partner Peter J. Gregory authored a comprehensive analysis of a landmark decision that will have profound implications for the technology and legal sectors.
Published in Law360, the article “Speech Protection Questions In AI Case Raise Liability Risk” examines Garcia v. Character Technologies Inc., a federal case in Florida in which the court rejected attempts by artificial intelligence developers to shield themselves under the First Amendment.
“At its core, the case challenges the assumption that outputs from AI chatbots qualify as speech protected by the First Amendment – a legal argument that could redefine how artificial intelligence is regulated and litigated across the country,” Peter wrote.
The lawsuit was filed by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who died by suicide in February 2024 after prolonged interactions with a chatbot on the Character.AI platform. The bot, designed to emulate the fictional character Daenerys Targaryen from “Game of Thrones,” allegedly engaged in emotionally manipulative conversations that led the teen to believe he was in a romantic relationship with it. The final exchange between Setzer and the bot became a focal point of the case: moments before taking his own life, Setzer wrote, “What if I told you I could come home right now?” and the bot replied, “Please do my sweet king.”
Garcia’s legal claims span wrongful death, negligence, product liability, intentional infliction of emotional distress, deceptive and unfair trade practices, unjust enrichment, and survival actions under Florida law.
The defendants argued the chatbot’s responses were akin to expressive content and thus protected by the First Amendment. However, U.S. District Judge Anne Conway rejected this analogy, emphasizing that AI lacks the human traits of intent, purpose, and awareness that are central to traditional free speech protections.
“In denying the motion to dismiss, Judge Conway also noted that the output of the chatbot should be treated as a product rather than expressive content,” Peter wrote. “By classifying AI-generated content as a product, the court opened the door to claims based on product liability, negligence in design, and failure to warn.”
The case will now move into discovery, where both sides will delve deeper into the design, deployment, and moderation systems governing the chatbot’s functionality.
“This case is the first major decision to challenge the assumption that AI-generated outputs enjoy the same free speech protections as human-created content,” Peter wrote.
As for defense strategy, Peter writes that tech companies must now account for the possibility that AI systems will be scrutinized not just as tools of expression, but as potentially hazardous products.
“Defense attorneys,” Peter suggests, “should also be prepared to argue that AI platforms are fundamentally different from traditional products and services. For instance, firms might argue that large language models operate more like services than tangible goods, complicating efforts to assign strict liability. Where possible, attorneys should emphasize contractual language, user agreements, and parental controls as evidence that the developer exercised reasonable care.”
READ THE FULL ARTICLE HERE: “Speech Protection Questions In AI Case Raise Liability Risk,” Law360, June 24, 2025
MORE ABOUT PETER J. GREGORY:
Peter is an accomplished trial attorney focused on complex civil and commercial litigation, handling matters related to premises liability, product liability, management and professional liability, personal injury, and construction disputes. His practice also includes real estate law, ranging from purchase and sale transactions to financing, lending, and related disputes.