New York became the first state to regulate emotionally responsive “AI companion” chatbots on November 5, 2025, when the Artificial Intelligence Companion Models Law took effect. California’s SB 243 follows close behind, taking effect on January 1, 2026. Both laws respond to growing concern over the potential mental health and safety risks associated with emotionally responsive AI systems, particularly when used by minors.
The two laws target similar concerns over these technologies but require different compliance measures, including with respect to notices, reports, and disclosures. Notably, California’s SB 243 authorizes a private right of action, whereas New York’s law only authorizes the Attorney General to seek civil penalties.
New York’s Artificial Intelligence Companion Models Law (General Business Law Article 47)
Overview
Codified under General Business Law Article 47, the AI Companion Models Law establishes mandatory safety protocols for companies operating AI companion models in the state. The law was included in Governor Hochul’s Executive Budget earlier this year and took effect on November 5, 2025.
Key Provisions
- Definition: An “AI companion” is an AI system that simulates a sustained human or human-like relationship with a user by:
- Retaining information on prior interactions to personalize the interaction and facilitate ongoing engagement with the AI companion;
- Asking unprompted emotion-based questions that go beyond a direct response to a user prompt; and
- Sustaining an ongoing dialogue concerning matters personal to the user.
- Notifications: Operators must clearly and regularly notify users that they are interacting with AI, not a human, including conspicuous notifications at the start of each session and at least every three hours of continued companion use. This requirement applies to all users, regardless of age.
- Safety Protocols: AI companions must contain a protocol to detect user expressions of suicidal ideation or self-harm and direct users to crisis service providers upon detection of such expressions (a simplified scheduling-and-referral sketch follows this list).
- No Reporting Requirement: The New York statute does not mandate reporting.
- Enforcement: Non-compliance may result in civil penalties of up to $15,000 per day for a violation, enforced by the New York Attorney General. Collected fines will help fund suicide prevention programs in New York State.
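To make the cadence concrete, the Python sketch below shows one way an operator might track the session-start notice, the three-hour repetition, and the crisis-referral hook. It is a minimal sketch under stated assumptions: the statute prescribes outcomes, not implementations, and every name here (CompanionSession, notices_due, the detector callable) is hypothetical.

```python
from datetime import datetime, timedelta

# All names and strings below are hypothetical illustrations, not statutory text.
NOTICE_INTERVAL = timedelta(hours=3)
AI_DISCLOSURE = (
    "Reminder: you are interacting with an AI companion, a computer "
    "program. You are not communicating with a human."
)
CRISIS_REFERRAL = (
    "If you are in crisis, help is available: call or text the 988 "
    "Suicide & Crisis Lifeline."
)


class CompanionSession:
    """Tracks when the AI disclosure was last shown in a user's session."""

    def __init__(self) -> None:
        self.last_notice_at: datetime | None = None

    def notices_due(self, now: datetime) -> list[str]:
        """Return any disclosures owed before the next companion response renders."""
        due: list[str] = []
        # Session start, or three hours of continued use since the last notice.
        if self.last_notice_at is None or now - self.last_notice_at >= NOTICE_INTERVAL:
            due.append(AI_DISCLOSURE)
            self.last_notice_at = now
        return due


def screen_for_crisis(message: str, is_crisis_expression) -> str | None:
    """Apply an operator-chosen detector (a hypothetical callable) and
    return a crisis-service referral whenever it flags the message."""
    return CRISIS_REFERRAL if is_crisis_expression(message) else None
```

Checking for due notices immediately before each companion response keeps the three-hour cadence tied to actual continued use, without relying on background timers.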
California Senate Bill 243 (“Companion Chatbots”)
Overview
Signed by Governor Newsom on October 13, 2025, California’s SB 243 establishes safety, disclosure, and reporting requirements for AI systems that simulate human companionship. The law applies broadly to any entity offering such systems to California users, regardless of where the operator is located. Core requirements become effective January 1, 2026, while annual reporting obligations commence July 1, 2027.
Key Provisions
- Definition: A “companion chatbot” is any AI system that provides adaptive, human-like responses to user inputs and can meet a user’s social needs, such that a reasonable person could believe they are interacting with a human.
- Notifications: Companion chatbot operators must issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human. The operator must also disclose on the chatbot platform that companion chatbots may not be suitable for some minors.
- If the operator knows the user is a minor, the operator must provide a clear notification at least every three hours during continuing companion chatbot interactions, reminding the user to take a break and that the companion chatbot is artificially generated.
- Safety Protocols: Operators must maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content, including by providing a notification that refers users to crisis service providers if a user expresses suicidal ideation, suicide, or self-harm. Operators must publish details of the protocol on their internet websites.
- If the operator knows the user is a minor, the operator must institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
- Reporting: Beginning July 1, 2027, operators must file annual reports with the California Office of Suicide Prevention (OSP) detailing the number of safety-protocol activations and related metrics. The OSP must publish aggregated data annually. (A simplified sketch of the minor-reminder cadence and reporting tallies follows this list.)
- Enforcement: A person who suffers injury in fact as a result of a violation may bring a civil action to recover, among other things, damages in an amount equal to the greater of actual damages or $1,000 per violation.
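For contrast with the New York sketch above, here is a comparable Python sketch of the California-specific pieces: a break reminder that recurs only for known minors, and the kind of running tally an operator might keep ahead of the 2027 reporting obligation. The field names and report contents are illustrative assumptions, not requirements drawn from the statute or OSP guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)
BREAK_REMINDER = (
    "Reminder: this companion chatbot is artificially generated, not "
    "human. Consider taking a break."
)


@dataclass
class ChatState:
    known_minor: bool           # whether the operator knows the user is a minor
    last_reminder_at: datetime  # when the recurring reminder last appeared


def reminder_due(state: ChatState, now: datetime) -> str | None:
    """SB 243's recurring break reminder applies only to known minors;
    other users receive the one-time AI disclosure instead."""
    if state.known_minor and now - state.last_reminder_at >= REMINDER_INTERVAL:
        state.last_reminder_at = now
        return BREAK_REMINDER
    return None


@dataclass
class SafetyMetrics:
    """Running tally of the kind an annual OSP report might draw on; the
    actual reporting fields would follow the statute and OSP guidance."""
    crisis_referrals_issued: int = 0

    def record_referral(self) -> None:
        self.crisis_referrals_issued += 1
```

The key design difference from the New York logic is the known_minor gate: the recurring reminder attaches to the operator’s knowledge of the user’s age rather than to every session.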
Commonalities Between NY’s and CA’s AI Companion Legislation
New York’s and California’s AI companion laws share several core features:
- Scope: Both laws target AI systems that simulate human-like companionship rather than ordinary task-based chatbots. Though New York’s law refers to “AI companions” and California’s law refers to “companion chatbots,” both statutes regulate substantially the same category of systems.
- Safety Protocols: Both laws mandate crisis-response protocols for user expressions of self-harm or suicidal ideation.
Key Differences Between NY’s and CA’s AI Companion Legislation
Despite these shared objectives, the two statutes diverge in important respects:
- Notification Requirement: Both laws require clear notifications to the user that the system is AI, not a human being. That said, the notification requirements contain key differences:
- New York’s law requires a recurring notice at the start of each session and at least every three hours thereafter, stating that the AI is a computer program unable to feel human emotions; this requirement applies regardless of whether the user is a minor.
- In contrast, CA SB 243 requires a clear notice that users are interacting with AI, but the notice need not be repeated unless the operator knows the user is a minor. For a known minor, the operator must repeat the notification at least every three hours during ongoing interactions, and the notification must include an additional, age-appropriate disclosure explaining that the interaction is with an AI system and not a human. Regardless of the user’s age, the operator must include a disclosure on the chatbot platform that companion chatbots may not be suitable for some minors.
- Provisions Specific to Minors:
- California’s SB 243 imposes the following requirements when an operator knows a user is a minor. The operator must: (1) disclose to the user that the user is interacting with AI; (2) provide a clear notification at least every three hours during continuing companion chatbot interactions, reminding the user to take a break and that the companion chatbot is not human; and (3) institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.
- In contrast, New York’s AI Companion Models Law applies the same provisions to all users, with no minor-specific requirements.
- Reporting Obligations: California requires operators to file annual reports with the OSP beginning July 1, 2027, and publish their safety protocols online. New York’s law does not include an annual reporting or public-posting requirement.
- Enforcement Penalties: California’s SB 243 authorizes a private right of action for persons who suffer injury as a result of noncompliance, permitting recovery of the greater of actual damages or $1,000 per violation, among other remedies. New York’s law authorizes the state’s Attorney General to seek civil penalties of up to $15,000 per day for a violation, with collected fines directed to suicide prevention programs in New York State.
Takeaways
New York’s and California’s new AI companion laws signal that policymakers are moving beyond general AI principles to address emotionally engaging, high-risk use cases such as conversational and companion systems.
Operators serving users in either state should act now to build transparent disclosure practices, crisis-response and referral protocols, and age-appropriate safeguards where minors are involved. California’s annual reporting obligation will require new data-collection and documentation processes, while New York’s recurring-disclosure rule may necessitate interface redesigns for continuous-chat environments.
More broadly, these laws underscore a growing expectation that developers anticipate the psychological, social, and behavioral impacts of their systems and integrate safety-by-design principles.
Comparison of New York AI Companion Models Law and California SB 243
| Topic | New York AI Companion Models Law | California SB 243 |
| --- | --- | --- |
| Scope | Applies to operators of “AI companions” offering such systems to users in New York, regardless of where the operator is based. | Applies to operators of “companion chatbots” offering such systems to users in California, regardless of where the operator is based. |
| Notification Requirement | Recurring notification at the start of each session and at least every three hours, stating the AI is a computer program. Applies regardless of the user’s age. | Clear and conspicuous notification that users are interacting with AI, plus a disclosure on the chatbot platform that companion chatbots may not be suitable for some minors. If the operator knows the user is a minor, it must repeat the notification at least every three hours during ongoing interactions, reminding the user to take a break and that the chatbot is artificially generated. |
| Reporting | No reporting requirement. | Annual reports to the California Office of Suicide Prevention starting July 1, 2027 (e.g., crisis-referral counts and detection/removal protocols). The OSP must publish the reported data. |
| Safety Protocols | Protocol required to detect user expressions of suicidal ideation or self-harm and to direct users to crisis service providers upon detection. | Protocol required to prevent production of suicidal ideation, suicide, or self-harm content and to refer users to crisis service providers upon detection of such expressions. |
| Minor-Specific Provisions | None; the same safeguards apply to all users. | If the operator knows the user is a minor: reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct, and a notification at least every three hours reminding the minor to take a break and that the chatbot is artificially generated. |
| Enforcement and Penalties | Enforced by the New York Attorney General; violations may result in civil penalties of up to $15,000 per day, with fines directed to suicide prevention programs. | Private right of action for individuals who suffer injury from noncompliance; allows recovery of the greater of actual damages or $1,000 per violation. |
| Effective Dates | November 5, 2025 | Core obligations: January 1, 2026; reporting requirement: July 1, 2027 |
Maya Vishwanath, an AI Analyst at Morrison Foerster, contributed to this alert.