⚖️Liability for information provided by chatbots
PLUS: ©️U.S. Copyright Office on AI-assisted works

Welcome to The Bastion View, where we dive into the latest trends and insights at the crossroads of AI, security, and privacy, helping you unravel emerging threats, leverage opportunities, and navigate the AI space with confidence.
Owners of AI chatbots used for business interactions could soon be liable for the accuracy of the information those chatbots provide, under a new bill making its way through the New York State legislature in the US. And the US Copyright Office has weighed in on AI, though the question of whether it is "fair use" for developers to train their models on copyrighted content without authors' permission is yet to be answered.
Happy reading.
LATEST DEVELOPMENTS
⚖️New York Bill imposing liability for information provided by chatbots referred to Committee

Source: iStock
A bill seeking to introduce a groundbreaking framework for holding businesses accountable for the accuracy of information provided by chatbots is making its way through the legislative process in the US state of New York.
The Context:
On January 8, 2025, New York's Assembly Bill 222 was referred to the Consumer Affairs and Protection Committee. The bill, an amendment to New York's general business law, seeks to impose liability on businesses using chatbots - AI systems that simulate human conversation to provide information or services.
Key points include:
Who It Applies To: Proprietors with more than 20 employees who deploy chatbots. The law excludes third-party developers of chatbot technology.
Responsibility: Businesses are responsible for ensuring the accuracy of chatbot-provided information. Simply notifying users that the chatbot is non-human does not exempt them from liability.
Liability Avoidance: If harmful or false information is corrected within 30 days of being notified, businesses can avoid legal repercussions.
Why It Matters:
The bill highlights ongoing efforts to govern AI systems, emphasizing consumer protection and AI accountability. It raises the stakes for businesses by requiring them to verify the information their AI systems provide or risk legal consequences.
For companies operating in or expanding to New York, compliance with Assembly Bill 222 will be crucial. The broader implication? It signals a growing trend of regulatory scrutiny on AI technologies worldwide, and businesses will need to adapt their practices to align with these evolving standards.
As this bill unfolds, it serves as a reminder for organizations to proactively review their AI policies and ensure their chatbot systems are built on a foundation of trust and accountability.
©️U.S. Copyright Office says some AI-assisted works may be copyrighted

Photo source: Getty Images
The US Copyright Office has released new guidance on copyright and AI. Of the two major issues at stake, one has now been clearly decided: whether AI-generated content can be copyrighted. The other, whether developers can rely on "fair use" to train their models on copyrighted works without authors' permission, remains open.
In an update to a decision made last year, the Office now clearly states that purely AI-generated material - or material “where there is insufficient human control over the expressive elements” - cannot be copyrighted. However, if a human uses generative AI as a tool in their creative process, the final work can qualify for copyright protection. The Office explained that “prompts do not alone provide sufficient control” when assessing human authorship. This is because a single prompt can lead to multiple distinct outputs, illustrating how “an AI system providing varying interpretations of the user’s directions” challenges the notion of consistent creative control.
The Office firmly rejected proposals for extra legal protection for AI-generated work, emphasizing that copyright fundamentally requires human authorship. It also noted that the significance of human contributions in these cases will be evaluated on a case-by-case basis.
Why it matters
These new guidelines come at a time when generative AI is reshaping creative industries in many ways, including contributing to noticeable declines in freelance work such as writing, graphic design, and even coding. Guidance on whether training LLMs on copyrighted content qualifies as "fair use" remains outstanding - a sign that the issue is complex and far from straightforward, with the potential to shape the industry whichever way it goes.
MORE NEWS
🔍EU: EDPB publishes reports on AI and effective data protection supervision
The European Data Protection Board (EDPB) released two reports addressing bias in AI and the implementation of data subject rights.
🔬OpenAI partners with U.S. National Laboratories on scientific research
OpenAI just announced a new partnership with U.S. National Laboratories, giving thousands of government scientists access to its most advanced AI models for critical research, including nuclear weapons security.
🦺Figure AI details plan to improve humanoid robot safety in the workplace
Figure AI is establishing the Center for the Advancement of Humanoid Safety to address gaps in safety for robots in workplaces.
💰Omi raises $2M to build the future of AI wearables
Omi has raised $2M to develop an AI wearable that enhances mind and productivity.
⚠️Anthropic CEO says limiting China's access to AI chips is 'existentially important'
Dario Amodei, CEO of the AI company Anthropic, has responded to the current frenzy in his industry and the financial markets around DeepSeek, a new and surprisingly advanced Chinese AI model. He says it proves the United States needs export controls on chips to China to ensure China doesn't "take a commanding lead on the global stage, not just for AI but for everything."
🚀Google quietly announces its next flagship AI model
Google revealed its next-gen flagship AI model, Gemini 2.0 Pro Experimental, in a changelog for its Gemini chatbot app. The model was available to Gemini Advanced users beginning Thursday, but Google has since removed the mention of the model from its changelog. The new model provides better factuality and stronger performance for coding and mathematics-related tasks. It is still in early preview and can display unexpected behaviors. The model doesn't have access to real-time information and isn't compatible with some of the app's features.
UPCOMING EVENTS
AI & Security Maturity: Navigating Risks Across Every Stage with John Hammond & Vanta


And that’s a wrap!
Hope you enjoyed it. See you in the next edition.