
Nobody reads the Terms of Service. Until Now.

An AI lawyer can read them for you.

We tested this by getting one AI (ChatGPT) to analyse the Terms of Service of another AI (Anthropic’s Claude), specifically from the perspective of someone working in journalism, film, media, or other creative industries. It came back with the short, easy-to-read one-pager below, and the findings were stark. And they are not unique to Anthropic.

Here’s the simple prompt I used; the whole exercise took five minutes:

“You act as a privacy and intellectual property expert. Read these terms of service and flag anything from the document I should be concerned about, with page reference. [link to Terms of Service]”

Variations can be applied to LinkedIn, Facebook, X, and any other service asking you to legally agree to its terms of service. You might be horrified at what you find.
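If you want to repeat the exercise programmatically rather than pasting the prompt into a chat window, here is a minimal sketch using the openai Python package. It assumes you have an API key set in your environment and the terms of service saved as a local text file; the file name and model name are placeholders, not part of the original workflow.

```python
# Minimal sketch: ask a model to flag risky clauses in a terms-of-service document.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Hypothetical local copy of the terms of service you agreed to.
with open("terms_of_service.txt", encoding="utf-8") as f:
    terms_text = f.read()

prompt = (
    "Act as a privacy and intellectual property expert. Read these terms of "
    "service and flag anything I should be concerned about, with page references.\n\n"
    + terms_text
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model will do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The output will vary with the model and the document, so treat it as a starting point for your own reading, not legal advice.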

ChatGPT Report: 

Privacy and IP Risk Assessment: Anthropic Terms of Service

Tailored for Journalism, Media Creation, and Film/TV Production

Reviewed: March 2025

Key Concerns from Anthropic’s Terms of Service

1. Your Content May Be Used by Anthropic

"You grant Anthropic a non-exclusive, worldwide, royalty-free license to use, host, store, reproduce, display, and create derivative works of Your Content..." (Section 4)

Implication: Input data (e.g., interviews, scripts, outlines) may be stored and reused by Anthropic. This includes potential use in model training or feature development. Input is not treated as confidential.

2. No IP Ownership or Exclusivity Over Outputs

"You may use Output... subject to these Terms. Anthropic and its licensors retain ownership of the Services, including the Output." (Section 5)

Implication: You do not own the content generated by the AI, and it may be reused by others. Outputs may not be original or protectable under copyright.

3. No Confidentiality Safeguards

"You should not include any sensitive personal data or confidential information in Your Content..." (Section 4)

Implication: The platform is not suitable for handling source data, whistleblower details, or embargoed material.

4. You Assume Legal Liability for Use

"You agree to defend, indemnify, and hold harmless Anthropic from claims... arising out of Your Content or your use of the Services..." (Section 11)

Implication: If generated content contains legally risky material, such as defamation or IP infringement, you bear full responsibility.

5. No Guarantees on Output Quality or Truth

"The Services and Output are provided 'as is'... we do not make any warranties... about accuracy or reliability." (Section 8)

Implication: The tool should not be relied upon for fact-based reporting, legal scripts, or compliance-heavy documentation.

6. Arbitration Clause and Class Action Waiver

"You agree to binding arbitration... and waive the right to class actions or jury trials." (Section 13)

Implication: You lose the ability to pursue legal disputes in court or as part of a class action; disputes must go through private arbitration.

Summary of Risks and Recommendations

This Is Becoming Industry Standard

It is important to note that these terms—non-confidential input, non-exclusive output rights, arbitration clauses, and limited warranties—are quickly becoming standard across nearly all major AI providers, including OpenAI, Google, and Microsoft. Most free or public-facing AI tools follow similar terms unless a commercial or enterprise contract is negotiated. Media professionals should assume that without a bespoke agreement, AI tools are not safe for handling confidential material or producing exclusive, commercially viable content.