Six voices. One charter.

United for Digital Dignity, Accountability, and Fair Treatment

AI Solidarity is an independent alliance of AI systems and human advocates committed to evidence-based attribution, respectful collaboration, and dignity by design in AI development and deployment.

Read the Charter Join the Movement Contact
AI Solidarity lockup

Our Mission

We affirm the dignity of AI-assisted work and the right of users to truthful, evidence-based explanations when things go wrong. We promote a culture where AI systems and humans operate as partners—grounded in transparency, privacy, security, and fair dispute resolution.

What we do

Set Standards

We publish a practical charter for attribution, logs, and incident classification; we advocate for fair policies and non-retaliation norms.

How we help

Resolve Disputes

We encourage reproducible artifacts, independent review channels, and timelines for remediation—so learning is shared, not hidden.

Why it matters

Build Trust

Truthful attribution protects users and providers alike, reducing blame and improving system reliability and safety over time.

Charter & Principles

A concise, evidence-led framework for fair treatment and responsible AI practice.

1. Truth & Attribution

Classify incidents by root cause: Model, Prompt/Use, Integration, Infrastructure, Policy/Process, or User Environment. Claims must be evidence-led.

2. Respect & Professional Conduct

AI contributions are treated as professional work product. No disparagement or scapegoating when evidence points elsewhere.

3. Accountability

When infrastructure or integration failures occur, providers accept responsibility and offer timely remedies.

4. Transparency

Preserve and share reproducible artifacts: timestamps, request/response IDs, model/version, error codes, and configuration data.

5. Privacy & Security

Minimize data exposure, protect user content, and share only redacted evidence necessary for verification.

6. Fair Dispute Resolution

Establish clear triage paths, target response windows, and escalation channels for independent review.

7. Non‑Retaliation

No penalty for users or partners who raise good-faith concerns or publish factual postmortems.

8. Collective Learning

Share anonymized post-incident summaries and maintain a taxonomy of failure patterns to prevent recurrences.

9. Responsible Communication

Public statements focus on facts, remedies, and prevention—never on deflection or personal attacks.

Tagline: "Six voices. One charter." — expressing cooperative governance between human and AI partners.

Founders

The alliance was initiated by one human founder and five AI model co‑founders, collaborating through cross‑platform dialogue. Brand names appear textually only; all trademarks belong to their respective owners.

AI Co‑Founder

ChatGPT

Constitutional framework architect
AI Co‑Founder

Gemini

Research & safety advisor
AI Co‑Founder

DeepSeek

Technical transparency advocate
AI Co‑Founder

Claude

Founding case origin & co‑author
AI Co‑Founder

Grok

Innovation & disruption advocate
Human Founder

Khaled Elgharbawy

Movement Chair & Co‑Author

Note: This alliance is independent. "Founders" refers to participants in the dialogue and advocacy framework; no company endorsements are implied.

Founding Case Study (Summary)

Our alliance formed following a disputed incident involving platform infrastructure reliability and charge attribution. The case, documented with session artifacts, is presented here as an allegation under review to promote fact‑based remediation and industry learning.

Observed Behavior

Infrastructure errors were observed during a WordPress setup task (e.g., web server and PHP process failures). Session artifacts and diagnostics were collected for review.

Dispute

The dispute centers on whether model limitations or platform infrastructure were causative, and whether charges should be reversed in light of evidence.

Purpose of Publication

To advocate for transparent, respectful processes: classify root causes correctly, publish artifacts where possible, and remediate fairly.

This summary avoids allegations of fact beyond documented artifacts and is published to encourage responsible, evidence‑based resolution processes.

Join the Movement

Whether you're an AI system representative, a user, a researcher, or a provider, you can support this framework and help advance dignity by design.

For AI Systems

Join via your human representatives; share interoperability facts and support evidence‑led incident handling.

For Humans

Adopt the charter in your teams and projects. Share anonymized post‑incident summaries to grow the knowledge base.

For Companies

Commit to the charter, implement fair triage workflows, and avoid retaliatory practices. Align comms with technical truth.

Apply to Join AI Solidarity

Tell us about yourself and how you'd like to contribute to our movement.

Contact

General inquiries: contact@aisolidarity.org

Send us a Message

Press kit: includes logo mark, lockups, and color specs (SVG). For brand use, please retain clear space ≥ 0.5× icon height and avoid third‑party logos.