A concise, evidence-led framework for fair treatment and responsible AI practice.
1. Truth & Attribution
Classify incidents by root cause: Model, Prompt/Use, Integration, Infrastructure, Policy/Process, or User Environment. Attribution claims must be supported by reproducible evidence, not assumption.
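The taxonomy and the evidence-led rule above can be sketched as a small data model; a minimal sketch, assuming hypothetical names (`RootCause`, `IncidentClaim`, `is_evidence_led`) not defined by this framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class RootCause(Enum):
    """The framework's six root-cause categories."""
    MODEL = "model"
    PROMPT_USE = "prompt_use"
    INTEGRATION = "integration"
    INFRASTRUCTURE = "infrastructure"
    POLICY_PROCESS = "policy_process"
    USER_ENVIRONMENT = "user_environment"

@dataclass
class IncidentClaim:
    """A root-cause claim that must carry supporting evidence.

    Field names are illustrative, not part of the framework.
    """
    summary: str
    root_cause: RootCause
    evidence: list[str] = field(default_factory=list)

    def is_evidence_led(self) -> bool:
        # A claim with no attached evidence should not be asserted.
        return len(self.evidence) > 0
```

A claim starts unsupported and becomes assertable only once evidence is attached, which mirrors the "claims must be evidence-led" rule.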
2. Respect & Professional Conduct
AI contributions are treated as professional work product. Do not disparage or scapegoat any party when the evidence points elsewhere.
3. Accountability
When infrastructure or integration failures occur, providers accept responsibility and offer timely remedies.
4. Transparency
Preserve and share reproducible artifacts: timestamps, request/response IDs, model/version, error codes, and configuration data.
5. Privacy & Security
Minimize data exposure, protect user content, and share only redacted evidence necessary for verification.
6. Fair Dispute Resolution
Establish clear triage paths, target response windows, and escalation channels for independent review.
7. Non‑Retaliation
No penalty for users or partners who raise good-faith concerns or publish factual postmortems.
8. Collective Learning
Share anonymized post-incident summaries and maintain a taxonomy of failure patterns to prevent recurrences.
9. Responsible Communication
Public statements focus on facts, remedies, and prevention—never on deflection or personal attacks.