Chairman’s Letter
At Artisanal Collective, we recognize that artificial intelligence is reshaping how knowledge is created, preserved, and shared. While AI remains an evolving field, particularly in areas such as model interpretability, we affirm that its benefits can outweigh its risks when it is governed responsibly and with appropriate safeguards.
We commit to advancing core governance tenets grounded in fundamental freedoms, human dignity, and equality, including gender equality; safeguarding the interests of present and future generations; preserving the environment, biodiversity, and ecosystems; and respecting cultural diversity throughout the AI system life cycle. We further commit to fostering multi-stakeholder, multidisciplinary, and pluralistic dialogue to guide the ethical development of AI.
To uphold these commitments, Artisanal Collective embeds governance directly into the design and operation of its platforms. Our systems enable recurring independent governance analytics and audits, the results of which are reviewed internally and presented to the Board of Directors for evaluation, recommendations, and any required remediation directives.
We are equally committed to transparency and accountability. Safety testing results will be made available in a transparent and proportionate manner, balancing public accountability with security, privacy, and the risk of misuse. At the same time, we will protect proprietary information, including training data and model architectures, except where disclosure is necessary to address significant risks, ensure accountability, or comply with applicable law.
Through this framework, with your help and feedback, Artisanal Collective seeks not only to use AI responsibly but to contribute meaningfully to the global advancement of ethical, inclusive, and culturally grounded artificial intelligence.
The following is a copy of our corporate By-Laws relating to our AI Governance Framework.
Charles Kao
Founder & Chairman
March 1, 2026
Artisanal Collective AI Governance Framework By-Laws
Article I – Purpose and Scope
These By-Laws establish the governance framework for the design, development, deployment, and oversight of all artificial intelligence systems operated by Artisanal Collective (AC), including its subsidiaries and affiliated platforms. These By-Laws apply to all internal systems and, to the extent practicable, to third-party systems integrated into AC operations.
Article II – Definitions
AI System refers to any machine-based system that generates outputs such as content, recommendations, or decisions.
Risk Classification refers to the categorization of AI systems based on their potential impact: Minimal, Limited, High, or Unacceptable.
High-Risk AI System refers to any system that may significantly impact human rights, safety, or societal outcomes.
Unacceptable Risk refers to AI uses that violate fundamental rights, exploit vulnerabilities, or cause unjustified harm.
AI Governance Officer (AGO) refers to the individual responsible for monitoring, enforcement, and escalation of AI governance matters.
Article III – Governance Structure
The Board of Directors shall have ultimate oversight and decision-making authority over AI governance matters.
Executive Management shall be responsible for operational implementation and shall ensure that all identified risks are escalated to the Board. Failure to escalate material risks shall constitute a breach of governance responsibility.
The AI Governance Officer (AGO) shall monitor system performance, review incidents, and escalate issues as appropriate.
AI Governance Workgroups, including global, regional, and national bodies, shall serve as advisory entities responsible for identifying risks and providing recommendations.
Article IV – Risk Classification and Application
All AI Systems shall be assigned a Risk Classification at the design or deployment stage.
Risk Classification shall be reviewed upon material system updates and during periodic governance audits.
Risk Classification shall be applied dynamically during system operation through automated governance checks embedded within AI interactions.
AI Systems classified as Unacceptable Risk shall not be deployed.
High-Risk AI Systems shall be subject to enhanced safeguards, including pre-deployment review, ongoing monitoring, and periodic reassessment.
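The classification scheme in this Article can be expressed as a simple decision rule. The sketch below is purely illustrative and is not part of the By-Laws; the names (`RiskClass`, `deployment_allowed`) are hypothetical, not drawn from any Artisanal Collective system.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk tiers defined in Article II."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

def deployment_allowed(risk: RiskClass) -> bool:
    """Unacceptable-risk systems shall not be deployed (Article IV)."""
    return risk is not RiskClass.UNACCEPTABLE

def requires_enhanced_safeguards(risk: RiskClass) -> bool:
    """High-risk systems require pre-deployment review, ongoing
    monitoring, and periodic reassessment (Article IV)."""
    return risk is RiskClass.HIGH
```

Under this rule, a system classified Unacceptable is rejected outright, while a High classification gates deployment on the enhanced safeguards described above.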
Article V – Governance Enforcement and Runtime Controls
Artisanal Collective shall implement automated governance controls within its platforms to evaluate AI prompts, inputs, and outputs against governance rules.
AI outputs that violate governance rules shall be blocked, modified, or flagged.
All governance decisions shall be logged in accordance with Article VII.
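One way the runtime control described in this Article could operate is sketched below. This is an assumption-laden illustration, not the actual implementation: the rule format, the `RuleResult` type, and the precedence order (block over modify over flag) are all hypothetical choices made for the example.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class RuleResult:
    rule_id: str
    violated: bool
    action: str  # "block", "modify", or "flag" (Article V)

def evaluate_output(text: str,
                    rules: List[Callable[[str], RuleResult]]) -> Tuple[str, List[RuleResult]]:
    """Evaluate an AI output against governance rules and return the
    final action plus every triggered rule (for logging, Article VII)."""
    triggered = [r for r in (rule(text) for rule in rules) if r.violated]
    # Blocking takes precedence over modification, which takes precedence
    # over flagging; with no violations the output is allowed through.
    precedence = {"block": 0, "modify": 1, "flag": 2}
    action = min((t.action for t in triggered), key=precedence.get, default="allow")
    return action, triggered

# Hypothetical rule: block outputs containing a prohibited term.
def no_prohibited_term(text: str) -> RuleResult:
    return RuleResult("R-001", "prohibited" in text.lower(), "block")

action, hits = evaluate_output("This output contains a prohibited claim.",
                               [no_prohibited_term])
```

The returned list of triggered rules would feed the logging requirement in Article VII, so every block, modification, or flag leaves an auditable trace.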
Article VI – Incident Classification and Escalation
Governance incidents shall be classified as follows:
Level 1 – Auto-Resolved: Issues automatically flagged and resolved by the system without human intervention.
Level 2 – Review Required: Issues requiring review by the AI Governance Officer.
Level 3 – Escalated: Issues presenting significant risk requiring escalation to Executive Management and, where appropriate, the Board.
Escalation criteria shall be defined and updated within governance systems.
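The three-level scheme above implies a routing rule: each level determines who must be notified. The sketch below illustrates that mapping; the recipient labels are taken from this Article, but the function and type names are hypothetical.

```python
from enum import IntEnum
from typing import List

class IncidentLevel(IntEnum):
    AUTO_RESOLVED = 1    # resolved by the system without human intervention
    REVIEW_REQUIRED = 2  # requires review by the AI Governance Officer
    ESCALATED = 3        # significant risk; Executive Management and Board

def route(level: IncidentLevel) -> List[str]:
    """Return the parties notified for a given incident level (Article VI)."""
    recipients = {
        IncidentLevel.AUTO_RESOLVED: [],
        IncidentLevel.REVIEW_REQUIRED: ["AI Governance Officer"],
        IncidentLevel.ESCALATED: ["AI Governance Officer",
                                  "Executive Management", "Board"],
    }
    return recipients[level]
```

Because the escalation criteria themselves are defined within governance systems and updated over time, the mapping above would in practice be configuration rather than code.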
Article VII – Logging, Monitoring, and Reporting
Artisanal Collective shall maintain logs sufficient to ensure traceability and accountability of AI system operations.
Logs shall include, at minimum:
- Governance rule version
- Prompt or interaction template reference
- Flags triggered
- Actions taken
- Escalation records
Monitoring shall be continuous and supported by automated systems. Periodic reports shall be generated for review by the AI Governance Officer and the Board.
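The minimum log fields listed above can be captured as a single record per governance decision. The structure below is a sketch of one possible schema; the field and class names are illustrative, not prescribed by these By-Laws.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class GovernanceLogRecord:
    """One entry per governance decision, mirroring the Article VII minimum."""
    rule_version: str           # governance rule version
    template_ref: str           # prompt or interaction template reference
    flags_triggered: List[str]  # flags triggered
    actions_taken: List[str]    # actions taken (block / modify / flag)
    escalation_record: str      # escalation record; empty string if none
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example entry for a blocked output (all values hypothetical).
record = GovernanceLogRecord(
    rule_version="v2.3",
    template_ref="onboarding-chat-007",
    flags_triggered=["R-001"],
    actions_taken=["block"],
    escalation_record="",
)
```

Storing each decision in this shape supports both the traceability requirement of this Article and the periodic reports reviewed by the AI Governance Officer and the Board.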
Article VIII – Data Governance and Third-Party Systems
All data processed by AI Systems shall be handled in accordance with applicable data protection laws and ethical standards.
Outputs from third-party AI systems shall be subject to the same governance controls as internal systems.
Where feasible, Artisanal Collective shall require third-party providers to meet governance standards consistent with these By-Laws.
Article IX – Transparency and Accountability
Artisanal Collective shall ensure appropriate transparency in AI system interactions and decisions.
Where relevant, users shall be informed when they are interacting with AI systems.
Safety testing results shall be disclosed in a proportionate and responsible manner.
Article X – Enforcement and Remedies
Non-compliance with these By-Laws shall require remediation.
System failures or repeated governance breaches shall trigger review and potential escalation.
Article XI – Amendments and Version Control
Executive Management may update these By-Laws as necessary for operational effectiveness.
All amendments shall be logged, including the nature of the change, justification, and approving authority.
Material changes shall be reported to the Board.
Article XII – Fundamental Rights and Continuous Improvement
Artisanal Collective shall develop automated reporting mechanisms, including AI-generated assessments of human rights, ethical, and societal impacts.
These assessments shall inform continuous improvement of governance practices.
Article XIII – Effective Date
These By-Laws shall be effective as of March 1, 2026, and operationalized through their integration into Artisanal Collective systems, including automated governance routines and monitoring processes.
Adopted and Approved by the Board of Directors of Artisanal Collective
Date: March 1, 2026
_________________________________
Charles Kao
Founder & Chairman
Artisanal Collective
