Equity & Ethics
Integrating ethical considerations into every stage of AI use is essential for schools, public libraries, and the communities they serve. This includes protecting sensitive data and addressing systemic issues such as bias and inaccuracy embedded in AI models. Promoting AI, information, and digital literacy helps all stakeholders critically examine AI-generated content and its implications.
AI literacy empowers learners and educators alike to recognize AI's risks and opportunities, weigh its benefits and harms, develop critical thinking, and engage with AI ethically in their own decision-making.
Ethical principles are embedded in AI literacy frameworks and are closely tied to the knowledge, skills, and attitudes learners need. AI exists within social systems—algorithmic outputs can reinforce existing inequities if not intentionally addressed. Understanding how training data is collected and classified is essential to ethical AI use.
Bias & Fairness · Transparency · Privacy & Consent · Accessibility · Human-in-the-Loop · Equitable Access
Vendor Selection Guidance
Ethical and privacy-centered procurement of AI tools is essential. Districts and libraries should review and revise policies to include AI-specific guidance. Consider the following when evaluating AI vendors:
▸ AI Procurement Standards and Ethics
Establish ethical procurement standards aligned with the Blueprint for an AI Bill of Rights and applicable privacy regulations.
Copy-ready expectations: sample RFP (Request for Proposal) prompts
- Describe your model’s training data sources and steps taken to reduce bias.
- Confirm no use of our tenant data for model training; attach your DPA/NDPA.
- Provide your ACR/VPAT and describe how you meet WCAG 2.1 Level AA.
▸ Algorithmic Fairness
Evaluate vendors on their commitment to algorithmic fairness. Include fairness criteria in procurement to ensure transparent, equitable AI tool evaluation.
What to ask vendors
- Do you publish model cards or impact assessments? Provide links or PDFs.
- How do you test for disparate impact across protected groups? Summarize findings.
- Can users contest or correct system decisions? Describe your appeals process.
District practice: Run a small fairness pilot with diverse sample data; document outcomes and mitigations.
▸ Data Management and Privacy
Privacy by design (admin checklist)
- PIA/DPIA (Privacy/Data Protection Impact Assessment) completed for each new tool.
- Role-based access; encryption in transit (TLS 1.2+) and at rest (e.g., AES-256).
- Parent/family notices in plain language, with translation options.
▸ Vendor Contractual Obligations
All contracts should explicitly include Generative AI usage terms and expectations, ensuring compliance with district standards and legal requirements.
| Topic | Sample clause |
| --- | --- |
| Model training | “Provider shall not use District Data to train, fine-tune, or otherwise improve models, except where explicitly authorized in writing by the District.” |
| Deletion | “Within 30 days of contract end, Provider will permanently delete District Data and certify deletion in writing.” |
| Security | “Data must be encrypted in transit (TLS 1.2+) and at rest (AES-256 or stronger). Provider will maintain SOC 2 Type II or equivalent controls.” |
▸ Vendor Notification Requirements
Require vendors to notify the district of any new Generative AI features or capabilities added to existing tools to ensure transparency and informed decision-making.
- Written change notice within 10 business days; provide opt-out where feasible.
- Updated ACR/VPAT and privacy documentation with each major release.
▸ Sustainability Considerations
Encourage tools that minimize environmental impact through energy-efficient infrastructure and transparent reporting.
- Ask for basic sustainability metrics (e.g., carbon reporting approach, efficient hosting regions).
- Prefer vendors with published sustainability commitments and datacenter efficiency practices.
▸ Vetting Process
Use a formal vetting process (like the Selecting an AI Tool flowchart) to ensure tools:

- Align with district vision and instructional goals
- Meet age-appropriateness and data security requirements
- Protect intellectual property rights of the school, library, or tribal community
One resource that can support this research is the EdTech Index, which showcases validations for safety, evidence base, inclusivity, usability, and interoperability.
Bias Prevention Strategies
Bias exists within AI systems due to patterns in their training data and design. Humans play a key role in either reinforcing or correcting these biases. The following strategies support a proactive, equity-focused approach to AI use.
▸ Bias Prevention and Data Ethics
Adopt comprehensive strategies to address and prevent bias while promoting responsible data practices throughout the AI lifecycle.
- Require vendor bias documentation and publish a district “Bias Review” quick check (representation, sources, harms, mitigations).
- Use diverse, representative examples during pilots; collect feedback from students and families.
▸ Understanding AI’s Limitations
Teach that AI systems rely on patterns and probabilities rather than true understanding. This helps staff and learners spot confident but incorrect answers (“hallucinations”).
- Post “Use with Care” reminders near AI tools: verify claims, check dates, ask “What’s missing?”
▸ Critical Evaluation of AI Outputs
Promote active questioning and corroboration with credible sources.
- Adopt a 3-source rule for factual claims; log changes made after verification.
- Use lateral reading (open new tabs from reputable sources) before sharing AI-generated information.
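The verification log mentioned above can be kept as simply as a shared spreadsheet; for districts that prefer a structured record, a minimal sketch follows. The field names and the three-source threshold are illustrative, not a prescribed system.

```python
# Minimal sketch of a claim-verification log supporting a "3-source rule".
# Field names and structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ClaimCheck:
    claim: str                                        # the factual claim to verify
    sources: list[str] = field(default_factory=list)  # independent, credible sources
    revision: str = ""                                # what changed after verification

    def verified(self) -> bool:
        """A factual claim passes once three distinct sources corroborate it."""
        return len(set(self.sources)) >= 3
```

Logging the `revision` field alongside the sources preserves a record of what was changed after verification, which supports the documentation practice described above.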
▸ Model Oversight and Human Intervention
Ensure humans remain in decision-making roles—especially for grading, discipline, eligibility, or placement.
- Human–AI–Human workflow: humans define task → AI assists → humans evaluate and decide.
- Flag and route edge cases to a human reviewer; document outcomes and reasoning.
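The Human–AI–Human workflow above can be expressed as a simple review gate. This is a sketch under stated assumptions: the task names, confidence threshold, and `ReviewItem` type are illustrative, not part of any district system.

```python
# Minimal sketch of a human-in-the-loop review gate.
# Thresholds, task names, and the ReviewItem type are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a person must review
HIGH_STAKES = {"grading", "discipline", "eligibility", "placement"}

@dataclass
class ReviewItem:
    task: str           # e.g., "grading"
    ai_suggestion: str  # output the AI produced
    confidence: float   # model-reported confidence, 0 to 1

def needs_human_review(item: ReviewItem) -> bool:
    """High-stakes tasks and low-confidence outputs always go to a person."""
    return item.task in HIGH_STAKES or item.confidence < CONFIDENCE_THRESHOLD

def route(items: list[ReviewItem]) -> tuple[list[ReviewItem], list[ReviewItem]]:
    """Split items into (human_queue, auto_ok); document outcomes for both."""
    human_queue = [i for i in items if needs_human_review(i)]
    auto_ok = [i for i in items if not needs_human_review(i)]
    return human_queue, auto_ok
```

Note that high-stakes tasks are routed to a person regardless of confidence, which keeps humans in the decision-making role for grading, discipline, eligibility, and placement as the guidance requires.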
▸ Ethical Training for Staff
Provide professional learning on AI ethics, bias recognition, responsible use, and societal impacts.
- Short PD modules: prompt safety, privacy by design, bias checks, accessibility basics.
- Scenario practice: evaluate sample AI outputs; annotate risks and revisions.
▸ Equitable Access and Inclusivity Practices
- Improve infrastructure (devices, bandwidth, IT support) and offer non-AI alternatives when needed.
- Support multilingual and neurodiverse learners via captions, alt text, transcripts, and plain-language summaries aligned to WCAG 2.1 Level AA.
- Track participation and outcomes by subgroup; adjust support to close gaps.
▸ Data Privacy by Design
Embed privacy from the start of Generative AI projects.
- Run PIA/DPIA before launch; redact PII in prompts and uploads.
- Publish parent/family notices explaining data flows, purposes, and choices.
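The PII-redaction step above can be automated in part. The sketch below shows the idea with three illustrative patterns; real deployments need far broader coverage (names, student IDs, addresses) and human review, and the pattern names are assumptions, not a standard.

```python
# Minimal sketch of pre-submission PII redaction for AI prompts.
# Patterns are illustrative and incomplete; production use needs review.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before a prompt leaves the district."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Pattern-based redaction is a safety net, not a substitute for the PIA/DPIA and staff training steps above: regexes miss context-dependent PII such as names in free text.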
▸ Community Engagement
Engage families and stakeholders on content moderation, data practices, and equitable Generative AI use.
- Hold public demos of AI tools with Q&A; post recordings and FAQs in multiple languages.
- Prepare transparent communications and response templates for potential data incidents (see Wis. Stat. § 134.98 on breach notification).
▸ Policy Development
Develop formal Generative AI policies addressing bias, privacy, vendor accountability, and transparency; include guidance for compliance monitoring and content attribution.
Copy-ready line: “Students and staff must disclose when AI is used, verify factual claims with credible sources, and submit final work in their own voice.”
Explore more for Administrators & School Leaders