AI Toolkits for Educators & Libraries

Selecting AI tools responsibly means weighing privacy, bias, equity, instructional value, and long-term support. The choices you make shape not only learning quality but also the safety, fairness, and inclusiveness of AI use across schools and libraries.


Framework for Choosing AI Tools Responsibly

Use the DPI flowchart with the criteria below to move from idea to approved tool. Build a short pilot (classroom or library), gather data, and only then scale.

Flowchart showing steps: define purpose → review privacy/data → pilot → approval → implementation → monitoring.
Tip: print at letter size and post in planning spaces.

Next steps: Pilot test → gather feedback → review ethics & bias → seek approval → implement & train → monitor & refine.

 

Key Considerations: Privacy, Bias & Equity

Click each topic for practical checklists, sample language, and ready-to-use activities.

Privacy & Legal Compliance

Compliance checklist (copy-ready):

  • Tool supports COPPA, FERPA, and applicable Wisconsin statutes (Ch. 43.30, 134.98).
  • Vendor signs the Wisconsin Student Data Privacy Agreement (or equivalent DPA).
  • Data use limits: No using student data to train models; no advertising; no unauthorized third-party sharing.
  • Deletion: Full deletion upon account closure and at contract end; documented process.
  • Parent rights: Access, correction, deletion processes are clear and timely.

PIA mini-template (paste into your doc):

  • Purpose: What learning or operational need does the tool solve? Who benefits?
  • Data: What PII is collected? Optional vs. required? Storage location & duration?
  • Risk: Top 3 risks (privacy, security, equity). Mitigations?
  • Access: Roles/permissions, audit logs, admin controls, export/deletion process.
  • Communication: Family notice, consent (if needed), classroom disclosure language.

District Policies & Practices

Update these documents (suggested language included):

  • Acceptable Use & Data Privacy — add AI sections on disclosure, verification, and human-in-the-loop assessment.
  • AI Use Tiers — label tasks as Human-Only, AI-Assist, or AI-Optional; require student disclosure box when AI is used.
  • Tool Vetting Workflow — see approval steps below; document roles and timelines.

Approval steps (copy-ready):

  1. Teacher/Library proposal (purpose, standards, outcomes, target group).
  2. Privacy review (PIA + DPA check + SDPC listing).
  3. Pilot plan (class size, duration, success metrics, family comms).
  4. IT compatibility (SSO, filtering, device load, accessibility/WCAG).
  5. Admin decision & communication (approved/denied/conditions).
  6. Training & rollout; schedule 30- and 90-day reviews.

District exemplar: Green Bay Area Public School District’s AI Guidelines — a practical model for classroom expectations and approvals.

Addressing Bias

Quick activities you can run during vetting:

  • Counter-example test: Enter diverse names, dialects, and contexts; record any skewed outputs.
  • Perspective audit: Ask the tool to list missing voices in its own response; compare to standards & library databases.
  • Rubric check: Require explainability (why an answer was suggested) when feasible.

Vendor questions (ask in writing):

  • What training data sources are used? How is harmful content filtered?
  • Do you fine-tune on our data? If yes, under what contractual controls?
  • Describe bias evaluation methods and recent results. Share your model card.
  • What user controls exist for feedback, corrections, and appeal?

Ensuring Equity & Access

Equity moves (ready to adopt):

  • Require UDL supports: captions, alt text, plain-language summaries.
  • Check language access: multilingual UI or export; include interpreter line in family comms.
  • Plan for no/low connectivity: offline mode or printable scaffolds.
  • Monitor usage by subgroup (grade span, program, demographics) and address disparities.
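The subgroup monitoring step above can be sketched in a few lines. This is a minimal illustration, not a DPI tool: the subgroup names, the (active users, enrolled) counts, and the 10-percentage-point gap threshold are all illustrative assumptions you would replace with your own data and local definition of a disparity.

```python
# Sketch of a subgroup usage check: compare each subgroup's usage rate
# to the overall rate and flag groups that fall well below it.
# Field names and the 0.10 gap threshold are illustrative assumptions.

def usage_gaps(usage: dict, threshold: float = 0.10) -> list:
    """usage maps subgroup -> (active_users, enrolled).

    Returns the subgroups whose usage rate is more than `threshold`
    below the overall usage rate across all subgroups.
    """
    total_active = sum(active for active, _ in usage.values())
    total_enrolled = sum(enrolled for _, enrolled in usage.values())
    overall = total_active / total_enrolled
    return [group for group, (active, enrolled) in usage.items()
            if active / enrolled < overall - threshold]

# Hypothetical pilot data: (students who used the tool, students enrolled)
pilot = {
    "Grades 3-5": (80, 100),
    "Grades 6-8": (45, 100),
    "Multilingual learners": (30, 60),
}
print(usage_gaps(pilot))  # flags "Grades 6-8" as well below the overall rate
```

A flagged subgroup is a prompt for follow-up (access barriers, scheduling, language support), not an automatic conclusion about the tool.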
 

Copy-Ready Tool Evaluation Rubric

Score each category 1–4 (4 = strong). Multiply by weight. Keep notes & evidence links.

Category (weight): Evidence / Notes — enter a score of 1–4 for each
  • Purpose & Standards Alignment (×3): Which standards/skills? Classroom/library outcomes.
  • Educational Value, learning impact (×3): Supports a human → AI → human workflow; promotes critical thinking.
  • Privacy & Security (×4): DPA/SDPC, data minimization, deletion, audit logs.
  • Bias & Accessibility (×3): Bias tests documented; WCAG 2.1 AA; UDL features.
  • Technical Fit & Support (×2): SSO, device load, filtering, admin controls.
  • Cost & Sustainability (×2): TCO, renewal terms, training time.
  • IP & Content Ownership (×2): District/library ownership respected; export rights.
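The weighted scoring the rubric describes (score × weight, summed across categories) can be sketched as below. The weights mirror the rubric; the example scores and the printed percentage are illustrative, and any pass/fail cutoff is a local decision, not part of the rubric.

```python
# Weighted rubric scoring sketch: multiply each 1-4 score by its
# category weight and compare the total against the maximum possible.
# Weights follow the rubric above; example scores are hypothetical.

WEIGHTS = {
    "Purpose & Standards Alignment": 3,
    "Educational Value": 3,
    "Privacy & Security": 4,
    "Bias & Accessibility": 3,
    "Technical Fit & Support": 2,
    "Cost & Sustainability": 2,
    "IP & Content Ownership": 2,
}

def weighted_total(scores: dict) -> tuple:
    """Return (earned, maximum) for a dict of category -> score (1-4)."""
    earned = sum(WEIGHTS[cat] * score for cat, score in scores.items())
    maximum = sum(weight * 4 for weight in WEIGHTS.values())
    return earned, maximum

# Hypothetical evaluation: a "3" in every category.
scores = {cat: 3 for cat in WEIGHTS}
earned, maximum = weighted_total(scores)
print(f"{earned}/{maximum} = {earned / maximum:.0%}")  # 57/76 = 75%
```

Keeping the arithmetic in one place makes it easy for a vetting committee to compare tools scored by different reviewers.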
 

Pilot Protocol (Quick Start)

Plan & Communicate
  • Define a 2–4 week classroom/library pilot; pick 1–2 focal standards.
  • Send family message (draft + home-language line); no student PII in prompts.
  • Prep student disclosure box and reflection prompts.
Family note (short): We are piloting an approved AI tool to support brainstorming and organization. Students will disclose any AI assistance and verify facts. No personal information is entered into the tool.
Run & Collect
  • Collect artifacts: student work (draft→AI→revision), teacher notes, timing, tech issues.
  • Equity check: compare access & outcomes across groups; offer alternatives if needed.
  • Privacy check: confirm no PII was entered; spot-check logs if available.
Review & Decide
  • Score with the rubric; attach evidence and PIA.
  • Decision: approve / deny / revise conditions (time limits, Human-Only tasks, etc.).
  • Schedule 30- and 90-day reviews; plan PD & comms.
 

Toolkits, Templates & Links

Curated links that pair well with your vetting workflow.



For questions about this information, contact Amanda Albrecht, (608) 267-1071, or Amy Bires, (608) 266-3851.