
Ethical Use in the Classroom

Ethical Use in the Classroom: Core Principles

Ethical considerations should be embedded at every stage of AI use in education. The AI Literacy Framework emphasizes that values, context, and accountability are inseparable from learning about AI. These principles help learners evaluate AI’s impact and prepare them to shape a responsible AI future.


Key themes include:

  • Balancing benefits & risks
  • Bias & equity
  • Data privacy & transparency
  • Human-centered (H > AI > H)
  • Academic integrity & attribution

Policy and Guidelines

Clear, consistent guidance helps staff and students use AI responsibly. The items below are sample language you can adapt and remix for district, school, or classroom policies.

Core components to include

  • Scope & definitions: What counts as AI and where the policy applies (school devices, personal devices on campus, LMS, email, etc.).
  • Permitted, required, and prohibited uses: Spell out what is encouraged, what must be disclosed, and what is not allowed.
  • Disclosure & attribution: How students and staff document AI assistance (on assignments, syllabi, and communications).
  • Privacy & data protection: No personal/sensitive data in prompts; approved tools only; retention rules.
  • Equity & accessibility: Inclusive language, UDL supports, translation options, and accommodations.
  • Tool vetting & approval: Security, privacy, accessibility checks before use with students.
  • Teaching & assessment integrity: Human-in-the-loop, learning outcomes, and academic honesty process.
  • Incident reporting & continuous improvement: How to report concerns, appeal decisions, and refine policy over time.

Quick reference matrix

Example classroom matrix; adapt to align with your local grading and policy language.

Task Type | Allowed | Required | Disallowed
Brainstorming / outlining | Yes (AI-assist) | Disclose tool + how used | —
Grammar/clarity edits | Yes (AI-assist) | Track changes or note edits | —
Factual summaries | Yes (verify facts) | Disclosure + source checks | Unverified copy-paste
Personal reflections / assessments | Human-only unless teacher permits scaffolds | — | Fully AI-generated submissions
Code / media generation | Yes (follow license & originality rules) | Attribution + note of modifications | Using AI to defeat plagiarism checks
Sensitive data (PII, health, grades) | No | — | Entering PII into prompts
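For districts embedding these rules in an LMS script or assignment template, the matrix above can be expressed as structured data. This is an illustrative sketch only; the task keys and rule text are assumptions mirroring the sample matrix, and should be adapted to local policy language.

```python
# Sketch: the sample classroom AI-use matrix as a lookup table.
# Keys and rule text mirror the matrix above; adapt locally.
AI_USE_MATRIX = {
    "brainstorming": {"allowed": True, "required": "Disclose tool + how used",
                      "disallowed": None},
    "grammar_edits": {"allowed": True, "required": "Track changes or note edits",
                      "disallowed": None},
    "factual_summaries": {"allowed": True, "required": "Disclosure + source checks",
                          "disallowed": "Unverified copy-paste"},
    "personal_reflections": {"allowed": False, "required": None,
                             "disallowed": "Fully AI-generated submissions"},
    "code_media_generation": {"allowed": True,
                              "required": "Attribution + note of modifications",
                              "disallowed": "Using AI to defeat plagiarism checks"},
    "sensitive_data": {"allowed": False, "required": None,
                       "disallowed": "Entering PII into prompts"},
}

def policy_for(task: str) -> str:
    """Return a one-line policy summary for a task type."""
    rule = AI_USE_MATRIX[task]
    parts = ["AI-assist permitted" if rule["allowed"] else "Human-only"]
    if rule["required"]:
        parts.append(f"required: {rule['required']}")
    if rule["disallowed"]:
        parts.append(f"disallowed: {rule['disallowed']}")
    return "; ".join(parts)

print(policy_for("brainstorming"))
# AI-assist permitted; required: Disclose tool + how used
```

Keeping the rules in one data structure (rather than scattered in prose) makes it easier to keep syllabi, assignment templates, and LMS banners consistent when the policy changes.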

Disclosure & attribution (sample language)

  • Syllabus language (short):
    AI tools may be used for idea generation, outlining, and editing when permitted by the assignment. You must disclose the tool(s) used and how they were used. Final work must reflect your own thinking and voice. Do not input personal or sensitive information into AI tools.
  • Assignment disclosure box (paste at end of student work):
    AI Use: Tool(s): ____ • Purpose: (brainstorm / outline / edit / summarize / translate) ____ • What I kept/changed: ____ • Verification steps (facts/sources): ____ • Date used: ____.
  • In-text acknowledgment (sample): “I used [tool] to brainstorm counterarguments for this essay; the final wording and analysis are my own.”

Privacy, safety, and data handling

  • No PII in prompts: Names, addresses, photos, grades, health or disciplinary info are prohibited.
  • Approved tools only: Use the district’s vetted list; turn off training/data-sharing when possible.
  • Uploads: Redact student identifiers before uploading docs/images.
  • Storage: Do not store AI outputs with student PII in third-party systems unless approved in writing.

Tool vetting checklist (admin/IT)

  • Student data path documented (collection, storage, retention, deletion, sub-processors).
  • Agreement meets FERPA/COPPA where applicable; DPA on file; data minimization in practice.
  • Accessibility (WCAG 2.1 AA), language support, keyboard navigation, captioning/alt-text.
  • Bias/fairness documentation (model card or equivalent) and opt-out options.
  • Role-based access, audit logs, and incident response commitments.
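For IT teams tracking many tools against this checklist, the items can be modeled as structured data with a simple gap report. A minimal sketch, with assumed field names that are not a district standard:

```python
# Sketch: the vetting checklist as a record, with a gap report per tool.
from dataclasses import dataclass, fields

@dataclass
class ToolVetting:
    data_path_documented: bool = False   # collection/storage/retention/deletion
    ferpa_coppa_dpa: bool = False        # agreements on file, data minimization
    accessibility_wcag21aa: bool = False # WCAG 2.1 AA, captions, keyboard nav
    bias_documentation: bool = False     # model card or equivalent, opt-outs
    access_controls_audit: bool = False  # role-based access, logs, incident response

def approval_gaps(v: ToolVetting) -> list[str]:
    """List checklist items still unmet; an empty list means ready for approval."""
    return [f.name for f in fields(v) if not getattr(v, f.name)]

review = ToolVetting(data_path_documented=True, ferpa_coppa_dpa=True)
print(approval_gaps(review))
# ['accessibility_wcag21aa', 'bias_documentation', 'access_controls_audit']
```

A tool only reaches the approved list when `approval_gaps` returns empty, which keeps the "approved tools only" rule auditable.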

Assessment integrity & human-in-the-loop

  • Teachers clearly mark tasks as Human-Only, AI-Assist, or AI-Optional.
  • High-stakes assessments are Human-Only unless accommodation requires support.
  • When AI assists, students must: (a) disclose tools, (b) verify facts/sources, (c) revise in their own voice.
  • Academic honesty follows the existing code of conduct; AI detectors are not sole evidence—use process reviews (drafts, oral checks).

Implementation & reporting

  • AI Use Plan on new assignments: allowed uses, required disclosures, verification steps, privacy reminders.
  • Concerns/appeals workflow: Step 1 teacher conference → Step 2 department/coach review → Step 3 admin decision with learning-focused remedy.
  • Professional learning: yearly refresh on prompt design, privacy, and accessible practices.

Family communication template (short)

Our classes may use approved AI tools to support learning (e.g., brainstorming, organization, and language feedback). Students will disclose when AI is used and verify facts. We do not input personal information into AI tools. If you prefer communications in another language or need an interpreter, please contact ____.

District example

See the Green Bay Area Public School District’s AI Guidelines for a practical, readable model of implementation details and classroom expectations.

Documentation, Attribution, and Citation

Students must cite AI tools used in content generation. When students use AI to brainstorm, draft, edit, summarize, or generate text, images, code, or media, they are expected to:

  • Clearly document what tool was used
  • Describe how it was used (e.g., idea generation, grammar editing, text generation, etc.)
  • Provide a citation formatted appropriately

Educators may choose a citation style (e.g., APA, MLA); the key is transparency. Below are full examples you can copy:

Example 1 — MLA (Works Cited entry; MLA treats the prompt as the title and does not credit AI as an author)
“Write an introduction paragraph about climate change for a high school audience” prompt. ChatGPT, GPT-4, OpenAI, 3 May 2025, chat.openai.com/.

Example 2 — APA (Reference list entry)
OpenAI. (2025). ChatGPT (GPT-4, May 3 version) [Large language model]. https://chat.openai.com/

Example 3 — Informal classroom documentation
“Used Canva’s Magic Write to generate three title options for my digital poster (prompt: ‘Title ideas about renewable energy for 6th graders’). I selected and revised the second one.”
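For classrooms collecting disclosures through a form, the informal style of Example 3 can be generated consistently from a few fields. A minimal sketch with assumed field names; it mirrors the informal classroom documentation style, not strict APA or MLA:

```python
# Sketch: build a one-line AI-use note from a disclosure form's fields.
def ai_disclosure(tool: str, purpose: str, prompt: str, date: str) -> str:
    """Return a one-line AI-use note for the end of student work."""
    return (f"AI Use: {tool} for {purpose} "
            f'(prompt: "{prompt}"), used {date}.')

print(ai_disclosure("ChatGPT (GPT-4)", "outlining",
                    "Outline an essay on renewable energy", "3 May 2025"))
```

Generating the note from the same fields every time makes disclosures easy to scan and compare across submissions.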

Discuss originality and ownership with students, for example by comparing student work with AI-generated poems or artworks.

Note: AI-only generated content is generally not copyrightable under current U.S. law.

Human > AI > Human Approach

AI supports—rather than replaces—student thinking. Learners begin with their own ideas, use AI as a support tool, and then apply critical thinking to revise, reflect on, or refine the AI-assisted output. AI is not the final author—students are. This pattern is sometimes described more broadly as a “human-in-the-loop” approach.

This cycle:

  • Human: The student begins with their own question, draft, or prompt.
  • AI: An AI tool is used to enhance, inspire, or suggest.
  • Human: The student critically evaluates the output and makes thoughtful revisions to make the final product their own.

Classroom Example (Middle School ELA):

  1. Step 1 – Human: Students brainstorm their main argument and write a rough topic sentence on their own.
  2. Step 2 – AI: Students use a generative AI tool to get examples or a sample paragraph (e.g., “Write a persuasive paragraph on why school lunches should be longer”).
  3. Step 3 – Human: Students review the AI-generated paragraph, highlight what they agree or disagree with, then revise using their voice, facts from class, and teacher feedback. They annotate where the AI helped and where they made changes.

Optional Extension: Students reflect on how AI helped and how their thinking evolved.

Addressing Misinformation and Bias

Educators must prepare students to navigate a digital world where generative AI can produce inaccurate, biased, or misleading content. Teach students to:

  • Recognize and question bias in AI-generated responses
  • Cross-check facts using reliable sources
  • Identify manipulated media such as deepfakes
  • Reflect on how their own input (prompts) can influence AI output

Classroom Example (High School Social Studies):

  1. Step 1 – Inquiry Using AI: Students ask a model, “What were the main causes of the XYZ conflict?”
  2. Step 2 – Evaluation: Students annotate parts that may be biased, oversimplified, or missing diverse perspectives.
  3. Step 3 – Fact-Checking: Students compare with vetted sources and highlight discrepancies.
  4. Step 4 – Deepfake Awareness: View a short AI-generated video/audio with subtle misinformation. Discuss signs of inauthenticity.
  5. Step 5 – Reflection: Students write a brief reflection on applying critical thinking to convincing but unreliable content.

This builds AI literacy, media literacy, and civic reasoning—aligned with Wisconsin’s Information & Technology Literacy standards.

Social Emotional Learning (SEL) Integration

The SEL-focused examples below are DPI-developed sample ideas intended to spark discussion and planning. Adapt, shorten, or expand them with local teams and mental health professionals as appropriate for your context.

Fostering SEL Competencies by Grade Level
  • Elementary (K–5)
    • Feelings Check-In with Avatars (10 min): Students pick an emoji/AI avatar and name the feeling; the AI suggests two regulation strategies.
      Prompt to copy: “Act as a friendly feelings coach for a 3rd-grader. Ask me to name my feeling in one word, then suggest two simple strategies (like deep breaths or a quick stretch). Keep it positive and short.”
      Evidence: 1-sentence exit ticket: “Today I felt __, I tried __, it helped __.”
      Rubric snippet: Meets: names feeling + chooses 1–2 strategies + notes effect. Approaches: feeling or strategy missing/unclear. Needs: no feeling/unsafe strategy.
    • Kind Words Rewrite (10–12 min): Students paste a blunt sentence; AI offers a kinder version; students choose and explain why.
      Prompt: “Please rewrite this sentence to sound kind and respectful for a classmate. Keep the meaning the same and use simple words: ‘__’”
      Evidence: Student circles the best version and adds a “why” note.
      Rubric snippet: Meets: kind tone + same meaning + simple words + brief why. Approaches: kind but meaning shifts or missing why. Needs: still unkind/off-topic.
    • Role-Play: Playground Problem (12–15 min): AI plays a classmate who interrupted; student practices an “I-statement.”
      Prompt: “Pretend you’re my classmate who keeps interrupting me. Help me practice using an I-statement. Give me a short, kind reply I could say back.”
      Evidence: Teacher checklist of I-statement parts (I feel… when… because… Can we…?).
      Rubric snippet: Meets: complete I-statement, respectful tone. Approaches: 2–3 parts present. Needs: 0–1 parts or disrespectful.
  • Middle (6–8)
    • SMART Goal Builder (15 min): Students draft a goal; AI helps convert it into a SMART goal with 3 concrete steps.
      Prompt: “Turn this goal into a SMART goal for a 7th-grader and list 3 steps with dates: ‘__’. Ask me one clarifying question first.”
      Evidence: SMART rubric (Specific, Measurable, Achievable, Relevant, Time-bound) self-check.
      Rubric snippet: Meets: SMART goal + 3 dated steps + answered clarifier. Approaches: SMART but missing date/step. Needs: vague goal.
    • Perspective Switch (15 min): Students paste a paragraph; AI supplies a respectful counter-view; students respond using empathy frames.
      Prompt: “Give a respectful counter-argument to this paragraph for a middle-school debate, using ‘I hear…’ and ‘I wonder…’ frames: ‘__’”
      Evidence: Student reply has at least one “I hear…” + one “I wonder…” and a proposed compromise.
      Rubric snippet: Meets: both frames + specific compromise. Approaches: one frame or generic compromise. Needs: no frames or disrespectful.
    • Conflict-Resolution Coach (12–15 min): AI helps plan a 3-step conversation for a friend-group conflict.
      Prompt: “Coach me through a 3-step plan to resolve this conflict at school. Keep it age-appropriate and specific. Situation: ‘__’”
      Evidence: Script with greeting, I-statement, request; peer role-play notes.
      Rubric snippet: Meets: greeting + I-statement + request; realistic wording. Approaches: missing one element. Needs: missing two+ elements or unrealistic.
  • High School (9–12)
    • Interview/Workplace Scenario Practice (15–20 min): AI interviewer asks 3 STAR questions; student answers; AI gives targeted feedback.
      Prompt: “Be a mock interviewer for a part-time job. Ask me 3 STAR questions one at a time. After each answer, give concise feedback and 1 improvement.”
      Evidence: Student revises one answer using STAR (Situation, Task, Action, Result).
      Rubric snippet: Meets: 3 STAR answers + revision after feedback. Approaches: STAR used but missing Result or no revision. Needs: vague/one-word responses.
    • Media Bias & Tone Reflection (15–20 min): AI generates two short social posts on the same issue with different tones; students analyze language and impact.
      Prompt: “Write two 60-word posts on the same topic: one neutral/informational and one strongly persuasive. Label each. Topic: ‘__’”
      Evidence: Comparison table (tone, word choice, likely impact, missing perspectives).
      Rubric snippet: Meets: table compares tone/words/impact + cites examples. Approaches: partial comparison or generic examples. Needs: no analysis.
    • Stress-Management Study Planner (12–15 min): AI co-creates a week plan that includes breaks and coping strategies.
      Prompt: “Help me plan a one-week study schedule for __ class with built-in stress-management (breaks, movement, sleep goals). I can study __ minutes on these days: __.”
      Evidence: Plan + 2-sentence reflection naming the first tactic to try.
      Rubric snippet: Meets: schedule with study blocks, breaks, coping strategies + reflection. Approaches: plan or reflection incomplete. Needs: no workable plan.

Privacy note: Use district-approved tools; avoid real names or sensitive details; fictionalize scenarios when needed. Remind students never to paste personal identifiers.

Risks in SEL Integration
  • Distinguishing AI from Real-Life
    • Deepfake Spotting Mini-Lab (10–12 min): Show a district-approved AI-generated clip/image beside a real one. Students list possible “tells,” then verify with a credible source.
      Prompt: “List three subtle cues that could indicate this media is AI-generated. Keep each under 12 words.”
      Evidence: T-chart pairing each claimed tell with Verified/Not verified + source link/title.
      Rubric snippet: Meets: 3 cues + verification for each + 1 credible source. Approaches: 2 cues or limited verification. Needs: guesses without checks.
    • AI Empathy vs Human Nuance (15 min): AI role-plays a supportive friend; student rates helpfulness (1–5). Then partner gives a human response; student compares differences.
      Prompt: “Give a 3-sentence supportive response to: ‘I’m anxious about my presentation.’ Use validating language; avoid clinical advice.”
      Evidence: Compare/contrast note with one strength of AI reply + one advantage of human reply.
      Rubric snippet: Meets: identifies 1 concrete strength + 1 limitation for each. Approaches: generic statements. Needs: off-topic or judgmental.
  • Preventing AI Over-Reliance
    • AI-Last Writing Protocol (20 min): Students draft first. AI suggests line edits. Students revise and label changes.
      Prompt: “Suggest 3 concise edits to improve clarity and tone of this paragraph. Do not add new ideas.”
      Evidence: Draft → AI suggestions → final with [H] and [AI] tags + 1-sentence justification per accepted change.
      Rubric snippet: Meets: original draft present + all AI edits labeled + justifications. Approaches: labels or justifications missing. Needs: AI text substituted wholesale.
    • Timebox & No-Go Zones (5 min setup, ongoing): Students create a personal “AI use plan” with time limits and tasks reserved for humans.
      Template: “I will use AI for __ up to __ minutes. I will not use AI for __ (e.g., personal reflections, grading myself, messages to peers).”
      Evidence: Signed plan + weekly 2-sentence reflection on adherence.
      Rubric snippet: Meets: specific limits + at least 2 no-go tasks + reflection. Approaches: vague limit or no-go list. Needs: no plan or not followed.
  • Building Human Connections
    • Compliment & Repair Script (10–12 min): AI drafts a kind message; student tailors it; then delivers in person.
      Prompt: “Draft a 2-sentence apology after a misunderstanding. Be specific about what happened and what you’ll do differently. Warm tone; no flattery.”
      Evidence: Final script + teacher checklist confirming respectful in-person delivery.
      Rubric snippet: Meets: names impact + states repair/action + delivered respectfully. Approaches: vague impact or no action. Needs: insincere/undelivered.
    • Restorative Circles Planner (15 min plan; human-led circle): AI helps brainstorm neutral questions; circle is facilitated by people only.
      Prompt: “Generate 5 neutral, open-ended questions for a restorative circle about group-work conflicts. Avoid blame; invite listening.”
      Evidence: Question list + circle notes + 1 takeaway per participant.
      Rubric snippet: Meets: questions are neutral/specific + all voices logged + takeaways completed. Approaches: minor blame language or missing takeaways. Needs: leading/blaming questions.

Safeguards: Use district-approved tools; fictionalize names; never paste sensitive info; teacher reviews AI outputs before classroom use.

Promoting Equity

AI in SEL should actively address systemic bias and promote inclusion. Ongoing stakeholder collaboration is needed to ensure fairness and responsiveness to emerging ethical concerns. Try these concrete classroom moves:

  • Inclusive Language Rewrite (10–12 min)
    • What: Students or teachers paste directions/scenarios; AI suggests person-first, asset-based language and flags stereotypes.
    • Prompt: “Rewrite the text below using person-first, inclusive, and asset-based language. Remove stereotypes, avoid gendered assumptions, and keep the reading level ~Grade 6. Text: ‘__’”
    • Evidence: Before/After comparison + 1 sentence noting a specific stereotype removed.
    • Rubric snippet: Meets: person-first phrasing + stereotype removed + clarity maintained. Approaches: partial fixes. Needs: new bias introduced or meaning lost.
  • Representation & Scenario Audit (15 min)
    • What: Audit names, roles, and perspectives used in SEL scenarios; expand to reflect diverse racial/ethnic, disability, language, and family backgrounds.
    • Prompt: “Review these SEL scenarios. Identify 5 missing perspectives (e.g., multilingual learners, students with disabilities, varied family structures). Suggest one specific revision for each. Scenarios: ‘__’”
    • Evidence: Revised scenario list + small “representation matrix” (characters vs. attributes).
    • Rubric snippet: Meets: ≥4 concrete additions + matrix completed. Approaches: 2–3 additions or vague descriptors. Needs: tokenism or no changes.
  • Accessibility by Default (UDL) (12–15 min)
    • What: Generate alt text, captions, and plain-language summaries so all students can access SEL media.
    • Prompts:
      • “Write accurate alt text (≤140 chars) for this image: ‘__’”
      • “Create a plain-language summary (100–120 words, Grade 6) of this article: ‘__’”
    • Evidence: Alt text added to LMS; readability check (e.g., Grade 6–8) recorded on the slide/document.
    • Rubric snippet: Meets: alt text specific + summary at target readability. Approaches: one element weak. Needs: inaccurate or missing supports.
  • Family Partnership, Home-Language First (10–15 min)
    • What: Draft bilingual family updates about SEL work; verify with district translation resources before sending.
    • Prompt (draft only): “Compose a short family message about our SEL unit in English and Spanish, warm and clear, ~100 words each. Add a line in English/Spanish about requesting an interpreter.”
    • Evidence: Final message + back-translation check + note of interpreter line.
    • Rubric snippet: Meets: bilingual draft + interpreter info + verified by approved channel. Approaches: missing verification. Needs: only English or errors that change meaning.

Equity guardrails: Avoid sensitive personal data; obtain consent for names/stories; prioritize lived-experience review (students/families/colleagues) over AI suggestions when in doubt.



For questions about this information, contact Amanda Albrecht at (608) 267-1071 or Amy Bires at (608) 266-3851.