Monday, December 8, 2025

When people talk about “good prompts,” they often focus on how to get more from AI. Just as important is learning to spot prompts that are likely to give you confident, polished answers that are actually misleading. These patterns apply to tools like ChatGPT Edu, Copilot Chat, and Microsoft 365 Copilot.

Below are five signs that a prompt is likely to lead you astray, along with what to do instead.

1. The prompt hides a big assumption

Prompts that quietly “bake in” a conclusion can push AI to fill in details that are not true. For example:

  • “Explain why students who use AI always learn less.”
  • “Describe how this policy clearly allows AI for all assignments.”

In both cases, the model is nudged to agree, even if the premise is wrong or contested.

Safer approach: Ask open questions that separate facts from opinions, such as “What are some potential benefits and risks of AI for student learning?” or “Summarize how this policy describes AI use for assignments.”

2. The prompt demands certainty where there is none

AI tools are trained to produce fluent answers, not to express doubt. Prompts that insist on definitive predictions or diagnoses are especially risky, for example:

  • “Tell me whether this student definitely used AI on their assignment.”
  • “Predict exactly how many students will fail if AI is allowed.”

The model cannot actually know these outcomes, but it can still generate confident-sounding responses that feel persuasive.

Safer approach: Use AI for framing possibilities, not making high-stakes calls. Ask questions like “List factors that might indicate AI use, and explain why they are not conclusive” or “Describe scenarios to consider when planning course policies.” Decisions should remain with people.

3. The prompt skips critical context

Short prompts can be useful, but when key context is missing, AI will “fill in the gaps” using patterns from its training data. For example:

  • “Write a policy about AI in my class.”
  • “Summarize this email for my supervisor” (without saying what matters to your supervisor).

The result may sound reasonable but fail to match your goals, your discipline, or University expectations.

Safer approach: Add a few lines of context. For example, “Write a draft AI statement for a 200-level course where students are allowed to use AI for brainstorming but not for final submissions. Use a calm, invitational tone.” Always review the output and adjust it to match your actual context.

4. The prompt asks AI to “look up” or prove something it cannot check

Prompts that ask AI to act like a search engine or a database can lead to false references and invented details, such as:

  • “Give me five real journal articles published in 2023 that prove this claim.”
  • “Confirm whether this specific student actually attended class.”

AI models may fabricate citations, mix together real sources, or guess at facts they cannot verify.

Safer approach: When you need verifiable facts, use trusted search tools, library databases, or official systems first. You can then ask AI to help you interpret or summarize information you already have, instead of asking it to generate proof on its own.

5. The prompt invites AI to replace, not support, human judgment

Some prompts hand over decisions that should stay with people, for example:

  • “Decide whether this student’s explanation is truthful.”
  • “Tell me if this staff member should pass probation.”
  • “Choose which research participant to exclude from the study.”

AI can reflect biases in training data and cannot understand the full context, relationships, or ethical considerations involved.

Safer approach: Use AI to generate options, questions, or draft language, not final decisions. Ask for help outlining criteria, listing pros and cons, or suggesting questions to consider. Keep evaluative, personnel, and integrity decisions with human reviewers.

A quick checklist before you hit “enter”

Before you send a prompt, pause and ask:

  • Am I assuming something I am not sure is true?
  • Am I asking for a prediction or decision that should stay with a person?
  • Have I given enough context for a useful, honest draft?
  • Am I asking AI to “verify” facts it cannot actually check?
  • Am I inviting AI to replace my judgment instead of supporting it?

If you answer “yes” to any of these, revise your prompt or switch to a different tool or process.

Avoid sensitive or restricted data, and understand what types of data are appropriate to use with each AI tool. Follow University policies and departmental guidance when you use Iowa-supported tools such as ChatGPT Edu and Microsoft 365 Copilot.

Explore the AI Tools page to see which tools fit different tasks, then subscribe to the AI at Iowa newsletter for more practical tips and examples in future issues.