Wednesday, March 11, 2026

Questions about AI and privacy often come up in ordinary moments, not just in formal guidance or training. You are trying to respond to an email, catch up on a meeting, or make sense of a spreadsheet, and the fastest option can look like dropping the content into whichever AI tool is open. That is usually the moment to slow down.

The issue is not whether AI can help. It often can. The issue is whether the tool fits the kind of data you are working with and whether that information should leave University-managed systems. These everyday scenarios show what that judgment can look like in practice.

Scenario 1: Drafting a sensitive message

You want help improving the tone of a sensitive email. Pasting the full message into a public AI site may seem like a quick way to make it clearer or more professional.

The problem is that messages like this often include information that should be handled carefully. Names, identifying details, private circumstances, or internal context can make the content inappropriate for a public AI tool, where prompts may be stored or processed under the vendor’s terms.

A safer approach is to keep the drafting inside University-supported tools. If you use AI for wording help, remove identifying details first and focus the prompt on tone, structure, or clarity rather than the private facts of the situation. Then review the result yourself before sending anything.

Scenario 2: Summarizing a Teams meeting

You missed part of a meeting and want a quick recap. Downloading the recording or transcript and uploading it to a public AI site may feel efficient, especially when you just need the main points.

The privacy issue is that meetings often include internal context: plans, timelines, staffing discussions, or other information that was shared with a limited audience. Moving that material into a public tool can expose content beyond the systems where it was originally shared.

A better option, where Teams Premium is available, is to let it generate the recap from meeting content that already lives within Microsoft 365. That content, including the files, transcript, recording, and chat, stays in Teams, OneDrive, or SharePoint, with existing permissions still in place. Even then, review the summary yourself before forwarding or reusing it.

Scenario 3: Exploring a spreadsheet with AI

You have a spreadsheet and want quick insights, patterns, or charts. It is easy to see why a public AI site that promises “instant analysis” would be appealing.

But spreadsheets often contain more than numbers. They may include internal operational details, staffing information, financial context, or other data that should stay inside approved systems. Uploading the file to a public tool gives the vendor a copy of that content and can create risks that are not obvious from the upload screen.

A safer path is to use AI features inside University-supported tools when they fit the data you are working with. If the spreadsheet is stored in OneDrive or SharePoint, using Microsoft 365 Copilot in Excel keeps the work within the Microsoft 365 environment. If the data is especially sensitive or the classification is unclear, that is usually the point to pause and ask before using AI at all.

When to pause and ask

If you are unsure whether a tool is appropriate for the kind of information you are handling, that is a good reason to stop and check before moving forward.

As a general rule, avoid sharing sensitive, restricted, or otherwise protected information with public AI tools. Use University-supported tools that keep your work inside existing systems and permissions. If the data or situation feels unclear, ITS and other support teams can help you make the call.

For more guidance, review the AI Tools page and the University’s guidance on using AI tools. You can also subscribe to the AI at Iowa newsletter for future tips and updates.