Monday, September 9, 2024

Artificial intelligence (AI) is changing how we study, teach, explore, and create, sometimes in ways we don’t fully understand. 

AI-powered apps raise important practical and ethical questions: Are AI tools reliable and unbiased? Do they plagiarize the datasets they’re trained on? Does AI use in classroom settings count as cheating? 

AI apps also come with significant security considerations. Read on for a security-focused overview of AI and the factors to weigh when evaluating AI tools. 

AI big picture 

The term “artificial intelligence” covers a lot of ground. To start, let’s consider some of the most common types of AI: 

  • Machine learning: Algorithms that help systems improve task performance based on experience (see the sketch after this list) 
  • Natural language processing: Tools that interpret and generate human language 
  • Computer vision: Systems that process images or other visual inputs 
  • Generative AI: Umbrella term for systems that create text, images, or other media 
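
For a rough illustration of how a system can "improve based on experience," here's a minimal Python sketch using the scikit-learn library. The toy data and the choice of model are purely for demonstration, not a representation of how any particular AI product works:

    # Minimal machine-learning sketch: the model learns a rule from
    # labeled examples, then applies it to new input (requires scikit-learn).
    from sklearn.tree import DecisionTreeClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy feature vectors ("experience")
    y = [0, 1, 1, 0]                      # labels the model learns from
    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[1, 0]]))        # -> [1]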

Most AI methods rely on massive datasets. For example, tools like Copilot and ChatGPT process vast volumes of written content in order to read and respond to text prompts. You may have heard about controversies over these datasets, including complaints that they use content without creators’ permission. 

University of Iowa academic, research, and technology leaders take AI seriously. They’ve developed a generative AI hub with usage guidelines, links to info about using AI in teaching, and more. The Information Security and Policy Office also offers a set of AI security and ethics guidelines.  

Data security 

Given the university’s responsibility for managing academic, health care, research, and other kinds of data, it’s essential to understand how AI tools interact with datasets. Some AI tools may save or expose sensitive information unless you use encryption or anonymization options.  
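
As a loose illustration of the anonymization idea, here's a minimal Python sketch that redacts obvious identifiers before text is shared with an external AI tool. The patterns and the redact helper are hypothetical examples; simple pattern-matching like this misses plenty and is no substitute for a formal security review:

    import re

    # Illustrative-only redaction: swap obvious identifiers for placeholders
    # before sharing text with an external AI tool. These patterns are
    # examples, not an exhaustive or approved safeguard.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace each matched identifier with a bracketed placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Reach Jane at jane-doe@uiowa.edu or 319-555-0123."))
    # -> Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED].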

Data-security protections are especially important if you work with data covered by the Health Insurance Portability and Accountability Act (HIPAA), the Family Educational Rights and Privacy Act (FERPA), or the European Union’s General Data Protection Regulation (GDPR). 

Submit a security review request when considering any new technology solution, especially if working with regulated data. In some cases, AI-powered applications simply aren’t appropriate. 

Approved tools 

The university does not have contracts for most commercial AI applications and services. If you use these tools, you do so without the security, privacy, and compliance protections that come with fully vetted products. 

Only university data categorized as public should be used with non-contracted products, including those powered by AI. Public data is not confidential, can easily be reproduced, and poses low risk if compromised (example: most information posted on public-facing university websites). 

University IT and security pros maintain a list of AI tools for campus stakeholders looking to explore AI. Check this list to find tools that are contracted/supported, tools currently in pilot, and other tools that students, faculty, and staff might explore with public data. 

Microsoft Copilot 

An agreement between the university and Microsoft makes the enterprise version of Microsoft Copilot—a generative AI tool—available to students, faculty, and staff.  

To use Copilot, start by logging in with your HawkID. Once you log in, Copilot will not access your data, save your history, or use info you submit to train the tool. This added degree of data protection means you can use Copilot with data classified as university-internal (and with additional data types upon approval from the Information Security and Policy Office or ITS Research Services). 

The university also has a contract for the AI-assisted grading tool Gradescope and is piloting Microsoft 365 Copilot and GitHub Copilot for productivity and coding. 

These and other AI tools have the potential to save time, enhance education and discovery, and support innovation. But only if we use them wisely, considering the full range of practical, ethical, and security issues.