The University of Iowa recognizes the rapidly evolving landscape of Artificial Intelligence (AI) and its potential to advance knowledge, research, and scholarly work. AI tools can save researchers time and resources, enable them to tackle more complex problems, and enhance the quality of their work. However, it's essential to use AI responsibly, understand its limitations, be aware of the UI policies and guidance that apply to AI tools, and ensure that human expertise remains central to the research process. 

The technology continues to advance, and its use remains an evolving discussion across the University of Iowa that requires diligence to stay up to date. Several departments have provided additional guidance on using AI tools; links can be found below:

The Office of the Provost and the Office of Teaching and Learning Technology have provided guidance about using AI tools as part of teaching and classroom activities.

In addition, the Information Security and Policy Office has provided guidance on using AI tools with respect to data security and privacy, as well as policies on what to do if you need to implement, purchase, or license an AI tool.

Below, you will find information about using AI tools throughout the research lifecycle. If you have additional questions regarding how to use AI tools in research, please email them to research-computing@uiowa.edu.

Some examples of AI tools include the following (not an exhaustive list):

Microsoft Copilot, ChatGPT, Google Gemini (Bard), and Bing AI Chat - These AI tools can answer questions, generate content, translate languages, and more.

DALL-E - DALL-E can generate images from textual descriptions. 

Otter.ai - Otter.ai is an AI-driven transcription and note-taking tool. Please see this article by the UI Security Office, as Otter.ai is a prohibited technology.

Grammarly - Grammarly is an AI-powered writing assistant. Please see this article by the UI Security Office, as Grammarly is a prohibited technology.

The University of Iowa has a contract in place for UI Microsoft Copilot but does not have a contract or agreement with any other externally sourced AI tool or service. This means that standard UI security, privacy, and compliance provisions are not in place when you use those technologies. As such, do not place UI data (University Internal, Restricted, or Critical) into any externally sourced AI tool or service. UI Microsoft Copilot is currently approved for Public and University/Internal data. Please email ai-feedback@uiowa.edu if you need to use Copilot with higher data classifications.

Once data is placed into an external AI tool, it becomes available to the public and may be incorporated into open-source models. This occurs, for example, with ChatGPT, Gemini (Bard), and Bing AI Chat, as well as with prompts to generative image tools such as DALL-E. Additionally, the data may be subject to other third-party terms and conditions.

Generating Research Ideas - AI tools can help generate ideas by providing you with a list of related keywords or phrases that you can use to narrow down your research focus.

Finding Relevant Information - AI tools can generate a list of articles, papers, and other sources that might be relevant to your research.

Generating Titles and Summaries - AI tools can help generate titles or short summaries for your research writing.

Generating Content - AI tools can generate several paragraphs about your research topic that you can use as inspiration for your own content.

Bias and Discrimination: AI tools can inherit biases from their training data. These biases can perpetuate stereotypes and discrimination in research outcomes. It is important to validate generated content using reliable resources.

Plagiarism: Content generated by AI often paraphrases other sources, which may raise concerns regarding plagiarism and intellectual property rights. Many federal agencies have tools to detect AI-generated content. Be aware of these tools and their potential impact on your research and research writing.

Data Privacy and Legal Issues: The UI prohibits the use of externally sourced AI tools with University/Internal, Restricted, and Critical data types. Once data is placed into such tools, it becomes available to the public and may be incorporated into open-source models, which would likely have legal consequences should a data breach occur. Please refer to the Information Security and Policy Office's AI guidance.

Data Misinformation: AI tools can generate information that is inaccurate or misleading. It is extremely important to cross-reference generated content with reliable sources.

When working with sponsors, vendors, or collaborators, inquire about their practices for using AI. Additional terms and conditions may need to be included in a research agreement or contract to ensure responsible and ethical use of AI tools by collaborating organizations. Please consult with the Division of Sponsored Programs prior to entering into any contract and for help in negotiating contract terms.

Machine Learning Research: Machine learning research involves the study and development of new algorithms, models, and techniques in the field of machine learning. Researchers in this domain focus on advancing the theoretical understanding of machine learning, proposing novel algorithms, conducting experiments to evaluate their performance, and publishing their findings in academic journals and conferences. The primary goal is to push the boundaries of what is possible in machine learning and contribute to the theoretical foundation of the field. Common machine learning frameworks include TensorFlow and PyTorch.
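
To make the distinction concrete, the sketch below shows what working with one of these frameworks typically looks like. It is a minimal, illustrative PyTorch example that trains a small classifier on synthetic data; the architecture, data, and hyperparameters are placeholders, not recommendations for any particular research problem.

import torch
import torch.nn as nn

# Synthetic data for illustration only: 100 samples, 10 features, binary labels
X = torch.randn(100, 10)
y = torch.randint(0, 2, (100,)).float()

# A small feed-forward classifier
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Standard training loop: forward pass, compute loss, backpropagate, update weights
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()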

Using AI Tools for Research: This refers to the application of existing AI and machine learning tools and techniques to solve specific research problems in various domains. Researchers use AI tools for analysis, prediction, automation, or optimization to address their research questions. The primary goal here is to leverage AI's capabilities to advance research in other fields, such as biology, economics, astronomy, or the social sciences. Examples of such tools include ChatGPT and DALL-E.
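
For researchers who access such tools programmatically rather than through a web interface, the sketch below shows one way a hosted language model could be queried from Python using the openai client library. The model name and prompt are illustrative assumptions, an API key must be configured separately, and, per the guidance elsewhere on this page, only Public data should be sent to externally sourced AI services.

from openai import OpenAI

# Assumes an API key is available in the OPENAI_API_KEY environment variable.
client = OpenAI()

# Illustrative request: brainstorming research keywords.
# Send only Public data to externally sourced AI services.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute as appropriate
    messages=[
        {
            "role": "user",
            "content": "Suggest five keywords related to prairie soil microbiome research.",
        }
    ],
)
print(response.choices[0].message.content)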

The University of Iowa has a contract in place for UI Microsoft Copilot but does not have a contract or agreement with any other AI tool or service. This means that standard UI security, privacy, and compliance provisions are not in place when you use technologies other than Copilot. As with any other IT service or product with no university contract or agreement, such AI tools should be used only with institutional data classified as Public (Low Sensitivity). See the UI Data Classification Levels for descriptions and examples of each data classification. If you want to use an AI tool with University/Internal, Restricted, or Critical data, the tool is subject to applicable university policies and standards, including the Technology and Security Review processes, before you use it.

UI Microsoft Copilot is currently approved for Public and University/Internal data. Please email ai-feedback@uiowa.edu if you need to use Copilot with higher data classifications.

Some externally sourced transcription services utilize AI. As with all third-party software or services, Technology and Security reviews are required if you intend to use UI data with that software or service. If the software or service you want to use is not listed in the Technology Review and/or the Security Assessed Software List, you must submit Technology and Security Reviews prior to using it. 

We asked ChatGPT to write a 150-word abstract about the relationship between AI trends and research use.  As mentioned in previous questions above, it is important to be aware of AI's limitations before using generated content. Here is what it wrote:

"Title: Exploring the Dynamic Interplay Between AI Trends and Research Utilization

Abstract:

This research abstract delves into the intricate relationship between emerging Artificial Intelligence (AI) trends and their impact on the utilization of research findings. In the ever-evolving landscape of AI, staying abreast of the latest trends is crucial for maintaining competitiveness and relevance. This study seeks to elucidate how the rapid progression of AI technologies influences the utilization of research across various domains.

We employ a multi-disciplinary approach, drawing from fields such as computer science, social sciences, and economics, to examine the mechanisms through which AI trends shape research adoption. Our research takes into account factors such as the accessibility of AI tools, the adaptability of research methodologies, and the implications of AI-generated insights for decision-makers.

By analyzing case studies, survey data, and expert interviews, we aim to provide a comprehensive understanding of the synergistic interplay between AI trends and research utilization. Our findings will inform stakeholders in academia, industry, and policy-making about the dynamic nature of AI's influence on knowledge dissemination and application, paving the way for more informed strategies in an increasingly AI-driven world."

Article number: 7046