Responsible Use of Generative AI

Evaluating AI Outputs

Generative AI (GenAI) tools produce responses that sound confident, but that does not mean they are accurate. These tools generate text by predicting the most probable word patterns in response to your prompt, without any ability to determine whether the information they provide is accurate. For that reason, it's essential to critically evaluate any AI-generated content before using it.

Common Issues with Generative AI Outputs

Hallucinations

GenAI tools can “hallucinate,” meaning they fabricate information, sources, quotations, statistics, or citations. These invented details may look convincing but can be completely false. 


Outdated Information

GenAI models are trained on data up to a specific point in time, so they may not include recent research, events, or updates. Even so, they may still respond confidently to your prompt.


Missing Context and Perspective

GenAI models are trained on real-world datasets that often do not capture all viewpoints and cultural contexts. As a result, their outputs can reinforce biases and may be incomplete, shallow, or missing nuance.


Variable Information Quality & Lack of Source Transparency

Because GenAI models are trained on a mix of reliable and unreliable online content, most tools do not distinguish between high-quality scholarly sources and low-quality or inaccurate material. This also makes it difficult for GenAI to show where its information originally came from, since it blends patterns drawn from many sources.

Steps for Evaluating AI Outputs: The SIFT Method

One effective way to evaluate GenAI outputs is to use the SIFT Method, which is outlined below.

For other methods and more information about evaluating sources, please visit our Evaluating Sources Research Guide.

SIFT

1. Stop: Pause and assess the output before accepting or using it. 
  • Is it relevant to your topic or assignment? 

  • What are the main ideas or claims? 

  • Does the response read as fact, opinion, or a mix? 

  • Are any sources cited? 

  • If no sources are provided, where could you begin searching for verification? 

  • Can you tell how current the information is? 


2. Investigate: Examine the claims and any cited sources. 
  • Can you verify the claims made in the AI text? 

  • Are the citations real and traceable? 

  • Do you recognize any authors, journals, or websites referenced? 

  • Is there evidence of bias, or is a particular perspective emphasized? 


3. Find: Locate stronger, more reliable sources to support or contextualize your topic. 
  • Is the AI tool the best place to get information for this question? 

  • Can you find more complete or up-to-date sources through the library or scholarly search tools? 


4. Trace: Track claims, quotations, and data back to their original sources. 
  • Can you access the sources through the library or a search engine? 

  • Do those sources accurately support what the AI tool summarized or claimed? 

The SIFT Method was developed by research scientist Mike Caulfield and is shared under a Creative Commons Attribution International License.