
Always verify AI-generated information and any sources it produces.
AI tools may:
generate inaccurate, oversimplified, or exaggerated information and summaries
fabricate citations or sources (“hallucinations”)
recommend promotional or low-quality websites as reliable sources
Consult the GB Library's guide on Responsible Use of Generative AI to understand key terms, how to evaluate AI outputs, academic integrity, potential harms, and copyright.
Please note that uploading library e-resources to GenAI tools is not permitted and may constitute copyright infringement. Refer to our Copyright Best Practices for more details.
Algorithm
A set of rules or instructions designed to carry out a specific task, typically involving a computer (Merriam-Webster, 2025).
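For example, the steps of a simple algorithm can be written out in code. The sketch below is a hypothetical Python illustration, not part of the cited definition:

```python
def find_largest(numbers):
    """A simple algorithm: step through a list and track the largest value seen."""
    largest = numbers[0]          # Step 1: start with the first value
    for n in numbers[1:]:         # Step 2: examine each remaining value
        if n > largest:           # Step 3: keep it if it is larger
            largest = n
    return largest                # Step 4: report the result

print(find_largest([3, 41, 7, 19]))  # prints 41
```

Each step is an explicit rule, and following the rules in order always completes the task.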
Bias
Generative AI (GenAI) is trained on real-world data and information from the internet. This information reflects human biases, which in turn influence the content GenAI generates (TLP, 2025). AI can also develop biases in how it interprets data, or be steered by user input into producing biased responses. Common biases include gender stereotypes and racial discrimination.
Artificial Intelligence (AI)
A field of computer science focused on the ability of a computer or robot to perform tasks that typically require human intelligence (Copeland, 2025). Machines complete these tasks based on a set of rules, or algorithms.
AI is an umbrella term that encompasses a number of subfields, including Generative AI.
Deepfake
An image, video, or recording that has been convincingly edited or generated using AI. The content is altered or manipulated to the point where it is difficult to tell whether it is real. Deepfakes portray media that does not actually exist or suggest events that never occurred (Copeland, 2025).
Generative AI (GenAI)
A subfield of AI that generates new content in response to user prompts. These systems are trained on large datasets by programmers. GenAI systems learn to find patterns in data and use those patterns to create their outputs (National Library of Medicine, 2024).
Popular GenAI systems include ChatGPT and Microsoft Copilot.
Hallucination
AI hallucinations occur when a GenAI system produces a response containing misleading or false information (TLP, 2025). The response may reference quotes, statistics, sources, or citations that do not exist. This fabricated information is presented as if it were true, so users must fact-check outputs to catch errors.
Large Language Model (LLM)
A category of AI systems trained on massive text datasets and designed to find patterns and perform natural language processing tasks. LLMs, such as ChatGPT, can respond to prompts in human language. They do not actually understand language as humans do; rather, they select the words that are most probable in context to build sentences (Copeland, 2025).
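The idea of choosing more-probable words can be sketched with a toy example. The probabilities below are made up for illustration; a real LLM computes such values over a vocabulary of many thousands of words:

```python
import random

# Hypothetical next-word probabilities after the phrase "The cat sat on the"
next_word_probs = {"mat": 0.6, "sofa": 0.25, "moon": 0.1, "theorem": 0.05}

# Greedy selection: always pick the single most probable word
most_likely = max(next_word_probs, key=next_word_probs.get)
print(most_likely)  # prints "mat"

# Sampling: pick a word in proportion to its probability, which is why
# the same prompt can produce different responses on different runs
words, probs = zip(*next_word_probs.items())
sampled = random.choices(words, weights=probs, k=1)[0]
```

No understanding of cats or mats is involved; the system only ranks candidate words by probability.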
Output
What a user receives from a GenAI system in response to a prompt. Outputs may consist of text, images, or video.
Prompt
What a user inputs into a GenAI system to get results. Prompts range in length and complexity and typically consist of questions or commands.