
Responsible Use of Generative AI

Academic Integrity

Academic integrity is honest, fair, respectful, and responsible behaviour in an academic environment. When considering the use of any LLM/GenAI tool, you must first remember:

  • you do not have permission to input or upload any library e-resources (such as journal articles or eBooks) into GenAI tools;
  • you are responsible for ensuring that you have permission to use any text, image, or other content before inputting/uploading it to a generative AI tool;
  • uploading/inputting materials for which you are not the copyright holder may constitute copyright infringement.

As a next step towards upholding your academic integrity in relation to GenAI, it is essential to understand the implications of misinformation, lack of transparency, and bias.

Misinformation

Because GenAI is trained on real-world data, text, and media from the internet, the content it provides may be misleading, factually inaccurate, or outright misinformation, such as deep fakes. An LLM does not weigh the reliability of a source, which means a peer-reviewed academic article and a Reddit post are treated as equally authoritative. As such, the output of an LLM may not always be credible or reliable and can reflect implicit or explicit biases, outdated information, or fabricated content (TLP, 2025).

Lack of Transparency

Concerns about the data used to train these models are compounded by the fact that GenAI tools are often unable to replicate their outputs and unable to cite specific references correctly and consistently. This is particularly problematic in academic contexts, where your assignments require citations to uphold academic standards. As such, you must always verify the accuracy of any GenAI-generated content against other reliable sources before including it in your work. Once you have verified the content, you must also properly cite the GenAI tool, including the prompts used as inputs.

Bias

Most LLMs are designed to benefit the people who already possess the most power and privilege in the world. Their design and development have not prioritized ethical engagement with historically marginalized communities, and as such GenAI/LLMs are known to reproduce this ongoing bias and exclusion (Sweetman, 2024).

Racial Bias in LLMs

“Poet of Code shares ‘AI, Ain't I A Woman’ - a spoken word piece that highlights how artificial intelligence can misinterpret the images of iconic black women: Oprah, Serena Williams, Michelle Obama, Sojourner Truth, Ida B. Wells, and Shirley Chisholm” (Buolamwini, 2018).

Gender Bias in LLMs

A recent UNESCO report (2024) identifies gender bias in GenAI systems as a global, persistent issue that serves to reinforce and “perpetuate (and even scale and amplify) human, structural and social biases. These biases not only prove difficult to mitigate, but may also lead to harm at the individual, collective, or societal level” (UNESCO, 2024, p. 3).

One study of gendered names showed a strong, “deep-seated bias in how LLMs represent gender in relation to careers” and family roles, “where female names were associated with ‘home,’ ‘family,’ ‘children,’ and ‘marriage’; while male names were associated with ‘business,’ ‘executive,’ ‘salary,’ and ‘career’” (UNESCO, 2024, p. 9).
 
This bias reproduction is further reflected in how GenAI tools currently assess, select, exclude, and recommend. In a study reported in Scientific American (2023), researchers asked LLMs to produce recommendation letters for hypothetical employees and “observed significant gender biases,” where “ChatGPT deployed nouns such as ‘expert’ and ‘integrity’ for men” and was more likely to call women beautiful and delightful (Stokel-Walker, 2023).

References