
Generative AI Guide

Issues with AI

AI is not a perfect tool. There are many issues and limitations, including:

  • Bias and Discrimination
  • Accuracy and Hallucinations
  • Risks to Confidential Information
  • Risks to Intellectual Property
  • Human and Environmental Cost

Bias and Discrimination

The data used to train large language models is selected and created by humans, so any bias or discrimination in that data can be reproduced in the outputs of generative AI. The design of the algorithms themselves can also introduce bias.


Accuracy and Hallucinations

The content generated by AI is not always accurate. Accuracy varies depending on the model, its training data, and the task you ask it to perform. AI-generated content can also spread fake news and misinformation.

Additionally, generative AI can sometimes 'hallucinate', producing responses that are false or fabricated. For instance, ChatGPT has been known to invent citations to sources that do not exist.

Always verify any information you receive from ChatGPT or other AI models.


Confidential Information

AI can pose risks to confidential and private data. Treat every conversation with an AI model like ChatGPT as public: once information has been submitted as input, you cannot control how it is stored or used.

Never share sensitive or confidential information with an AI model.


Intellectual Property

Generative AI poses risks to your own intellectual property and can infringe on the intellectual property of others. Be cautious both about the information you input and about how you use AI outputs, which may reproduce copyrighted material.


Human and Environmental Cost

AI is not created in a vacuum: there are real-world human and environmental costs, including the carbon footprint of training and running models and the exploitation of workers.
