This guide was created in response to artificial intelligence / large language model tools such as ChatGPT being released to the public in late November 2022. As AI has become a hot topic for educators, we hope to help you understand these tools and learn how to work with them.
With that said, this is a work in progress (with thanks to librarian Alison Gurganus of Mesa College for sharing her guide) and will continue to be added to and improved as time goes on. Please feel free to email the creator of this guide (located in the bottom left of this page) with comments and/or suggestions.
Special note to students: your best practice will be to check in with your instructors about class policy around AI use. Currently, there is no campus-wide policy for AI use in the classroom, but faculty are aware of the issues and CR is encouraging instructors to include AI use policies in their syllabi. There will be quite a bit of variability in how faculty do or don't incorporate AI into coursework; what is okay in one class may not be in another.
Here are some basic terms related to artificial intelligence, as defined by the Alan Turing Institute.
Algorithm: A sequence of rules that a computer uses to complete a task. An algorithm takes an input (e.g. a dataset) and generates an output (e.g. a pattern that it has found in the data). Algorithms underpin the technology that makes our lives tick, from smartphones and social media to sat nav and online dating, and they are increasingly being used to make predictions and support decisions in areas as diverse as healthcare, employment, insurance and law.
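To make the definition concrete, here is a minimal sketch of an algorithm in Python: a fixed sequence of rules that takes an input (a small dataset) and produces an output (a pattern found in that data). The function name and data are illustrative, not from any particular library.

```python
# A minimal illustration of an algorithm: a fixed sequence of rules
# that takes an input (a dataset) and returns an output (a pattern).
def most_frequent(values):
    """Count each value and return the one that appears most often."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return max(counts, key=counts.get)

print(most_frequent(["cat", "dog", "cat", "bird", "cat"]))  # prints: cat
```

However simple, this has the same shape as the algorithms behind recommendations and predictions: data goes in, a rule is applied step by step, and a result comes out.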
Algorithmic Bias: Unfairness that can arise from problems with an algorithm’s process or the way the algorithm is implemented, resulting in the algorithm inappropriately privileging or disadvantaging one group of users over another group. Algorithmic biases often result from biases in the data that has been used to train the algorithm, which can lead to the reinforcement of systemic prejudices around race, gender, sexuality, disability or ethnicity.
Artificial Intelligence: The design and study of machines that can perform tasks that would previously have required human (or other biological) brainpower to accomplish. AI is a broad field that incorporates many different aspects of intelligence, such as reasoning, making decisions, learning from mistakes, communicating, solving problems, and moving around the physical world. AI was founded as an academic discipline in the mid-1950s, and is now found in myriad everyday applications, including virtual assistants, search engines, navigation apps and online banking.
Deepfakes: Synthetic audio, video or imagery in which someone is digitally altered so that they look, sound or act like someone else. Created by machine learning algorithms, deepfakes have raised concerns over their uses in fake celebrity pornography, financial fraud, and spreading false political information. ‘Deepfake’ can also refer to realistic but completely synthetic media of people and objects that have never physically existed; or sophisticated text generated by algorithms. See also ‘generative adversarial network’.
Generative adversarial network (GAN): A machine learning technique that can generate data, such as realistic ‘deepfake’ images, that is difficult to distinguish from the data it was trained on. A GAN is made up of two competing elements: a generator and a discriminator. The generator creates fake data, which the discriminator compares to real ‘training’ data, feeding back where it has detected differences. Over time, the generator learns to create more realistic data, until the discriminator can no longer tell what is real and what is fake.
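The generator-versus-discriminator loop can be sketched in miniature. This is a toy sketch under simplifying assumptions, not a real GAN: the "data" is just numbers drawn around 4, the generator is a single learnable shift applied to random noise, and the discriminator is a one-variable logistic classifier; real GANs use neural networks for both parts. All names and learning rates here are made up for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real data: numbers clustered around 4.
# Generator G(z) = mu + z: a learnable shift mu applied to noise z.
# Discriminator D(x) = sigmoid(w*x + b): estimates P(x is real).
w, b, mu = 0.0, 0.0, 0.0
REAL_MEAN, BATCH = 4.0, 32

for step in range(3000):
    real = [random.gauss(REAL_MEAN, 0.5) for _ in range(BATCH)]
    fake = [mu + random.gauss(0, 1) for _ in range(BATCH)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gb = 0.0
    for x in real:
        s = sigmoid(w * x + b)
        gw += (1 - s) * x
        gb += (1 - s)
    for x in fake:
        s = sigmoid(w * x + b)
        gw -= s * x
        gb -= s
    w += 0.1 * gw / BATCH
    b += 0.1 * gb / BATCH

    # Generator step: nudge mu so the discriminator scores fakes as real.
    gmu = 0.0
    for x in fake:
        s = sigmoid(w * x + b)
        gmu += (1 - s) * w
    mu += 0.02 * gmu / BATCH

print(round(mu, 2))  # mu starts at 0 and should drift toward the real mean
```

The competition is visible in the two update steps: the discriminator keeps learning to separate real from fake, which forces the generator's output distribution to drift toward the real one.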
Generative AI: An artificial intelligence system that generates text, images, audio, video or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on (see ‘generative adversarial network’), resulting in outputs that are often indistinguishable from human-created media (see ‘deepfake’).
Large Language Model: A type of foundation model that is trained on a vast amount of textual data in order to carry out language-related tasks. Large language models power the new generation of chatbots, and can generate text that is indistinguishable from human-written text. They are part of a broader field of research called natural language processing, and are typically far larger and more complex in design than earlier, more traditional language models.
Machine learning: A field of artificial intelligence involving computer algorithms that can ‘learn’ by finding patterns in sample data. The algorithms then typically apply these findings to new data to make predictions or provide other useful outputs, such as translating text or guiding a robot in a new setting. Medicine is one area of promise: machine learning algorithms can identify tumors in scans, for example, which doctors might have missed.
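The "finding patterns in sample data, then applying them to new data" idea can be shown with one of the simplest possible learners. This sketch fits a straight line to sample points by least squares, then uses the learned pattern to make a prediction about a new input; the numbers are invented for illustration.

```python
# Toy "machine learning": learn the pattern y ≈ a*x + b from sample
# data by least squares, then apply it to make a new prediction.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # noisy samples of roughly y = 2x
a, b = fit_line(xs, ys)
print(a, b)                       # slope near 2, intercept near 0
predict = lambda x: a * x + b
print(predict(10))                # prediction for unseen input, near 20
```

Modern machine learning models have millions or billions of parameters instead of two, but the workflow is the same: fit parameters to sample data, then use them on inputs the model has never seen.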
Natural language processing: A field of artificial intelligence that uses computer algorithms to analyze or synthesize human speech and text. The algorithms look for linguistic patterns in how sentences and paragraphs are constructed, and how the words, context and structure work together to create meaning. Applications include speech-to-text converters, customer service chatbots, speech recognition, automatic translation, and sentiment analysis (identifying the mood of a piece of text).
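One of the applications named above, sentiment analysis, can be sketched in a few lines. This is a deliberately crude "bag of words" approach with a tiny hand-made lexicon (real NLP systems use trained models and much larger vocabularies); the word lists and function name are invented for illustration.

```python
# A minimal sentiment-analysis sketch: score text by counting words
# from small hand-made positive/negative word lists.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great course"))   # positive
print(sentiment("The weather was terrible"))   # negative
```

Even this toy version shows the core idea: meaning is inferred from patterns in the words themselves, without any human reading the text.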
Neural Network: An artificial intelligence system inspired by the biological brain, consisting of a large set of simple, interconnected computational units (‘neurons’), with data passing between them as between neurons in the brain. Neural networks can have hundreds of layers of these neurons, with each layer playing a role in solving the problem. They perform well in complex tasks such as face and voice recognition.
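A very small network makes the "layers of simple units" idea tangible. This sketch wires two inputs through two hidden "neurons" to one output, with weights hand-picked (rather than learned) so the network computes XOR, a function no single neuron can represent on its own; all the weight values are illustrative choices, not from any trained model.

```python
import math

def sigmoid(x):
    """Each neuron squashes its weighted input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# A tiny neural network: 2 inputs -> 2 hidden neurons -> 1 output.
# The weights are hand-picked so the network computes XOR.
def xor_net(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # fires if x1 OR x2
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # fires unless x1 AND x2
    out = sigmoid(20 * h1 + 20 * h2 - 30)   # fires if both h1 and h2 fire
    return round(out)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))        # 0, 1, 1, 0
```

In a real network the weights are not hand-picked but learned from data (see ‘machine learning’), and there may be hundreds of layers rather than one, but each neuron does exactly this kind of weighted sum and squash.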