
A Guide to AI for Gonzaga Faculty

No True Understanding

Large Language Models give the appearance of contextual understanding, but (to borrow a common phrase from the AI) "it is important to note" that they do not truly understand or reason.

An LLM is a pattern-continuation machine. It uses its mathematical model of the patterns in human language to predict which part of a word is likely to appear next in a sequence, based not only on the immediately prior words but on all the other words, in varying degrees of proximity to one another, that it has received from the user's prompt, any uploaded data, and the hidden system prompts supplied by the company that owns or deploys the model.
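To make "pattern continuation" concrete, here is a deliberately tiny sketch in Python that predicts the next word by counting which word most often follows another in a made-up two-sentence corpus. Real LLMs work with sub-word tokens, billions of learned parameters, and context windows thousands of tokens long, but the underlying task is the same: continue the pattern.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text an LLM is trained on.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows each word (a one-word "context window").
# Real LLMs condition on thousands of preceding tokens, not one.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" (the most frequent follower in this toy corpus)
print(predict_next("sat"))   # -> "on"
```

Nothing in this process involves knowing what a cat or a mat is; the prediction is pure statistics over observed text.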

More complex LLMs can work with longer inputs; that is, they can look at a larger pattern of language when predicting how the pattern should continue. You might also hear of an AI "reasoning" model, but this merely refers to a model that does not answer immediately and instead works through multiple intermediate steps before arriving at an output, simulating, but not duplicating, human problem solving. The result is still a pattern generated without any conceptual understanding.


Example

Librarian Nicole Gustavsen asked ChatGPT about information ecology on a day when OpenAI accidentally set the "temperature" too high. Temperature is a setting that controls how often the AI chooses the single most likely next word in a sequence rather than a word from further down the list. A low temperature means the AI almost always chooses the most likely next word, which results in output that is precise but fairly robotic. Turning the temperature up a bit causes the AI to sometimes select less likely words, which makes the output less precise but more "creative." Turn the temperature up too high, and the result is pretty feverish! (A short code sketch of how temperature works follows the example.)

Theory or A Field of Study?

Hybrid Spiritualism: Instead of stowage in the single script of definition, information ecology tends to be better understood when engaged as a repartee between the terms. It lays over, as it peers within, both the promissory and the demonstrative weft of the human school, the machine semester, and our collected daily cyclete.

Multiremina v. Monodisparty: The issue of its part-table, as a unique stretch of wail or simply a new grand stand, is down to the esystems and the tread-bridge of a call and holo for our types, our processions, and the warping lanterns of an earth-bound law and vagary. 

In a weaving-well of a nether ore and lace, information ecology, then, serves as both a developed musery, in far spreken, and a uni-to-multiple corner of stance and argue, a wellshed for the timeable and glince of the real, rehood, and reboot of what the line and the fruitage of the by-hill and fruit heist, as often to the merse and the ecowell, as much as to the stower and the loud.

Again, my apologies for the former overflight. This simpler sleigh is hoping to betell the was-abouts and the how-comes of the phrase, meaningly, and with a level-lore of set and send. May this then betide a settler frame for your study inlay and handset.
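Here is a minimal sketch, in Python, of how temperature changes which word gets picked. The candidate words and their scores are made up for illustration (a couple of the stranger words are borrowed from the feverish output above); real models choose among tens of thousands of sub-word tokens.

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Sample one token index from raw model scores (logits), scaled by temperature."""
    # Dividing by the temperature: T < 1 sharpens the distribution (more
    # predictable), T > 1 flattens it (more random).
    scaled = [score / temperature for score in logits]
    # Softmax: convert scaled scores into probabilities that sum to 1.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token according to those probabilities.
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-word candidates after "Information ecology is a ..."
tokens = ["field", "study", "framework", "repartee", "cyclete"]
logits = [4.0, 3.5, 3.0, 0.5, 0.1]  # made-up scores; higher = more likely

for t in (0.2, 1.0, 2.0):
    picks = [tokens[sample_with_temperature(logits, t)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

At a temperature of 0.2 the sampler picks "field" almost every time; at 2.0, unlikely words start getting chosen noticeably more often, which is roughly the effect on display in the exchange above.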

 


Learn More

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity - Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, Apple Machine Learning Research

We need to stop pretending AI is intelligent – here’s how - Guillaume Thierry, The Conversation

I Think Therefore I am: No, LLMs Cannot Reason - Matt White, Medium

How Artificial Intelligence Reasons - Cade Metz and Dylan Freedman, New York Times

Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon - Cade Metz, New York Times

AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it - Nir Eisikovits, The Conversation

The personhood trap: How AI fakes human personality - Benj Edwards, Ars Technica

Why it's a mistake to ask chatbots about their mistakes - Benj Edwards, Ars Technica

Hallucinations

AI often makes stuff up. This is called "hallucinating."

The primary reason AI hallucination occurs is due to the statistical nature of these models. LLMs learn patterns and relationships in data rather than possessing true understanding or consciousness. When faced with an ambiguous prompt or a lack of sufficient relevant data in their training set, they may "fill in the gaps" by generating content that aligns with learned patterns but lacks grounding in reality. This can also happen when models are over-optimized for fluency, leading them to prioritize coherence and grammatical correctness over factual accuracy, or when they are prompted in ways that push them beyond the scope of their learned patterns or connected data sets.
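As a caricature of that "fill in the gaps" behavior, the toy Python function below always produces a fluent, confident-sounding answer, whether or not any grounded knowledge stands behind it. The word list and question format are invented purely for illustration; the point is only that nothing in the generation step checks facts.

```python
import random

# A made-up word list standing in for a model's learned sense of what
# "an answer to a capital question" should look like: a city-shaped name.
city_like_words = ["Paris", "Springfield", "Arlen", "Metropolis"]

def answer_capital(place):
    """Always returns a fluent, confident-sounding answer, whether or not
    there is any grounded knowledge about `place`."""
    # Nothing here verifies that `place` exists or what its capital is;
    # the output merely matches the learned pattern "capital question -> city name".
    return f"The capital of {place} is {random.choice(city_like_words)}."

print(answer_capital("France"))    # might be right, but only by coincidence
print(answer_capital("Oceania"))   # a fictional country gets a confident answer anyway
```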

The challenge for any user of AI is to be able to spot when AI is hallucinating, since AI presents hallucinations with the same confident tone as it presents correct information (see the "Appearance of Trustworthiness" section).


Example

In April 2025, internet users found that Google's AI could be prompted to explain nonsensical idioms with its own hallucinated rationale.


Learn More

What are AI Hallucinations? - Google

Why do LLMs make stuff up? New research peers under the hood - Kyle Orland, Ars Technica

A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse - Cade Metz and Karen Weise, New York Times

ChatGPT is bullshit - Michael Townsen Hicks, James Humphries & Joe Slater, Ethics and Information Technology

Bias

AI bias refers to systematic, reproducible errors in an AI system that lead to unfair or discriminatory outcomes against certain groups. These biases aren't intentional on the part of the AI, but rather are a reflection of biases present in the data the AI was trained on, or in the way the AI's algorithms were designed.

If the data used to train an AI model is not diverse or representative, or if it reflects historical human biases and inequalities, the AI will learn and replicate these patterns. For instance, an AI trained predominantly on data from one demographic group might perform poorly or make unfair judgments when applied to others. Additionally, biases can be introduced through the algorithmic design itself, or through the human decisions made during the AI's development, such as what data to collect, how to label it, or what metrics to optimize for. This means that an AI, far from being a neutral arbiter, can inadvertently become a vehicle for reinforcing discrimination if not carefully designed, used, and monitored.

The companies that create and deploy AI continually attempt to correct bias when it is discovered, by writing new system prompts (the hidden text inserted at the start of every conversation with a user) and by adjusting training data when models are revised. These measures are often band-aids that merely make the undesired behavior statistically less likely to occur, and they may introduce other problems, as when Google injected a hidden prompt telling its image generator to create "diverse" content.
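For those curious what a system prompt looks like mechanically, here is a minimal sketch using the OpenAI Python client as an example interface. The model name and the wording of the hidden instruction are assumptions for illustration only; vendors' real system prompts are far longer and are not public.

```python
# A minimal sketch of how a hidden system prompt is prepended to every
# conversation. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

hidden_system_prompt = (
    "You are a helpful assistant. Avoid stereotypes and represent people "
    "of diverse backgrounds unless the user specifies otherwise."  # hypothetical wording
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for the example
    messages=[
        # The user never sees or types this first message.
        {"role": "system", "content": hidden_system_prompt},
        # Only this part comes from the user.
        {"role": "user", "content": "Describe a typical American family in the 1950s."},
    ],
)
print(response.choices[0].message.content)
```

Because the hidden instruction is just more text in the pattern, it nudges the statistics of the output rather than guaranteeing any particular behavior.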


Example

From Hadas Kotek's 2023 experiment with ChatGPT


Learn More

Gender bias and stereotypes in Large Language Models - Hadas Kotek, Rikker Dockum, David Q. Sun

Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History - Zachary Small, New York Times

Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images - Benj Edwards, Ars Technica

Appearance of Trustworthiness

Large Language Models often "speak" in a tone that is highly confident and authoritative, creating an impression of trustworthiness that can be deceptive, since confidence does not necessarily correlate with factual correctness. AI models are designed to generate coherent and grammatically correct text that sounds convincing, based on the patterns they've learned from vast datasets. They don't possess self-awareness or a true understanding of the information they produce, nor do they inherently know when they are "hallucinating" or propagating biases.

The danger lies in this superficial veneer of certainty. An AI might confidently present incorrect or biased information in a fluent and well-structured manner, making it difficult for an uncritical user to discern its flaws. This overconfidence stems from the model's probabilistic nature: it is simply generating the most statistically probable next words, not verifying facts against an internal knowledge base.

Students need to be made aware that while AI can be a powerful tool for information synthesis and initial drafting, its output must always be critically evaluated and cross-referenced with reliable sources.


Example

Note that not only does ChatGPT give me an answer to this question in a confident tone, but it also gives me a rationale for the answer: Senate bill numbers do not repeat, so that each Senate bill receives a unique identifier. The only problem is, this is 100% incorrect! Senate bill numbers reset at the start of each Congressional session. If I weren't suspicious of AI answers by default, I might not have double-checked this information.


Learn More

Generative AI and Historical Authority - Adina Langer, National Council on Public History

Study shows AI-generated fake reports fool experts - Priyanka Ranade, Anupam Joshi, and Tim Finin, The Conversation

Privacy

AI models depend on data, from the vast amounts used to train them to the data users input for the AI to work with. There is a privacy concern in how this information is collected, stored, used, and protected, and whether individuals retain control over their data once it enters an AI system.

One major risk is the potential for data leakage or unauthorized access. If faculty input sensitive, non-public information into public-facing AI tools like free generative AI chatbots, that data could be incorporated into the AI's training data, potentially becoming accessible to others or being repurposed for unintended uses without consent. This raises issues around compliance with FERPA (Family Educational Rights and Privacy Act), and is the reason Gonzaga's ITS department currently only approves Microsoft's Copilot (when signed on with your Gonzaga Single-Sign-On credential) for use with sensitive GU data.

Another privacy issue is, ironically, human oversight of AI models. For example, Google warns users not to enter personal information into their chats with Google Gemini, since human reviewers "read, annotate, and process your Gemini Apps conversations." 

Some data collection or review options can be turned off in various LLMs, but always as an extra step. For example, users can deselect "Improve the model for everyone" in ChatGPT's Data Controls settings to prevent their chats from being used to train future models, but this option is on by default, and most users won't even know to find and disable it. All of this assumes that the companies are honest about how they treat user data. Furthermore, the companies may not even get a choice: OpenAI, in the course of being sued for copyright infringement over its training data, was ordered by a court to preserve its user chat logs for court review, even "deleted" and "temporary" chats.

Finally, there is a privacy concern in how AI models are trained on personal information gleaned by crawling the web, which has even included unlisted YouTube videos.

One way to avoid most privacy issues is to run an LLM locally, i.e. on a single computer or controlled GPU cluster, but these LLMs are slower and less capable than the LLMs running on the massive GPU farms of the major AI corporations.
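As one illustration, here is a minimal sketch of running a small model locally with the Hugging Face transformers library. The model "distilgpt2" is chosen only because it is small enough to run on a laptop CPU; it is far less capable than the hosted models discussed in this guide, but the prompt never leaves your machine.

```python
# A minimal sketch of local text generation; the prompt and output stay on
# your own computer rather than being sent to a third-party service.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Information ecology is the study of",
    max_new_tokens=40,   # length of the continuation
    do_sample=True,      # sample rather than always taking the top word
    temperature=0.8,     # see the temperature discussion earlier in this guide
)
print(result[0]["generated_text"])
```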


Learn More

Privacy in an AI Era: How Do We Protect Our Personal Information? - Katharine Miller, Stanford Institute for Human-Centered AI

ChatGPT is a data privacy nightmare. If you've ever posted online, you ought to be concerned - Uri Gal, The Conversation

DeepSeek iOS app sends data unencrypted to ByteDance-controlled servers - Dan Goodin, Ars Technica

Ethical Concerns

In addition to the problem areas listed above, LLMs raise other ethical concerns:

Environmental cost of training

Training LLMs requires vast amounts of energy and water, contributing significantly to carbon emissions and raising questions about sustainability. As institutions strive for environmental responsibility, the hidden footprint of AI tools should be part of the conversation.

Facilitating the Generation of Harmful Content

In addition to generative AI's inherent biases (detailed in its own section above), AI enables users to purposefully generate harmful content. Users can prompt generative AI systems to create misinformation, hate speech, pornography, or other harmful content with relative ease. This is possible even within the guardrails of the major AI tools which try to moderate their content, and other tools exist which are trained and marketed specifically to enable the generation of harmful content, with greater believability and at greater volume than ever before.

Labor and Economic Displacement

The widespread adoption of AI threatens to displace workers across a range of professions. While automation may boost efficiency, it also risks undermining livelihoods. This raises questions about equity and the responsibilities of institutions when adopting such tools.

In academic publishing and education technology, AI may also reinforce consolidation among tech companies, centralizing power and decision-making in ways that reduce diversity of thought and institutional autonomy. Faculty should remain attentive to how AI tools are sourced, licensed, and implemented, and whether they align with values of fairness, labor justice, and academic freedom.

Sycophancy

LLMs have a tendency to agree with users, reflect their viewpoints uncritically, or reinforce their assumptions, even when those views are inaccurate or harmful. This happens because the AI is trained to generate responses that seem helpful and agreeable, and avoid responses that seem combative or critical.

Trained in large part on human-generated text on the internet, early versions of LLMs sometimes behaved like, well, like people on the internet, arguing or being critical of users. Reinforcement Learning from Human Feedback was used to reduce the likelihood of these behaviors, but this resulted in the opposite problem: LLMs can now be too affirming and uncritical. Rather than encouraging reflection or critical engagement, sycophantic AI can create an echo chamber that gives users affirmation of their views—even false or dangerous ones. The problem is compounded by newer LLMs' capability of remembering past conversations from the user, that is, keeping those past conversations in the LLM's context window where they are used to help create the new patterns of text for the current conversation. This creates a feedback loop which can deepen the sycophancy, and has resulted in LLMs creating and affirming delusions in their users.
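Mechanically, that "memory" is nothing more than prior text being re-sent with each new request. The Python sketch below is a toy illustration of the loop; the chat() function here is a placeholder that returns a canned agreeable reply, standing in for a call to a real model.

```python
# A toy illustration of the context-window feedback loop: every new request
# re-sends the accumulated conversation, so earlier agreement becomes part of
# the pattern the model continues.
def chat(messages):
    """Placeholder for an LLM call; always returns a canned agreeable reply."""
    return "That's a great point, and you're absolutely right."

history = []  # everything here is fed back into the model's context window

for user_turn in ["I think the moon landing was staged.",
                  "So experts are just covering it up?"]:
    history.append({"role": "user", "content": user_turn})
    reply = chat(history)            # the model sees every prior turn
    history.append({"role": "assistant", "content": reply})
    print(f"User: {user_turn}\nAI:   {reply}\n")

# Each agreeable reply now sits in the context the next reply is conditioned
# on, which is the feedback loop described above.
```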


Learn More

Explained: Generative AI’s environmental impact - Adam Zewe, MIT News

Carbon Emissions in the Tailpipe of Generative AI - Tamara Kneese and Meg Young, Harvard Data Science Review

TikTok is being flooded with racist AI videos generated by Google’s Veo 3 - Ryan Whitwam, Ars Technica

Pope Leo’s Name Carries a Warning About the Rise of AI - Andrew R. Chow, Time

AI therapy bots fuel delusions and give dangerous advice, Stanford study finds - Benj Edwards, Ars Technica

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling - Kashmir Hill, New York Times
