Imagine you get a text from a friend: “Hey, listen to this song I made.”
You listen, and you’re impressed! It’s a very catchy song, with great harmony and rhythm. You had no idea your friend was so good at music! You text back, “Wow, how did you make that?”
They reply:

“Suno.”
Your feelings immediately change, don’t they? Instead of being impressed and pleased for your friend, you feel deflated. All your friend did was type a prompt into an AI. Is that really “making” music? And that music you listened to and liked . . . it wasn’t even created by a human. You probably feel differently about that music, too.
Now, imagine how your professor would feel if they received a good paper from you and realized that it was actually generated by AI. They would feel that same deflation, and that same dwindling opinion of your abilities. Ditto for an employer. Sure, technically speaking, the work of writing is getting done, but is it really you doing it? And if it's not you, then, well . . . what value do you have in that equation?
AI can be used in valuable and creative ways. Like any good technology, it can be used to amplify your human capabilities and help you achieve more. But it can also be used to replace your human capabilities.
Here's an uncomfortable fact: if you use AI to replace work that you should be doing yourself, you are, by definition, making yourself replaceable by AI.
And employers are currently very eager to replace employees with AI.
So ask yourself: what do you bring to the table that AI doesn't? What is your uniquely human contribution?
Here at Gonzaga, we aspire to give you deep, transformational learning experiences. We don't want to merely teach you a set of technical skills—many of which AI is quickly mastering—but instead help you develop as a whole person, with (to quote the Gonzaga Mission Statement) "the capacities and dispositions for reflective and critical thought, lifelong learning, spiritual growth, ethical discernment, creativity, and innovation." Deeply human qualities, in other words, which can enrich any career, and also your personal life.
But many of these learning experiences work through writing, and they can be short-circuited by using a human language simulator like ChatGPT. If you fall for the temptation to take that shortcut, you might achieve short-term success—getting the grade, checking the box, moving on—but you will not be gaining the intended learning experience, and you will not be changing or growing. And at the end, you won't have gained anything to distinguish yourself from AI.
For you to reap the benefits of your education, you must engage with the work. Do the reps. Put in the practice. Be patient with yourself; allow yourself to exist in a space of growth, rather than succumbing to the temptation to simulate mastery. Mastery will come, if you do the work—and it will come with your unique, human, personal touch.
So when you use AI, ask yourself: are you using it to enrich you, or are you using it to replace you?
Quality academics are one of the reasons you chose to come to Gonzaga. When you cut corners you don't learn the ideas and the skills that your instructors are here to challenge you with. Your professors want you to engage critically with the materials in their courses. They want you to bring your full self to class, not just an AI replacement.
The folks in our Instructional Design and Delivery office have crafted a guide to using AI appropriately. This guide covers six things to consider when using AI in your classwork.
1. AI can't replace your ability to think, but it can help you get ideas.
2. Use AI ethically.
3. You are 100% responsible for your final product.
4. The use of AI must be open and documented.
5. These guidelines are in effect unless your instructor gives you alternative guidelines.
6. Confidential or personal data should not be entered into generative AI tools.
These six points lay out the foundational approach your instructors are using across campus. The full academic integrity policy expands on Gonzaga's expectations for you as a student. Your individual instructors may have a revised policy in their syllabus. If you have questions about whether AI use is allowed for an assignment, ask! Transparency around AI use is important to maintain the quality of your educational experience.
Keep in mind, your instructors aren't here to police your work, keeping track of every potential infraction. They do, however, want to maintain a high standard of education. When an academic integrity concern is flagged, it is an attempt by your educational community to keep you out of the category of replaceability.
AI chatbots are incredibly easy to access, patient no matter how many questions you ask, and available for use 24/7. They can imitate styles of communication through tone and word choice. And they are digital spaces free from observation or community input. Personal use of AI chatbots can be very helpful: crafting a grocery list, planning a road trip, or considering how to respond to a difficult email. However, their uses do not replace human relationships. In other words, an AI chatbot will never bring you banana bread when you've had a difficult week. AI tools may sound empathetic, knowledgeable, or caring, but they can't be. They aren't human!
There is a term to describe our overconfidence in technology: the ELIZA effect. Coined by MIT computer scientist Joseph Weizenbaum in the 1960s, the ELIZA effect refers to projecting human capabilities and traits onto technology while overestimating the quality of its results. Falling prey to the ELIZA effect can result in letting our guards down. AI tools aren't private, nor do they have professional training. Yet it is common to see news stories about people using ChatGPT for medical advice, as a replacement for romantic relationships, as accessible mental health support, or to offload the emotional work of connecting with others.
In the world of AI, use caution about how you navigate your private life. Your personal relationships are valuable! Real connection and human interaction are far more fulfilling and reliable than blind trust in generative AI tools.