Plagiarism

"Plagiarism" is the word for using another person's literal expressions (words, images, etc.) or representing their ideas or concepts as your own, or in place of your own work.

Any amount of misrepresented work, large or small, passed off as your own, is plagiarism, and plagiarism is a form of academic dishonesty.

In academic work you are expected to use your own words, and represent your own thoughts and ideas and concepts, which you have developed in the process of engaging with the work of many other people.

You are therefore expected to keep track of all of the other work you have read, and the expressions and ideas you have found there, and to be able to say clearly whose they are, and where they came from.

You are expected to be able to support your own words, your own thoughts and ideas and concepts, through exact references to all of that other work where it agrees with you, and to be able to argue with it in detail where you think something different.

The JKM Library is here to help with your research, and McCormick and LSTC also offer writing help through their own writing centers, including help with English, editing, and style guides.


Citation Guides  |  School Policies  |  Plagiarism and "AI"


Citation Guides

Accurately crediting and regularly citing your sources is an essential aspect of avoiding plagiarism.

It is important to know what information you need to quote, to paraphrase, and to cite, and how to do so properly for your sources and project.

The JKM Library is here to support you. We provide access to the following style guides, which will help you with a variety of sources and projects:

If you are not sure which to use, or how to use them for your project, please reach out first to your professor, advisor, or the supervisor of your project for specific advice.

You can also email JKM staff at ihaveaquestion@jkmlibrary.org for more general information.


School Policies

McCormick and LSTC each have their own descriptions of what constitutes academic integrity and plagiarism, and policies for dealing with it:

Other schools and programs have their own helpful descriptions and advice for dealing with plagiarism:


Plagiarism and "AI"

Large Language Model (LLM) "AI," also called "generative AI," is very popular today, and presents some serious problems from the standpoint of academic integrity.

These "AI" systems are capable of taking a brief prompt, and generating images or text on the basis of their training data.

Those images will look mostly like other images you see in artworks and on the internet. That text will sound mostly like other text you might read in books or on the internet.

That may sound tempting to you! Every author struggles to find the right words, after all. But remember: the point of academic work is that we want your words, your work. You are the author, you are the artist! And your words and your work will only get better through practice.


Systematic copyright violation and plagiarism

It is important to recognize that the output from these "AI" systems only looks that way because they are trained on data taken from existing authors and artists, overwhelmingly without their consent, without compensation for their hard work, and without citation.

"Generative" or LLM "AI" is therefore built on copyright violation, on a massive scale, and also plagiarizes the works it is based on. Everything it does is built on this basis.

That said, copyright violation is not the core concern when it comes to whether or how you use these systems in your academic work.


"AI" generated words are not your words

When it comes to LLM "AI" as a source of text, you should treat its output like any other source of text you did not write.

No matter how much work you put into designing your prompt, you did not write the "AI" output. Its words are not your words, and do not represent your thoughts and ideas.

You are therefore not responsible for what it says. But you are responsible for how you use it!

If you receive LLM "AI" output text, and present that text as your own, you are engaging in plagiarism as a form of academic dishonesty.


Citation does not solve the problem of "AI" text

Unfortunately, the problem of plagiarism when using "AI" output cannot be solved by treating it like any other source needing proper citation.

You can and should admit where you got these words, as they are not yours. However, LLM "AI" cannot give you a text for which citation is in any way meaningful.

The point of citation is to demonstrate the authorship and provenance of the work that you are citing. This enables your reader to check your work against those sources, which still exist outside of your own work.

Setting aside its many other problems, LLM "AI" output does not exist outside of your interaction with that system. These systems do not give consistent responses, so your reader cannot follow your citation and get any useful information.

Additionally, LLM "AI" systems routinely make up answers that bear no resemblance to reality, and provide their own "citations" to work which simply does not exist—they made it up. 

LLM "AI" systems make a mockery of academic work. They are not a reliable source, and they do not even function as a search engine pointing to other reliable sources.


How does "AI" work?

There are two parts to the problem. The first, covered above, is that LLM "AI" by nature does not, and cannot, give you a reproducible answer.

The second, and more basic, is that LLM "AI" by nature does not know or care about matters of fact. No matter how many facts may be in the training data, LLM "AI" is not programmed to understand that questions have factual answers.

The only thing that LLM "AI" is designed to do is reproduce patterns. Its only concern is the statistical likelihood that similar-looking data appears in similar patterns elsewhere.

LLM "AI" does not have any sense of the meaning of the data it was trained on, only its patterns.

You therefore cannot rely on the accuracy of the "data" it makes up to fill its own patterns in response to your questions. The output might be correct, but it might only be "similar" by some standard, and it might even bear no resemblance to reality at all.

LLM "AI" cannot reliably summarize text for you because, again, it does not understand the meaning of the text, only its patterns.

The more difficulty you have with a piece of text—because it is factually complex, because its author has their own unique way of expressing themselves—the less likely any "AI" system will be able to give you a correct summary. The more interesting your source, the less "similar-looking data in similar patterns elsewhere" is a useful approach.

LLM "AI" systems also rely on you to correct their output until it is satisfactory to you, meaning that your knowledge is their only standard of truth. This makes it radically unsuitable for any research work, where you are trying to learn things you do not yet know.

Please be careful not to get stuck in the trap of arguing with an LLM "AI" system. It is not important that you correct it; it is a piece of software. It is important that you do your research, that you learn, and that what you write expresses what you have learned in your own words!


Your library is here to help!

You have libraries at your disposal, including the JKM Library and all of our partner libraries, which are full of reliable texts that will still be there when you or your readers come back to them.

These are texts you must cite properly, but they are also texts of which you may be critical. They may also be wrong, but their authors were trying, in the best case, to learn the truth and to be right. And you may even help prove some of them wrong, but that will be a meaningful disagreement, and maybe even a very important one! 

JKM Library staff, alongside the faculty and staff of McCormick and LSTC, will gladly help you through your process of becoming a capable and talented scholar using reliable resources.

We look forward to being able to share with you in the pride of your legitimate accomplishments. Do not cheat yourself, and us, of that joy!