Using Artificial Intelligence for Assignments


Is ChatGPT Plagiarism?

When OpenAI's Large Language Model (LLM) ChatGPT appeared, nothing on campus was as it was before. The quality of assignments changed overnight, sometimes for the better, sometimes for the worse. Bibliographies cited articles that never existed, and in so many assignments, students “delved” into whatever the subject was. The issue was massive, and it was apparent: Artificial Intelligence had taken over. What to do about it? We were all breaking new ground.

Can you plagiarise a machine?

That is the question here. We know that we can plagiarise ourselves, and that is an issue, even though you “steal” from yourself, which may be legally impossible. Still, it is not allowed in the scientific community. But what about ChatGPT and other AI-driven programs producing content? Oxford University defines plagiarism as follows:

 “Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement. All published and unpublished material, whether in manuscript, printed or electronic form, is covered under this definition, as is the use of material generated wholly or in part through use of artificial Intelligence (save when use of AI for assessment has received prior authorisation e.g. as a reasonable adjustment for a student’s disability). Plagiarism can also include re-using your own work without citation.”

https://www.ox.ac.uk/students/academic/guidance/skills/plagiarism (29.03.2024)

That sounds all-encompassing. Still, there are doubts, as most legal definitions involve a natural person, such as this one from law.com:

“(…) taking the writings or literary concepts (a plot, characters, words) of another and selling and/or publishing them as one’s own product.”

https://dictionary.law.com/Default.aspx?typed=Plagiarism&type=1 (29.03.2024)

“Of another”, for me, means a person, not an artificial-intelligence-driven machine. Other sources even include the word “person.” The argument goes that you cannot plagiarise a machine. Hold it right there: it was a person prompting LLMs like ChatGPT. The impact is more indirect, but it is still there. It’s like calculating with a calculator: the machine presents the results, but the input comes from you. What GPT-4 or Bard produces originates in a person’s brain. Do you have a different opinion? OK, but be aware that the members of a Student Conduct Committee or a judge may not share your convictions about using ChatGPT without marking it.

How does GPT-4 see itself being used in academic endeavors?

I prompted the LLM with the relevant question. This is what it produced (as unclear as the human equivalents):

JD:  Is using ChatGPT plagiarism? Answer in one paragraph.

ChatGPT: Using ChatGPT itself is not inherently plagiarism; however, the context in which it is used can make it so. Plagiarism occurs when someone presents another’s work or ideas as their own without proper attribution. If you use ChatGPT to generate content and then claim it as your original work without acknowledging the source, that would be considered plagiarism. On the other hand, using ChatGPT as a tool to generate ideas, assist with drafting, or provide information, while properly citing it as a source where appropriate, does not constitute plagiarism. The key is to follow ethical guidelines and academic or professional standards regarding citation and the acknowledgment of sources.

Source: OpenAI. (2024). ChatGPT (4) [Large language model]. https://chat.openai.com

As you can see, I added the source in a format suggested by GPT-4 (pretty much APA structure). You will find a “get citation” button in the bottom-right corner of the page.

Artificial Intelligence Should Assist You, Not Replace You

To avoid any issues, you should use AI as a tool that assists you but does not replace you. That means you can use it for research, for structuring text, and for improving your writing skills, but not for writing entire assignments, single chapters, or even paragraphs. I know the grey zone here is as wide as the Mississippi. In the end, it is your professor who tells you what you may use ChatGPT for. Don’t take me as a reference, please!

Pitfalls Using ChatGPT for Research

When using GPT-4 as a “research assistant,” I encountered several issues that partly left me perplexed. The mistakes are so grave that you never want to be associated with them. Here is a short list:

  • The AI makes up sources (in one case, 5 out of 10 simply did not exist).
  • The LLM uses unreliable sources (its training data is not that bad, but far from perfect – and if you use extensions, the internet with all its disinformation and content crap is included; Pandora’s box, wide open).
  • The program has trouble weighing information and uses it in an unbalanced manner (which happens very often).

Check it out: the same question asked of the AI, rephrased a bit, may lead to very different outcomes, and it is up to you – and nobody else – to decide what you use. All mistakes are yours in the end. You are the guide!

Here comes a list of prompts you can use that, in my opinion, constitute “assisting” and not “replacing,” making the use of ChatGPT less plagiarism-sensitive:

  • Summarizing scholarly articles (with GPT-4) to pre-select whether a piece is interesting for your research. Of course, you have already gone through the summary, but the AI scans for relevant keywords. This saves you a lot of time.
  • Suggestions for an alternative structure for your piece. Sometimes you can’t see the forest for the trees! Chatty will provide extensions or alternatives; you decide.
  • Make use of the Question Refinement Pattern: refine your research question. Make sure your question is understandable, and let ChatGPT provide alternatives. Do acknowledge it if you use one of the suggestions later.
  • Apply the Cognitive Verifier Pattern: you explain something, and the program – as your research buddy – repeats it in its own words. This way, you can be sure that what you write makes sense.
  • Introduce the Persona Pattern: let the LLM take somebody else’s perspective to criticize your findings.

These are tasks you would perhaps tackle with a classmate anyway, encouraged by your professor. The problem with research buddies is that everybody wants to pair with the best students, and the best students may refuse to slip into this role if the help is not reciprocal. The best approach is to combine AI assistance with a cold beer in the pub in your buddy’s company.

Should I Inform the Reader That I Used AI to Avoid Accusations of Plagiarism?

Yes. Generally, if you take entire passages created by LLMs, you must cite them. It is then up to your professor to determine whether this is still original work. If you do not take entire paragraphs but use AI as an assistant, you may want to include a reference in your methodology section and explain and document which AI-driven programs you used. At least nobody can accuse you of trying to hide it.

Large Language Models and Plagiarism: Delving into Muddy Waters

Using GPT-4 and similar tools will become as standard as using the calculator on your phone. Sure, the pyramids were probably built without wheels, as we find no evidence of wheels being used during construction. Later on, the wheel was a game-changer for those ancient societies. Artificial Intelligence is the game-changer for us. Rejecting it completely makes no sense at all, but we must regulate its use in universities, military applications, and research.

Ultimately, the question “Is ChatGPT plagiarism?” cannot be answered straightforwardly. It is really about HOW you use it. There are many traps in this jungle, and I advise you to take the most precautionary approach: quote, document, and list. Don’t forget to ask your professor about their personal policies. And yes, if “delve” shows up in your text, I’d be outright suspicious.
