University of Potsdam: New guideline regulates the use of AI in scientific work

Moritz

December 9, 2025

Large language models such as ChatGPT, DeepL or GPT.UP, the University of Potsdam's own chatbot, have long been part of everyday student work. They translate texts, suggest formulations, structure arguments or help with programming. But where does helpful support end and where does scientific misconduct begin?

With a new guideline led by Professor Lisa Bruttel (as of April 2025), the University of Potsdam establishes clear rules for the use of AI in seminar papers and theses. The aim is to enable responsible use while protecting scientific integrity.

We summarize the key points below.

Why is a guideline needed?

AI can speed up writing processes, provide feedback, or make complex issues easier to understand. At the same time, there is a risk that students will use AI as a substitute for their own academic performance.

The guideline therefore has two main objectives:

  1. Allow what helps students, such as editing, translation or structural suggestions.
  2. Prohibit what undermines independent work, namely passing off AI-generated content as one's own achievement.

What is forbidden and why?

Prohibition: Submitting work written by AI

The guideline is unambiguous:
Anyone who submits a thesis that was written “largely” by an LLM commits attempted deception.

This means that AI may support, but never take over the core of scientific performance.

What is allowed? AI as an assistant but responsibly

The University of Potsdam expressly recognizes the benefits of LLMs. Students may use AI tools provided that their own draft text remains the basis of the work.

In particular, the following are permitted:

  • Spelling and language correction as well as language revision
  • Suggestions for structure
  • Feedback on the logical consistency of the argument
  • Finding ideas on the relevance of the research question
  • Programming assistance

Important:
Students must critically check suggestions from AI. Scientific quality is achieved only through reflection, not through automated answers.

Transparency obligation: AI use must be explained

Every piece of work that uses AI must include a separate statement on AI usage.

This comprises:

  • Which tools were used
  • For what purpose
  • Which passages, ideas, or code sections are based on AI suggestions
  • Prompts used (e.g. questions to ChatGPT or GPT.UP)

The guideline includes an example showing typical wording such as:

  • “Proofreading of chapter XY by DeepL.”
  • “50% of the paragraph on page XY is based on a draft generated by GPT.UP.”

Declaration of independent work is still mandatory

In addition, students must still submit the classic declaration of independent work. This includes:

  • Confirmation that no unauthorized tools were used
  • An assurance that all AI-based content has been fully disclosed
  • Consent that the submitted work may be checked by plagiarism software, including for typical AI formulations

Among other things, software approved by the University of Potsdam (e.g. PlagAware) is used.

What does this mean for students?

The guideline provides clarity: AI may be used, but only as a tool, not as an author.

Students benefit when they:

  • use AI consciously and reflectively
  • improve their own drafts instead of having finished texts generated
  • openly document where AI has helped them

Anyone who is uncertain should always consult their supervisor.

Conclusion

With the new guideline in place, the University of Potsdam ensures that modern AI tools are sensibly integrated into everyday student life without jeopardizing the principles of scientific honesty.

The message is clear:
AI is welcome, but transparency and personal contribution remain mandatory.
