Anyone here dealing with "detectare plagiat" (plagiarism detection) in a university or publishing context?
I'm based in Romania and our faculty uses a couple of tools (one local "detectare plagiat" platform + Turnitin for some courses). Lately I've had a wave of reports that look… suspect. Example: a 42% similarity score on a thesis where most matches are the standardized methods section we've given everyone, common definitions, and properly quoted chunks. Another paper hit 18%, but nearly all of it was the bibliography and captions auto-matched line-for-line.
I know the score doesn't equal guilt, but admin folks still latch onto the number. I keep excluding quotes/bibliography and setting "exclude matches under X words," but the defaults differ wildly between tools, and some colleagues don't touch the filters at all. Do you have a personal set of "fair" settings or a checklist you use before calling a student in?
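To make the idea concrete, here's a rough sketch of the kind of "fair score" recomputation I mean. The match fields, the 8-word threshold, and the excluded categories are my own assumptions for illustration, not any tool's actual export format or defaults:

```python
# Hypothetical sketch: recompute a "fair" similarity score from a report's
# match list after applying common exclusion filters. The field names,
# categories, and min_match_words default are assumptions for illustration.

def adjusted_similarity(matches, total_words, min_match_words=8,
                        exclude_kinds=("quote", "bibliography", "caption")):
    """Sum matched words that survive the filters; return a percentage."""
    kept = 0
    for m in matches:
        if m["kind"] in exclude_kinds:
            continue  # properly quoted text, reference lists, captions
        if m["words"] < min_match_words:
            continue  # trivial short phrases, shared boilerplate definitions
        kept += m["words"]
    return round(100 * kept / total_words, 1)

matches = [
    {"kind": "body", "words": 120},          # overlap worth a closer look
    {"kind": "quote", "words": 300},         # properly cited block quote
    {"kind": "bibliography", "words": 250},  # reference list, auto-matched
    {"kind": "body", "words": 5},            # five-word common phrase
]
print(adjusted_similarity(matches, total_words=10_000))  # → 1.2
```

In this toy case the raw overlap would be about 6.8%, but only 1.2% survives the filters, which is roughly the gap I keep seeing between the headline number and what's actually worth discussing with a student.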
Also curious about two edge cases:
- Self-plagiarism for students reusing parts of their own prior reports. Do you allow limited reuse with citation, or is it a hard no?
- Cross-language stuff: I've seen copy → translate → paraphrase from Romanian sources into English. The detectors barely blink. Any way you spot that pattern without turning into a full-time detective?
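For the cross-language case, the only semi-automated thing I've tried is: machine-translate the suspected Romanian source into English (translation step not shown here), then compare shared word n-grams against the submission to flag passages worth a manual read. This is a crude heuristic of my own, not a validated method; the 5-gram size is an arbitrary assumption:

```python
# Hedged sketch: given an English submission and an English machine
# translation of a suspected Romanian source, flag overlap via shared
# word 5-grams. Paraphrasing after translation will defeat this, so it
# only narrows down where to look, it proves nothing by itself.

def ngrams(text, n=5):
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_ngram_ratio(suspect, translated_source, n=5):
    """Fraction of the suspect's n-grams also present in the source."""
    a, b = ngrams(suspect, n), ngrams(translated_source, n)
    if not a:
        return 0.0
    return len(a & b) / len(a)
```

Even a modest ratio on a specific paragraph is enough reason to read the two texts side by side; I wouldn't put a number like this in a report, only use it to decide where to spend reading time.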
Not trying to witch-hunt, just want a sane way to separate honest overlap from actual copying. How do you document borderline decisions so they don't turn into endless appeals? Any rules-of-thumb or examples you're willing to share would really help.
Bogdan Costin