Vulnerability of assessments in Politics and International Relations to generative AI

Team: Patrick Theiner
School: Social and Political Science

Abstract

This project aims to evaluate the vulnerability of different types of academic assessments to generative AI usage by students. As AI tools become increasingly sophisticated, there is a growing concern that students might use these technologies uncritically to complete assignments, undermining academic integrity. The study will involve taking past assignments, such as exams, essays, and policy briefings, and generating AI responses to these tasks. These AI-generated answers, along with anonymized original student submissions, will be blindly evaluated by PhD student markers. The stakeholders in this project include course organizers, students, and markers. The methodology involves selecting a range of assignments from both pre-honours and honours courses, using multiple generative AI systems, implementing a robust blinding process, and establishing clear grading criteria.

The anticipated outcomes of the project include identifying which types of assessments are most susceptible to AI-generated responses, understanding how human markers perceive and grade AI versus human work when they are unaware of a work's origin, and informing the development of university policies on AI use in assessments. The findings will be disseminated through PIR-internal seminars, a short report to the School's Learning and Teaching Directorate, and the CAHSS Governance, QA and Enhancement team.

This article was published on 2025-11-06