Perceived impact of generative AI on assessments: Comparing educator and student perspectives in Australia, Cyprus, and the United States

René F. Kizilcec, Elaine Huber, Elena C. Papanastasiou, Andrew Cram, Christos A. Makridis, Adele Smolansky, Sandris Zeivots, Corina Raduescu

Research output: Contribution to journal › Article › peer-review

Abstract

The growing use of generative AI tools built on large language models (LLMs) calls the sustainability of traditional assessment practices into question. Tools like OpenAI's ChatGPT can generate eloquent essays on any topic in any language, write code in various programming languages, and ace most standardized tests, all within seconds. We conducted an international survey of educators and students in higher education to understand and compare their perspectives on the impact of generative AI across various assessment scenarios, building on an established framework for examining the quality of online assessments along six dimensions. Across three universities, 680 students and 87 educators, who report moderate use of generative AI, consider essay and coding assessments to be the most impacted. Educators strongly prefer assessments adapted to assume the use of AI and to encourage critical thinking, whereas students' reactions are mixed, in part due to concerns about a loss of creativity. The findings underscore the importance of engaging both educators and students in assessment reform efforts that focus on the process of learning over its outputs, alongside higher-order thinking and authentic applications.

Original language: English
Article number: 100269
Journal: Computers and Education: Artificial Intelligence
Volume: 7
DOIs:
Publication status: Published - Dec 2024

Keywords

  • Assessment
  • ChatGPT
  • Educators
  • Generative AI
  • Students
  • Survey
