TY - JOUR
T1 - Perceived impact of generative AI on assessments
T2 - Comparing educator and student perspectives in Australia, Cyprus, and the United States
AU - Kizilcec, René F.
AU - Huber, Elaine
AU - Papanastasiou, Elena C.
AU - Cram, Andrew
AU - Makridis, Christos A.
AU - Smolansky, Adele
AU - Zeivots, Sandris
AU - Raduescu, Corina
N1 - Publisher Copyright:
© 2024 The Author(s)
PY - 2024/12
Y1 - 2024/12
AB - The growing use of generative AI tools built on large language models (LLMs) calls the sustainability of traditional assessment practices into question. Tools like OpenAI's ChatGPT can generate eloquent essays on any topic and in any language, write code in various programming languages, and ace most standardized tests, all within seconds. We conducted an international survey of educators and students in higher education to understand and compare their perspectives on the impact of generative AI across various assessment scenarios, building on an established framework for examining the quality of online assessments along six dimensions. Across three universities, 680 students and 87 educators, who moderately use generative AI, consider essay and coding assessments to be most impacted. Educators strongly prefer assessments that are adapted to assume the use of AI and encourage critical thinking, while students' reactions are mixed, in part due to concerns about a loss of creativity. The findings show the importance of engaging educators and students in assessment reform efforts to focus on the process of learning over its outputs, alongside higher-order thinking and authentic applications.
KW - Assessment
KW - ChatGPT
KW - Educators
KW - Generative AI
KW - Students
KW - Survey
UR - http://www.scopus.com/inward/record.url?scp=85200506024&partnerID=8YFLogxK
DO - 10.1016/j.caeai.2024.100269
M3 - Article
AN - SCOPUS:85200506024
SN - 2666-920X
VL - 7
JO - Computers and Education: Artificial Intelligence
JF - Computers and Education: Artificial Intelligence
M1 - 100269
ER -