Rethinking multiple-choice exams: the impact of NLP AI models for online assessment

Warren Earle, Aman Singh

Research output: Chapter in Book/Report/Published conference proceeding › Conference contribution › peer-review

Abstract

This study explores the efficacy of Natural Language Processing (NLP) AI models, specifically ChatGPT and MS Copilot, in answering multiple-choice questions (MCQs) within the framework of Cisco Certified Network Associate (CCNA) exams. Our methodology involved presenting these AI systems with a series of exam questions and assessing their responses against a predetermined passing grade threshold. The results demonstrated that the AI-generated answers closely aligned with the expected correct responses, suggesting a high degree of accuracy and reliability.

We delve into the broader implications of these findings, discussing the potential applications of AI in enhancing exam preparation, automated grading, and personalized learning experiences. Furthermore, we address the limitations and ethical considerations inherent in deploying NLP AI for educational purposes, such as the potential for over-reliance on technology, issues of academic integrity, and the need for transparency in AI-generated assessments. Our study underscores the significant potential of AI technologies to transform educational assessments and support students and educators alike.
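The assessment procedure described in the abstract can be sketched as a simple grading loop: compare each model-selected option against an answer key and check the resulting score against a passing-grade threshold. The question IDs, answers, and the pass mark below are illustrative assumptions, not values taken from the study.

```python
def score_responses(answer_key: dict[str, str],
                    responses: dict[str, str],
                    pass_mark: float) -> tuple[float, bool]:
    """Return (percentage score, whether the score meets the pass mark)."""
    # Count questions where the model's choice matches the key.
    correct = sum(1 for q, key in answer_key.items() if responses.get(q) == key)
    pct = 100.0 * correct / len(answer_key)
    return pct, pct >= pass_mark

# Hypothetical example: four questions, three answered correctly,
# with an assumed 82.5% pass mark (not a figure from the paper).
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
ai_responses = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "B"}
score, passed = score_responses(answer_key, ai_responses, pass_mark=82.5)
print(f"{score:.1f}% - {'pass' if passed else 'fail'}")  # 75.0% - fail
```

In practice, the same function can be applied per model (e.g. one response dictionary for ChatGPT, one for MS Copilot) to compare their scores against the same threshold.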
Original language: English
Title of host publication: ICERI2024 Proceedings
Publisher: International Academy of Technology, Education and Development
Pages: 6523-6531
Number of pages: 9
ISBN (Print): 978-84-09-63010-3
DOIs
Publication status: Published - 11 Nov 2024
Event: 17th annual International Conference of Education, Research and Innovation - Seville, Spain
Duration: 11 Nov 2024 - 13 Nov 2024
https://iated.org/iceri/

Conference

Conference: 17th annual International Conference of Education, Research and Innovation
Abbreviated title: ICERI2024
Country/Territory: Spain
City: Seville
Period: 11/11/24 - 13/11/24
Internet address: https://iated.org/iceri/
