Enhancing Code Tracing Question Generation with Refined Prompts in Large Language Models

Aysa X. Fan, Rully A. Hendrawan, Yang Shi, Qianou Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This study refines Large Language Model (LLM) prompts to enhance the generation of code tracing questions, where the new expert-guided prompts incorporate features identified in prior research. Expert evaluations compared the new LLM-generated questions against previously preferred ones, revealing improved quality in aspects such as complexity and concept coverage. Beyond providing insights into effective question generation and affirming LLMs' potential in educational content creation, the study contributes an expert-evaluated question dataset to the computing education community. However, generating high-quality reverse tracing questions remains a nuanced challenge, indicating a need for further refinement of LLM prompting.

Original language: English
Title of host publication: SIGCSE 2024 - Proceedings of the 55th ACM Technical Symposium on Computer Science Education
Publisher: Association for Computing Machinery, Inc
Pages: 1640-1641
Number of pages: 2
ISBN (Electronic): 9798400704246
DOIs
Publication status: Published - 14 Mar 2024
Externally published: Yes
Event: 55th ACM Technical Symposium on Computer Science Education, SIGCSE 2024 - Portland, United States
Duration: 20 Mar 2024 - 23 Mar 2024

Publication series

Name: SIGCSE 2024 - Proceedings of the 55th ACM Technical Symposium on Computer Science Education
Volume: 2

Conference

Conference: 55th ACM Technical Symposium on Computer Science Education, SIGCSE 2024
Country/Territory: United States
City: Portland
Period: 20/03/24 - 23/03/24

Keywords

  • computer science education
  • large language model
  • programming education
  • tracing question
