Computer Science > Human-Computer Interaction

Title: Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study

Abstract: The integration of AI assistants, especially through the development of Large Language Models (LLMs), into computer science education has sparked significant debate. An emerging body of work has explored using LLMs in education, but few studies have examined the impacts of LLMs on students in entry-level programming courses, particularly in real-world contexts and over extended periods. To address this research gap, we conducted a semester-long, between-subjects study with 50 students using CodeTutor, an LLM-powered assistant developed by our research team. Our results show that students who used CodeTutor (the experimental group) achieved statistically significant improvements in their final scores compared to peers who did not use the tool (the control group). Within the experimental group, those without prior experience with LLM-powered tools demonstrated significantly greater performance gains than their counterparts. Students expressed positive feedback about CodeTutor's capabilities, though they also raised concerns about its limited role in developing critical thinking skills. Over the semester, students' agreement with CodeTutor's suggestions decreased, with a growing preference for support from traditional human teaching assistants. Our analysis further reveals that the quality of user prompts was significantly correlated with the effectiveness of CodeTutor's responses. Building upon our results, we discuss the implications of integrating Generative AI literacy into curricula to foster critical thinking skills, and we examine the temporal dynamics of user engagement with LLM-powered tools. We further discuss the discrepancy between tools' anticipated functions and students' actual capabilities, which sheds light on the need for tailored strategies to improve educational outcomes.
Comments: Accepted to Learning @ Scale 2024
Subjects: Human-Computer Interaction (cs.HC)
DOI: 10.1145/3657604.3662036
Cite as: arXiv:2404.13414 [cs.HC]
  (or arXiv:2404.13414v3 [cs.HC] for this version)

Submission history

From: Wenhan Lyu [view email]
[v1] Sat, 20 Apr 2024 15:58:22 GMT (4453kb,D)
[v2] Thu, 25 Apr 2024 02:53:56 GMT (4455kb,D)
[v3] Fri, 3 May 2024 00:28:46 GMT (4454kb,D)
