This year, students who enroll in Computer Science 50: Introduction to Computer Science, Harvard’s flagship coding course, will have a new learning tool at their disposal: artificial intelligence.
Starting in the fall, students will be able to use AI to help them find bugs in their code, give feedback on the design of student programs, explain unfamiliar lines of code or error messages, and answer individual questions, CS50 professor David J. Malan ’99 wrote in an emailed statement.
AI use has exploded in recent months: As large language models like ChatGPT have become freely accessible, companies are laying off workers, experts are sounding the alarm about the proliferation of disinformation, and academics are grappling with AI's impact on teaching and research.
Harvard itself did not have an AI policy at the end of the fall 2022 semester, but administrators have ramped up communication about AI usage in classes since then.
Malan wrote that CS50 has always incorporated software, and called the use of AI “an evolution of that tradition.” Course staff is “currently experimenting with both GPT 3.5 and GPT 4 models,” Malan wrote.
“Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually,” Malan wrote.
Malan wrote that a “CS50 bot” will be able to respond to frequently asked student questions on Ed Discussion, a widely used discussion board software for STEM classes, and that AI-generated answers can be reviewed by human course staff. He added that this feature is currently being beta-tested in the summer school version of CS50.
AI programs like ChatGPT and GitHub Copilot are “currently too helpful,” Malan wrote. In CS50, the AI technology will be “similar in spirit” but will be “leading students toward an answer rather than handing it to them,” he added.
The AI being incorporated into CS50 will point students toward bugs in their code “rather than outright solutions,” Malan wrote. The new programs will also explain potentially complex error messages in simpler terms and offer “student-friendly suggestions for solving them.”
While Malan is committed to adding AI to CS50 as a resource for students, he wrote that the course itself will not increase in difficulty. When asked whether he was concerned about students using AI to cheat or commit other forms of academic dishonesty, Malan wrote that students have always been able to access information in potentially unauthorized ways.
“But AI does facilitate such, all the more anonymously and scalably,” Malan added. “Better, then, to weave all the more emphasis on ethics into a course so that students learn, through guidance, how to navigate these new waters.”
CS50 is also one of the most popular courses on edX, an online learning platform launched by Harvard and MIT to make their courses accessible online, which the two universities sold for $800 million in 2021. Malan wrote that the incorporation of AI into the course will also extend to the edX version.
“Providing support that’s tailored to students’ specific questions has long been a challenge at scale via edX and OpenCourseWare more generally, with so many students online, so these features will benefit students both on campus and off,” Malan wrote.
CS50 — like many large lecture courses at Harvard — has also been plagued by complaints of overworked and underpaid course staff.
Malan wrote that while CS50 course staff have been using software tools to grade more efficiently, he hopes that AI integration will further decrease the time they spend grading.
“[A]ssessing, more qualitatively, the design of students’ code has remained human-intensive. Through AI, we hope to reduce that time spent, so as to reallocate TFs’ time toward more meaningful, interpersonal time with their students, akin to an apprenticeship model,” he wrote.
There were 50 course assistants and teaching fellows for CS50 in fall 2023.
Malan wrote that he expects “early incarnations” of the new AI programs to “occasionally underperform or even err.”
“We’ll make clear to students that they should always think critically when taking in information as input, be it from humans or software,” he wrote. “But the tools will only get better through feedback from students and teachers alike. So they, too, will be very much part of the process.”
Source & Image Credit: The Harvard Crimson