Editorial: Columbia has an AI problem – without clear guidelines

By Editorial Board

Ruth Johnson

A new AI Task Force at the college is developing guidelines around artificial intelligence to help students and faculty navigate the use of ChatGPT in the classroom.

As reported by the Chronicle, the task force is presenting its findings to the provost this week, with a formal policy expected at some point. In the meantime, the college will offer resources “explaining what ChatGPT is, how to cite it, ethical ways to use it and also its limitations.”

We are already overdue for guidance on how to deal with ChatGPT in the classroom.

At an arts school, the conversation surrounding AI, particularly AI-generated art, is of special interest. We are happy to hear that the college’s guidelines will not only look at the viral ChatGPT but also Midjourney and DALL-E. The ethical and moral concerns of artificial creation are abundant in the art realm.

ChatGPT, the latest iteration of AI, works by generating text based on a prompt. It can be used to answer essay questions and offers detailed responses, having been trained with human feedback.

However, it has limitations. ChatGPT is trained to generate words but isn’t trained to think critically, which is a major component of writing. It also produces false citations and evidence, including fabricated quotations.

For now, without a policy at Columbia, students are getting very different experiences around ChatGPT use in the classroom.

In one writing-intensive course, an instructor was so concerned about ChatGPT use that all writing assignments were abruptly moved into the class period for the rest of the semester. The assignments are now timed.

Students who juggle jobs and extracurricular activities are glad to have less homework. But students who do not perform well with timed writing assignments are worried about the added pressure of writing on deadline in class.

Other instructors are embracing AI and encouraging students to experiment with how it can be used within their industries.

Without clear guidance, professors are in a tough position as they try to navigate the use of artificial intelligence.

When initially asked about the college’s policy on artificial intelligence, Nate Bakkum, senior associate provost, directed the Chronicle to the Academic Integrity Policy, which states that assignments and examinations should be the products of the student’s own efforts.

There is no specific reference to artificial intelligence.

Many people argue that using ChatGPT is not plagiarism, which Columbia’s student code does prohibit, because ChatGPT doesn’t outright copy someone else’s work. However, some educators still consider it cheating because students haven’t created the work themselves.

Some schools, mostly grade schools, have already restricted access to AI. Columbia will have to decide whether, and to what extent, it will tolerate its use in the classroom.

That’s why the college must work swiftly to draft and implement guidelines for professors to follow in their classrooms to deal with AI of all types. In doing this, we alleviate the pressure on students and professors to figure it out as we go.

This editorial has been updated.

The Chronicle Editorial Board comprises staff members who are independent of most news production. It holds the same standards of fact and, after debate among its members, presents the viewpoint of the Chronicle on issues of importance to the Columbia community.