We've got you covered

The Columbia Chronicle



Editorial: Columbia’s AI policy is the right move but could confuse students who get different messages from their instructors

Columbia is allowing individual instructors to set AI policies in their courses instead of issuing a blanket, campus-wide rule for using popular language models like ChatGPT.

This guidance is similar to policies at other colleges and universities in Chicago, including DePaul University and Roosevelt University.

All three institutions allow professors to set their own AI policies and enforce them.

At Columbia, the guidance on AI follows the school’s Academic Integrity Policy, which calls for “scholars at all levels to demonstrate transparency and honesty about sources of knowledge consulted or used for assignments.”

This is the right move because it is important for the college not to ignore the reality of AI’s role in various industries, including in creative fields. 

Overall, it is important for the school not to overreact to AI. Especially in fields that rely on AI, such as advertising and tech, banning it entirely would harm students in their future careers. By being transparent, teachers can show how AI can be used productively and ethically.

The problem is that Columbia's policy puts the responsibility entirely on instructors. With varying approaches, students could receive mixed messages about when and how to use AI in the classroom.

Instructors will have to know where the line falls between AI that helps students and AI that is used to plagiarize.

That is tough to navigate.

ChatGPT and DALL-E 2 are trained on vast amounts of material scraped from the web: ChatGPT on text, DALL-E 2 on images. Particularly in the case of DALL-E 2, copyrighted images could be used to create new AI-generated art.

Is it theft?

That might depend on the instructor and on the particular course rules. Because rules will vary, students have to follow multiple guidelines. 

The college should make sure that when an allegation is made following the process outlined in the Academic Integrity Policy, the department chairs, the school deans and other administrators really understand AI. 

That is the only way to ensure that students are being treated fairly when it comes to AI. 
