In a recent story for New York magazine, writer James D. Walsh paints a bleak picture of academia unraveling under the weight of AI-fueled dishonesty. Students, he argues, are outsourcing their essays, coding assignments and even job interviews to tools like ChatGPT, leaving professors disillusioned and institutions scrambling to respond.
But what if the real crisis isn’t cheating but stagnation? What if the problem isn’t that students are using AI, but that education hasn’t evolved to meet the moment?
Generative AI is not going away. A 2023 survey found that nearly 90% of college students had used ChatGPT for coursework. And while some institutions have attempted to ban or restrict its use, enforcement is inconsistent and detection tools are unreliable, often flagging the work of ESL students or neurodivergent writers while missing polished AI-generated prose.
Columbia College’s policy on AI prohibits the “unauthorized use” of generative AI and considers it cheating. However, the college “remain[s] committed to exploring potential changes to the policy, as necessary, in the future,” according to the policy. In practice, AI use varies widely from class to class, with some instructors prohibiting it entirely and others teaching students how to use it.
The truth is, students are using AI because it works. It helps them brainstorm, organize and polish their work under tight deadlines and mounting pressure. For many, it’s a lifeline, not a shortcut: a way to structure an essay, not to plagiarize but to scaffold thinking.
The panic over AI-assisted cheating echoes past moral panics over calculators, Wikipedia and even spellcheck. Each time, the educational system faced a choice: resist or adapt. And each time, adaptation led to better outcomes. Calculators didn’t destroy math education; they freed students to focus on problem-solving rather than arithmetic. Wikipedia didn’t end research; it democratized access to information.
So why not treat AI as the next evolution in learning tools?
Rather than banning AI, educators should integrate it into curricula with clear guidelines. Teach students how to use it ethically, as a collaborator, not a crutch. Encourage transparency by requiring students to document their AI interactions, much like citing sources. Develop assessments that prioritize critical thinking, creativity and personal reflection, tasks that AI struggles to replicate authentically.
Moreover, institutions like Columbia should invest in AI literacy for both students and faculty. Understanding how these tools work, what their limitations are and where their biases lie is crucial. By fostering a culture of informed use, we can mitigate misuse and promote responsible innovation.
The goal of education has always been to prepare students for the real world, a goal Columbia’s incoming president and CEO, Shantay Bolton, has called vital to the college’s success. In today’s world, AI is a reality. By embracing it thoughtfully, we can enhance learning, uphold academic integrity and equip students with the skills they need to thrive in an AI-augmented future.
The choice is ours: cling to outdated models or evolve alongside our tools. Let’s choose evolution.
Copy edited by Trinity Balboa