Beyond cheating: teachers rethink education in the age of AI

Another school year has started, and something big is happening in classrooms everywhere. Teachers and professors are changing the way they teach so that students continue to learn, even as artificial intelligence tools become more common. Since ChatGPT arrived, students have found it easier to cut corners. But here’s the interesting part: many teachers don’t just see AI as a problem. They think it could be a really useful, even essential, tool.
The double-edged sword of AI in learning
John von Seggern, who runs the Futureproof Music School, an online hub for aspiring electronic music producers, points out one major upside: AI tutors could offer ‘true personalization at scale.’ Imagine every student having a dedicated, one-on-one learning experience – that’s the promise. Plus, AI can help students offload the boring, more ‘menial’ parts of research, freeing them up to focus on the truly important stuff. Some even argue it helps students pick up extra skills faster.
The cheating conundrum and deeper concerns
Now, let’s be honest, AI hasn’t exactly had a glowing reputation in education, and there’s a reason for that. Tech news outlets reported a sharp drop in ChatGPT usage – measured in ‘AI tokens’ – right as the 2024 school year ended: a polite way of saying students had been leaning heavily on the chatbot to churn out their assignments. It’s no surprise that many educators have voiced their frustration.
John von Seggern accepts that cheating is a big worry. But for Daniel Myers, who teaches computer science at Rollins College, the problem runs deeper than copied work. He told Cointelegraph, “The biggest challenge of AI is that it stops us from seeing the learning behind what students submit.” In the pen-and-paper era, he explains, a paper with perfect citations let you reasonably assume the student had learned how to cite properly. With AI, the finished product tells you nothing about what the student actually understood.
Myers emphasizes that true learning requires ‘friction’ – it needs to be challenging enough to stretch you. So, even if a student isn’t technically cheating, simply letting AI do all the heavy lifting robs them of that crucial, challenging, and ultimately valuable educational experience.
Educators get creative: new rules for the AI era
As teachers and professors grow more familiar with AI – and with how easily it becomes a shortcut – they aren’t avoiding it. Instead, they’re adapting their methods to keep students genuinely engaged with the material and doing the work themselves.
For example, Von Seggern now asks his students to submit all their work, not just the finished project. Seeing the process, not just the product, both deters cheating and shows him how students actually work. He says, “We’re happy for students to use AI in their work, but we need to see how they do it to help them improve their skills.” While AI can make things easier, he adds, teachers still have to “design the learning process so it still requires real understanding.”
Daniel Myers echoes this sentiment, urging educators to “lean into designing and curating the educational experience.” This means carefully rethinking what a class aims to achieve in a world where AI is ever-present. He shares a striking example from computer science, where AI is so good at coding that it could “completely obliterate” many of his old undergraduate assignments.
To deal with this, Myers has moved most of the core programming practice into classroom and lab time, where he can observe and work with students directly. Assignments outside of class are now bigger and more creative, and even come with instructions on how to use AI safely in the design process. Myers also puts ‘agency’ first when choosing assignments. “Am I asking students to just answer a question, or am I challenging them to set a vision and choose to pursue it?” he asks. He believes that students given more control and a supportive environment will naturally become dissatisfied with low-quality AI output.
When used wisely, AI can ‘supercharge’ learning
But it’s not all about problems and policing. Despite the challenges, AI also opens up exciting new doors for education. Myers sees it as a tool that can truly ‘supercharge’ learning when applied thoughtfully. It allows students to access “a range of knowledge, skills and perspectives that would be difficult to obtain” otherwise. He’s witnessing students effectively leverage AI to tackle ambitious, creative projects that have a strong personal touch. “We often say that AI is like having a minor in everything,” he quips.
Von Seggern shows how AI can take over the tedious parts of tasks like music production, giving his students more time for the creative side. And AI tutors, done well, can deliver the kind of one-on-one attention a single teacher rarely can. He envisions a “24/7 personal learning coach” that adapts to every student’s needs and pace and offers timely reminders. That, he says, could shrink the feedback loop from days to seconds, dramatically accelerating student learning.
Tech giants step up: building AI for learning
AI companies, recognizing the demand, are now building models specifically for education – spurred in part by university administrators who want to understand what the technology can do. Anthropic, for example, has developed ‘Claude for Education,’ which builds on its main Claude model but adds features designed especially for learning. A spokesperson explained that its ‘Learning Mode’ guides students through the learning process rather than simply handing them answers: instead of just solving a calculus problem, it helps students grasp the underlying ideas and the general approach to solving such problems.
To ensure their tools align with educational principles, Anthropic even formed a Higher Education Advisory Board in July 2025, led by Rick Levin (former Yale president). This board, featuring members from top universities like Stanford and Michigan, helps steer development to ensure it aligns with ‘educational values and pedagogical best practices.’ They’re also partnering directly with universities to understand the real-world implementation challenges.
However, even Anthropic acknowledges the potential pitfalls, such as misuse or a lack of deep engagement. Their research from August revealed a striking finding: nearly half (47%) of student-AI conversations involve students simply seeking direct answers with minimal active engagement. This raises legitimate concerns about over-reliance on AI, potentially hindering the development of crucial critical thinking skills. Anthropic states they will continue to analyze usage patterns and openly share both the successes and the areas that need improvement as they develop more tools for students and teachers.
The future of learning: a collaborative path forward
It’s clear that AI is now a permanent part of education. Thankfully, teachers aren’t just playing defense; they’re finding new ways to address the problems AI creates and to improve their students’ learning experiences. It won’t be an easy journey. But if teachers, developers, and students work together, we can build technology that supports learning rather than undermining it.