Report: Educators are turning to AI, even for grading

Illustration: Allie Carl/Axios
Anthropic analyzed 74,000 anonymized conversations with its Claude chatbot to understand how university educators are using AI.
Why it matters: The study found that some educators are using chatbots for grading — a task many teachers say should not be outsourced.
How it works: Anthropic analyzed anonymized Claude.ai conversations from higher education professionals between May and June.
- The company used a tool called Clio ("Claude insights and observations"), which tracks Claude usage while preserving user privacy.
- Researchers have used Clio to identify Claude's top general use cases, study how students use the chatbot and detect abuse.
- Anthropic also partnered with Northeastern University to survey faculty members directly about AI in their classrooms.
By the numbers: Clio found that 57% of higher ed instructors' AI chats in its sample involved developing curricula, 13% were conducting academic research, and 7% involved assessing students' performance.
The big picture: Anthropic says educators used AI for grading less often than other tasks. But 48.9% of the Claude conversations about grading turned the task fully over to the bot in ways that researchers found "concerning."
- "Using AI in grading remains a contentious issue among educators," Anthropic wrote in the report.
- "Ethically and practically, I am very wary of using [AI tools] to assess or advise students in any way," one Northeastern professor told Anthropic.
- "Students are not paying tuition for the LLM's time, they're paying for my time. It's my moral obligation to do a good job (with the assistance, perhaps, of LLMs)."
The intrigue: After years of universities banning students from using chatbots, students are now pushing back as they see professors adopt the same tools.
- A student at Northeastern demanded her tuition back after catching one of her professors using AI and not disclosing it, per the New York Times.
Yes, but: Kunal Handa, research resident at Anthropic's Societal Impacts team, told Axios there are "fundamental limitations of analyzing this sort of data from a chat interface."
- Clio might show that educators are using Claude to get feedback on a student essay — but there's no way to know whether the instructor shared the chatbot's response with the student, with or without editing it, or whether they disclosed their use of AI to the student.
- Handa says the research helps Anthropic track high-level usage patterns and monitor how they change over time.
What we're watching: Even as Anthropic and OpenAI release Socratic modes for their chatbots, intended to encourage learning and deter cheating, professors remain frustrated with how difficult the tools have made assessing students.
- One Northeastern professor told Anthropic that they "will never again assign a traditional research paper."
- The same professor said that after they redesigned their assignments, a student complained that Claude and ChatGPT were useless for completing the work.
- "I told them that was a compliment," the professor told Anthropic.
