Axios AI+

October 29, 2024
Watching lots of "The Office" and building Lego. Oh, sorry, I thought you asked about my plan for the next couple weeks.
Today's newsletter is 1,155 words, a 4.5-minute read.
1 big thing: AI tutors are changing higher learning
Generative AI is already transforming higher ed, giving students more access to professors' expertise and boosting efficiency for both faculty and students in some fields.
Why it matters: For many college students, the world of "personal AI tutors for everyone" promised by techno-optimists is already here.
The big picture: Computer science professors have had the most success with AI tutors in the classroom so far, mirroring the mass appeal of genAI as a coding assistant. Meanwhile, many educators outside the STEM fields are more likely to view genAI with suspicion.
State of play: In the two years since the release of ChatGPT, the conversations around its use in college classrooms have mostly focused on cheating. But some professors and their students are using it to boost individual learning and make education more equitable.
- In May, OpenAI released ChatGPT Edu, a more affordable tool for college students, faculty, researchers and campus administrators that the company says includes "enterprise-level" security.
- "Although it's still the early stages of the adoption curve, AI for tutoring, in particular, is showing promise," Leah Belsky, VP and general manager of education at OpenAI, tells Axios.
- Belsky calls genAI "a critical skill," and says that the more students use the tools in college, the better prepared they'll be for their careers.
Case in point: David Malan is the Gordon McKay Professor of the Practice of Computer Science at Harvard. But his LinkedIn profile job title just says "I teach CS50."
- Computer Science 50 — an entry-level computer programming class that Harvard says is designed for "majors and non-majors alike" — is the university's largest class.
- It's also streamed so anyone can audit it for free on platforms like edX, YouTube, Apple TV and Google TV, and the course materials are freely distributed under a Creative Commons license.
- Since the summer of 2023, students taking the course remotely have also had an AI-powered "teaching assistant" via the CS50 Duck — a chatbot built on OpenAI's API that helps them check their code and get answers to questions about the course.
- Malan tells Axios that genAI can already approximate a pretty good teaching assistant. "It's wonderfully empowering for that demographic of folks who have never had nearly as much of a support structure" as the students at elite private colleges, he says.
Fun fact: Duck is named after "rubber duck debugging" or "rubberducking" — a programming practice of debugging code by forcing yourself to explain it line by line to an inanimate object, like a rubber duck.
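The newsletter doesn't describe how the CS50 Duck is built beyond "a chatbot built on OpenAI's API," but the general pattern is simple to sketch. Below is a minimal, illustrative Python example of a rubber-duck-style tutor using OpenAI's chat completions API — the prompt wording, function names, and model choice are assumptions for illustration, not CS50's actual implementation:

```python
def build_messages(question: str, code: str) -> list[dict]:
    """Assemble one tutoring turn: a system prompt that steers the
    model toward hints rather than finished solutions, plus the
    student's question and code."""
    system_prompt = (
        "You are a friendly rubber-duck teaching assistant for an intro "
        "programming course. Guide the student with questions and hints; "
        "do not write complete solutions for them."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{question}\n\nMy code so far:\n{code}"},
    ]

def ask_duck(question: str, code: str, model: str = "gpt-4o-mini") -> str:
    """Send one question to the model and return its reply text.
    Requires the OPENAI_API_KEY environment variable to be set."""
    # Imported lazily so build_messages() works without the SDK installed.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(question, code),
    )
    return response.choices[0].message.content
```

The system prompt is where the "too helpful" problem Malan describes gets managed: tightening or loosening that instruction is the main lever for how much of a solution the bot gives away.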
By the numbers: Students who were given access to an AI tutor learned more than twice as much in less time compared to those who had in-class instruction, according to a study by two Harvard lecturers of 194 students in the university's Physical Sciences 2 course.
- Malan cautions against seeing this as a risk to the jobs of professors or graduate student teaching assistants: "We already have too few teachers as it is."
Yes, but: While many humanities professors recognize that ChatGPT is here to stay, the AI tutor conversation in those departments is still clouded by genAI's potential to supercharge plagiarism.
- Although ChatGPT hasn't increased the instances of cheating, according to research from Stanford, it has made it harder for professors to catch plagiarizers.
- Writing in the Atlantic, Ian Bogost, a computer science and engineering professor and director of the film and media studies program at Washington University in St. Louis, talked to professors who were demoralized by the advent of ChatGPT.
- "It's just about crushed me," a writing professor from a school in Florida told Bogost. "I fell in love with teaching, and I have loved my time in the classroom, but with ChatGPT, everything feels pointless."
- Even Malan admits that CS50 Duck can be too helpful. Sometimes "it spits out too many lines of code such that it's effectively spoiling a problem for a student," he says. "But that will surely get better over time as the models improve."
Between the lines: Some students are finding it easier to ask questions of chatbots not only because they are more accessible than professors but because students perceive them as less judgmental.
- Malan shared feedback from a student who said, "I love how AI bots will answer questions without ego and without judgment, generally entertaining even the stupidest of questions without treating them like they're stupid."
The bottom line: genAI tutors are already succeeding in the sciences and coding, but it remains to be seen whether the technology can shed its reputation as a pal to plagiarists and find the same success in humanities classes.
2. Meta, OSI tussle over definition of open source AI
Meta took issue with a new definition of open source AI that would require model creators to detail their sources of training data, among other rules, to meet the standard.
Why it matters: Meta makes its Llama models freely available for use, but doesn't provide full disclosure of all of the elements that go into them.
Driving the news: The Open Source Initiative published its definition of what constitutes a truly open source AI model, outlining four characteristics that should apply to AI just as they do to software in order to be considered open source.
The organization says people should be able to:
- Use the system for any purpose, without needing permission;
- Study how the system works and see its components;
- Modify the system for any purpose and without restrictions;
- Share the system with others, with or without modifications, for any purpose.
The other side: Meta, for its part, said it disagrees with the OSI's definition.
- "There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today's rapidly advancing AI models," a spokesperson told Axios, adding that the company will continue to work with the OSI and other industry groups.
Between the lines: The OSI can't stop anyone from calling their products "open source," but its new definition gives ammo to advocates of fuller disclosure of the weights and data that differentiate one model from another.
Go deeper: "Open" software needs an AI rethink
3. Training data
- Apple launched the first wave of Apple Intelligence features with the official release of its iOS 18.1 update. (We reviewed them when they were released in beta last month.) Apple also debuted new iMacs as part of what it said will be a week filled with Mac news. (The Verge, Axios)
- OpenAI CFO Sarah Friar says 75% of the company's revenue comes from consumers who pay for products. (Bloomberg)
- According to AI detection startup Pangram Labs, 47% of content on the blogging platform Medium is "likely AI-generated," which is "orders of magnitude more" than the rest of the internet. (Wired)
4. + This
An airport in New Zealand wants to impose a three-minute limit on curbside goodbyes.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.