Ohio State preps for an AI-driven future

Illustration: Allie Carl/Axios
Ohio State University is undertaking one of the largest-scale AI implementations in the young life of the emerging technology.
Why it matters: OSU is one of the country's biggest universities and one of Ohio's biggest employers, meaning more than 100,000 people are part of a massive embrace of AI.
Flashback: University leaders tell Axios the school's AI era began a year ago with the appointment of new provost Ravi Bellamkonda.
- "He came in with the energy of, 'If we're going to really unlock and support this work at scale, we need to be able to make a bold statement about it,'" says Shereen Agrawal, executive director of OSU's Center for Software Innovation.
- The following months represented a period of "motivated and excited exploration" as the university made tools available and empowered experimentation, according to Anika Anthony, associate vice provost of the Drake Institute for Teaching and Learning.
The big picture: OSU unveiled its new AI focus this June with the announcement of its "AI Fluency" initiative.
- It set out to "embed AI education into the core of every undergraduate curriculum," teaching students technical AI skills along with an ethical understanding of how "AI tools can be harnessed for good."
- "We're not trying to replace the incredible work of our staff and faculty and administrators," Anthony tells Axios. "We are truly looking at how we can learn about and learn with and work with AI."
Zoom in: AI Fluency is based on six "key learning outcomes" the school wants undergraduates to achieve by graduation.
- They include teaching students to explain AI concepts, explore its benefits and limitations, evaluate its ability to receive inputs and create outputs, assess its work for accuracy and design new applications around it.
Between the lines: The AI race is often framed in terms of efficiency or the workers it can replace — but OSU leadership sees it as the next paradigm shift to prepare students for.
- "We take seriously that we're a land-grant institution, and we're preparing future leaders and decision-makers. We need our students to be aware of not only the possibilities of what AI can do, but have some hands-on experiences and awareness of things we need to be cautious about," Anthony says.
Diversity of AI tools and applications
With the potential for 100,000 new users, OSU could have launched a feeding frenzy among AI platforms that are hungry for exclusivity.
- Instead, the university created a list of approved tools and encourages a diversity of platforms.
- "Given the pace of change in this field, we've focused on principles that ensure responsible, secure and ethical use of data rather than prescribing specific technologies," chief information officer Rob Lowden tells Axios.
State of play: The same flexibility applies to how OSU hopes its students, faculty and staff will use the technology.
- Faculty and students in each department are encouraged to use AI for their own unique purposes.
- "It's about application of AI to the field of study, to the thing you care about, to what you plan to do after graduation," Agrawal says.
Yes, but: OSU leaders are adamant that they're incentivizing and enabling AI use within the specific disciplines of staff and faculty — not mandating it.
- Anthony admits that balance is a "narrow path" to walk, but is confident in the university's approach.
- "We're incentivizing investment in their professional development."
The bottom line: Higher education is a place for critical assessment and experimentation, Agrawal tells us, and the university hopes the same applies to the early days of broad AI adoption.
- "What better place than a place that has people who know how to do research and run experiments and think through all the aspects?"
The ethics of AI in higher education
As students and staff embrace AI across the country, Ohio State is leading an interdisciplinary group of academics from four universities to guide that adoption in a responsible way.
What they did: Experts from Ohio State, Baylor, Rutgers and Northeastern have formed the Center on Responsible AI and Governance (CRAIG), operating out of OSU's Moritz College of Law.
- The program pursues holistic research and study of AI usage, aiming for use that is "safe, accurate, impartial, and accountable."
What they're saying: Those involved are taking their responsibility very seriously, Dennis Hirsch, OSU professor of law and director of CRAIG, tells Axios.
- "If the world is to achieve AI's promise and all the wonderful things it can bring to us, it's going to need to figure out how to use AI responsibly in ways that align with human values and that support human flourishing."
To achieve that goal, CRAIG assembled a diverse team with backgrounds ranging from law and business to philosophy and human resources.
- In an AI landscape dominated by business and industry, the "central role" of CRAIG is academic, understanding the issues at play and asking hard questions about the evolving field.
What we're watching: Whether you like it or not, AI is here — CRAIG's philosophy is to take an active role in advancing the technology.
- "AI holds real promise, and it poses real risks," Hirsch says. "It is by squarely facing and actively addressing those risks that we will achieve AI's promise. We currently have neither the knowledge nor the workforce required to do this. The center will help to build both."
