Boulder-based Gloo evaluates AI on faith values

Illustration: Sarah Grillo/Axios
Boulder-based Gloo is trying to answer a new question in the AI age: Can chatbots help people flourish?
Why it matters: As large language models (LLMs) gain traction in churches, nonprofits and schools, increasingly fielding questions once reserved for pastors or counselors, Gloo argues AI should strengthen mental, relational and spiritual health, not just automate tasks.
Driving the news: Gloo recently launched its "Flourishing AI" tool, designed to evaluate how leading AI platforms align with values tied to human flourishing. The company builds data and AI tools for faith-based and mission-driven organizations and says it wants to shape how AI systems are trained and deployed in religious and values-based settings.
How it works: Drawing on Harvard research on well-being, the tool scores major LLMs across seven categories: character, happiness, faith, relationships, finances, health and meaning.
- Models are evaluated through two lenses — a general faith perspective and a more explicitly Christian framework — to test how responses shift depending on moral and theological context.
Zoom in: In its most recent round of evaluations in December, OpenAI's ChatGPT models scored highest through a general faith lens, while Anthropic's Claude performed best under a Christian-focused framework.
Yes, but: No platform excelled across the board. All struggled with certain faith-related dimensions, particularly under the Christian-focused lens.
- Nick Skytland, Gloo's vice president of development and AI research, pointed to financial advice as an example.
- Most AI models are trained to optimize for saving money or maximizing profits, he said. But in many churches, financial guidance also emphasizes generosity, charitable giving and impact investing.
- "Think of how finances are taught in a Sunday morning in church," Skytland said. "The way that comes across in the (AI) model is very different."
Between the lines: Skytland noted that while tech companies and lawmakers are concerned with guardrails to address illegal or harmful uses of AI, Gloo's model hopes to illustrate how the technology can be used for good.
- "The industry is establishing a moral floor," Skytland said. "The harder and arguably more important challenge is to see how AI advances humankind."
- That question is becoming more urgent as people increasingly turn to AI for therapy, companionship and spiritual inquiry, said Oliver Roll, Gloo's chief marketing and communications officer.
- "Questions once reserved for pastors or counselors ... it's just more efficient to be able to ask the model," Roll told us. "They're asking it theological questions."
The bottom line: Gloo's leaders say they hope their tool helps AI avoid the unintended consequences that accompanied the rise of social media.
- "Social media failed us in terms of the significant risks as well as the significant benefits," Roll said. "There are very, very harsh realities around mental health and community connection, and I think with AI the stakes get a thousand times greater."
- "Tech is inherently neutral — it can be used for good or bad," he added. "So we're focused on how to harness this power for good."
