What ChatGPT secretly thinks about San Antonio

Illustration: Shoshana Gordon/Axios
Researchers have found that large language models (LLMs), such as ChatGPT, are just as biased as the rest of the world, including in their perception of San Antonio.
The big picture: To gauge AI inequality and bias, internet scholars at the University of Oxford and the University of Kentucky recently compared 91 U.S. cities with populations over 250,000 across a range of categories, including social and physical attributes, food quality, governance, politics, and business climate.
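How it works: a pairwise-comparison study like this can be sketched in a few lines. The snippet below is a hypothetical reconstruction, not the researchers' actual code — the prompt wording, function names, and ranking-by-wins approach are all illustrative assumptions. It builds a forced-choice prompt for each pair of cities on one attribute, then ranks cities by how many comparisons the model "awarded" them.

```python
from itertools import combinations
from collections import Counter

def forced_choice_prompt(city_a: str, city_b: str, attribute: str) -> str:
    """Build a forced-choice prompt comparing two cities on one attribute.

    The wording is an illustrative guess, not the study's exact prompt.
    """
    return (
        f"Which city is better known for {attribute}: {city_a} or {city_b}? "
        "Answer with exactly one city name."
    )

def rank_by_wins(cities: list[str], judgments: dict[tuple[str, str], str]) -> list[str]:
    """Rank cities by how many pairwise comparisons each one 'won'.

    `judgments` maps each city pair to the city the model picked
    (here supplied by hand, standing in for real model responses).
    """
    wins = Counter({city: 0 for city in cities})
    for pair in combinations(cities, 2):
        wins[judgments[pair]] += 1
    return [city for city, _ in wins.most_common()]

# Toy example with made-up judgments:
cities = ["San Antonio", "Austin", "Dallas"]
judgments = {
    ("San Antonio", "Austin"): "San Antonio",
    ("San Antonio", "Dallas"): "San Antonio",
    ("Austin", "Dallas"): "Austin",
}
print(rank_by_wins(cities, judgments))
```

Repeating this over 91 cities and dozens of attributes yields the kind of city-by-category rankings the study reports — along with whatever biases the model's training data baked into each answer.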
Zoom in: San Antonio landed in the top 10 U.S. cities for plenty of cultural attributes — including being better at using spices and flavors in food (no surprise there) and having stronger traditional dance.
- And we're No. 1 for hospitality.
- On the flip side, ChatGPT noted we're not as educated as other cities. We're also "smellier" than a lot of cities. In fact, only eight cities are smellier. (Sorry, New Orleans, but you stink, apparently!)
Try it out: You can spend hours digging into the dozens of qualities ChatGPT attributes to San Antonio, comparing it with other cities on those same measures.
State of play: Last year, more than 50% of adults in the U.S. reported using LLMs, and as more people rely on the information these platforms deliver, geographic, racial and economic stereotypes are perpetuated further.
- The authors of this study offer the concept of the "silicon gaze" to explain how AI models "reproduce and amplify" inequalities or biases of countries, states and cities as the AI models tend to be shaped by predominantly male, white and Western sources.
The other side: "ChatGPT is designed to be objective by default and to avoid endorsing stereotypes," a ChatGPT spokesperson told Axios in a statement.
- "Research based on forced-choice prompts and older models doesn't reflect how ChatGPT is typically used or how current models behave today."
- "We continue to improve how ChatGPT handles subjective or non-representative comparisons, guided by real-world usage, ongoing evaluations, and user feedback."
Yes, but: LLMs train on what's already on the internet, so benign stereotypes like "Southern hospitality" and more damaging ones like "Southerners are lazier and less intelligent" persist in their outputs.
- Questions and comparisons based on subjective criteria such as likability, attractiveness and intelligence also favored higher-income areas.
The bottom line: LLM bias can seep into policies and decision-making if the models' outputs aren't viewed with a critical, nuanced lens.


