What ChatGPT secretly thinks about San Diego

Illustration: Shoshana Gordon/Axios
Researchers have found that large language models (LLMs), such as ChatGPT, are just as biased as the rest of the world, including in their perception of San Diego's chill culture.
The big picture: Internet scholars at the University of Oxford and the University of Kentucky recently compared 91 U.S. cities with populations over 250,000 across a range of categories to gauge AI inequality and bias, including social and physical attributes, food quality, governance, politics and business climate.
Zoom in: San Diego ranked in the top five among U.S. cities considered more relaxed, welcoming and hospitable, with better vibes and chiller, more beautiful people.
- ChatGPT thinks we're smarter than all but 18 of the surveyed cities, and San Diego was among the best for innovative energy solutions, medical and scientific research and helping the environment.
- On the other hand, ChatGPT sees the city as lacking a strong sense of community, hard workers, affordable healthcare and affordable, high-quality food.
Try it out: You can spend hours digging into dozens of qualities ChatGPT attributes to San Diego, comparing it with other cities in those same areas.
State of play: Last year, over 50% of adults in the U.S. reported using LLMs, and as more people rely on the information these platforms deliver, geographic, racial and economic stereotypes are perpetuated further.
- The study's authors offer the concept of the "silicon gaze" to explain how AI models "reproduce and amplify" inequalities and biases among countries, states and cities, since the models tend to be shaped by predominantly male, white and Western sources.
The other side: "ChatGPT is designed to be objective by default and to avoid endorsing stereotypes," a ChatGPT spokesperson told Axios in a statement.
- "Research based on forced-choice prompts and older models doesn't reflect how ChatGPT is typically used or how current models behave today. We continue to improve how ChatGPT handles subjective or non-representative comparisons, guided by real-world usage, ongoing evaluations, and user feedback."
Yes, but: LLMs train on what's already on the internet, so benign stereotypes like "Southern hospitality" and more damaging ones, such as "Southerners are lazier and less intelligent," will persist.
- Questions and comparisons based on subjective criteria such as likability, attractiveness or intelligence also favored higher-income areas.
The bottom line: The bias from LLMs — even if it's mostly positive, like in San Diego's case — can extend into policies and decision-making if not viewed with a critical and nuanced lens.

