Scientists are opening generative AI's black box and beginning to understand the models' inner workings.
Why it matters: The prospect of harnessing genAI to make decisions and perform tasks is pushing researchers to better understand how AI systems work, Axios managing editor Alison Snyder writes.
🧠 Zoom in: One way AI researchers are trying to understand how models work is by looking at the combinations of artificial neurons that are activated in an AI model's neural network when you engage with it.
These "features," as they're known, relate to different places, people, objects and concepts.
OpenAI looked at part of its GPT-4 network and found 16 million features, "akin to the small set of concepts a person might have in mind when reasoning about a situation," the company said.
The team found features related to rhetorical questions, price increases and human imperfection.
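The core idea above — that a small set of "features" combine into the raw activations of many neurons, and can be recovered from them — can be shown in a toy NumPy sketch. This is a loose illustration only, not OpenAI's actual method or code; every name, dimension, and the use of a known feature dictionary here is an invented simplification (real interpretability work has to *learn* the dictionary from activations).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a model layer has 8 neurons, and we pretend
# there are 3 underlying "features," each a fixed direction in
# neuron-activation space.
n_neurons, n_features = 8, 3
feature_directions = rng.normal(size=(n_features, n_neurons))

def activations_for(feature_strengths):
    """Neuron activations produced by a sparse mix of features."""
    return feature_strengths @ feature_directions

# A single input typically activates only a few features at once.
strengths = np.array([0.0, 1.5, 0.0])  # only feature 1 is active
acts = activations_for(strengths)

# Recover which features are active by projecting the activations back
# through the (here, known) feature dictionary. The directions are
# generically linearly independent, so the pseudoinverse inverts the mix.
recovered = acts @ np.linalg.pinv(feature_directions)
print(np.round(recovered, 2))  # → [0.  1.5 0. ]
```

The recovery step works here because we handed the code the true feature directions; the hard research problem is finding such a dictionary for a real network's millions of neurons without knowing it in advance.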