An AI can inspire intimacy (with a little human help)
- Bryan Walsh, author of Axios Future

Illustration: Sarah Grillo/Axios
The AI startup Primer has harnessed a natural language processing (NLP) model to generate conversation-provoking questions for team building.
Why it matters: The exercise shows how AI, properly trained by experts, can "help humans be more human," as Primer director of science John Bohannon puts it.
- But it also illustrates the very human work that still needs to be done to ensure such models produce meaningful content.
Background: To forge connection and intimacy among Primer's far-flung remote workers, Bohannon hit on the idea of opening his Monday meetings with the company's machine learning staff by posing questions he hoped would "help us go deeper than small talk."
- But manually thinking up new questions each week was taxing, so Bohannon, a machine learning expert, wondered whether he could get an AI model to generate the questions for him.
How it works: Bohannon first wrote a numbered list of some 20 example deep talk questions — such as "What animal would you be for a day?" — and then fed that list as a prompt into a language model called GPT-J-6B, a smaller, open-source alternative to OpenAI's GPT-3 text-generating system (a rough sketch of the approach follows this list).
- In just a few seconds, the model took those example questions and began spitting out new deep talk questions in the same style — hundreds of them.
- In the end, Bohannon had 365 deep talk questions — like "What do you think of when you think of Earth?" or "What is the difference between loving and being loved?" — that he considered good enough to use with his team.
- "I got my 365 questions, and it did it vastly faster and better than I could have done it on my own," says Bohannon. "It came up with stuff I never would have thought of."
The catch: While generating what he calls "Deep Talk" questions took less effort than writing 365 of them himself, Bohannon still had to shape the right prompts and manually select the final questions from the model's output, discarding ones that were repetitive or, in his words, "not safe for work."
- That shows some of the limitations of current language models, which Bohannon notes are "amazing statistical word salad generators," but not yet capable of reliably generating useful content completely on their own.
- "Everyone dreams that someday there will be a giant single neural network that knows how to do everything on its own," he says. "But we're not there yet."