Jun 16, 2017 - Technology

Facebook unveils a chatbot that can bluff and negotiate


In a new paper, Facebook researchers claim an advance in the capacity of chatbots to conduct sophisticated conversations — the ability to negotiate.

In a blog post, artificial intelligence researcher Mike Lewis and four colleagues say they trained chatbots to find a fair resolution of a conflict over possessions. The interesting thing is that the chatbots started out knowing only their own interests (which of the possessions they wanted) and not those of their negotiating opponent. But, speaking grammatical English, they figured out a reasonably just solution by themselves. At times, they even employed deceit.

Why it's important: The research must now advance beyond the narrow focus of this experiment. But the proof of concept points to digital assistants more advanced than those commercially available today, ones that can not only organize your calendar but also resolve complex conflicts, such as sales negotiations.

In the setup, the Facebook researchers asked the chatbots to divide a nonsensical group of items between themselves: two books, a hat and three balls. Each bot understood its own interest because each of the items was assigned a value through a point system. The researchers also provided the bots with dialogue, questions and answers, to use along the way. The bots could not walk away, or they would lose all their potential points.
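To make that setup concrete, here is a minimal sketch in Python of how such a point system could work. The item counts come from the article; the specific point values, the deal shown, and the function names are hypothetical illustrations, not Facebook's actual code.

    # Hypothetical sketch of the negotiation setup described above.
    # Each bot privately values the shared pool of items, and a deal
    # is scored against those private values.

    ITEM_POOL = {"book": 2, "hat": 1, "ball": 3}  # counts from the article

    # Illustrative private valuations; each bot sees only its own.
    values_a = {"book": 1, "hat": 4, "ball": 1}
    values_b = {"book": 3, "hat": 0, "ball": 1}

    def score(allocation: dict, values: dict) -> int:
        """Points a bot earns for the items it receives under a deal."""
        return sum(values[item] * count for item, count in allocation.items())

    # A candidate deal: bot A takes the hat and one ball, bot B takes the rest.
    deal_a = {"book": 0, "hat": 1, "ball": 1}
    deal_b = {item: ITEM_POOL[item] - deal_a[item] for item in ITEM_POOL}

    print(score(deal_a, values_a))  # 5 points for bot A
    print(score(deal_b, values_b))  # 8 points for bot B

Because each bot sees only its own valuation table, it has to infer from the conversation what the other side wants, which is what makes the negotiation, and the bluffing described below, nontrivial.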

  • The result is interesting because it shows it's possible to build a bot system that can work through complex human behaviors "without having to truly understand what the complication is about," Alex Rudnicky, a professor at Carnegie Mellon University, told Axios. Rudnicky, who researches communication between humans and robots, said the next step "is to take that idea and try it on a different kind of negotiation."

Among the advanced capabilities the bots developed: bluffing. "We find instances of the model feigning interest in a valueless issue, so that it can later 'compromise' by conceding it," the paper says. The researchers say that, ordinarily, deceit requires at least the ability to speculate about your opponent's interests and strategy. But, they said, "Our agents have learnt to deceive without any explicit human design, simply by trying to achieve their goals."
