Apr 6, 2017 - Technology

Facebook's Yann LeCun: robots won't seek world domination


Axios caught up with Yann LeCun, head of Facebook's artificial intelligence lab, backstage at the Future Labs AI Summit to get his thoughts on how the technology he's spent a career advancing will affect the average American. LeCun, who also teaches computer science at NYU, says AI will make us all richer, but that society must regulate the technology through broad, public consensus.

Why he matters: LeCun is a giant in the field whose contributions to AI have helped drive the technology behind self-driving cars. AI increasingly powers Facebook products, from image recognition to the personalized News Feed, and could eventually help identify fake news and improve voice-controlled assistants. Last summer, Facebook told Fast Company that it had hired more than 150 AI experts and has tripled its investment in the area of late. Check out the interview below:

What does the Facebook AI Lab do?

The mission is to push the science of artificial intelligence forward, and to come up with useful and cool applications along the way.

What's an example of how Facebook uses artificial intelligence?

If you are visually impaired and you're on Facebook on your smartphone, not only will the text be read to you, but for every image you encounter, the system will describe the image to you aloud.
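As a rough illustration of the idea, a system like this pairs an image-tagging model with a text-to-speech step. The sketch below is a toy stand-in, not Facebook's actual implementation: `detect_concepts` and `speak` are hypothetical stubs standing in for a trained vision model and a screen reader.

```python
# Illustrative sketch only: a toy stand-in for automatic alt text,
# which tags images with detected concepts and reads them aloud.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    images: List[str]  # file paths or URLs

def detect_concepts(image: str) -> List[str]:
    # A real system would run a trained vision model here; this stub
    # returns fixed tags so the example runs end to end.
    return ["two people", "outdoors", "smiling"]

def describe_image(image: str) -> str:
    return "Image may contain: " + ", ".join(detect_concepts(image))

def speak(text: str) -> None:
    # Stand-in for a text-to-speech engine (e.g. a screen reader).
    print(f"[spoken] {text}")

def read_post_aloud(post: Post) -> None:
    # Read the post's text first, then describe each attached image.
    speak(post.text)
    for image in post.images:
        speak(describe_image(image))

read_post_aloud(Post(text="Great day at the beach!", images=["beach.jpg"]))
```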

Also, which pieces of information each Facebook user is shown is determined by their tastes, and that includes being able to tell what a post talks about and what angle or attitude the post is taking.
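To make the ranking idea concrete, a feed scorer might combine a model's predictions about a post's topic and attitude into a single relevance score per user, then sort posts by that score. The features and weights below are invented purely for the sketch; they are not Facebook's actual signals.

```python
# Illustrative sketch of personalized feed ranking: score each candidate
# post by predicted relevance to one user, then show the highest first.
# All features and weights here are hypothetical.

posts = [
    {"id": 1, "topic": "sports",   "attitude": "positive", "from_friend": True},
    {"id": 2, "topic": "politics", "attitude": "negative", "from_friend": False},
    {"id": 3, "topic": "cooking",  "attitude": "positive", "from_friend": True},
]

# How much this particular user tends to engage with each topic.
user_topic_affinity = {"sports": 0.9, "cooking": 0.6, "politics": 0.2}

def relevance(post, affinity) -> float:
    score = affinity.get(post["topic"], 0.1)                  # what the post talks about
    score += 0.2 if post["attitude"] == "positive" else 0.0   # its angle or attitude
    score += 0.3 if post["from_friend"] else 0.0              # social signal
    return score

ranked = sorted(posts, key=lambda p: relevance(p, user_topic_affinity), reverse=True)
print([p["id"] for p in ranked])  # -> [1, 3, 2]
```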

What are your thoughts on effects of artificial intelligence on the economy and jobs?

There's no question that there are going to be a lot of beneficial effects of the wide deployment of AI on the economy. You'll have safer cars and more personalized medicine; medical imaging will be revolutionized. It will save lives and increase overall wealth. The next question is: how is that wealth going to be distributed? When you have a rapid technological advance, you tend to see an increase in the concentration of wealth. AI is no different; it's just one contributor to accelerating technology, and so that question needs to be asked. A lot of politicians are refusing to recognize that this is a question to be addressed.

People don't want just a check from the government, they want a job and a sense of purpose. Do you worry that the median worker fifty years from now won't have the aptitude to do the jobs that need doing?

[AI and automation] will change the value we attribute to things. There will be more value attributed to creative activities and interpersonal relationships, and much less value attributed to material goods, because they will be created by machines.

Does that mean that we have to change the way we educate humans? A lot of people aren't really good at soft skills and interpersonal communication, and it's not something we've had a ton of success teaching in school.

We're not asking people to go against their nature. It will actually be asking them to be more human. If you leverage people's interpersonal skills and creativity, that's what is really human.

In the immediate future, what are the most exciting applications of artificial intelligence?

Healthcare will be one of the most important. It will begin in radiology and dermatology; there are prototype systems we have right now that work pretty well and can diagnose skin ailments, for example. The quality of healthcare will increase. We just have to figure out how to make it widely accessible.

You said during your talk that we shouldn't worry about machines taking over the world, because that assumes computers will have human failings, like greed or the tendency to become violent when threatened. But what about a scenario in which a hedge fund bot is programmed to maximize returns, and it turns out the best way to do that is to buy up a bunch of food and destroy the rest of the world's food supply? Such a machine would be fulfilling its purpose, but through evil means, even if the person who programmed it didn't anticipate this outcome.

We have a lot of checks and balances built into society to prevent evil from having infinite power. Most companies are not working for either good or evil; they're just maximizing profits. But we have all sorts of rules and laws to prevent our economy from going haywire. It will be the same thing for AI. Learning to build AI systems that are safe, not because they're going to take over the world, but because you want them to work reliably, is going to take some time, similar to how long it took people to figure out how to build airplanes that don't crash.

There is a group that I helped found called the Partnership on AI, a forum for companies like Google and Facebook, and other groups like the ACLU, to discuss how best to deploy AI systems so that they are safe and unbiased. These issues are so broadly important that they must be discussed in public.
