
Neil Jacobstein has worked in artificial intelligence for thirty years, consulting for government agencies like DARPA and automakers including GM and Ford. Now he chairs the artificial intelligence and robotics track at Singularity University, where, among other things, he helps students spin out AI startups. Speaking to Axios backstage at an exponential manufacturing conference, Jacobstein talked about cybercrime, the future of work, and AI.
A Jacobstein pro tip: to stay safe from a personal cyber-attack, be vigilant about keeping the software on your electronic devices up to date.
Axios: You've said that AI can help fight terrorism and other crime. Using the recent WannaCry ransomware attack as an example, what would that look like?
Jacobstein: A generic example of how that would work would be a "zero-day" attack [exploiting a previously unknown vulnerability in your device's software]. So if you have an antivirus system, and it's scanning for laptop health, it has some notion of what system health looks like. All of these viruses, worms, and trojan horses are little snippets of AI. They're not called AI, but that's what they are. And if you have a system monitoring the health of your laptop, it will be able to identify a zero-day attack and quarantine it. [A sketch of this idea follows his answer.]
One of the things we need to do is monitor the system health of our cities — the air traffic control system, the ground traffic system, our hospital system. Anything that is a system, where you can define some of the parameters of system health, we should be looking at very carefully. In some cases we should be air-gapping those systems — meaning not directly connecting them to the Internet whenever possible — and having them updated from hard drives that are themselves updated over the Internet and then scanned very carefully.
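A minimal sketch of the anomaly-detection idea Jacobstein describes: instead of matching known virus signatures, compare current system-health metrics against a baseline collected while the machine was healthy, and flag sharp deviations. The metric names, the threshold, and the quarantine step are all hypothetical illustrations, not a real antivirus API.

```python
import statistics

# Hypothetical baseline: readings of system-health metrics collected
# while the machine was known to be healthy.
BASELINE = {
    "cpu_percent":       [12, 15, 11, 14, 13, 16, 12, 15],
    "outbound_conns":    [4, 5, 3, 6, 5, 4, 5, 4],
    "file_writes_per_s": [20, 25, 22, 18, 24, 21, 23, 19],
}

Z_THRESHOLD = 4.0  # how many standard deviations counts as "anomalous"

def anomalies(current: dict) -> list:
    """Return the metrics whose current value deviates sharply from baseline.

    A zero-day exploit has no known signature, so instead of matching
    signatures we flag behavior that doesn't look like normal system health.
    """
    flagged = []
    for metric, history in BASELINE.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0
        z = abs(current[metric] - mean) / stdev
        if z > Z_THRESHOLD:
            flagged.append((metric, z))
    return flagged

# A reading taken during a hypothetical attack: ransomware encrypting
# files pushes file writes and CPU far outside the baseline.
reading = {"cpu_percent": 95, "outbound_conns": 5, "file_writes_per_s": 400}

for metric, z in anomalies(reading):
    # In a real product this is where the suspect process would be
    # quarantined; here we just report what tripped the detector.
    print(f"quarantine trigger: {metric} is {z:.1f} sigma from baseline")
```

The same pattern scales up to the city-level systems he mentions: anything with definable health parameters can be monitored for deviations from its baseline.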
Axios: Why aren't we doing that?
Jacobstein: In some cases we are. You'll notice that our society is still functioning. One of the reasons for that is we have some protection.
Axios: These viruses are a huge concern in corporate America, and it seems like the executives just aren't on top of the problem.
Jacobstein: So what I tell people is the first thing you ought to do is review all the good advice you've been given by your security personnel, and actually follow it and enforce it. If you had done that, you would have had the update to the Microsoft operating system, and it would have had the patch that prevents the [WannaCry virus] from spreading around.
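As a concrete illustration of that first line of defense, here is a minimal sketch that checks whether a Windows machine reports one of the MS17-010 patches that stopped WannaCry. The KB list is an assumption for illustration (the right KBs vary by OS version and by later cumulative rollups); the sketch queries installed hotfixes via PowerShell's Get-HotFix and only runs on Windows.

```python
import subprocess

# KB numbers associated with the MS17-010 fix for some Windows versions.
# Treat this list as illustrative, not exhaustive.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012213", "KB4012216"}

def installed_hotfixes() -> set:
    """Ask PowerShell for the IDs of all installed hotfixes (Windows only)."""
    out = subprocess.run(
        ["powershell", "-Command",
         "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    if installed_hotfixes() & MS17_010_KBS:
        print("An MS17-010 patch appears to be installed.")
    else:
        print("No MS17-010 patch found -- run Windows Update.")
```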
When you say "we," are we talking about all Internet users doing this independently? Individuals need to make sure they have the latest system patches and upgrades, from both their operating system provider and their anti-virus provider. The operating system providers themselves need to do a better job of making their systems patched and upgraded. There's a lot of responsibility to be passed around. There's a matter of security hygiene that's a big part of the first line of defense, and then there's AI looking at anomalies and trying to shut them down when they look malicious.Proper security hygiene on an individual level seems unrealistic. People are overwhelmed by even basic computer skills. It's not unrealistic if people understand the consequences. It's like saying, 'Expecting drivers to drive well is unrealistic, it's up to the car companies.'Fundamentally, people are still responsible for their driving behavior. They're still going to be responsible for their laptops and smartphones. If you are running a laptop that's on Windows 7 and you haven't renewed your anti-virus software, you can blame the anti-virus companies and Microsoft, but fundamentally you need to make sure that your system is protected. If you say, 'it's not realistic, I'm not computer literate,' or something, even so, you need to bring it somewhere that does know how to. It's like getting your tires checked.AI can be a black box, in that we can't necessarily explain why this software behaves the way it does. At the EU, there is regulation that will require AI to open up this box and explain its behavior. That law is going to cause them a massive amount of grief. Better than that law would be to educate software engineers about the importance of that explanation, and have users buy software and download free software built in. People should demand it as a feature. We're still doing research on it, and when it actually happens, I would expect it to have some performance hit associated with it. That's okay -- we can afford the hit. Would such a law dissuade entrepreneurs from operating in the EU? If the AI is deeply embedded in the braking system, in the anti-skid brakes of a car, you don't need a deep explanation for that. What are three examples of AI applications that are going to change people's lives in the near future? People underestimate how quickly self-driving cars are going to be everywhere. If you have kids, they are unlikely to drive; they'll think it's barbaric. Another is medical informatics. Right now doctors have a terribly hard time keeping up with the literature. If you're a kidney specialist, or a heart specialist, it's very unlikely that you're going to be able to keep up. If you prescribe a medicine for the heart that also affects the kidney, and the information on negative effects are being published in a renal journal, you still need to know that. We're going to have materials fifty times stronger than steel made out of carbon. We have carbon in abundant supply in the atmosphere. We're going to do things that are completely outside people's frame of reference ... over the next twenty years or so.