The big fear around artificial intelligence is often the science-fiction nightmare of computers taking control from humans. But there are other, perhaps more likely, things to worry about. KPMG security experts write in the Harvard Business Review about some of the other scary scenarios possible with AI and cognitive computing:
- A hijacked cognitive system: "[B]ad human actors — say, a disgruntled employee or rogue outsiders — could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by 'teaching' the computer to process data inappropriately."
- A hacker can pose as a bot: "Security monitoring systems are sometimes configured to ignore 'bot' or 'machine access' logs to reduce the large volume of systemic access. But this can allow a malicious intruder, masquerading as a bot, to gain access to systems for long periods of time — and go largely undetected."
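The bot-masquerading scenario can be illustrated with a minimal sketch. The log format and field names below are hypothetical, but the logic mirrors the weakness described: a monitoring filter that skips self-identified bot traffic to reduce volume will also skip an intruder who simply claims a bot identity.

```python
# Minimal sketch of a monitoring filter with the weakness described above.
# The log-entry format and field names are hypothetical.
def is_monitored(log_entry: dict) -> bool:
    """Return True if this access-log entry should be reviewed by security."""
    # Entries whose user agent self-identifies as a bot are skipped
    # to cut down the large volume of machine-access logs.
    return "bot" not in log_entry["user_agent"].lower()

# An intruder only has to claim a bot user agent to slip past monitoring:
attacker = {"user_agent": "Googlebot/2.1", "ip": "203.0.113.7"}
human = {"user_agent": "Mozilla/5.0", "ip": "198.51.100.4"}

print(is_monitored(attacker))  # False -- the attacker's access is never reviewed
print(is_monitored(human))     # True
```

A safer design would verify claimed bot identities (for example, by reverse-DNS or IP allowlists) rather than trusting a self-reported user-agent string.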
Bottom line: When a human causes a security breach, the source can usually be isolated. But with AI breaches, damage can turn massive in a matter of seconds, and because it is hard to trace, it is also hard to correct quickly.