The big fear around artificial intelligence is often the science-fiction nightmare of computers taking control from humans. But there are other, perhaps more likely, things to worry about. Writing in the Harvard Business Review, KPMG security experts describe some of the scary scenarios possible with AI and cognitive computing:
- Cognitive systems can be hijacked: "[B]ad human actors — say, a disgruntled employee or rogue outsiders — could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by 'teaching' the computer to process data inappropriately." A toy example of this kind of data poisoning appears after the list.
- A hacker can pose as a bot: "Security monitoring systems are sometimes configured to ignore 'bot' or 'machine access' logs to reduce the large volume of systemic access. But this can allow a malicious intruder, masquerading as a bot, to gain access to systems for long periods of time — and go largely undetected." A sketch of this monitoring blind spot also appears below.
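The "teaching the computer to process data inappropriately" scenario is what security researchers generally call training-data poisoning. The toy spam filter below is a minimal sketch invented for illustration, not an example from the HBR piece; every message and label in it is made up. It learns simple word counts from labeled mail, and a handful of deliberately mislabeled examples is enough to flip its verdict on a routine message.

```python
from collections import Counter

def train(examples):
    """Count how often each word appears in spam vs. legitimate mail."""
    counts = {"spam": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a message by whichever class saw its words more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ok_score = sum(counts["ok"][w] for w in words)
    return "spam" if spam_score > ok_score else "ok"

clean_data = [
    ("win a free prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ok"),
    ("invoice for march attached", "ok"),
]

# A bad actor with write access to the training data injects
# mislabeled examples -- the "teaching it inappropriately" step.
poisoned_data = clean_data + [
    ("invoice attached", "spam"),
    ("invoice attached", "spam"),
    ("invoice attached", "spam"),
]

msg = "invoice attached"
print(classify(train(clean_data), msg))     # ok
print(classify(train(poisoned_data), msg))  # spam
```

The point is not the toy filter itself but that whoever can write to the training data effectively controls how the system behaves afterward.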
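The second scenario comes down to a filtering rule. Below is a minimal sketch, again made up for illustration, of a log filter that drops anything matching a "known bot" pattern before an analyst ever sees it; the bot patterns, log format, and field names are all assumptions. An intruder who spoofs the excluded user agent never reaches human review.

```python
import re

# Hypothetical "trusted automation" patterns a monitoring team might
# exclude to cut down log volume (assumed for illustration).
BOT_PATTERNS = [
    re.compile(r"backup-agent/\d+"),
    re.compile(r"inventory-bot"),
]

def entries_for_human_review(log_entries):
    """Return only the entries an analyst would actually see.

    Any entry whose user agent matches a bot pattern is dropped,
    so an intruder spoofing that user agent never surfaces.
    """
    reviewed = []
    for entry in log_entries:
        agent = entry.get("user_agent", "")
        if any(p.search(agent) for p in BOT_PATTERNS):
            continue  # "machine access" -- silently ignored
        reviewed.append(entry)
    return reviewed

# The spoofed entry slips past review because it looks like the backup bot.
logs = [
    {"user": "svc-backup", "user_agent": "backup-agent/3", "action": "read"},
    {"user": "intruder",   "user_agent": "backup-agent/3", "action": "export_all"},
    {"user": "alice",      "user_agent": "Mozilla/5.0",    "action": "login"},
]
print(entries_for_human_review(logs))  # only alice's entry remains
```

Because the filter keys on a field the client itself supplies, excluding bot traffic outright, rather than baselining and monitoring it, creates exactly the long-lived blind spot the authors warn about.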
Bottom line: When a breach involves only humans, its source can usually be isolated. With AI, damage can turn massive in a matter of seconds and is hard to trace quickly, which makes it hard to correct quickly.