Artificial intelligence, especially machine learning systems that can adapt to changing input, is seen by many in the computer security industry as the next great hope in challenging cyberthreats.
We recently wrote about the promise and potentially positive impact this type of AI might have when it comes to protecting firms against hacker incursions. AI can offer fast response times. It may detect patterns that allow for better oversight. And sometimes it takes automation to adequately protect against the automated attacks hackers send.
But as with any technology, an over-reliance on the new may have unintended consequences, and even undermine corporate goals. AI, like other technology, is ultimately a tool — but tools need a human operator.
That’s the gist behind comments from Levi Gundert, Recorded Future’s vice president of intelligence, at a panel session for RiskSec NY 2017.
“Supervised machine learning has a lot of promise, but you still need that paired up with human brains to make [your threat data feed] a truly valuable feed for your organization,” Gundert said, as reported by SC Media.
Gundert went on to explain that people still need to set parameters for the automation and track its effectiveness.
Gundert isn’t the only one suggesting caution in the form of ensuring people remain engaged in the AI cybersecurity chain.
First, one temptation of technology is to assume that the most recent advances are what a firm needs, even if it’s not actually the best tool for the job.
“In our work with organizations, we have noticed that when a new threat arises, instead of holistically assessing it, organizations often simply request the latest, greatest analytic tool or contract out the work to third-party intelligence providers,” writes Jay McAllister, a senior analyst with Carnegie Mellon University’s Software Engineering Institute.
And even if a firm does employ the best tech for the job, it’s not foolproof.
“Too often, unsupervised machine learning contributes to an onslaught of false positives and alerts, resulting in alert fatigue and a decrease in attention,” writes Torsten George, the vice president of marketing and product management for software firm RiskSense, in Security Week.
While machines can flag areas of concern faster than a human security employee could, it takes a human touch to assess the quality of the information and decide on action. The AI might learn from that human input, George notes, but the human remains a vital part of the process. It still takes a leader’s touch to ensure cooperation among departments and staff.
Overwhelmingly, the experts do support employing AI learning machines to track cyber threats. TechCrunch notes that a good program could pare thousands of events down to 100 for review, instead of expecting a department to comb through them all. The hours and weeks saved could prove invaluable.
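To make that idea concrete, here is a minimal, purely illustrative sketch of alert triage; it is not any vendor’s actual system, and the event data, scores, and `triage` function are all hypothetical. It shows the basic pattern: a model assigns each event an anomaly score, and only the highest-scoring handful are surfaced for human review.

```python
# Illustrative sketch only: rank security events by a machine-assigned
# anomaly score and surface just the top few for analyst review.
import random

random.seed(7)

# Hypothetical event stream: thousands of log events, each with an
# anomaly score between 0.0 (looks benign) and 1.0 (looks suspicious).
events = [{"id": i, "score": random.random()} for i in range(5000)]

def triage(events, budget=100):
    """Return the `budget` highest-scoring events for human review."""
    return sorted(events, key=lambda e: e["score"], reverse=True)[:budget]

review_queue = triage(events)
print(len(review_queue))  # 100 events instead of 5,000
```

The point of the sketch is the division of labor: the machine does the ranking, but a person still works the review queue and judges which alerts warrant action.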
But as with any advancement, no tool is foolproof. The key is understanding the holes in your cybersecurity network, considering which AI tools might help, and plugging them in, with adequate human support.