Machine Learning, Artificial Intelligence & Security
Our lives look more and more like an episode of Mr. Robot, and the industries that surround them – retail, healthcare, automotive, e-commerce and more – are increasingly integrating Machine Learning and Artificial Intelligence into their processes. The technology around us is becoming ‘smarter,’ identifying patterns in activity and behavior so that it can learn from the user and work on intuition, rather than pre-defined rules, to provide the ultimate user experience.
Similarly, many of today’s security vendors tout Machine Learning and Artificial Intelligence as part of what makes their solutions adaptable to quickly evolving threats. As more companies collect immense amounts of data and build their business upon it – often stored on high-value targets such as insecure centralized servers – and as attackers become more elusive, traditional security solutions are finding it harder to keep up. So we increasingly integrate Machine Learning and Artificial Intelligence into our security solutions, but as with every technological development, there are challenges along the way.
Machine Learning Can Be Fooled
Within the security world, there has been much research into Machine Learning and its applications over the past decade and a half. Spam filtering was one of its earliest uses. Because spam includes both junk email and malicious email, it is a concern that can affect both employee productivity and organizational security. Using Machine Learning for spam filtering has both pros and cons: spam is continually evolving, and the humans behind it continually try to outdo the filters. False positives are common, and users frequently have to review their spam folder to ensure nothing important has been labeled ‘junk.’
Today, security vendors are incorporating Machine Learning into their solutions, primarily to help identify malware, but false positives remain a reality to be dealt with. According to a 2017 Ponemon study, false positives in the endpoint-security niche cost organizations an average of $1.37 million per year. The bottom line: Machine Learning can still be fooled, and being fooled costs the organization time, money and security.
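To make the "Machine Learning can be fooled" point concrete, here is a minimal, self-contained sketch of a naive Bayes text classifier – one common technique behind ML-based spam filters. The training corpus and all message strings are hypothetical, invented purely for illustration; this is not the approach of any specific vendor or study. The classifier correctly flags an obvious spam message, but an attacker who pads the same message with benign, ‘ham-like’ words flips the verdict:

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Build per-class bag-of-words counts and the shared vocabulary."""
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam_counts) | set(ham_counts)
    return spam_counts, ham_counts, vocab

def classify(text, spam_counts, ham_counts, vocab):
    """Naive Bayes with add-one (Laplace) smoothing and equal class priors."""
    def log_prob(counts):
        total = sum(counts.values())
        return sum(
            math.log((counts[w] + 1) / (total + len(vocab)))
            for w in text.split()
        )
    return "spam" if log_prob(spam_counts) > log_prob(ham_counts) else "ham"

# Toy training corpus (hypothetical data, for illustration only).
spam_counts, ham_counts, vocab = train(
    spam_docs=["win money now", "free money win"],
    ham_docs=["meeting schedule today", "project meeting notes"],
)

# The filter catches an obvious spam message...
print(classify("free money", spam_counts, ham_counts, vocab))  # spam

# ...but padding the same message with benign words fools the model.
print(classify("free money meeting schedule today project notes",
               spam_counts, ham_counts, vocab))  # ham
```

The evasion works because the benign padding words are far more probable under the ham class, and the naive Bayes score simply sums per-word evidence – exactly the kind of blind spot that lets spammers, and malware authors, outmaneuver learned filters.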
Other Hollywood Scenarios for Artificial Intelligence- and Machine Learning-based Platforms
In December of last year, 31 million users of AI.type – an Android keyboard app which ‘learns’ the user’s voice to predict what they are going to type – had their accounts hacked. The breach put their personal details in the hands of attackers: names, email addresses, geographic locations and, for users of the free version, even mobile numbers and mobile network names. The data had been stored on an insecure server owned by the app’s creator.
This was a straightforward hack of a popular Artificial Intelligence-based app, but what about the worst-case scenario – a hack in which Artificial Intelligence actually becomes its own Achilles’ heel? As we encounter more Artificial Intelligence-based capabilities in our daily lives, concerns raised by researchers include: self-driving cars being hacked and directed to accelerate instead of stopping at red lights; attackers hijacking and overriding voice-recognition-based commands; breach and bypass of anomaly-detection engines; and hijacking of medical systems and devices. And 2017 saw the first publicized Artificial Intelligence-driven attack: malware that attempted to learn employee behavior and ‘mimic’ it in its malicious activity on the organizational network in order to evade detection.
Artificial Intelligence and Machine Learning – the Good, the Bad & the Reality
So Artificial Intelligence and Machine Learning are truly a two-way street. On the one hand, they are increasingly used by security providers to home in on malicious activity on endpoints, networks, the SIEM and more. But just as security professionals rely on Machine Learning and Artificial Intelligence to make decisions that benefit the organizations they protect, attackers are utilizing them to their benefit as well. Professionals responsible for securing their organizations need to ensure that the security solutions they use are not only protecting them in the present, but also keeping pace with the rapidly evolving threats and vulnerabilities of a world built on Machine Learning and Artificial Intelligence.