There is certainly a lot of news these days about artificial intelligence (AI) and the impact it has on every aspect of our online lives. It seems like every developer is trying to enhance an application, platform, or tool with machine learning capabilities. It’s enough to make one wonder how artificial intelligence affects cybersecurity. We did the research for you and we identified three things that just may surprise you.
Explainer: Machine learning vs. AI
Artificial intelligence (AI) refers to the systematic study of how machines can be designed to perform tasks that normally require human intelligence. Machine learning is one category of AI: a system that is capable of learning from its experience. The goal of machine learning is to reduce the time spent on repetitive tasks, whether they are simple or complex. A machine learning tool recognizes patterns through the examples and data it has encountered, rather than through explicitly programmed rules.
When we talk about AI in the cybersecurity field, we are really talking about machine learning tools. Machine learning is superior at the following tasks:
- Regression – predicting a value based on previously observed data. One cybersecurity example is scoring transactions for fraud detection.
- Classification – dividing items into various known groupings, like spam filters grouping certain messages into the spam folder.
- Clustering – dividing items into groups by their similarities. Unlike classification, the groups are not known in advance; the algorithm discovers them from the data.
- Recommending – deriving association rules from past experience; in cybersecurity, used primarily to guide responses to cyber incidents.
- Generalization – finding the most common and most significant attributes of many examples. This is commonly used in facial recognition programs.
- Generative models – creating a likely example based on the previously known distribution. Often used in testing network vulnerabilities.
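To make the clustering idea concrete, here is a minimal sketch of 1-D k-means clustering on hypothetical failed-login counts per host: the algorithm discovers on its own that a few hosts behave very differently from the rest. The data, the two-cluster setup, and the `kmeans_1d` helper are all illustrative assumptions, not part of any real security product.

```python
# Hypothetical data: failed-login counts per host. A clustering
# algorithm should separate the handful of high-failure hosts
# from the normal ones without being told the groups in advance.

def kmeans_1d(values, k, iterations=20):
    """Cluster 1-D values into k groups by iteratively moving centroids."""
    # Seed centroids with values spread across the sorted data.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for v in values:
            # Assign each value to its nearest centroid.
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

failed_logins = [2, 3, 1, 4, 2, 97, 103, 99]  # two obvious groups
normal, suspicious = sorted(kmeans_1d(failed_logins, k=2), key=min)
print(suspicious)  # → [97, 103, 99]
```

Real tools use far more sophisticated features and algorithms, but the core idea is the same: similar items end up grouped together, and the outlying group is worth a human analyst's attention.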
Surprise #1: Good or bad is in the eye of the beholder
AI is something of a double-edged sword. That is, AI can be used for good or bad. On the good side, machine learning is invaluable as a cybersecurity tool because it easily discerns the similarities between different cyber attacks. That is especially true when automated programs synchronize the cyber attacks.
Even better, the latest machine learning algorithms excel at grasping the significance of big data coming from various data collection tools. This application of machine learning is known as information extraction (IE) from unstructured data. AI takes the unstructured data and turns it into structured data, outputting it in a table or spreadsheet-like form. In other words, AI can see things that ordinary humans may miss.
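The mechanics of information extraction can be sketched with a toy example: pulling structured fields out of free-form log lines. Real ML-based IE learns the patterns from labeled examples; here the pattern is a hand-written regular expression, and the log format and field names are hypothetical.

```python
# Turn unstructured log text into structured rows (dicts), the
# "spreadsheet" output described above. The log format is invented
# for illustration.
import re

LOG_PATTERN = re.compile(
    r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) .* user=(?P<user>\w+) status=(?P<status>\w+)"
)

def extract_rows(log_lines):
    """Extract named fields from each line that matches the pattern."""
    rows = []
    for line in log_lines:
        match = LOG_PATTERN.search(line)
        if match:
            rows.append(match.groupdict())
    return rows

logs = [
    "Mar 01 10:02:11 10.0.0.5 sshd: login user=alice status=ok",
    "Mar 01 10:02:14 203.0.113.9 sshd: login user=root status=failed",
]
rows = extract_rows(logs)
print(rows[1]["ip"], rows[1]["status"])  # → 203.0.113.9 failed
```

Once the data is in structured rows, it can be filtered, aggregated, and fed into downstream analysis — the step where patterns invisible in raw text become obvious.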
Law enforcement agencies use AI when they employ facial recognition or voice recognition software to catch criminals. They also study new technology known as emotion recognition software. This new software reads human emotions in micro-expressions via a combination of images and audio. Law enforcement can use this technology during the interview process to predict behaviors and curb dangerous situations.
On the other hand, AI is an impartial participant in cyber wars, which means it is equally good at helping cybercriminals. Malware can use machine learning to probe and adapt to cyber defense systems. Cyber hackers can use AI to scale their attacks by hijacking bots and IoT devices into botnets that do their dirty work. Malware made smart by machine learning can learn how to evade discovery on the network.
The scariest scenario might be the cyberhacker’s collection of consumer data unintentionally leaked out into the net. That information may be seized by an AI tool that can use its machine learning powers to coordinate immense attacks on unsuspecting – and now defenseless – consumers. If you would like to see an example of AI helping cybercriminals to the consumer’s detriment, read this February 2020 article from zdnet.com entitled “Android Malware Can Steal Google Authenticator 2FA Codes.”
And the same AI that helps law enforcement catch criminals can also help criminals: machine learning can sharpen impersonation attacks on human victims, expanding criminal activity toward even greater financial rewards.
Surprise #2: SAST and Artificial Intelligence
AI’s strength lies in its ability to automate repetitive steps in a cybersecurity process so that IT security wizards can spend their valuable time identifying network vulnerabilities and eliminating imminent threats. One of those cybersecurity processes is Static Application Security Testing (SAST), which strengthens applications by analyzing them with various tools and methods. SAST is also known as “white box” testing because it examines the application’s known source code, bytecode, and binaries.
SAST analysis flags anything that raises suspicion that it might cause an issue for the application. This process involves many repetitive tasks, which makes it a good candidate for machine learning collaboration.
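The repetitive side of SAST can be illustrated with a deliberately simplified scanner: matching every source line against a list of rules for suspicious patterns. Real SAST tools parse code into syntax trees and data-flow graphs rather than grepping lines; the two rules below are hypothetical stand-ins.

```python
# Toy SAST-style scan: check each line of source text against a set
# of pattern rules and report (line number, rule name) findings.
# The rules here are illustrative, not a real tool's rule set.
import re

RULES = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hard-coded password": re.compile(r"password\s*=\s*['\"]"),
}

def scan_source(source):
    """Return (line_number, rule_name) for each rule match in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, name in scan_source(sample):
    print(f"line {lineno}: {name}")
```

Run across thousands of files, checks like these produce the large volumes of findings — including false positives — that the next section's machine learning triage is designed to tame.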
In 2015, IBM created the Intelligent Finding Analytics (IFA) agent, which conducts data analysis with an impressive accuracy rate. And IFA doesn’t have to take sleeping, eating, or bathroom breaks. When applied to the findings produced by SAST, IFA reduced false positives at a consistent rate of over 90%. Many industries adopted IFA to accelerate their cybersecurity testing.
Experts predict that the value of AI in the cybersecurity market will reach USD 34.81 billion by 2025. In fact, a recent study by the Ponemon Institute found that the primary benefit of AI in security was the increased speed AI provided in threat analysis. The secondary benefit was faster quarantining of infected hosts and remote computing devices on the network.
Surprise #3: SCA and Artificial Intelligence
Open source products form the basis of many software applications today, and their use will not slow down any time soon. Until recently, few users of open source products investigated the components they use to determine whether they were in compliance with licenses and whether they followed standard security measures. The Open Web Application Security Project (OWASP) recognized the need to analyze and identify software components with known vulnerabilities. OWASP addressed this in its Top 10 with the A9 security risk – Using Components with Known Vulnerabilities – warning developers that securing an application built on open source components requires knowing about open source vulnerabilities and consistently keeping those components up to date.
As you might suspect, inventory and analysis of open source components is a labor-intensive activity that can clog IT professionals’ workloads. To remedy that situation, security experts developed a new tool for the open-source investigation work called Software Composition Analysis (SCA). SCA tools identify the open-source components used in an application’s source code. Then, SCA checks those components against vulnerability databases, security advisories, and version trackers to uncover known security issues.
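The core lookup an SCA tool automates can be sketched in a few lines: match each component and version the application uses against a database of known vulnerabilities. The component names, versions, and advisory IDs below are hypothetical; real tools draw on large, continuously updated advisory feeds.

```python
# Minimal SCA-style check: flag any (component, version) pair that
# appears in a known-vulnerability database. All data is invented
# for illustration.

# Hypothetical advisory database: component -> {affected_version: advisory_id}
KNOWN_VULNERABILITIES = {
    "examplelib": {"1.2.0": "ADV-2020-001"},
    "parserkit": {"0.9.1": "ADV-2019-117"},
}

def check_components(components):
    """Return advisories matching the exact (name, version) pairs used."""
    issues = []
    for name, version in components:
        advisory = KNOWN_VULNERABILITIES.get(name, {}).get(version)
        if advisory:
            issues.append((name, version, advisory))
    return issues

app_components = [("examplelib", "1.2.0"), ("parserkit", "1.0.0")]
print(check_components(app_components))  # → [('examplelib', '1.2.0', 'ADV-2020-001')]
```

Doing this lookup by hand for every component, on every release, is exactly the repetitive workload that makes SCA tooling worthwhile.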
Kiuwan Insights is an example of SCA technology. It helps you manage products built on open-source components, reducing the risk from third-party code so IT does not have to do these tasks manually. It:
- checks license compliance
- identifies security vulnerabilities
- manages operational risks
- helps automate policies throughout the Software Development Life Cycle (SDLC)
Kiuwan Insights generates a complete inventory of all open source and other third-party components used in developing software applications or Application Programming Interfaces (APIs). This SCA tool investigates security risks, manages application libraries to check for critical updates and new versions, and automatically alerts users about obsolete software.
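The “alert on obsolete software” step can be sketched as comparing each component’s installed version against the latest known release. This is an illustration of the general idea, not Kiuwan’s implementation: the latest-release table is hypothetical, and the dotted-version comparison is deliberately simplistic.

```python
# Flag inventory entries whose installed version trails the latest
# known release. Component names and versions are invented.

LATEST_RELEASES = {"examplelib": "2.1.0", "parserkit": "1.0.0"}

def version_tuple(version):
    """'1.10.2' -> (1, 10, 2) so versions compare numerically, not as strings."""
    return tuple(int(part) for part in version.split("."))

def find_obsolete(inventory):
    """Return (name, installed, latest) for each out-of-date component."""
    return [
        (name, installed, LATEST_RELEASES[name])
        for name, installed in inventory
        if name in LATEST_RELEASES
        and version_tuple(installed) < version_tuple(LATEST_RELEASES[name])
    ]

inventory = [("examplelib", "1.2.0"), ("parserkit", "1.0.0")]
print(find_obsolete(inventory))  # → [('examplelib', '1.2.0', '2.1.0')]
```

Note the numeric tuple comparison: naive string comparison would wrongly rank "1.10.0" below "1.2.0", which is why real tools implement full semantic-versioning rules.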
If you’d like to talk to one of our experienced professionals about SAST, SCA, AI, or anything else, just contact us.