By Liem Nguyen

AI Weaponization: An Intriguing Trend in Sophisticated Cyberattacks

Updated: Nov 23, 2019

Artificial intelligence technologies have gained significant traction in recent years, and cybersecurity is no exception: AI has been incorporated into numerous security solutions to address rising cybercrime challenges. At the same time, cyber adversaries are increasingly using AI to create bots and malware that enable large-scale attacks.

Here are some of the ways AI is being weaponized.


1. Weaponizing malware

Cyber actors use AI to conceal malware inside seemingly benign applications. Malicious code is hidden in a legitimate program to evade detection, and the malicious behavior triggers only after the program has been used for a specified amount of time or has been acquired by a certain number of users. By applying an artificial intelligence model, attackers can conceal harmful code and derive a private key that unlocks the hidden payload only at a given place or time. The attacker pre-defines the AI-detected condition that unleashes the attack; for example, AI-powered malware can be set to activate only after a specific person is recognized by voice or another biometric. This poses a serious security challenge, because cyber actors can feed almost any type of indicator into a malicious AI model, derive the needed key, and choose when to attack.
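To make the key-derivation idea concrete, here is a minimal, hypothetical sketch of environment-keyed unlocking. The recognition model (recognize_identity), its output label, and the locked blob are assumptions for illustration; the point is only that the decryption key is computed from a model's output rather than being stored anywhere an analyst could recover it.

# Hypothetical sketch: a decryption key derived from an AI model's output,
# so the key itself never appears in the program or on disk.
import base64
import hashlib
from cryptography.fernet import Fernet

def recognize_identity(sensor_input: bytes) -> str:
    """Placeholder for a recognition model that maps sensor input to a stable label."""
    return "example-label"  # a real model would return e.g. a speaker or face ID

def derive_key(label: str) -> bytes:
    # Hash the model output into 32 bytes, then encode it in the format Fernet expects.
    digest = hashlib.sha256(label.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest)

def try_unlock(locked_blob: bytes, sensor_input: bytes):
    key = derive_key(recognize_identity(sensor_input))
    try:
        return Fernet(key).decrypt(locked_blob)  # succeeds only if the expected condition is observed
    except Exception:
        return None  # condition not met: the blob stays opaque and no key is recoverable

Because the key only materializes when the trigger condition is observed, static analysis of such a program reveals neither the key nor the contents of the locked payload, which is what makes this pattern difficult for conventional scanners to flag.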


2. Blending into the background

AI will enable hackers to create sophisticated threats that maintain a presence in a targeted environment without detection, remaining in a network or system for months if they so wish. Such threats move slowly and cautiously to avoid triggering traditional security measures, and they are targeted at a particular organization or individual. Applying AI to malware also lets it identify the attack vectors it can most easily use to compromise a system: it learns the dominant communication channels, the protocols and ports normally used for movement within the system, and how to blend into the system's routine activities. Malware that can disguise itself within this background noise can stealthily spread through a digital ecosystem, compromising every connected device. Additionally, AI-powered malware will be capable of analyzing large data sets at machine speed, enabling it to identify the most valuable data and saving an attacker a great deal of time and effort.


3. Impersonating trusted users

AI permits attackers to plan and execute highly tailored attacks that still operate at scale. AI technologies facilitate the development of intelligent malware capable of learning the nuances of a user's language and behavior by analyzing various modes of communication, including social media and email. Machine learning algorithms let the malware train on this behavioral data and build up knowledge of an individual's activities. Applying that knowledge, AI-driven malware can mimic the user's specific behavior. For example, a phishing Trojan with AI capabilities can learn a user's writing style from email conversations, replicate that style, and craft a message with unquestionable credibility. Because the malware also learns the subject matter, a targeted user will view the message as part of a normal conversation. Such attacks can have high success rates, since messages crafted by AI malware may be indistinguishable from legitimate conversations. As these attacks become more advanced, even users following the best cybersecurity practices are bound to fall victim.


4. Cyber-attacks will be faster and more effective

To execute sophisticated attacks today, attackers must be skilled in conducting reconnaissance on their human and machine targets: gathering information on how users interact with various digital platforms, understanding the networks in use, and identifying vulnerabilities to exploit. That process demands time and resources. In contrast, AI-driven attacks will enable perpetrators to execute similarly sophisticated attacks in a matter of minutes, with minimal resources, and at a much larger scale. AI will also let cyber actors execute more tailored, and therefore more effective, attacks. Because such threats can understand their environmental context, installed security systems will find them harder to detect; in fact, traditional security measures may not be able to address them at all.


Examples of cybercriminals using AI tools

Mealybug, the group that created the Emotet Trojan, leads the pack of attackers using AI tools. Emotet is mainly distributed through spam and phishing, and the Trojan includes a module that exfiltrates email data and information from an infected machine. Although the purpose of stealing this data was initially unclear, security researchers later realized that Emotet uses the information to send contextualized phishing messages. The malware is designed to insert itself into pre-existing email conversations: it scans an email thread to gather information about the conversation's subject and responds appropriately. The recipient clicks on the message and its attached documents without the slightest suspicion of being targeted by a phishing attack. Emotet leverages AI's ability to analyze an email's context to learn and replicate the language and style of the communication, so the phishing messages are highly tailored to a particular target. Emotet is also capable of sending such messages at scale, giving the attackers high phishing success rates.


Solution

Companies can protect themselves from AI-driven attacks by using AI-enabled security systems, and many are already researching how to bring AI capabilities into their cybersecurity solutions. One such solution is the cognitive security approach, which uses AI to enhance the security of digital systems. Cognitive security solutions combine machine learning, deep learning, data mining, and human-computer interaction to mimic the way the human brain processes information. These cognitive solutions are effective in developing comprehensive cybersecurity systems capable of withstanding a wide range of contemporary threats, and they are implemented in two broad categories. The first uses cognitive systems to analyze current security trends and process structured and unstructured data into the actionable knowledge needed for continuous security improvement. The second uses data-driven, automated security technologies to supply cognitive systems with high levels of accuracy and context.
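As a simple illustration of the machine-learning side of this approach, the sketch below trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on a handful of numeric features drawn from login telemetry and flags events that deviate from the baseline. The feature names and values are invented for illustration and are not tied to any particular cognitive security product.

# Minimal illustrative sketch: unsupervised anomaly detection over login telemetry.
# Feature names and values are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login_hour, mb_uploaded, failed_logins, distinct_hosts_contacted]
baseline = np.array([
    [9, 1.2, 0, 3],
    [10, 0.8, 1, 2],
    [14, 2.5, 0, 4],
    [16, 1.0, 0, 3],
    [11, 1.7, 0, 2],
    [15, 0.5, 1, 3],
])

# Fit on "normal" activity; contamination is the assumed fraction of outliers.
detector = IsolationForest(n_estimators=100, contamination=0.1, random_state=0)
detector.fit(baseline)

# Score new events: 1 means consistent with the baseline, -1 means anomalous.
new_events = np.array([
    [10, 1.1, 0, 3],    # routine working-hours activity
    [3, 250.0, 6, 40],  # 3 a.m. bulk upload after repeated failed logins
])
print(detector.predict(new_events))  # e.g. [ 1 -1]

In practice, the features would be engineered from real log sources and the detector's output would feed an analyst workflow rather than act on its own; the value of the cognitive approach lies in combining such models with contextual knowledge rather than in any single algorithm.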


About the Author - Liem Nguyen is the Co-Founder & CTO of Cognitive Security, a cyber security firm specializing in AI-powered cyber security. He has worked with medium and large enterprises and government entities across diverse industry sectors all over the world.

He is passionate about the cognitive era and what AI can bring to cyber security.

You can find Liem on LinkedIn and Twitter.


#ArtificialIntelligence #AI #Weaponization #Cognitivesecurity

