
Weaponising AI: A Future Look at Cyber Threats

Artificial Intelligence, generally referred to simply as AI, has the potential to revolutionize numerous industries. It may also make numerous forms of employment redundant, a much-argued economic side effect for those affected. At its best, it could drastically improve lives across the planet. These assumptions consider what AI can potentially do when there is no malice behind its use. How, then, could AI change the nature of cyber threats? Before that question can be addressed, it is wise to look at what AI currently is and how it is defined. There are many misconceptions circulating around the subject. Some prophesize that the technology will be the end of the world, while others see it as a way to take humanity to the next step. Both Stephen Hawking and Elon Musk have previously voiced their concerns over the technology.

Much of the misconception boils down to how AI is defined. John McCarthy, who coined the term, defined it in the proposal for the 1956 Dartmouth conference as,

“The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The Oxford English Dictionary defines AI as,

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

While these definitions are colorless and make no mention of Skynet, it is important to keep them in mind when discussing the threats posed.

AI and Cyber Threats

Security researchers have been asking hard questions about their ability to deal with malware that may incorporate AI technology in the future. Researchers from Darktrace released a white paper proposing how current threats could be further evolved by implementing AI technology. According to Darktrace, the current threat landscape, which spans everything from script kiddies and opportunistic attacks to advanced, state-sponsored assaults, continues to evolve.


Building on this, they contend that there is potential for further development through the future use of AI. To illustrate this, the researchers documented three active threats detected in the wild within the past 12 months and theoretically applied AI to each, to see what kind of cybersecurity threats could foreseeably be faced in the not-too-distant future.

Max Heinemeyer, Director of Threat Hunting at Darktrace, summarizes the approach as,

“We expect AI-driven malware to start mimicking behavior that is usually attributed to human operators by leveraging contextualization. But we also anticipate the opposite; advanced human attacker groups utilizing AI-driven implants to improve their attacks and enable them to scale better.”

Trickbot

The first real-world malware analyzed was Trickbot, a financial Trojan that uses the EternalBlue exploit for a Windows SMB vulnerability to target banks and other institutions. The malware continues to evolve and is currently equipped with injectors, obfuscation, data-stealing modules, and locking mechanisms. Darktrace researchers investigated a case involving an employee at a law firm who fell victim to a phishing campaign. In the end, Trickbot was able to infect a further 20 devices on the network, leading to an expensive audit and remediation.

Darktrace is of the opinion that malware bolstered through artificial intelligence will be able to self-propagate and use every vulnerability on offer to compromise a network. This would imply that the malware would be able to quickly change tactics from system to system in order to infect targeted devices. The attacker could avoid patched vulnerabilities and attack unpatched ones depending on the targeted device. Further, malware could switch to brute-force attacks, keylogging, and other techniques which have proven to be successful in the past in similar target environments. This also opens the possibility of attackers no longer needing a command and control server to instruct malware on what to do. This is because a key concept of AI is the ability to sit, learn, and then make a “decision” with incredible efficiency.

Stealth Improved

In the second case study, an incident involving a utility company was analyzed. In this instance, a device loaded with malware was observed using a variety of stealth tactics and obfuscation to stay hidden. To trick and avoid security measures, the malware was downloaded from an Amazon S3 bucket and established a backdoor into the compromised network. To remain undetected, the malware utilized a self-signed SSL certificate. AI, if implemented, would operate in a similar way to the Trickbot example. The researchers believe that,

“Instead of guessing during which times normal business operations are conducted, it will learn it. Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an understanding of what communication is dominant in the target's network and blend in with it.”
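The same idea, learning which communication channels dominate a network, is also the basis of defensive anomaly baselining. A minimal sketch of the concept in Python follows; the flow-record format and the 5% rarity threshold are illustrative assumptions, not Darktrace's actual method:

```python
from collections import Counter

def protocol_baseline(flows):
    """Build a frequency baseline of protocols seen in (destination, protocol) flow records."""
    counts = Counter(proto for _, proto in flows)
    total = sum(counts.values())
    return {proto: n / total for proto, n in counts.items()}

def is_anomalous(proto, baseline, threshold=0.05):
    """Flag a protocol accounting for less than `threshold` of observed traffic."""
    return baseline.get(proto, 0.0) < threshold

# Hypothetical flow records: mostly HTTPS, some DNS, one rare IRC connection
flows = [("10.0.0.5", "https")] * 90 + [("10.0.0.9", "dns")] * 9 + [("203.0.113.7", "irc")]
baseline = protocol_baseline(flows)
print(is_anomalous("irc", baseline))    # True  — rare channel stands out
print(is_anomalous("https", baseline))  # False — blends in with dominant traffic
```

The point cuts both ways: malware that learns this baseline can choose the dominant channel to blend in, while a defender uses the same baseline to spot traffic that does not.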

Stealing Data…Slowly

The last case study involved a medical technology company. Data was observed being stolen at such a slow pace and in such small chunks that it avoided triggering data volume thresholds in security tools. Multiple connections were made to an external IP address, but each connection carried less than 1 MB. Despite the small transfers, it did not take long before over 15 GB of information was stolen. If AI components were applied to the attack, the attackers would no longer have to guess what transfer sizes would remain undetected by security measures; the malware itself could determine this and adjust the size depending on the security measures and current traffic on the network.
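The arithmetic here is straightforward: at under 1 MB per connection, exfiltrating 15 GB takes on the order of 15,000 connections, each small enough to slip under a per-transfer threshold. One defensive counter is to aggregate bytes per destination over time rather than inspecting transfers individually. A minimal sketch, where the record format and the 1 GB cumulative limit are assumptions for illustration:

```python
from collections import defaultdict

def detect_slow_exfil(transfers, cumulative_limit=1_000_000_000):
    """Aggregate bytes per external destination and flag any destination whose
    cumulative total exceeds the limit, even though every individual transfer
    was small enough to pass a per-transfer check."""
    totals = defaultdict(int)
    flagged = set()
    for dest_ip, nbytes in transfers:
        totals[dest_ip] += nbytes
        if totals[dest_ip] > cumulative_limit:
            flagged.add(dest_ip)
    return flagged

# Hypothetical traffic: 15,000 connections of ~900 KB each to one external IP
transfers = [("198.51.100.23", 900_000)] * 15_000
print(detect_slow_exfil(transfers))  # {'198.51.100.23'} — ~13.5 GB in total
```

A per-transfer check sees only 900 KB at a time and stays silent; the cumulative view catches the slow drip.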

It is important to remember that such white papers are not created to scare the public into believing AI is evil incarnate. Rather, as Darktrace states, “Defensive cyber AI is the only chance to prepare for the next paradigm shift in the threat landscape when AI-driven malware becomes a reality.” Better to be prepared than not.


About the author:

Karolis Liucveikis

Karolis Liucveikis - experienced software engineer, passionate about behavioral analysis of malicious apps.

Author and general operator of PCrisk's "Removal Guides" section. Co-researcher working alongside Tomas to discover the latest threats and global trends in the cyber security world. Karolis has over five years of experience working in this field. He attended KTU University and graduated with a degree in Software Development in 2017. Extremely passionate about the technical aspects and behavior of various malicious applications. Contact Karolis Liucveikis.

The PCrisk security portal is brought to you by the company RCS LT. Its joined forces of security researchers help educate computer users about the latest online security threats. More information about the company RCS LT.
