Cybersecurity Industry Fights Weaponized AI Tools - Industry Today - Leader in Manufacturing & Industry News
 

December 10, 2020

Cybersecurity Industry Fights Weaponized AI Tools

Computer industry researchers are developing new tools to blunt the artificial intelligence tools that cracking groups use to carry out attacks.


Photo by Adi Goldstein on Unsplash

Media pundits have cast the current security situation as a cyberwar between cracking organizations and the cybersecurity researchers working to protect innocent consumers from falling victim to identity theft scams. Some have reported that crackers are now gaining the upper hand thanks to machine learning-based tools that automatically reconfigure themselves to find potential exploits even inside shrinking attack surfaces. With the rapid growth of remote work teams increasing the number of devices that could become compromised, defending against these tools is a massive undertaking.

Industry commentators may be less familiar with the fact that security analysts are now deploying machine learning-enhanced technologies of their own to fight back against weaponized artificial intelligence tools.

Dealing with Emerging Cyberwar Technology Trends

Computer security experts are going far beyond traditional malware scanning tools in their quest to stop automated threats from seizing control of systems. Since digital worms don’t require human intervention to embed themselves in otherwise legitimate software, conventional antivirus products aren’t capable of detecting them. According to a recent Avira test study, the only reason one particular product was up to this difficult chore was its heavy reliance on cloud computing technology to stay abreast of new threats as they emerged.

Traditional products can’t keep up this way because they rely instead on heuristic analysis algorithms. These tools execute code found in test samples inside an emulator, so the code can’t cause any damage. While this approach can catch suspicious code that would have caused problems on an actual host machine, it has some serious shortcomings.

Heuristic subroutines are normally capable of detecting unknown threats as well as variants of ones commonly found in the wild. However, they only ever work on the basis of their own experience: they’re likely to miss anything whose method of operation isn’t known to them, while also generating a potentially large number of false positives. When security software pesters users too often, they’re likely to ignore its warnings, and that makes it all the easier for crackers to do their dirty work.
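The blind spot described above can be sketched in a few lines. The following is a toy illustration of a heuristic scanner that scores a sample by the suspicious behaviors it exhibits; every rule name, weight, and threshold here is invented for the example and doesn’t come from any real product.

```python
# Toy heuristic scanner: score a sample by summing weights of observed
# behaviors that appear in a known-suspicious rule set.
SUSPICIOUS_BEHAVIORS = {
    "writes_to_system_dir": 3,    # weight: how strongly it suggests malware
    "disables_av_service": 5,
    "self_modifying_code": 4,
    "opens_many_sockets": 2,
    "reads_browser_passwords": 5,
}

ALERT_THRESHOLD = 6  # arbitrary cutoff chosen for the example


def heuristic_score(observed_behaviors):
    """Sum the weights of every known-suspicious behavior observed."""
    return sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)


def is_suspicious(observed_behaviors):
    return heuristic_score(observed_behaviors) >= ALERT_THRESHOLD


# A sample using techniques the rules know about is caught...
print(is_suspicious(["disables_av_service", "self_modifying_code"]))  # True

# ...but a novel technique absent from the rule set slips through:
# the scanner only ever works on the basis of its own experience.
print(is_suspicious(["exfiltrates_via_dns_tunnel"]))  # False
```

Lowering the threshold would catch more, but at the cost of exactly the flood of false positives that trains users to ignore warnings.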

Some criminal organizations have developed AI-based systems that can fool heuristics scanners. Researchers are on the lookout for new ways to stop machine learning threats that actively target these environments.

Fighting Back with White Hat AI

Computer scientists have fought back against these artificially intelligent attack agents with a few new AI programs of their own. Malware scanners equipped with machine learning technology can identify programs that show unusual behavior even if the exact behavior they’re exhibiting isn’t part of a known threat. They can also decide whether a positive test result is likely a false alarm, which means they won’t pester users to the point where warnings get ignored. This software is simple enough to fit into inexpensive routers and network adapters, which ensures threats can be stopped at the source.
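Behavior-based detection of this kind can be sketched very simply. The example below assumes each program is summarized by a single feature (say, outbound connections per minute); real scanners use many features and far richer learned models, but even a plain z-score shows how “unusual” behavior can be flagged without matching any known signature.

```python
# Minimal behavior-based anomaly detection: learn a baseline from
# benign telemetry, then flag values far outside it.
import statistics


def fit_baseline(samples):
    """Learn what 'normal' looks like from benign telemetry."""
    return statistics.mean(samples), statistics.stdev(samples)


def is_anomalous(value, mean, stdev, z_cutoff=3.0):
    """Flag behavior more than z_cutoff standard deviations from normal."""
    return abs(value - mean) / stdev > z_cutoff


# Baseline: benign programs make a handful of connections per minute.
benign = [2, 3, 4, 3, 2, 5, 3, 4, 2, 3]
mean, stdev = fit_baseline(benign)

print(is_anomalous(4, mean, stdev))    # typical program: not flagged
print(is_anomalous(250, mean, stdev))  # beaconing burst: flagged
```

Because the cutoff is statistical rather than rule-based, the same logic also helps triage alerts: a barely-over-baseline reading can be deprioritized instead of nagging the user.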

Most routers currently run firewalls and some sort of onboard Unix implementation to protect against threats. By arming these devices with a machine learning tool that learns as it protects against threats, system administrators are helping to reduce the risk of denial-of-service attacks that could bring down a system. Some experts even advise installing similar tools on IoT devices to prevent them from being hijacked and used for unintended purposes.

Integrating an AI routine into existing software is becoming increasingly simple, which is a good thing given that some threats are becoming almost completely automated. With so many machines currently connected to botnets, it’s not hard to imagine that the collective processing power available to cracking organizations is enormous at this point.

This power is being put to nefarious use, which will test the mettle of white hat machine learning systems.

Some Sobering Cybersecurity Statistics

According to one study conducted by Google, AI-based optical character recognition technology can solve around 99.8 percent of all existing CAPTCHA challenges. Another study suggested that even a traditional brute-force password-checking attack achieved a 26 percent success rate. If over one quarter of all the world’s passwords could be compromised in this way, then an untold amount of information could be stolen in just a few weeks.

Other reports have found that IoT sensors are often connected to machines running a legacy operating system. On average, these systems are around 10-15 years old. Since they’re deployed in delicate environments, they can’t be taken down to install security patches or updates very easily.

While these are an obvious attack vector into some particularly sensitive areas, the outlook might not be as bad as it seems. Real-time mitigation technology is quickly being integrated into many security stacks across the world, which is helping to reduce the risk that individual devices may become targets. Some computer scientists are predicting that AI-based solutions might soon report the most likely times for these sorts of threats to arise.

Some have pointed to digital identity systems and other new technologies as offering solutions, though these also introduce additional potential attack vectors no matter how well they’re secured. Regardless of which direction the industry moves in, however, it’s likely that machine learning routines will soon be able to predict the best course of action and help keep manufacturing and laboratory data more secure.


Philip Piletic

About the Author
Philip’s primary focus is a fusion of technology, small business, and marketing. He is a writer, marketing consultant and guest author at several authority websites. In love with startups, the latest tech trends and helping others get their ideas off the ground.

You can reach him over LinkedIn.
