The Risk Artificial Intelligence Poses To Future Cybersecurity

Guest post submitted by Harold Kilpatrick

People are working hard every day to transform the way society lives through new technology, and one of the latest frontiers in computer science is artificial intelligence and machine learning. Researchers and engineers are making huge strides in the field, and their work affects a wide range of industries, including finance, manufacturing, and healthcare, to name a few.

For clarity’s sake, artificial intelligence (AI) is defined as a computer system that can perform a task independently of human input. Machine learning goes hand in hand with AI: rather than following explicitly programmed rules, a machine learning system improves at a task by learning patterns from data.

We’re already using AI every day, though you might not be aware of it.

  • Google autocompletes search queries as you type in the search bar.
  • Facebook shows you ads based on your likes and social media activity.
  • Gmail filters your spam by sifting through emails for terms that tend to appear in junk messages (see the sketch after this list).
  • Mitek manages risk in digital identity verification and mobile deposit of checks.
  • Amazon recommends products that are similar to the one in your cart.
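
To make the spam-filtering item above more concrete, here is a minimal, hypothetical Python sketch of what it means for a program to learn on its own: instead of being handed a fixed rule, it works out which words are associated with spam from a handful of labeled messages and uses that to score new ones. The example data and scoring method are invented for illustration and are far simpler than anything Gmail or a production filter actually uses.

    from collections import Counter

    # Tiny, made-up labeled examples (illustrative only).
    spam_messages = ["win a free prize now", "claim your free money today"]
    ham_messages = ["meeting moved to friday", "can you review my draft"]

    def word_counts(messages):
        # Count how often each word appears across a set of messages.
        counts = Counter()
        for message in messages:
            counts.update(message.lower().split())
        return counts

    spam_counts = word_counts(spam_messages)
    ham_counts = word_counts(ham_messages)

    def spam_score(message):
        # Words seen more often in the spam examples raise the score;
        # words seen more often in the legitimate examples lower it.
        score = 0
        for word in message.lower().split():
            score += spam_counts[word] - ham_counts[word]
        return score

    print(spam_score("claim your free prize"))   # positive: looks like spam
    print(spam_score("see you at the meeting"))  # negative: looks legitimate

The point is not the scoring formula but the fact that the rule is derived from data rather than written by hand.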

In most instances, when people think of AI and machine learning, they think of robotics. Robotics is indeed one avenue of application, but AI is actually a much broader field with widespread implications across almost every industry.

However, that breadth also creates the potential for exploitation well beyond what we’ve seen so far, especially as the technology continues to improve. It’s almost certain that AI and machine learning will play a growing role in our everyday lives, but, much like other inventions in the tech industry, this also opens up a whole new avenue of misuse by motivated cyber attackers.

The Future is Artificial

Right now, computer engineers working on machine learning focus more on the positive implications of AI than on the ethical questions the technology raises, or on the malicious intentions of those who could undermine their goals. While there’s nothing wrong with a positive outlook, attention to the potential downsides of AI is essential to keep the risks in view. Of course, fear-mongering isn’t the solution either: the future of AI and machine learning needs to be discussed without “poisoning the well” for a general public that hasn’t yet formed strong opinions.

A recent report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation outlines the threats that AI and machine learning could pose to cybersecurity in the near future. The report identifies several ways in which artificial intelligence can increase the ability of attackers to target a wide range of devices and systems with even more precision than before.

Areas that could be affected, according to the report, include the expansion of existing threats, changes to the attributes of current attack models that make them more efficient and effective, and the introduction of entirely new types of threats.

All of that could spell disastrous results for cybersecurity, as bad actors will likely find new ways to use AI to subvert existing security measures and to scale up labor-intensive attacks like spear phishing. Because AI and machine learning can take over a large part of that workload automatically, attackers stand to benefit from campaigns that are automated to a large extent.

According to The Malicious Use of Artificial Intelligence report:

To date, the publicly-disclosed use of AI for offensive purposes has been limited to experiments by “white hat” researchers, who aim to increase security through finding vulnerabilities and suggesting solutions. However, the pace of progress in AI suggests the likelihood of cyber-attacks leveraging machine learning capabilities in the wild soon, if they have not done so already.

This indicates potential threats to Internet of Things (IoT) household appliances, such as smart fridges, baby monitors, and home assistants.

Because security in IoT devices is so weak (and often non-existent), they are perfect for hosting and spreading malware. For the same reason, they are often used in DDoS (distributed denial-of-service) attacks to take down targeted websites or online servers.

Defensive AI Measures

Luckily, it’s not all bleak: if proper precautions are taken, these threats can be limited. The Malicious Use of Artificial Intelligence report also outlines how the potential malicious use of AI and machine learning can be mitigated. The idea is that researchers and cybersecurity companies should work together to assess the risks and identify practices that keep systems safe even in the face of AI-led attacks. If AI is to become a central feature of everyday life, cybersecurity needs to become a central focus as well.

Right now, however, one possible early solution comes in the form of endpoint protection. While a VPN can help lower the risks for general internet users, endpoint protection helps companies prevent zero-day exploits and data leakage. An endpoint protection program limits an attacker’s paths of access by giving an administrator control over which external websites and which internal data a server or device is allowed to reach. This is a good option for organizations that currently rely on antivirus software alone to handle all of their cybersecurity needs.
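
As a rough illustration of that access-control idea, the following minimal Python sketch shows an administrator-defined allowlist deciding whether a device may reach an external site or an internal data path. All of the names and paths are hypothetical, and a real endpoint protection product enforces this kind of policy at the network and operating-system level rather than in application code.

    # Hypothetical, administrator-maintained policy (illustrative only).
    ALLOWED_SITES = {"intranet.example.com", "docs.example.com"}
    ALLOWED_DATA_PATHS = ("/shared/reports", "/shared/policies")

    def may_visit(hostname: str) -> bool:
        # Outbound requests are allowed only to administrator-approved hosts.
        return hostname in ALLOWED_SITES

    def may_read(path: str) -> bool:
        # Internal data access is allowed only under approved directories.
        return any(path.startswith(prefix) for prefix in ALLOWED_DATA_PATHS)

    print(may_visit("docs.example.com"))       # True: request proceeds
    print(may_visit("malicious-site.example")) # False: request is blocked
    print(may_read("/shared/reports/q3.pdf"))  # True
    print(may_read("/etc/passwd"))             # False

The design choice that matters here is the default: anything not explicitly approved by the administrator is denied, which is what closes off the unexpected paths of access an attacker would otherwise probe.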

Even though there is no perfect solution, vigilance is vital when handling any new technology. There will always be smart people looking to exploit a new system to their advantage, and artificial intelligence is no exception. By identifying the possible risks now and staying up to date with the latest cybersecurity practices, we can be better prepared for whatever the future holds in store.

About Harold Kilpatrick:
Harold is a cybersecurity consultant and a freelance blogger. His passion for virtual security dates back to his early teens, when he helped his local public library set up its anti-virus software. Currently, Harold is working on a cybersecurity campaign to raise awareness of the virtual threats that businesses face on a daily basis.