Malicious use of Artificial Intelligence in cybersecurity

A new whitepaper published on Arxiv.org warns about the malicious use of artificial intelligence and focuses on forecasting, preventing, and mitigating the resulting threats.

The whitepaper, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, was written by over a dozen experts from the Future of Humanity Institute, the University of Oxford, the Centre for the Study of Existential Risk, the University of Cambridge, the Center for a New American Security, the Electronic Frontier Foundation, and OpenAI.

With artificial intelligence and machine learning among the most important and fastest-growing technologies, with numerous beneficial applications, it comes as no surprise that the same technologies can also be put to malicious use.

The report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies and focuses on ways to better forecast, prevent, and mitigate those threats. It opens with four high-level recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

With AI capabilities becoming more powerful and widespread, the report suggests this will expand existing threats, introduce new ones, and change the typical character of threats. It structures its analysis around three security domains that AI can affect: digital security, physical security, and political security.

The report also suggests exploring open questions and potential interventions in four priority research areas: learning from and with the cybersecurity community, exploring different openness models, promoting a culture of responsibility, and developing technological and policy solutions.

In its conclusion, the report states that while many uncertainties remain, it is already clear that AI will be a key part of the future security landscape: alongside its many benefits, there are plenty of ways to use these technologies maliciously as well.

“Artificial intelligence, digital security, physical security, and political security are deeply connected and will likely become more so. In the cyber domain, even at current capability levels, AI can be used to augment attacks on and defenses of cyberinfrastructure, and its introduction into society changes the attack surface that hackers can target, as demonstrated by the examples of automated spear phishing and malware detection tools”, the report notes.
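
The mention of automated spear phishing and malware detection tools hints at how the same machine-learning building blocks serve both attackers and defenders. As a purely illustrative sketch (not taken from the report), the following assumes scikit-learn is installed and uses a tiny hypothetical dataset to show the defensive side: a minimal text classifier that flags phishing-style messages.

```python
# Illustrative sketch only: a toy ML-based phishing-email classifier of the
# kind the report alludes to when mentioning AI-driven detection tools.
# Assumes scikit-learn; the tiny inline dataset is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's cloud usage is attached",
    "You won a prize! Click this link to claim your reward",
    "Meeting moved to 3pm, agenda unchanged",
]
labels = [1, 0, 1, 0]

# TF-IDF features feed a simple linear classifier; real detection systems
# use far richer features (headers, URLs, sender reputation) and more data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please confirm your password at this secure link immediately"]
print(model.predict(suspect))         # predicted class (1 = phishing)
print(model.predict_proba(suspect))   # class probabilities
```

The same pattern, learned features plus a classifier, is what an attacker could probe to craft messages that evade such filters, which is exactly the dual-use dynamic the report highlights.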

You can check out the full whitepaper over at Arxiv.org.