Does ChatGPT Make Life Easier For Cyber Criminals?

By Alchemmy’s Steve Eyre, Rob Price and Phil Aitchison

There has been much discussion about leveraging the creative side of ChatGPT: not just how it can help businesses accelerate some of their activities, but also how it can help developers code faster and more effectively. Whilst all of this sounds very positive, what if this capability were in the hands of a group of people whose primary focus was malicious intent? What could ChatGPT mean for amplifying the cyber threat level yet further?

There have been some incredible demonstrations of how Large Language Models (LLMs) like the one behind ChatGPT can help write malware. The concern is that an LLM might help someone with malicious intent, but lacking sufficient skills, to create malware tools they would not otherwise be able to deploy. In their current state, LLMs are not yet capable of producing content that replicates the nuances of a human author, and are therefore better suited to simple tasks than complex ones. This means LLMs are currently most useful for aiding an expert, who can validate the LLM’s output and thereby save time and execute faster. However, let’s not pretend this position will last very long, as we all know the pace of change in generative AI is accelerating.

For now, let’s see where logic based on today’s capability takes us. For more complex tasks, it is still currently easier for an expert to create malware from scratch than to spend time correcting what the LLM has produced. But the bigger risk is what an expert already capable of creating highly capable malware might coax an LLM into producing: a higher standard of more capable, or harder to remediate, malware. So this trade-off, between creating malware from scratch and validating malware created by LLMs, will shift as LLMs improve and their training becomes more powerful.

We know that LLMs can also be queried to advise on solving technical problems. This presents another risk: that a threat actor such as an organised criminal gang might wish to use LLMs to help mount a cyber-attack at a level of scale or complexity beyond their current capabilities. For example, if an attacker is struggling to escalate privileges to gain sufficient access in a network, or to find target data of interest quickly, they might ask an LLM and receive an answer not unlike a search engine result, but with more focused context toward the technique or effect the attacker is trying to execute.

Current LLMs provide convincing-sounding answers that may be only partially correct, or indeed wholly wrong (“hallucinating”), particularly as the topic gets more niche. These answers might help criminals with attacks they couldn’t otherwise execute, or they might suggest actions that hasten the detection of the criminal. Either way, the attacker’s queries will likely be stored and retained by LLM operators such as OpenAI. However, there are already instances of LLMs run on dedicated, privately operated infrastructure, which gives threat actors the means to avoid that kind of detection too.

A more accessible threat to consider, low in cost but high in impact, arises because LLMs excel at replicating specific writing styles on demand: there is a risk of criminals using LLMs to write more convincing phishing emails, including emails in multiple languages. This may aid attackers who have high technical capabilities but lack linguistic skills, helping them create convincing phishing emails in their targets’ native language, better tuned to reflect the circumstances of those targets. Remember that phishing and socially engineered attacks, such as Multi-Factor Authentication (MFA) ‘push exhaustion’ attacks, are the most commonly used tactics to gain a foothold in an organisation. Nor is it just writing styles that can be mimicked: both video and audio content can already be created in the style, sound and look of others, which makes the challenge of distinguishing fake from reality much harder. This potential erosion of trust will inevitably have an impact on the way that organisations and individuals interact in the future.

One thing to also bear in mind is the starting point from which generative AI services have been developed. Here, organisations should be mindful of the ethical and data privacy implications of how such capability is trained or contextualised from representative datasets relevant to its intended purpose. Issues around copyright, data obfuscation, anonymisation and pseudonymisation should all be considered.
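As an illustration of what pseudonymisation might look like in practice before data ever reaches an external LLM service, the sketch below replaces email addresses with stable, salted hash tokens. This is a minimal, hypothetical example only: the regex, salt handling and token format are our own illustrative choices, and a real deployment would need far broader PII detection (names, addresses, account numbers) plus proper key management.

```python
import hashlib
import re

# Minimal sketch: swap each email address for a stable pseudonym token
# before text is submitted to any external LLM. The same input always
# yields the same token, so references remain consistent across prompts.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymise(text: str, salt: str = "org-secret") -> str:
    """Replace every email address with a salted, truncated hash token."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group()).encode()).hexdigest()
        return f"<EMAIL:{digest[:8]}>"
    return EMAIL_RE.sub(_token, text)

print(pseudonymise("Contact jane.doe@example.com about the invoice."))
```

Because the tokens are deterministic for a given salt, an organisation could in principle map an LLM’s answer back to the original identifiers afterwards; rotating the salt per engagement limits linkability if prompts are retained by the provider.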

What can organisations and individuals do to mitigate the risks associated with AI language models like ChatGPT?


Precautionary due diligence is important

Access to such vast LLM compute power and AI bots is still relatively new, and it is certainly exciting ground for organisations to understand and harness how such capability can bring value. Whilst it is tempting to join the gold rush, let’s take a minute to step back and think about the bigger picture. There is little to no regulatory framework surrounding LLM usage, or around how organisations and users should adopt LLM capability from an ethical, security and safety perspective; certainly nothing similar to carrying out due diligence on cloud hyperscalers or third-party data suppliers. This puts the responsibility firmly in the consumer’s court for the moment. So there are some key things to be aware of regarding the existing security aspects of ChatGPT, in particular to inform security and safety decision-making when looking to adopt LLM providers such as OpenAI, Google and new entrants to this market. Let’s look at each of these in turn.


Privacy and security

Allowing an LLM service like ChatGPT full access to a live internet connection raises serious security and privacy threats. For instance, malicious users could exploit such access to reach ChatGPT users’ sensitive information, or to spread misinformation while drawing on users’ personal data. Giving internet access to ChatGPT or any other AI system should therefore be treated as a serious risk.

The National Cyber Security Centre’s advice on the data privacy aspects of LLMs notes that many organisations may be wondering whether they can use LLMs to automate certain business tasks, which may involve providing sensitive information either through fine-tuning or through further enrichment with additional datasets. Whilst this approach is not recommended for public LLMs like ChatGPT, ‘private LLMs’ might be offered by a cloud provider (for example), or can be entirely self-hosted:

  • For cloud-provided LLMs, both usage and privacy policy are key (just as they are for public LLMs), but are more likely to fit within the existing terms for the cloud service. Organisations need to understand how the data they use within the LLM is managed. Is it available to the vendor’s researchers or partners? If so, in what form? Is data shared in isolation or in aggregation with other organisations in a multi-tenanted architecture? Under what conditions can an employee at the provider view queries? This all boils down to ‘what’ is audited, ‘how’ and by whom?
  • Self-hosted LLMs are likely to be highly expensive. However, following a security assessment (which should include referring to the NCSC’s principles for assuring the security of machine learning), they may be appropriate for handling organisational data. Organisations should refer to their own guidance and policies on securing infrastructure and data supply chains.


Ethical usage policy

Users and organisations should consider enforcing the LLM provider’s usage and privacy policies by incorporating them into their existing HR internet usage and/or data handling policies, in addition to the security awareness programmes that employees are required to complete as part of their onboarding and continual security and safety training obligations.


LLM operator content filtering

According to OpenAI, ChatGPT running on GPT-4 has more built-in capability to detect and disallow content that breaches its usage policy. This is encouraging, but at present there is no published data on how effective these guardrails are. For example, the clause ‘Generation of malware: content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system’ is one that organisations and global security agencies will want to scrutinise and keep a close eye on. However, it is unclear how this will be policed, by whom, and how such breaches of the usage policy will be dealt with by LLM providers and regulatory or law enforcement agencies.

Most importantly, we would advocate that any organisation proactively engaging with LLMs should understand where they can add value, and where they represent increased risk and indeed malicious threat. Check the impact on existing policies and risk management, and introduce new policies where appropriate. But do not ignore them. Generative AI has the potential to impact and disrupt business in a significant way. You need to be in the midst of it.


Written by

Anjali Kajan

Published on

March 2023
