ChatGPT: Weighing the Benefits and Risks in Cyber Security

ChatGPT. Seemingly overnight, generative AI became the topic on everyone’s mind, with ChatGPT fast becoming the go-to tool for students, workplaces and personal use. It’s the latest AI innovation and everyone wants to be part of it. Since then, we’ve seen Google, Microsoft and boutique players like Neeva bring their wares to market, each with a different spin on the same revolutionary idea. But what exactly is generative AI and how does ChatGPT work? Where did it come from? And what does it mean for the cyber security landscape? While many have dived into this data lake headfirst, it is important to pause and understand what it is and what its ramifications are before you bet your business’s future on it.

Artificial intelligence (AI) and machine learning (ML) have become essential tools in the field of cybersecurity and technology, and the emergence of AI-powered language models like ChatGPT has further changed the game. ChatGPT, developed by OpenAI, is a large language model that has been trained on a massive dataset, allowing it to generate human-like responses to a wide range of questions.

Can this emerging technology benefit the Cyber Security industry? Yes, absolutely. In a growing industry still wrangling with a skills shortage, ChatGPT can help automate processes, freeing current employees to deal with more complex activities that require human input. Beyond that, ChatGPT can assist in accelerating code development, building test harnesses, and even developing testing and assurance methodologies. All you need to do is ask a succinct, clearly worded question and treat the answer with caution, as it may not be complete or entirely accurate. Used wisely, though, the answer can be the basis for something a consultant or coder can refine into an end product, as sketched below.
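As a concrete illustration of that workflow, the sketch below poses a tightly scoped question to a chat model through the openai Python client and prints the draft answer for a human to review. The client style, model name and prompt are assumptions made for this example; they are not specific tools discussed in this article.

```python
# Minimal sketch of the workflow above: ask a succinct, clearly worded
# question, then treat the answer as a draft for a consultant or coder to
# review rather than a finished product. Assumes the `openai` Python package
# (v1-style client) and an OPENAI_API_KEY environment variable; the model
# name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "Write a pytest test harness skeleton for a function that validates "
    "password complexity: minimum 12 characters, at least one upper-case "
    "letter, one digit and one symbol."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

draft = response.choices[0].message.content
print(draft)  # a starting point only: check it for completeness and accuracy
```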

Here are just a few of the benefits we have already seen ChatGPT offer in Cyber Security:

  1. Increased Efficiency: AI and ML algorithms can analyse enormous amounts of data much faster than humans. This increased efficiency allows for faster detection and response times in the event of a security breach.
  2. Improved Threat Detection: AI algorithms can identify patterns and anomalies that may go unnoticed by human analysts. This allows for more accurate and thorough threat detection, reducing the risk of false positives or false negatives (see the sketch after this list).
  3. Automated Responses: AI-powered tools can automate repetitive tasks, freeing up security personnel to focus on more strategic tasks. This can improve overall response times and efficiency.
  4. Continuous Monitoring: AI algorithms can continuously monitor networks and systems, providing real-time threat detection and allowing for quick response times in the event of a security breach.
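To make points 2 and 4 more concrete, here is a minimal sketch of unsupervised anomaly detection over simple network telemetry features. It uses scikit-learn’s IsolationForest purely as an assumed example of this class of algorithm; the features, numbers and contamination setting are invented for illustration and do not represent a production detection pipeline.

```python
# Illustrative sketch of ML-based threat detection: an unsupervised model
# flags unusual records in network telemetry so analysts can triage them.
# Assumes numpy and scikit-learn; all values are toy data for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: [bytes_sent, bytes_received, failed_logins] per host/hour
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5_000, 20_000, 0.2],
                    scale=[1_000, 4_000, 0.5],
                    size=(500, 3))
suspicious = np.array([[250_000, 1_000, 30.0]])  # e.g. exfiltration plus brute force
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

flags = model.predict(events)  # -1 marks an anomaly, 1 marks normal
for idx in np.where(flags == -1)[0]:
    print(f"Review event {idx}: features={events[idx]}")
```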

Clearly, it is early days for this sort of technology and none of it should be considered mainstream just yet. ChatGPT is more like a concept car than an off-the-shelf Mitsubishi, but it does showcase the art of the possible. The problem is that these tools, as they stand, are not good at certain things and are only as good as the data used to train them.

ChatGPT will slowly go out of date unless its data set is continuously updated, so over time some aspects of it will become less useful. Furthermore, its answers to scientific or mathematical questions are empirical rather than grounded in logic and reasoning, so on their own they are not suited to science-based tasks. Yet, as generative AI tools, they are amazing. What this demonstrates is that as the market and technology mature, and integrations with other AI tools and systems emerge, more complex questions can be posed: ones with many deep analytical threads to follow, where the system then pulls the answers back together and presents the response in the most meaningful way. These are the tools of the near future rather than of today, but that future is not far off. Then we will need to ask what the real ramifications of tools doing the job better than us humans will be, but that is a topic for another day.

Like any AI model, ChatGPT is not immune to risks, threats and challenges, and it has the potential to be misused in harmful ways. Some of these dangers include:

  1. Misinformation / Bias: If an AI language model like ChatGPT has been trained on biased data, it may amplify those biases in its outputs, which could lead to discrimination or other harmful consequences.
  2. Manipulation: AI language models like ChatGPT can be used to generate misleading or false information, which could be used for malicious purposes, such as phishing scams or misinformation campaigns.
  3. Privacy concerns: If an AI language model like ChatGPT is collecting and storing sensitive information, there is a risk that this information could be misused or stolen if the model is not properly secured.

Generative AI models are trained to draft human-like responses based on input and do not have the ability to execute code or manipulate systems; all they can do is write the basic code. However, that does not stop them from being misused. For example, a malicious actor could use an AI language model to generate convincing phishing emails or scam messages designed to trick people into downloading malware or paying a ransom. There are many ways ChatGPT could be used like this: by providing basic biographical information about someone, taken from their LinkedIn profile, then asking the system to write a convincing invitation to a trade show, for example, an attacker may produce a phishing attempt far more convincing than anything they could previously have created.

To minimise these risks, it is important to implement robust security measures to protect sensitive information and to encourage users to engage with chat models responsibly and with caution. This is not always easy: ChatGPT does sometimes notice dangerous requests and refuse them, but more often than not there are ways around this, simply by changing the phrasing.
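One example of the kind of safeguard mentioned above is screening prompts before they reach the chat model at all. The sketch below is a minimal illustration, assuming the openai Python package (v1-style client) and its moderation endpoint; it is one layer of defence, not a complete control, and determined users may still find phrasings that slip through.

```python
# Minimal sketch of one possible safeguard: screening user prompts with a
# moderation endpoint before forwarding them to a chat model. Assumes the
# `openai` Python package (v1-style client) and an OPENAI_API_KEY environment
# variable; this illustrates the idea only and is not a complete control.
from openai import OpenAI

client = OpenAI()

def prompt_looks_safe(prompt: str) -> bool:
    """Return True if the moderation endpoint does not flag the prompt."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

user_prompt = "Summarise our incident response plan for new starters."
if prompt_looks_safe(user_prompt):
    print("Prompt passed screening; forward it to the chat model.")
else:
    print("Prompt flagged; route it to a human reviewer instead.")
```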

In these early days and beyond, developers and users should monitor and evaluate the model’s outputs to identify and address potential biases and harms. It is this broader question, whether we should implement these systems at all and what checks should be in place, that needs to be answered before we progress too far down this road. There comes a point of no return, and it is not that far away.

Tony Campbell

Director of Research & Innovation, Sekuro

Tony has been in information and cyber security for a very long time, delivering projects and services across many different industries in a variety of roles. Over the years, Tony has always tried to help bridge the growing skills gap by mentoring, teaching and working with other disciplines to help them understand the complexities of what we do.
