Will Artificial General Intelligence Be the Ultimate Weapon Against Cybercrime?

Artificial Intelligence (AI) advancements over the past several months have significantly impacted cyber security, as they have with other industries. We’ve seen examples of ChatGPT writing exploits, offering security advice, and assisting with tasks like reverse engineering. These are incredibly powerful tools, yet they are still not truly intelligent. 

However, the next step-change in AI will come when we reach the event horizon of Artificial General Intelligence (AGI): AI systems that can understand, learn, and apply knowledge across many domains, drawing on many forms of input and refining their answers over time as they learn more about a subject. AGI will certainly become a powerful tool in the fight against cybercrime, but the same technology may also become a cyber weapon. This blog explores the potential of AGI in cyber security and discusses how far we are from this reality.

What is AGI?

To be fully realised as an AGI, the system must have the cognitive flexibility and adaptability to perform any intellectual task that humans can, without being limited to a specific domain or narrow set of tasks. Some of the attributes that define an AGI include:

  1. Generalisation: The AI should be able to generalise knowledge and skills across subject domains, adapting to new situations and solving problems without additional training. 
  2. Learning: AGIs learn from experience, acquiring new knowledge and skills through interacting with the environment around them, absorbing feedback, and undertaking self-improvement, as we all do when we learn from our mistakes.
  3. Reasoning: The system can reason, plan, and problem-solve. It must be able to understand and manipulate concepts and relationships, including those not explicitly added during training.
  4. Creativity and innovation: This is the big one. The AGI must be able to generate new ideas, concepts, and solutions by combining existing knowledge in novel ways. This will likely be the hardest aspect of AGIs to fully implement. 
  5. Common sense: The AGI understands the world and can apply common sense to make sense of ambiguous or incomplete information, just as we do.

  6. Intuition and emotion: This may be one of the more controversial aspects of AGI, as it could lead to decisions being made based on human-like emotions, but with the power of the AGI behind them. It’s this aspect that most science fiction focuses on, where the AGI turns against its “master” because of its feelings.

Cyber AGIs

The capabilities AGIs will bring to cyber security go well beyond today’s chatbots and simple recommender systems. For example, an AGI working in the SOC team may be able to anticipate a threat actor’s next move. Using feeds from all the organisation’s systems in real time, it could analyse the threat, predict the evolution of the attack, and contain it. This kind of automated incident response means the AGI would continuously analyse the nature of the breach, identify compromised systems, and take the necessary actions to mitigate the damage. Its ability to learn and adapt will allow it to respond to attacks more efficiently than human analysts can today.
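
To make the idea concrete, here is a minimal sketch in Python of the kind of detect-predict-contain loop described above. The event fields, the toy “next stage” playbook, and the containment threshold are all assumptions made for illustration, not a real SOC integration.

```python
# A minimal, hypothetical incident-response loop: ingest events, predict the
# likely next attack stage, and contain high-severity hosts.
from dataclasses import dataclass

@dataclass
class Event:
    host: str          # system that produced the telemetry
    indicator: str     # e.g. "credential_dump", "lateral_movement"
    severity: int      # 1 (low) to 10 (critical)

# Toy playbook: which attack stage typically follows the observed indicator.
NEXT_STAGE = {
    "phishing_login": "credential_dump",
    "credential_dump": "lateral_movement",
    "lateral_movement": "data_exfiltration",
}

def respond(events: list, containment_threshold: int = 7) -> list:
    """Analyse events, predict the likely next stage, and emit containment actions."""
    actions = []
    for event in events:
        predicted = NEXT_STAGE.get(event.indicator, "unknown")
        if event.severity >= containment_threshold:
            # Contain the compromised host before the predicted stage occurs.
            actions.append(f"isolate {event.host} (observed {event.indicator}, "
                           f"expecting {predicted})")
        else:
            actions.append(f"monitor {event.host} for {predicted}")
    return actions

if __name__ == "__main__":
    feed = [
        Event("laptop-042", "phishing_login", 5),
        Event("db-server-01", "credential_dump", 9),
    ]
    for action in respond(feed):
        print(action)
```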

In risk analysis and management, AGIs may have a profound impact. They will assist organisations in evaluating their security posture by conducting more detailed and thorough security assessments and identifying potential vulnerabilities. An AGI could even organise the patching of systems and work with the service management team to schedule those changes, while temporarily providing additional SOC monitoring until each patch is applied.
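
As a rough illustration of that patch-prioritisation workflow, the sketch below ranks a couple of hypothetical vulnerabilities by a simple risk heuristic and books each one a change window with interim monitoring. The scoring formula, field names, and scheduling logic are assumptions made for the example, not any particular vulnerability-management product.

```python
# Illustrative patch prioritisation and scheduling with interim monitoring.
from datetime import date, timedelta

vulnerabilities = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "asset": "mail-gateway", "exposed": True},
    {"cve": "CVE-2024-0002", "cvss": 6.5, "asset": "intranet-wiki", "exposed": False},
]

def risk_score(vuln: dict) -> float:
    """Simple heuristic: CVSS score, weighted up if the asset is internet-exposed."""
    return vuln["cvss"] * (1.5 if vuln["exposed"] else 1.0)

# Rank vulnerabilities, book a change window, and request interim SOC monitoring.
for rank, vuln in enumerate(sorted(vulnerabilities, key=risk_score, reverse=True)):
    window = date.today() + timedelta(days=rank + 1)
    print(f"{vuln['cve']} on {vuln['asset']}: patch window {window}, "
          f"extra SOC monitoring until patched")
```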

Challenges to Achieving AGI

While the potential of AGI is immense, several challenges must be addressed before it can become a reality. The first is technical: we are simply not there yet. We’ve seen significant steps forward with GPT-4, but AGI is still in its early stages, and further advances in AI research and technology are needed before these systems truly match human-level intelligence across all the categories listed above. The reality is that AGI will need to integrate various kinds of AI subsystems; without that integration, the system will be unable to learn, use emotions, or apply creativity in problem-solving. Until we get there, it’s just automation, albeit highly advanced automation. 

We need to look at ethical considerations: as AGIs become more advanced, questions surrounding their use and their implications for privacy, surveillance, and decision-making will need to be addressed. Used as a tool of oppression by a nation state, an AGI could make decisions about allegedly criminal or subversive behaviour that lead to imprisonment or worse. It’s worth noting that even if some countries establish regulatory frameworks that balance benefits with risks, unethical instances of AGI systems are unfortunately inevitable.

We also need to consider the security of the AGI systems themselves. As AGIs become more powerful, they will become attractive targets for adversaries. Suppose we delegate responsibility to the AGI to act on our behalf, but an attacker has poisoned the underlying data the system was trained on. The system could be influenced into unethical or risky actions that put people, systems, or organisations at risk. The potential for misuse is also worrying: cybercriminals could run their own instances of AGIs to launch sophisticated attacks or plan how to target organisations. If criminals can strip the guardrails from systems like ChatGPT, it could lead to cyber mayhem like we’ve never experienced before.
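
To show the data-poisoning risk in miniature, the toy example below uses a deliberately simple nearest-centroid “model” that flags unusually large data transfers. The feature, thresholds, and numbers are all invented for illustration, but they capture the principle: an attacker who can seed the training data can shift the decisions the system makes later.

```python
# Toy illustration of training-data poisoning: the "model" is a nearest-centroid
# classifier over one feature (outbound transfer volume in GB per hour).

def centroid(samples):
    return sum(samples) / len(samples)

def classify(value, benign_centroid, malicious_centroid):
    # Assign the value to whichever class centroid it sits closest to.
    if abs(value - benign_centroid) < abs(value - malicious_centroid):
        return "benign"
    return "malicious"

# Clean training data.
benign = [0.1, 0.2, 0.3, 0.2]
malicious = [5.0, 6.0, 7.0]

suspicious_transfer = 3.5
print("clean model:",
      classify(suspicious_transfer, centroid(benign), centroid(malicious)))

# Attacker poisons the "benign" set with large transfers, so exfiltration-sized
# traffic now looks normal to the retrained model.
poisoned_benign = benign + [4.0, 4.5, 5.5]
print("poisoned model:",
      classify(suspicious_transfer, centroid(poisoned_benign), centroid(malicious)))
```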

The Battle of the AGIs

While we are still some way from AGI becoming a reality, its potential in cyber security is undeniable. As AI research and technology progress, we can see how AGI could become a valuable assistant in the fight against cybercrime. However, if we plan to harness the full potential of AGI, we need to address the technical, ethical, and security challenges it presents, ensuring it is used responsibly and that safeguards are in place.

Tony Campbell

Director of Research & Innovation, Sekuro

Tony has been in information and cyber security for a very long time, delivering projects and services across many different industries in a variety of roles. Over the years, Tony has always tried to bridge the growing skills gap through his work: by mentoring, teaching, and working with other disciplines to help them understand the complexities of what we do.
