Digging Deeper into the Models of Generative AI, and the Threats

Generative AI

In our previous article on Artificial Intelligence (AI), we looked at how generative AI is taking the world by storm, with ChatGPT, Bing, Neeva and a few others racing for dominance. The fact is, AI is advancing fast, and business owners know there is a goldmine ahead for those that dominate. Generative AI systems can quickly create new and original content, such as images, music, and text, by learning patterns from existing data. In this post, we delve somewhat deeper into the different types of generative AI systems, looking at what companies are trying to do with this technology now and over the long term. We will also hook into our favourite subject, cyber security, looking at how generative AI will undoubtedly become more of a threat in the future.

Types of Generative AI Systems

There are three main types of generative AI systems: Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Autoregressive Models. So, what are these and how do they work? VAEs generate new images by learning the underlying distribution of input images: they learn to compress each image into a compact latent code and to decode that code back into an image, so sampling fresh latent codes produces images that never existed. For example, a VAE might be trained using a large dataset of cat images, then used to generate new images of cats that have never been seen before.

[Image: a "monkey cat", a new breed designed entirely by Midjourney.]
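
If you'd like to see the idea in code, here is a minimal sketch of a VAE, assuming PyTorch; the layer sizes, the latent dimension and the TinyVAE name are illustrative choices for this article, not a production model.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """A minimal VAE for flattened 28x28 greyscale images."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # mean of the latent distribution
        self.to_logvar = nn.Linear(256, latent_dim)  # log-variance of the latent distribution
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),       # pixel values in [0, 1]
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample a latent code while staying differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

# Once trained, brand-new images come from decoding random latent codes:
vae = TinyVAE()
with torch.no_grad():
    new_images = vae.decoder(torch.randn(4, 16))  # four never-before-seen samples
```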

GANs work in a slightly different way, still generating high-quality images but by pitting a generator network against a discriminator network – you don't need the full details (that's for another day), but in short the generator tries to fool the discriminator with fakes, the discriminator tries to catch it out, and both improve in the process. One example of a GAN in action might be a system trained on a dataset of human faces, then used to generate new images of faces so realistic they could be mistaken for real people. This could be used in movie making to create AI versions of people who have passed away and are unavailable for the production, provided their estates have granted permission for them to be part of the show. Deepfakes are one obvious threat to consider with this kind of experimental technology, and some companies are already limiting usage with KYC-style checks, heavy penalties for misuse, and warnings about what constitutes criminality.
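
To make that generator-versus-discriminator tug of war concrete, here is a minimal training step, again assuming PyTorch; the network shapes, learning rates and the train_step helper are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

latent_dim = 64

# The generator turns random noise into a fake image (flattened 28x28 here).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# The discriminator scores an image: closer to 1 means "looks real".
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: learn to label real images 1 and fakes 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to fool the discriminator into saying "real".
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```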

The last model type is Autoregressive Models, which generate new text by predicting the next value in a sequence based on the previous values. For example, an autoregressive model can be trained on a dataset of song lyrics, and then used to generate new lyrics that sound like they could have been written by a real songwriter. ChatGPT is also an autoregressive model, specifically a transformer-based neural network. It generates natural language text by predicting the next word in a sequence based on the words that came before it. It has been so successful because it was pretrained on a massive amount of data, and users can refine its output simply by iterating on their prompts and its answers.
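
ChatGPT itself isn't publicly downloadable, so as a stand-in, here is what next-word prediction looks like with the open GPT-2 model from Hugging Face's transformers library; the prompt and the 20-token loop are arbitrary choices for demonstration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Midnight on the highway, chasing"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate one token at a time: each step predicts the next word from all
# the words before it -- the essence of an autoregressive model.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)  # greedy pick
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```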

Each of these models has advantages and disadvantages, and the choice depends on the outcome you are looking to achieve. GANs are great for generating high-quality images, autoregressive models are perfect for text, and VAEs are a good fit when you want a smooth latent space you can explore for controlled variations.

What’s Next for Generative AI?

The potential of this kind of new technology is huge, and many companies are exploring its possibilities. In the creative industries, such as movie and game production, generative AI can be used to create art, music, and scripts. For video games specifically, generative AI is being used to automatically create new game content based on player interactions, while in the fashion industry, it is being used to create new designs. In healthcare, generative AI is already being leveraged to generate synthetic patient data, giving researchers realistic records to work with while protecting patient privacy. There are many more examples, too many to cover in one article, but rest assured that no matter what industry you work in, generative AI will soon make an impact.

The Threat of Generative AI for Cyber Security Professionals

Generative AI systems can be used to create fake images, videos, and text, and are already being used by hostile nation states to speed up the production and dissemination of disinformation and fake news. This poses a significant threat to cyber security, and adds yet another complexity to the role of the CISO as we try to protect the business from these sorts of threats. Fake images and videos may be used to manipulate public opinion about our companies, while other forms of AI are used to spread malware and launch attacks. Deepfakes are being used to produce fake videos of politicians and other public figures, which are then used to spread false information and undermine public trust.

As generative AI becomes more sophisticated, it will become even harder to detect these deepfakes and other types of synthetic content. It is therefore important for individuals and businesses to be aware of the risks and to take steps to protect themselves. Introducing this topic into our security awareness training programmes is an important step to consider sooner rather than later.

Generative AI is a truly exciting and rapidly developing field with vast potential. However, as with any powerful new technology, it carries significant risks when misused. Hopefully, we'll build the right sorts of checks and balances into our systems to mitigate these threats, but today it's the Wild West, and there's still no sheriff in town.

Tony Campbell

Director of Research & Innovation, Sekuro

Tony has been in information and cyber security for a very long time, delivering projects and services across many different industries in a variety of roles. Over the years, Tony has always tried to bridge the growing skills gap by mentoring, teaching and working with other disciplines to help them understand the complexities of what we do.
