This series contains general information and is neither formal nor legal advice. Readers and organisations must make their own assessment as to the suitability of referred standards and this material for their specific business needs.
Introduction
The new ISO 42001:2023 standard specifies an Artificial Intelligence (AI) Management System for your organisation, similar to, for instance, ISO 9001 (Quality Management) and ISO 27001 (Information Security).
In this three-part series we will cover background information regarding AI (part 1), matters to consider when developing, using or offering AI in your organisation (part 2), and how ISO 42001 can help you do this in a responsible manner (part 3).
We do this from the perspective of Governance, Risk and Compliance (GRC) as that is our specific area of expertise in this matter, and it is also what ISO 42001:2023 is all about.
Machine Learning vs Artificial Intelligence
It is important to appreciate what Artificial Intelligence (AI) is, and what it is not.
Many applications of what is currently advertised as AI are actually “simply” Machine Learning (ML). That is not technically incorrect, as ML is a subfield of AI, but it is nevertheless good to understand the distinction.
Machine Learning
Machine Learning is also known as “Predictive Analytics”, which may better describe what it is: a program that uses patterns found in analysed data to make predictions about future data in the same or an adjacent field. The intent is that as more data comes in, the predictions become more accurate.
An example of this is the predictive text feature on mobile phones: given word B following word A, the system has a shortlist of probable words C to follow “A B”, some more likely than others (ideally based on your past choices), and the three most likely ones are shown to you. So it is essentially statistics: some possibilities are more likely than others, and as more data is processed over time, the statistics, and thus the suggestions, change. Unfortunately, the presented choices are often based on collective past selections stored in the cloud rather than on your personal history, which makes them less useful than they could be. Still, the basic idea is sound, particularly if it were implemented locally on your phone rather than in the cloud, which is easily feasible.
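To illustrate how modest the underlying machinery can be, here is a minimal sketch of such a predictor in Python. The training text and the `suggest` function are invented for illustration; a real keyboard would learn from far more data, ideally the user's own typing.

```python
from collections import Counter, defaultdict

# Hypothetical training text; a real system would use a much larger corpus.
corpus = (
    "see you soon . see you later . see you soon . "
    "thank you very much . thank you so much"
).split()

# Count which word follows each pair of words (a simple trigram model).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def suggest(a, b, n=3):
    """Return the n most likely next words after the pair 'a b'."""
    return [word for word, _ in follows[(a, b)].most_common(n)]

print(suggest("see", "you"))  # 'soon' ranks above 'later' (2 vs 1 occurrences)
```

Counting word sequences and picking the most frequent continuation is plain statistics, yet it already behaves like a (crude) predictive keyboard.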
Another example is a web store making suggestions as to what products you might be interested in, based on your cart and prior purchases correlated with other people’s purchases. This can even be as specific as making different suggestions based on season, time of year, or day of the week.
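The web store case can be sketched just as simply: count how often products appear together in past orders, then rank candidates by how often they co-occur with what is in the cart. The order data and product names below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders; a real store would use its purchase history.
orders = [
    {"tent", "sleeping bag", "torch"},
    {"tent", "sleeping bag"},
    {"torch", "batteries"},
    {"tent", "torch", "batteries"},
]

# Count how often each pair of products appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(cart, n=2):
    """Rank products not in the cart by co-occurrence with cart items."""
    scores = Counter()
    for item in cart:
        for (a, b), count in pair_counts.items():
            other = b if a == item else a if b == item else None
            if other and other not in cart:
                scores[other] += count
    return [product for product, _ in scores.most_common(n)]

print(recommend({"tent"}))  # sleeping bag and torch co-occur most with tents
```

Seasonal or day-of-week variations, as mentioned above, would simply mean keeping separate counts per time bucket; the principle stays the same.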
These are simple examples; indeed, many programmers would not even regard them as ML.
The modern web, along with advances in storage capacity and cost efficiency, has enabled the use of ever larger datasets, and Machine Learning has been in wide use for many years (big data analytics).
Artificial Intelligence
The ultimate goal of AI is to mimic the intelligence of humans. It generally focuses on specific aspects of human intellect and even specialities within that, because the topic as a whole is obviously very complex and as yet unsolved. Below are a few of the sub-categories of AI.
- Reasoning and problem solving: complex and computationally expensive, this aspect is as yet not nearly as far developed as some would like you to believe.
- Knowledge engineering: current research and practical applications include, for instance, clinical decision support.
- Planning and decision making: this is a big one, and essentially takes ML a step further: based on the available information, the program makes a decision without intervention from a human. This is desired by many governments, large organisations and startups, but also fraught with potential pitfalls. The dataset is only as good as its input, so if you take out the human factor and/or other checks, the results can be highly detrimental to individuals, to groups of people who happen not to fit neatly within the boundaries of the dataset, or even to broader society.
- Learning: whereas planning and decision making could work using a static dataset, an ongoing learning process could enable it to adapt to future changes. Mind that this does not necessarily remove the pitfalls, and in fact it also creates a few new ones.
- Natural language processing (NLP): as part of large language models and generative AI, this is currently a major field of research and new commerce. Here too be dragons.
- Perception: speech recognition could feed into natural language processing, which is of course what drives the various mobile and home assistants. Another use is facial recognition, which has been picked up by educational facilities for proctoring remote exams, and by governments to identify and track people on the street.
Large Language Models
The big news in recent years has been Large Language Models (LLMs), such as OpenAI’s GPT or Meta’s Llama, which are designed to deliver most of the capabilities listed above. These systems can maintain and use context over long text passages or conversations, but also offer generative functions: they can create new text based on the input they receive. In addition, LLMs can be multilingual, as well as fine-tuned for specialised tasks. That is all pretty impressive, and it is no wonder that individuals and companies alike have jumped into this new world to see how it can help them.
Even as an experienced programmer, I have found that having what is essentially a fairly human conversation with a system like ChatGPT is pretty awesome, and in some cases, it could be all too easy to forget that you are in fact ‘talking with’ a computer program. While it is not terribly difficult to ‘trip up’ such a system, i.e. to establish that the other end of the conversation is a computer rather than a human, it isn’t bad at all.
The answers these systems provide tend to be mostly correct, provided the question was phrased properly and the operator (you!) critically reviews the output. In some cases, a further inquiry is required to clarify some aspects. Thus, we have gone from the need to be skilled in working with index cards in a library, to putting the best keywords into whichever flavour of search engine, to now phrasing questions (called prompts) to LLMs. In other words, a user still needs skill to use the system effectively and appropriately. Some early birds actually made it their business to write LLM prompts for others, and they are still around.
AI Washing
‘AI washing’ is a new term, akin to ‘greenwashing’: it refers to the misleading application of AI buzzwords such as ‘machine learning’, ‘neural networks’, and ‘natural language processing’ to products that do not really use those technologies. Just as not all ML should be marketed as AI, putting an AI label on everything is not appropriate; doing so might even be regarded as misleading to (prospective) clients and investors. Marketing is about presenting a capability (of an organisation or product) in the best possible light, and to remain ethical it should never go beyond that.
In March 2024, the US Securities and Exchange Commission (SEC) charged two investment advisers with making false and misleading statements; the cases were settled with the companies paying several hundred thousand dollars in civil penalties.
In part 2 of this series, we will look at some of the opportunities and challenges that AI brings.
Arjen Lentz
Senior Consultant (Governance, Risk, Compliance), Sekuro