AI, ISO42001, and YOU (Part 2) | Sekuro

AI, ISO 42001, and You (Part 2)

This series contains general information and is neither formal nor legal advice. Readers and organisations must make their own assessment as to the suitability of referred standards and this material for their specific business needs.

Read Part 1 Here

Introduction

The new ISO 42001:2023 framework is an Artificial Intelligence (AI) Management System for your organisation, similar to, for instance, ISO 9001 (Quality Control) and ISO 27001 (Information Security).

In this three part series we will cover background information regarding AI (part 1), matters to consider when looking at developing (part 2), using or offering AI in your organisation, and how ISO 42001 can help you with this in a responsible manner (part 3).

We do this from the perspective of Governance, Risk and Compliance (GRC) as that is our specific area of expertise in this matter, and it is also what ISO 42001:2023 is all about.

10 Risks among the Opportunities

AI, ISO42001, and YOU (Part 2) | Sekuro

Many countries have done in-depth and practical reviews on the use of AI, both inventorying what is already in use, and looking forward to new applications.

During its review of AI in the prelude to the recently passed EU legislation on the same topic, the European Parliament concluded that AI is likely to touch every aspect of our society, and also add some new uses that haven’t yet been thought of. The article also raised a number of interesting insights and challenges. We picked a few and added some more below:

1. Underuse and Overuse of AI

Underuse of AI could actually be a major risk in terms of lost opportunities, lack of initiative, and low investments. Have you considered that yet?

Overuse can also be problematic. An application that proves to not be useful can result in wasted investment (and lost time!). For instance, trying to use an AI for explaining a complex issue in society is not going to be useful, since it is not suited to that purpose.

2. Accountability

Who is responsible if a bad AI decision results in unrecoverable consequences such as bad injury or death? One example of this is a self-driving car. Should the car owner still be responsible, even though they are no longer the driver? The car manufacturer, the programmers, or perhaps the supplier of the dataset that was used to train the AI?

In any case, in no circumstance can the AI be blamed, that would be silly.

In the streaming series Upload (Amazon Prime), car owners can actually select whether to protect the occupant of the car, or evade the external subject, should the self-driving car’s AI need to make that choice. An interesting take on the classic philosophical exercise called ‘the trolley problem’!

Accountability for manufacturers provides some incentive for providing a good product, but too much regulation could reduce innovation. No matter where you currently stand on this topic, we can all agree that it is an interesting discussion with no easy single answer that covers all situations. This example clearly shows that thought must go into any application all the way from the idea and design phase, throughout architecture and development, and more review before it is launched.

3. Consequences

A bad AI decision can also have other consequences that are detrimental to an individual, group, or society as a whole. Amazon trained an AI on its HR records, in order to automate the preselection stage of its hiring process and possibly remove human bias. It turned out doing the opposite: the AI was biased against women and minorities. Looking back, the reason is obvious – past hiring practices had that bias, so by teaching the AI using the existing corpus of employees, the company reinforced that issue.

4. The Myth That Computer Output Is Always Correct

This is a really big issue. Simply put, computers are as good as their programming, and AI models are as good as the data they were trained on. Mistakes are made, and so unfortunately the output is not always correct. Large scale incidents of this are the Robodebt debacle in Australia, a childcare issue in The Netherlands, and the Post Office scandal in the UK where hundreds of subpostmasters were wrongly prosecuted for theft or fraud while in fact there were bugs in the computer program that showed inconsistencies. People were imprisoned, families ripped apart, many left destitute, and some people died. I’m sorry to introduce such a serious note, but these issues are real and most countries will have at least one such example by now.

Not all these cases are AI related, but the main lesson here is that even a system designed to make decisions requires some human oversight and periodic critical checks. This is not new, but often disregarded either from the start (by those wanting to believe the myth), or in the name of savings during the lifetime of the system. Either way, an organisation cannot blame the AI system, it is not an entity that can bear that accountability or legal responsibility.

The 1983 classic ‘hacker movie’ Wargames explores various aspects of this theme in detail: a computer designed to run military strategy simulations ends up in charge of the nuclear arsenal when, in the name of efficiency, the ‘flawed humans’ previously in the process are taken out of the loop.

5. Verification

Whereas with earlier AI – typically the model would be extracted after the main learning phase to see what rules it derived from its input – this is exceedingly difficult with the modern LLMs: they are just too big. Currently, one of the best approaches is to ask the model critical questions that cover the scenarios and decisions that the system is to be involved in, thus confirming that indeed it is making the correct decisions.

This needs to be done on an ongoing basis as new data is processed. It is the equivalent of what software engineers call a ‘test suite’.

6. Transparency and Privacy

How do you design a system so that it will be trusted by users? Are there safeguards in place that prevent data from being collected that is not needed (under Australian privacy legislation, marketing is not a valid need), prevent different sets of data from being inappropriate combined or correlated, prevent personally identifiable information (PII) from somehow ending up in the dataset of an AI model, prevent data from being shared with other parties, and so on. Increasingly, just saying ‘trust us’ will not suffice.

7. Jailbreaking

As mentioned, it has been shown to not be terribly hard to trip up an AI-driven application. Recently the State Library of Queensland recently introduced Charlie, an AI persona to guide visitors through an area of expertise, World War I. But soon, some creative individuals convinced it to act as Doctor Who and some other anachronistic characters instead.

These results were not intended, obviously, but nigh impossible to prevent, particularly when an existing AI model is used rather than created from scratch (which presently would still be prohibitively expensive).

While this particular incident may be considered mostly funny, in a different context such a vulnerability could pose a security risk or even worse. The AIs are built with safeguards, but crafty people can bypass those by phrasing their prompts in certain ways.

8. Reality

Reality can be very subjective, since there are now systems that can create text, voice, paintings, photos, and videos of people that appear about as real as an original – except (for instance) that the person in question never said what is quoted… this is problematic. Most countries are currently working on legislation to ban such uses, however that may not prevent some less obvious cases. And once ‘out there’ (broadcast or put online) the damage is done. If viewers are less discerning or don’t have access to other sources of information, they can easily be taken in by these ‘deep fakes’.

9. Copyright

Copyright is the exclusive right of an original creator of a work to say who may (among other actions,) copy their work, and under what conditions. Without such permission, third parties do not have copyright.

While one might think that everything visible on the Internet is ‘free’, it is actually still copyright of the creator and made visible under their conditions. For instance, they might display ads alongside the content. Without explicit permission, you are not allowed to take content and put it on your own site.

Already battles are being fought by content creators such as online newspapers with, for instance, OpenAI. The allegation is that the creators’ content has been used by OpenAI to train their AI models, without an agreement (including potential payment) in place. OpenAI argues that the use of copyrighted materials in transformative ways does not violate copyright laws.

If we have a generative AI create new content, it is doing so based on its training: an amalgamation of ingested content. Who holds the copyright to the new “mixed” (derivative) work? In general copyright law it might be either you or the owner of the generative AI, depending on their terms of use. However, that is contingent on the original content being appropriately licensed, and such is not the case here – in fact, the origins of the constituent snippets are essentially unknown. Realistically, we have to assume that there is no license. It is currently a dark grey area. Ideally, and for lack of opportunity to otherwise credit the original content creators, you would indicate that a text, image or other content is AI generated.

10. Hallucinations vs Idempotency

Last but not least, LLMs exhibit what is commonly known as hallucinations: they periodically deliver results that superficially appear sensible, but are really nonsense. One reason this occurs is because LLMs are probabilistic systems, they make a prediction about for instance a next word based on the previous – just like the simple word prediction example from part 1 of this series. Additionally, they apply a random factor, so their response is not always the same, even for an identical prompt. Therefore, it is possible to get good (but not identical) answers for a time, then a bad one, and then good ones again, while using the same question all the way.

Software engineering has a concept called idempotency, meaning that for a given input, a program produces the same output. Well-engineered programs have this trait, making them robust for applying business rules. As shown above, LLMs do not have this trait by default and fundamentally, LLMs do not possess understanding or have true knowledge.

That said, even minor changes in the phrasing of a question (prompt engineering) can reduce occurrence of hallucinations, and having a process with a ‘human in the loop’ where human reviewers monitor and correct outputs, yields higher quality results.

Mitigations: the Bright Intern

Now that we’ve reviewed a lot of risks and instances where things could go wrong, what measures can we put in place to prevent issues?

Regard an AI-enabled system as a bright intern. You can let it do work with a certain amount of independence, but you never give it control over final decisions; it requires active supervision and review.

Where an AI-enabled system really differs from a bright human intern is that an AI does not become smarter or more trustworthy over time. This appears very counter-intuitive, but as we’ve seen in the various examples, things can (and do) go bad later. An AI is not a human, and while they can learn things, they do not gain experience in the same way that humans do.

In the 3rd and final part of this series, we will put together what we learnt about AI so far, and how ISO 42001:2023 can be used to responsibly develop and implement AI-enabled systems within your organisation. 

Arjen Letz

Arjen Lentz

Senior Consultant (Governance, Risk, Compliance), Sekuro

Scroll to Top

Aidan Tudehope

Co-Founder of Macquarie Technology

Aidan Tudehope, Co-Founder of Macquarie Technology

Aidan is co-founder of Macquarie Telecom and has been a director since 1992. He is the Managing Director of Macquarie Government & Hosting Group with a focus on business growth, cyber security and customer satisfaction. 

Aidan has been responsible for the strategy and execution of the investment in Intellicentre 4 & 5 Bunkers, Macquarie Government’s own purpose-built Canberra data centre campus. This facility is leveraged to deliver Secure Cloud Services and Secure Internet Gateway.

With a unique pan-government view on the cyber security landscape, we are invested in leading the contribution from the Australian industry on all matters Cyber policy related.

Aidan holds a Bachelor of Commerce Degree.

James Ng

CISO, Insignia Financial

James Ng, CISO, Insignia Financial

James is a leader with a range of experience across various cyber security, technology risk and audit domains, bringing a global lens across a diverse background in financial services, telecommunications, entertainment, consulting and FMCG (Fast Moving Consumer Goods). He is currently the General Manager – Cyber Security at Insignia Financial and most recently was at AARNet (Australia’s Academic and Research Network) where he oversaw a managed Security Operations Centre (SOC) capability for Australian universities. Prior to this James was the acting Chief Information Security Officer for Belong and led the cyber governance and risk team at Telstra.

Noel Allnutt

CEO, Sekuro

Noel Allnutt CEO | Sekuro

Noel is a driven and award-winning IT leader. He has a passion for developing great teams and accelerating client innovation, and in enabling organisations to create a secure and sustainable competitive advantage in the digital economy. Noel also hosts the ‘Building Resilience Podcast,’ which explores the world of sport and deconstructs the tools and ethos of world-class athletes that can help create growth and optimise business and life.

Audrey Jacquemart

Bid Manager, Sekuro

Audrey Jacquemart, Bid Manager, Sekuro

Audrey is an innovative cybersecurity professional with a versatile profile spanning across Product Management, Presales and Delivery. She has worked within organisations from start-ups to large international organisations in Europe and APAC before joining Sekuro.

Nicolas Brahim

Principal Consultant, CRP and OT

Nicolas Brahim, Principal Consultant, CRP and OT

Nico leads Sekuro’s Cyber Resilience Program and OT Cybersecurity, ensuring continuous support and effective program execution for our clients. With over a decade in the security industry, including the creation and leadership of several Security Programs for IT and OT across Australia, New Zealand, Argentina, Chile and the US, his core philosophy emphasises an equal balance of people, process, and technology in delivering actionable and simple solutions.

Trent Jerome

Chief Financial Officer, Sekuro

Trent Jerome

Trent is a seasoned CFO with over 30 years’ experience in Finance. Trent has broad experiences across Capital raises, debt financing, M&A and business transformation. He is a CPA and member of AICD. Trent works with Boards around risk and risk mitigation plans and assists Boards in navigating the risk mitigation versus cost conversation.

Ada Guan

CEO and Board Director, Rich Data Co

Ada Guan, CEO and Board Director, Rich Data Co

Ada is the CEO and Co-founder of Rich Data Co (RDC). RDC AI Decisioning platform provides banks the ability to make high-quality business and commercial lending decisions efficiently and safely. With over 20 years of global experience in financial services, software, and retail industries, Ada is passionate about driving financial inclusion at a global scale.

Before launching RDC in 2016, Ada led a Global Client Advisor team at Oracle Corporation, where she advised Board and C-level executives in some of the largest banks globally on digital disruption and fintech strategy. She also drove Oracle’s thought leadership in banking digital transformation for Global Key Accounts. Previously, Ada implemented a multi-million dollar program to deliver a mission-critical services layer for Westpac Bank in Australia and formulated the IT strategy that was the basis of an $800m investment program to transform Westpac’s Product and Operation division and complete the merger with St. George Bank. Ada is an INSEAD certified international director and holds an EMBA from the Australia Graduate School of Management, and a Master of Computer Engineering from the University of New South Wales, Australia. She also graduated from the Executive Insight Program at Michigan University Ross Business School and IESE Business School.

Megan Motto

Chief Executive Officer, Governance Institute of Australia

Megan Motto, CEO, Governance Institute of Australia

Megan Motto is Chief Executive Officer of Governance Institute of Australia, a national education provider, professional association and leading authority on governance and risk management. The Institute advocates on behalf of professionals from the listed, unlisted, public and not-for profit sectors.

Megan has over 25 years of experience with large associations, as a former CEO of Consult Australia, as well as holding significant positions in Australia’s built environment sector and business chambers.

She is currently a director of Standards Australia, a member of the ASIC Corporate Governance Consultative Panel and a councillor of the Australian Chamber of Commerce and Industry (ACCI) where she chairs the Data, Digital and Cyber Security Forum.

Megan’s expertise spans governance, risk management, public policy and education. She holds a Bachelor of Arts/Bachelor of Education, a Masters of Communication Management and a Graduate Diploma of Corporate Governance and Risk Management. She is a Fellow of the Governance Institute of Australia, the Chartered Governance Institute and the Australian Institute of Company Directors and is also a member of Chief Executive Women. Megan is also an Honorary Life Trustee of the Committee for Economic Development of Australia (CEDA) and was a 2014 recipient of the AFR/Westpac 100 Women of Influence.

Shamane Tan

Chief Growth Officer, Sekuro

Shamane Tan, Chief Growth Officer, Sekuro

Sekuro’s Chief Growth Officer, Shamane Tan, is passionate about uniting minds and experiences, excelling in aligning C-Suite and Board members with cyber security imperatives. As the author of “Cyber Risk Leaders,” she unravels executive communication nuances and distils C-Suite expectations. 

Her work extends to “Cyber Mayday and the Day After,” a roadmap for navigating crises by mining the wisdom of C-level executives from around the globe. It’s filled with interviews with managers and leaders who’ve braved the crucible and lived to tell the tale. Her most recent book, “Building a Cyber Resilience: A Cyber Handbook for Executives and Boards,” was featured on Forbes Australia’s top list of books for CEOs. 

Shamane has also founded a transcontinental cyber risk and executive meetup spanning Sydney, Melbourne, Adelaide, Perth, Singapore, the Philippines, and Tokyo, fostering mentorship, women’s empowerment and thought leadership. As a strong advocate for the importance of having a voice and helping others use theirs, Shamane Tan has spoken at TEDx and global conferences, including FS-ISAC, RSA, Silicon Valley, Fortune 500 and ASX companies. 

Recipient of the IFSEC Global Top 20 Cybersecurity Influencer award and named among the 40 under 40 Most Influential Asian-Australians, Shamane leverages her unique fusion of technical prowess and business acumen to help organisations progress on their security maturity journey.

David Gee

David Gee, CIO, CISO, NED, Board Advisor & Author

 

David Gee, CIO, CISO, NED, Board Advisor & Author

David has just retired in July 2024 and is building out his portfolio. He is an Advisor with Bain Advisory Network and also an Advisor to JS Careers (Cyber Recruitment) and Emertel (Software Commercialisation).

He is a seasoned technology executive with significant experience and has over 25 years’ experience in CIO and CISO roles across different industries and countries. At Macquarie Group David served as Global Head Technology, Cyber and Data Risk. Previously was CISO for HSBC Asia Pacific. His career as a CIO spans across multiple industries and geographies including – Metlife, Eli Lilly and Credit Union Australia. He was winner CIO of the Year 2014, at CUA where he successfully completed a significant Transformation of Core Banking, Online and Mobile Banking systems.

David is past Chairman for the FS-ISAC Strategy Committee and awarded Global Leaders Award in 2023 for his contributions to the cyber security industry. A regular conference keynote speaker and 150+ published articles for CIO Australia, Computerworld, iTnews and CSO (Cyber Security), David now writes for Foundry CIO.com and AICD.

His most recent book – the Aspiring CIO & CISO was published in June 2024 and David is writing his second – A Day in the Life of a CISO with a number of CISOs from around the world for 2025.

Naomi Simson

Co-founder, Big Red Group and Former Shark Tank Judge

Naomi Simson, Co-founder, Big Red Group

INTRODUCTION

For 25 years as an entrepreneur, Naomi Simson has been bringing people together whether it’s with her business experience, her speaking or writing. Passionate about small business and local community, Naomi is considered a home grown success story.

Naomi had a corporate career with Apple, KPMG, IBM and Ansett Australia prior to becoming an entrepreneur. She is a prolific blogger, podcaster and business commentator, and appeared as the #RedShark in four seasons of Shark Tank Australia and she appears regularly on ABC The Drum. She is a non-executive director at Big Red Group, Australian Payments Plus, Colonial First State and Weebit Nano, as well as the Cerebral Palsy Research Foundation and the University of Melbourne Business and Economics Faculty.

A true business leader and influencer, with more than 2.7 million LinkedIn followers, Naomi is Australia’s most followed person on the business networking platform. She has four seasons of her podcast ‘Handpicked’, and she has authored two best-selling books Live What You Love, and Ready to Soar, and is sought after speaker.

FULL BIO

For 25 years Naomi has been bringing people together whether it’s with her business experience, her speaking or writing. She is a strong advocate of business owners.

Known as an entrepreneur and business leader; following the growth of RedBalloon which she founded in 2001, Naomi co-founded the Big Red Group (BRG) in 2017.

Naomi had a corporate career with Apple, KPMG, IBM and Ansett Australia prior to becoming an entrepreneur. She is a prolific blogger, podcaster and business commentator, and appeared as the #RedShark in four seasons of Shark Tank Australia. She is a non-executive director at Big Red Group, Australian Payments Plus, Colonial First State and Weebit Nano. As well as the Cerebral Palsy Research Foundation and the University of Melbourne Business and Economics Faculty.

A true business leader and influencer, with more than 2.7 million LinkedIn followers, Naomi is Australia’s most followed person on the business networking platform. She has authored two best-selling books Live What You Love, and Ready to Soar, and is an engaging, humorous and insightful speaker. She has four seasons of her Podcast – Handpicked.

Naomi is relatable across a broad variety of audiences and topics, often drawing on her personal experiences to provide thoughtful and valuable views into topics; including the customer obsession, intentional leadership, growth mindset, personal development. She is a regular panellist on ABC The Drum.

Peter Ngo

Product Line Manager, Global Certifications, Palo Alto Networks

Peter Ngo

Peter leads the Commercial Cloud, Global Certifications organisation at Palo Alto Networks which oversees global cloud security compliance efforts to various frameworks and standards including IRAP, SOC 2, ISO, PCI, C5, ISMAP, and IRAP and more for 25+ cloud products.

He has held many roles over the years covering areas of IT Operations, and Governance, Risk, & Compliance (GRC) for a wide range of industries including technology, insurance, and manufacturing.

Peter holds various security and professional certifications, including the CCSP, CISSP, PCI ISA, CISA, CISM, CDPSE & ISO Lead Auditor, in addition to a Master of Science degree in Information Assurance. 

Jack Cross

CISO, QUT

Jack Cross

Jack Cross is an experienced business leader with expertise in digital technologies and risk management. Through a steadfast commitment to integrating people, processes, and technology, he champions the fight against cyber threats while mitigating organisational risks. 

Over the past 15 years, Jack has navigated diverse leadership roles within the Defence and Education sectors, honing his skills in steering multidisciplinary teams through intricate and sensitive technical landscapes. In addition to this experience, he holds numerous formal qualifications such as: a Master of Systems Engineering (Electronic Warfare); CISSP; and CISM certifications.

Nadene Serman

Global CTO, Infotrack

Nadene Serman

Nadene Serman is a leading IT executive with a proven track record spearheading first-of-its-kind technology and business transformation for some of the most prominent organisations globally and in Australia. As the Global Chief Technology Officer of InfoTrack, she is a key protagonist of innovation as an enabler of InfoTrack’s next stage growth. Her energy, commercial acuity and strategic capability have fueled her success.

Nadene leads with clarity, transparency and urgency, uniting people in complex, multi-layered technology and business execution, and go-to-market transformation and innovation. She tackles and resolves complex and seemingly intractable challenges while building support and collaboration – even in times of crisis. Her people-first, ‘think straight, talk straight’ approach makes her a formidable force.

John Doe

President Great Technology

Cyber Resilience Program | Sekuro

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut elit tellus, luctus nec ullamcorper mattis, pulvinar dapibus leo.