
Balancing AI risk and innovation: preparing for ISO/IEC 42001:2023

AI
Information Security

With the rapid development and adoption of Artificial Intelligence, it feels like the wild west out there. Many businesses are struggling to keep up with the changes and have little visibility into how or why their employees are using AI, let alone the potential business opportunities and risks involved. That’s where ISO/IEC 42001:2023 comes in.

ISO/IEC 42001 provides a framework for organisations to design, implement and maintain effective Artificial Intelligence Management Systems. The standard is currently working its way through UKAS, the National Accreditation Body for the United Kingdom, where it is being piloted for official certification.

To find out more about ISO/IEC 42001 and how LeftBrain is helping our clients prepare for getting certified to the standard, we spoke with LeftBrain’s Information Security Analysts, Matthew Bensley and Lucas Jansen.

Lucas, why do you think there’s now a need for the ISO/IEC 42001 standard, and what issues is it designed to address?

Here at LeftBrain we're already certified to, and helping our clients with, ISO 27001, the world's best-known standard for information security management, and ISO 9001, which covers quality management. Both of these standards have been around for a while and are regarded as industry benchmarks. But with the rapid rise of AI, a similar framework was needed to manage its use safely, ethically and transparently.

At LeftBrain, in line with ISO 42001, we recently conducted an internal audit of all the AI platforms we use and carried out a risk assessment to guide our clients in doing the same. This ensures they are prepared when their own clients start asking about AI governance. We anticipate significant supply chain pressure to adopt these measures soon after UKAS accredits certification bodies for the standard.

Matt, what are the most common AI-related risks you’ve come across so far?

By far the greatest risk is ignorance. Many organisations haven't even begun to think about the acceptable and unacceptable business uses of AI. Beyond ignorance, we've boiled it down to five key risks for UK small and medium-sized businesses that want to use AI to get ahead:

  1. Data privacy and security – AI tools could accidentally expose sensitive customer data, financial details, or business strategies if not handled securely.
  2. Inaccuracy, bias and hallucinations – AI content may be misleading, biased, or factually incorrect, leading to poor business decisions. AI can make up facts or behave unpredictably due to limitations in its training data.
  3. Workforce and productivity impact – AI automation could improve efficiency but may also require reskilling staff as job roles evolve.
  4. Over-reliance on AI – Depending too much on AI for customer interactions, marketing, or decision-making could reduce human oversight and damage customer trust.
  5. Compliance and legal risks – AI use must align with GDPR, intellectual property laws, and industry regulations, or the business could face fines or reputational damage.

Matt, who is ISO/IEC 42001 for?

Most of the clients we're currently advising are in the tech industries (MarTech, FinTech, HealthTech and more), where the standard is highly relevant for companies developing AI-based products and services. However, as AI becomes more integrated into day-to-day business operations across the board, we think ISO/IEC 42001 will become essential for organisations of any size, in any industry.

Lucas, do you have any top tips around what businesses can be doing now to mitigate AI-related risks? 

The simplest control businesses can put in place right now, without having to adopt a whole standard, is to outline which AI tools are acceptable and which ones aren’t. We don’t fully understand what AI models are doing with our data, so the biggest risk is users inputting sensitive information that could be stored, shared, or misused without their knowledge. It ultimately comes down to your risk tolerance and whether you trust your employees to be cautious with their prompts, or whether you decide to limit them to specific, vetted tools.

There are also some effective technical controls you can implement. For example, some of our clients have opted to block access to AI tools at the DNS level, which prevents employees from navigating to those websites on company devices. Alternatively, simply having a clear policy that states whether AI tools are allowed, and if so which ones, can be highly effective: by communicating this to employees, you establish your stance.
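
As a minimal sketch of what DNS-level blocking can look like, here is a hypothetical dnsmasq configuration that sinkholes a few AI tool domains. The choice of dnsmasq and the domains listed are assumptions for illustration only; a commercial DNS filtering service (or your existing firewall) would achieve the same result through its own policy console.

    # /etc/dnsmasq.conf — illustrative entries only; domains are examples,
    # not a recommended blocklist.
    # Each rule resolves the domain (and all of its subdomains) to 0.0.0.0,
    # so devices using this resolver cannot reach the site.
    address=/chat.openai.com/0.0.0.0
    address=/gemini.google.com/0.0.0.0
    address=/claude.ai/0.0.0.0

Bear in mind that determined users can bypass resolver-level blocks with alternative DNS servers or mobile data, which is why a clear written policy remains essential alongside any technical control.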

My top tip: don’t bury your head in the sand! AI offers significant opportunities for efficiency and innovation, but it will also bring supply chain pressure to establish strong AI governance. Now is the time to get ahead and put the right measures in place.

Would you like to find out more about preparing for ISO/IEC 42001:2023 and mitigating AI-related risks for your business?

Schedule a call

Matthew Bensley and Lucas Jansen
Friday 28th February 2025