ISO/IEC 42001 provides a framework for organisations to design, implement and maintain effective Artificial Intelligence Management Systems. The standard is currently working its way through UKAS, the National Accreditation Body for the United Kingdom, where it is being piloted for accredited certification.
To find out more about ISO/IEC 42001 and how LeftBrain is helping our clients prepare for certification to the standard, we spoke with LeftBrain’s Information Security Analysts, Matthew Bensley and Lucas Jansen.
—
Here at LeftBrain we’re already certified to, and helping our clients with, ISO 27001, the world’s best-known standard for information security management, and ISO 9001, the standard for quality management. Both have been around for a while and are regarded as industry benchmarks. But with the rapid rise of AI, a need arose for a similar framework to manage its use safely, ethically and transparently.
At LeftBrain, in line with ISO 42001, we recently conducted an internal audit of all the AI platforms we use and carried out a risk assessment to guide our clients in doing the same. This ensures they are prepared when their own clients start asking about AI governance. We anticipate significant supply chain pressure to adopt these measures soon after UKAS begins accrediting certification bodies for the standard.
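As a rough illustration of what such a risk assessment can look like, the sketch below scores AI platforms with a simple likelihood × impact matrix, a common risk assessment approach. The platforms and scores shown are hypothetical examples for illustration, not our actual register or methodology.

```python
# Illustrative sketch: a simple likelihood x impact risk score for AI platforms.
# Both factors use a 1-5 scale, giving a score from 1 (negligible) to 25 (severe).
# The platforms and scores below are hypothetical examples only.

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

platforms = [
    ("Public chatbot used via personal accounts", 4, 5),
    ("Enterprise AI suite with a data processing agreement", 2, 3),
    ("AI code assistant limited to non-sensitive repositories", 3, 2),
]

for name, likelihood, impact in platforms:
    print(f"{name}: {risk_score(likelihood, impact)}/25")
```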
By far the greatest risk is ignorance. Many organisations haven’t even begun to think about which business uses of AI are acceptable and which aren’t. Beyond ignorance, we’ve boiled it down to five key risks for UK small and medium-sized businesses that want to use AI to get ahead.
Most of the clients we’re currently advising are in the tech industries (MarTech, FinTech, HealthTech and more), where the standard is highly relevant for companies developing AI-based products and services. However, as AI becomes more integrated into day-to-day business operations across the board, we think ISO/IEC 42001 will become essential for organisations of any size, in any industry.
The simplest control businesses can put in place right now, without having to adopt a whole standard, is to outline which AI tools are acceptable and which ones aren’t. We don’t fully understand what AI models are doing with our data, so the biggest risk is users inputting sensitive information that could be stored, shared, or misused without their knowledge. It ultimately comes down to your risk tolerance and whether you trust your employees to be cautious with their prompts, or whether you decide to limit them to specific, vetted tools.
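One lightweight way to record that decision is a simple register of vetted tools. The sketch below is a minimal illustration in Python; the tools, verdicts and data rules are hypothetical placeholders, not recommendations.

```python
# Illustrative sketch: a minimal register of AI tools and their approval status.
# All entries are hypothetical examples; populate this from your own policy.

AI_TOOL_REGISTER = {
    "general-purpose public chatbot": {
        "approved": False,
        "reason": "prompts may be retained and reused by the provider",
    },
    "enterprise AI assistant": {
        "approved": True,
        "allowed_data": "internal, non-confidential information only",
    },
}

def is_approved(tool: str) -> bool:
    entry = AI_TOOL_REGISTER.get(tool)
    return bool(entry and entry["approved"])

print(is_approved("general-purpose public chatbot"))  # False
print(is_approved("enterprise AI assistant"))         # True
```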
There are also some effective technical controls you can implement. For example, some of our clients have opted to block access to AI tools at the DNS level, which prevents employees from navigating to those websites on company devices. Alternatively, a clear policy stating whether AI tools are allowed, and if so which ones, can be highly effective on its own: once you have communicated it to employees, your stance is established.
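To make the DNS approach concrete, here is a minimal sketch that generates a dnsmasq-style blocklist, the configuration format accepted by resolvers such as Pi-hole. The domains are illustrative placeholders, not a vetted blocklist, and whether DNS blocking fits your environment depends on your resolver setup.

```python
# Illustrative sketch: generate a dnsmasq-style blocklist for unapproved AI tools.
# A rule like "address=/example.com/0.0.0.0" sinkholes the domain and its
# subdomains on resolvers that accept dnsmasq configuration (e.g. Pi-hole).
# The domain list below is a hypothetical example.

BLOCKED_AI_DOMAINS = [
    "chat.example-ai.com",
    "assistant.example-llm.net",
]

def dnsmasq_rules(domains):
    return [f"address=/{domain}/0.0.0.0" for domain in domains]

if __name__ == "__main__":
    with open("ai-blocklist.conf", "w") as handle:
        handle.write("\n".join(dnsmasq_rules(BLOCKED_AI_DOMAINS)) + "\n")
    print(f"Wrote {len(BLOCKED_AI_DOMAINS)} rules to ai-blocklist.conf")
```

On a dnsmasq-based resolver, the generated file can typically be placed in the configuration directory (for example /etc/dnsmasq.d/) and the service reloaded to apply the rules.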
My top tip: don’t bury your head in the sand! AI offers significant opportunities for efficiency and innovation, but it will also bring supply chain pressure to establish strong AI governance. Now is the time to get ahead and put the right measures in place.
Would you like to find out more about preparing for ISO/IEC 42001:2023 and mitigating AI-related risks for your business?