The U.K. government has introduced its “world-first” AI Cyber Code of Practice for companies developing AI systems. The voluntary framework outlines 13 principles designed to mitigate risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.

The voluntary code applies to developers, system operators, and data custodians at organisations that create, deploy, or manage AI systems. AI vendors that only sell models or components fall under other relevant guidelines.

“From securing AI systems against hacking and sabotage, to ensuring they are developed and deployed in a secure way, the Code will help developers build secure, innovative AI products that drive growth,” the Department for Science, Innovation, and Technology said in a press release.

Recommendations include implementing AI security training programmes, developing recovery plans, carrying out risk assessments, maintaining inventories, and communicating with end-users about how their data is being used.

To provide a structured overview, TechRepublic has collated the Code’s principles, who they apply to, and example recommendations in the following table.

| Principle | Primarily applies to | Example recommendation |
| --- | --- | --- |
| Raise awareness of AI security threats and risks | System operators, developers, and data custodians | Train staff on AI security risks and update training as new threats emerge. |
| Design your AI system for security as well as functionality and performance | System operators and developers | Assess security risks before developing an AI system and document mitigation strategies. |
| Evaluate the threats and manage the risks to your AI system | System operators and developers | Regularly evaluate AI-specific attacks like data poisoning and manage risks. |
| Enable human responsibility for AI systems | System operators and developers | Ensure AI decisions are explainable and users understand their responsibilities. |
| Identify, track, and protect your assets | System operators, developers, and data custodians | Maintain an inventory of AI components and secure sensitive data. |
| Secure your infrastructure | System operators and developers | Restrict access to AI models and apply API security controls. |
| Secure your supply chain | System operators, developers, and data custodians | Conduct a risk assessment before adapting models that are not well-documented or secured. |
| Document your data, models, and prompts | Developers | Release cryptographic hashes for model components that are made available to other stakeholders so they can verify their authenticity. |
| Conduct appropriate testing and evaluation | System operators and developers | Ensure it is not possible to reverse engineer non-public aspects of the model or training data. |
| Communication and processes associated with end-users and affected entities | System operators and developers | Convey to end-users where and how their data will be used, accessed, and stored. |
| Maintain regular security updates, patches, and mitigations | System operators and developers | Provide security updates and patches, and notify system operators of the updates. |
| Monitor your system's behaviour | System operators and developers | Continuously analyse AI system logs for anomalies and security risks. |
| Ensure proper data and model disposal | System operators and developers | Securely dispose of training data or models after transferring or sharing ownership. |
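The hash-release recommendation under "Document your data, models, and prompts" can be sketched in a few lines. This is a minimal illustration, not part of the Code itself: assuming a developer publishes a SHA-256 digest alongside a model file, a downstream stakeholder can verify the file's authenticity like so:

```python
import hashlib
import hmac

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large model files do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, published_digest: str) -> bool:
    """Compare the locally computed digest against the one the
    developer published; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sha256_of_file(path), published_digest.lower())
```

A mismatch here indicates the downloaded component differs from what the developer released, whether through corruption or tampering, and should not be loaded.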

The Code’s publication comes just a few weeks after the government published the AI Opportunities Action Plan, which outlines 50 ways it intends to build out the AI sector and turn the country into a “world leader.” Nurturing AI talent formed a key part of this plan.

Stronger cyber security measures in the U.K.

The Code’s release comes just one day after the U.K.’s National Cyber Security Centre urged software vendors to eradicate so-called “unforgivable vulnerabilities”: flaws whose mitigations are cheap, well-documented, and therefore easy to implement.

Ollie N, the NCSC’s head of vulnerability management, said that for decades, vendors have “prioritised ‘features’ and ‘speed to market’ at the expense of fixing vulnerabilities that can improve security at scale.” Ollie N added that tools like the Code of Practice for Software Vendors will help eradicate many vulnerabilities and ensure security is “baked into” software.

International coalition for cyber security workforce development

In addition to the Code, the U.K. has launched a new International Coalition on Cyber Security Workforces, partnering with Canada, Dubai, Ghana, Japan, and Singapore. The coalition committed to work together to address the cyber security skills gap.

Members of the coalition pledged to align their approaches to cyber security workforce development, adopt common terminology, share best practices and challenges, and maintain an ongoing dialogue. With women making up only a quarter of cyber security professionals, progress is certainly needed in this area.

Why this Cyber Code matters for businesses

Recent research shows that 87% of U.K. businesses are unprepared for cyber attacks, with 99% experiencing at least one cyber incident in the past year. Moreover, only 54% of U.K. IT professionals are confident in their ability to recover their company’s data after an attack.

In December, the head of the NCSC warned that the U.K.’s cyber risks are “widely underestimated.” While the AI Cyber Code of Practice remains voluntary, businesses are encouraged to proactively adopt these security measures to safeguard their AI systems and reduce exposure to cyber threats.

Subscribe to the TechRepublic UK Newsletter

Catch up on the week’s essential technology news, must-read posts, and discussions that would be of interest to IT pros working in the UK and Europe. Delivered Wednesdays
