The UK House of Lords has warned that workers will need to retrain throughout their lives to secure the skills needed when AI reshapes the job market.

The Lords Select Committee on Artificial Intelligence predicts widespread changes to employment, with AI enhancing existing jobs and creating many new ones, but also destroying roles and reducing the manpower needed for a large number of existing tasks.

The committee’s report, AI in the UK: Ready, willing and able?, says there needs to be significant government investment in skills and training to mitigate the negative effects of AI, adding that “retraining will become a lifelong necessity”.

“As AI decreases demand for some jobs but creates demand for others, retraining will become a lifelong necessity and pilot initiatives, like the Government’s National Retraining Scheme, could become a vital part of our economy,” the report states.

“This will need to be developed in partnership with industry, and lessons must be learned from the apprenticeships scheme.”

Childhood education will also need to be reformed, according to the report, with schools teaching children both how to work alongside AI and how to take full advantage of the technology available.

“For a proportion, this will mean a thorough education in AI-related subjects, requiring adequate resourcing of the computing curriculum and support for teachers,” it states.

“For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. In particular, we recommend that the ethical design and use of technology becomes an integral part of the curriculum.”

Alongside the impact on jobs and the economy, the Lords’ other central concern revolves around the ethics of how AI might be used — in particular the potential for unscrupulous organizations to use AI to link individuals to data that is supposed to be anonymous.

“Some of our witnesses argued that de-identifying datasets is far less effective than often supposed,” the report states.

“Olivier Thereaux of the Open Data Institute said: ‘AI is particularly problematic, because it can be used extremely efficiently to re-identify people’, even in the face of ‘pretty good de-identification methods’.”

The report says societies will need to use established concepts, such as open data and ethics advisory boards, as well as data protection legislation, to protect individual privacy against intrusion by AI, while still retaining the benefit of using large datasets to train machine-learning systems. It suggests there will need to be new frameworks and mechanisms, such as data portability and data trusts, to safeguard data and protect sensitive information.

To avoid the consolidation of data in the hands of large technology companies, the committee calls on the UK government, together with the Competition and Markets Authority, to review the use of data by major tech firms operating in the UK. As AI is used to help take increasingly important decisions about people’s lives, companies should also tell the public when AI systems inform sensitive decisions that affect individuals, and explain how those systems are being used, according to the committee.

Finally, to avoid embedding past and present prejudices into AI systems, the Lords ask the government to incentivize new approaches to auditing the datasets used to train machine-learning systems, in order to root out inherent biases as much as possible.

“AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these,” said committee chairman Lord Clement-Jones.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it.”