
House of Lords AI report decries UK’s focus on AI safety


The UK is at risk of falling behind on AI because of its narrow focus on safety, a House of Lords committee has argued. In its latest report on the state of AI research in the UK, the Lords’ Communications and Digital Committee argued that the government’s focus on setting guardrails for large language models (LLMs) threatens to stifle domestic innovation in the space. The intervention from the committee was greeted with surprise by industry observers, who note that the Conservative government’s reluctance to legislate on AI has already marked the UK out as relatively hands-off when it comes to regulating the technology.

The Communications and Digital Committee report recommends the UK government set out a “more positive vision” of the benefits of AI. (Photo by pxl.store / Shutterstock)

“We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical,” said Baroness Stowell of Beeston, the committee’s chairperson. “We must avoid the UK missing out on a potential AI gold rush.”

House of Lords AI report calls for “more positive” AI vision

While the committee broadly welcomed the government’s work in promoting the UK abroad as an AI leader, it called for a “more positive vision” from Westminster about the potential socio-economic benefits of the technology. Instead, said the report, the government had devoted too much time to discussing existential risks around AI and not enough to addressing more material threats to public safety from the technology, including cyberattacks and the spread of disinformation. The committee also argued that the government was missing an opportunity to protect businesses from the impacts of generative AI, stating that Westminster “cannot sit on its hands” while copyrighted material is routinely and illegally used as training data for LLMs without the permission of rightsholders.

“LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege,” said Baroness Stowell. “This is an issue the government can get a grip of quickly and it should do so.”

Measures to encourage AI innovation

The report also recommended several measures to encourage more innovation within the UK’s AI sector, including more support for AI start-ups, boosting computing infrastructure, improving skills and exploring options for an ‘in-house’ sovereign UK LLM.

“These issues will be of huge significance over the coming years and we expect the government to act on the concerns we have raised and take the steps necessary to make the most of the opportunities in front of us,” said Baroness Stowell.

Not everyone agreed with the committee’s argument that the UK’s focus on AI safety threatened to curb the sector’s inventiveness. “Making sure that safety is at the forefront when designing, implementing, and deploying AI does not stifle innovation,” said Dr. Carolina Sanchez, senior assurance technologist at Cambridge Consultants. “Clarity and rules on what needs to be done by innovators to assure safety and trust do not stifle innovation (in fact it is proven to accelerate innovation). It is the procedures and processes that regulators put in place that indeed could slow down or frustrate the innovation process.”

Though the UK government has played a proactive role in organising global consensus around broad AI safety norms, it has adopted a self-consciously “hands-off” approach to regulating the technology domestically in comparison with other jurisdictions, preferring instead to devolve this responsibility to sectoral watchdogs. These agencies, in turn, are encouraged to align their thinking on AI issues with the UK’s “core AI principles” published in August.


