UK urged to implement AI incident reporting system to mitigate risks

Centre for Long-Term Resilience advocates a comprehensive AI misuse and malfunction logging system to keep the government informed and prepared

The UK needs a structured system to record instances of artificial intelligence (AI) misuse and malfunctions, according to a new report by the Centre for Long-Term Resilience (CLTR). The think tank warns that without such a system, ministers risk remaining unaware of critical incidents involving AI technology. The report emphasizes the necessity for the next government to create a system to log AI-related incidents in public services and consider establishing a central hub for collecting AI-related episodes across the UK.

CLTR, which specializes in government responses to unforeseen crises and extreme risks, suggests that an incident reporting regime similar to the one used by the Air Accidents Investigation Branch (AAIB) is crucial for the successful integration of AI technology. The report references a database compiled by the Organisation for Economic Co-operation and Development (OECD) that lists 10,000 AI “safety incidents” since 2014. These incidents range from physical harm to economic, reputational, and psychological damages.

Examples cited in the OECD’s AI safety incident monitor include a deepfake video of Labour leader Keir Starmer, Google’s Gemini model inaccurately portraying German WWII soldiers, self-driving car incidents, and a man encouraged by a chatbot to assassinate the late queen. Tommy Shaffer Shane, a policy manager at CLTR and the report’s author, stated, “Incident reporting has played a transformative role in mitigating and managing risks in safety-critical industries such as aviation and medicine. But it’s largely missing from the regulatory landscape being developed for AI. This is leaving the UK government blind to the incidents that are emerging from AI’s use, inhibiting its ability to respond.”

The think tank calls for the UK government to adopt a well-functioning incident reporting regime, akin to those in aviation and medicine, to address safety concerns in AI use. CLTR notes that many AI incidents may go unreported because there is no dedicated regulator for advanced AI systems such as chatbots and image generators. Labour has pledged to introduce binding regulations for the most advanced AI companies.

The proposed system would offer rapid insights into AI malfunctions, helping the government anticipate similar incidents in the future. It would also facilitate coordinated responses to serious incidents and help identify early signs of large-scale harms. The think tank warns that some AI models may only reveal their detrimental effects after release, despite thorough testing by the UK's AI Safety Institute. An incident reporting system would allow the government to assess how effectively the country's regulatory setup addresses these risks.

The report criticizes the Department for Science, Innovation and Technology (DSIT) for potentially lacking an up-to-date understanding of AI misuse, including disinformation campaigns, bioweapons development, bias in AI systems, and misuse of AI in public services. For instance, in the Netherlands, an AI program used in a misguided attempt to tackle benefits fraud plunged thousands of families into financial distress. The report urges DSIT to prioritize learning about novel harms through proven incident reporting processes rather than through news reports.

CLTR, largely funded by Estonian computer programmer Jaan Tallinn, recommends three immediate steps: creating a government system to report AI incidents in public services, asking UK regulators to identify gaps in AI incident reporting, and considering a pilot AI incident database. This database could collect AI-related episodes from bodies such as the AAIB, the Information Commissioner’s Office, and the Medicines and Healthcare products Regulatory Agency (MHRA).
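By way of illustration only, a pilot database of this kind could be organized around a minimal incident record. The sketch below is not drawn from the CLTR report or any regulator's schema; the field names and categories are assumptions, loosely based on the harm types (physical, economic, reputational, psychological) catalogued by the OECD monitor mentioned above.

```python
# Illustrative sketch only: a minimal record an AI incident database *might*
# store. Field names and categories are assumptions for illustration, not
# taken from the CLTR report, the OECD monitor, or any UK regulator.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIIncidentReport:
    reported_on: date                  # when the incident was logged
    reporting_body: str                # e.g. a regulator or public-service department
    system_description: str            # the AI system involved (model, vendor, use case)
    harm_types: List[str]              # e.g. ["physical", "economic", "reputational", "psychological"]
    summary: str                       # free-text account of what happened
    severity: Optional[str] = None     # e.g. "near miss", "moderate", "serious"
    public_service_context: bool = False   # whether the system was used in public services
    follow_up_actions: List[str] = field(default_factory=list)

# Hypothetical example entry:
example = AIIncidentReport(
    reported_on=date(2024, 7, 1),
    reporting_body="Hypothetical public-sector department",
    system_description="Generative chatbot used for citizen enquiries",
    harm_types=["reputational"],
    summary="Chatbot produced misleading guidance about benefit eligibility.",
    severity="moderate",
    public_service_context=True,
)
```

In practice, the report envisages such records being aggregated from existing bodies (the AAIB, ICO, and MHRA among them) rather than collected through a single new form, so any real schema would need to map onto those regulators' own reporting formats.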

The think tank suggests that the reporting system for AI use in public services could build on the existing algorithmic transparency reporting standard, which encourages departments and police authorities to disclose their use of AI. In May, ten countries, including the UK, along with the EU, signed a statement on AI safety cooperation that included monitoring AI harms and safety incidents. The report adds that an incident reporting system would support DSIT’s Central AI Risk Function (CAIRF), which assesses and reports on AI-associated risks.

Analysis:

Political Perspective: The recommendation for an AI incident reporting system intersects with political debates on technology regulation. The establishment of such a system would demonstrate the government’s commitment to responsible AI governance. Politically, it could also position the UK as a leader in AI safety, influencing international policies and regulations. Labour’s pledge to introduce binding regulations for advanced AI companies aligns with this recommendation, reflecting a growing political consensus on the need for robust AI oversight.

Social Perspective: From a societal viewpoint, the call for an AI incident reporting system addresses public concerns about the rapid proliferation of AI technologies. It highlights the importance of transparency and accountability in AI deployment. The system would provide the public with reassurance that AI technologies are being monitored and managed responsibly, potentially alleviating fears about AI’s impact on privacy, employment, and social equity.

Racial Perspective: AI technologies have often been criticized for perpetuating racial biases. An incident reporting system would enable the identification and rectification of AI systems that discriminate based on race. By logging and addressing such incidents, the government can work towards ensuring that AI technologies are fair and equitable, thereby contributing to broader efforts to combat systemic racism.

Gender Perspective: Gender biases in AI systems have been documented, with AI often reflecting and amplifying societal gender inequalities. An incident reporting system would help identify these biases, allowing for corrective measures to be taken. This would contribute to the development of AI technologies that are more inclusive and gender-sensitive, supporting gender equality in tech.

Economic Perspective: Economically, an AI incident reporting system could prevent significant financial losses caused by AI malfunctions. By logging and addressing incidents promptly, businesses and public services can mitigate the risks of economic damage. Additionally, it would help maintain consumer trust in AI technologies, which is crucial for their widespread adoption and the economic benefits they promise.
