UK Boosts AI Safety, Signs Partnership With Singapore To Grow Trusted AI Market

Image by Anthony DELANOIX, from Unsplash

Reading time: 3 min

In a Rush? Here are the Quick Facts!

  • UK’s AI assurance market could grow six-fold by 2035.
  • Expansion aims to unlock £6.5 billion in economic growth.
  • New AI assurance platform launched to support responsible AI use.

The UK government yesterday announced new measures to support the safe and responsible use of AI, aiming to unlock £6.5 billion in economic growth by 2035. The UK’s AI assurance sector—which focuses on ensuring AI systems are fair, transparent, and secure—is expected to expand six-fold over the next decade.

This growth is seen as essential to the government’s broader strategy to incorporate AI in public services and boost economic productivity, while maintaining public trust in these technologies.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, emphasized that public trust is essential to fully harness AI’s potential to improve services and productivity. He noted that these steps aim to position the UK as a leader in AI safety.

To aid this expansion, the Department for Science, Innovation, and Technology (DSIT) and the UK’s AI Safety Institute have introduced a new AI assurance platform, designed to help British businesses manage the risks associated with AI use.

The platform will centralize resources for assessing data bias, conducting impact evaluations, and monitoring AI performance. Small and medium-sized enterprises (SMEs) will also have access to a self-assessment tool to implement responsible AI practices within their organizations.

The UK is also strengthening its international efforts on AI safety by signing a partnership with Singapore.

The Memorandum of Cooperation, signed by Secretary Kyle and Singapore’s Minister for Digital Development Josephine Teo, aims to promote joint research and establish common standards for AI safety.

This agreement builds on discussions held at last year’s AI Safety Summit and aligns with the goals of the International Network of AI Safety Institutes, a global initiative to coordinate AI safety efforts.

Josephine Teo emphasized that both countries are committed to advancing AI for public benefit while ensuring it remains safe.

“The signing of this Memorandum of Cooperation with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI,” Teo said.

Hyoun Park, CEO of Amalgam Insights—a firm specializing in financially responsible IT decisions—points out that, although marketed as a tool for building trust in AI, the platform’s main purpose is to provide businesses with a government-aligned framework for evaluating AI, according to CIO.

Park raised concerns about the platform’s current capabilities. “The platform is still fairly rudimentary, with plans for an essential toolkit that has yet to be fully developed,” he said, as reported by CIO.

“This assessment relies on human responses rather than direct integration with the AI itself, and the scale used by the assessment tool is vague, offering only binary yes/no options or responses that are difficult to quantify,” he added.

Park also pointed out that bias assessments could be especially challenging. “Every AI has a bias, and the notion that bias can be eliminated is both a myth and potentially dangerous,” he noted to CIO.

For smaller businesses, new compliance requirements such as risk assessments and data audits may pose additional burdens, potentially stretching limited resources, according to CIO.
