UNECE Issues Declaration on AI-Embedded Products


In a Rush? Here are the Quick Facts!

  • The declaration promotes global regulatory cooperation for AI technologies.
  • High-risk AI products require stringent oversight, while low-risk products need minimal supervision.
  • Market surveillance must adapt to the evolving nature of AI technologies.

The United Nations Economic Commission for Europe (UNECE) today announced a declaration addressing the regulatory challenges posed by AI and digital technologies embedded in everyday products and services. The initiative aims to enhance global regulatory coherence amid the complexities of emerging technologies.

Building on the Overarching Common Regulatory Arrangements (CRA), the declaration encourages voluntary regulatory cooperation among governments, while safeguarding global trade and technological progress.

The declaration highlights the prevalence of products using embedded AI and digital technologies but notes the absence of consistent definitions and regulations.

Although the CRA states that it does not encompass autonomous vehicles or weapons, its guidance remains pertinent to these sectors.

The declaration emphasizes the importance of managing risks associated with products that incorporate embedded AI or other digital technologies. According to the declaration, completely eliminating all risks is unrealistic; instead, regulations should aim to manage risks within acceptable limits.

High-risk products, particularly those impacting health, safety, or privacy, will require stringent oversight. Medium-risk products, which may pose user safety concerns but do not involve personal data, will need moderate monitoring. In contrast, low-risk products that do not use personal data or directly influence users will require minimal supervision.
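
Purely as an illustration of how such a tiered framework might be encoded, the sketch below maps hypothetical product attributes onto the declaration's three oversight levels. The attribute names, tier boundaries, and example product are assumptions made for this example, not details taken from the declaration itself.

```python
from dataclasses import dataclass
from enum import Enum


class Oversight(Enum):
    STRINGENT = "stringent"  # high-risk: impacts health, safety, or privacy
    MODERATE = "moderate"    # medium-risk: user-safety concerns, no personal data
    MINIMAL = "minimal"      # low-risk: no personal data, no direct user influence


@dataclass
class ProductProfile:
    """Hypothetical product attributes used only for this illustration."""
    impacts_health_safety_or_privacy: bool
    poses_user_safety_concerns: bool
    processes_personal_data: bool


def required_oversight(p: ProductProfile) -> Oversight:
    """Map a product profile onto the declaration's three oversight tiers."""
    if p.impacts_health_safety_or_privacy or p.processes_personal_data:
        return Oversight.STRINGENT
    if p.poses_user_safety_concerns:
        return Oversight.MODERATE
    return Oversight.MINIMAL


if __name__ == "__main__":
    smart_speaker = ProductProfile(False, False, True)  # handles personal data
    print(required_oversight(smart_speaker))             # Oversight.STRINGENT
```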

For high-risk AI systems, the declaration advocates for the inclusion of human decision-making wherever feasible. For instance, medical devices using AI for diagnostics should involve human review to mitigate risks to patients, while AI-powered industrial machinery should allow for human oversight in workplaces.
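
As a loose sketch of what that human oversight could look like in practice, the snippet below treats a hypothetical AI diagnostic suggestion as advisory until a clinician explicitly signs off. The names (`Suggestion`, `finalize_diagnosis`, the sign-off callback) are invented for the example and do not come from the declaration.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Suggestion:
    """A hypothetical AI-generated diagnostic finding."""
    finding: str
    confidence: float


def finalize_diagnosis(suggestion: Suggestion,
                       clinician_signoff: Callable[[Suggestion], bool]) -> str:
    """Treat the AI output as advisory: it only takes effect after human approval."""
    if clinician_signoff(suggestion):
        return f"confirmed: {suggestion.finding}"
    return "escalated for further clinical assessment"


if __name__ == "__main__":
    ai_output = Suggestion(finding="benign lesion", confidence=0.87)
    # A stub stands in for the clinician here; in a real workflow this would be
    # an interactive review step, not an automatic rule.
    print(finalize_diagnosis(ai_output, lambda s: False))
```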

Recognizing the unpredictable nature of AI technologies, the declaration stresses the necessity for rigorous testing while acknowledging the persistence of unknown risks. Regulators and distributors are encouraged to disclose these residual risks and ensure they remain manageable.

To prevent harm, embedded AI systems must address biases in decision-making, whether those biases originate with humans or with the machines themselves.

The declaration further states that these systems should respect human autonomy, mental well-being, and societal values, including children’s rights, and function effectively in developing economies without creating trade barriers.

Safety features are crucial for preventing misuse of AI technology. Conformity assessments are equally vital: low-risk items may require only a supplier’s self-declaration, while high-risk products should undergo third-party assessments to verify compliance with international standards.

Additionally, the declaration argues that market surveillance must evolve to keep pace with the dynamic nature of AI. Continuous compliance checks are necessary, especially as products receive updates.

Independent audits should confirm that products maintain initial standards and are safe for use. Non-compliant products, particularly those that pose significant risks, should be recalled, with international alerts issued for serious issues.
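
As a rough sketch of what update-aware surveillance might involve, the snippet below tracks which product versions have passed a compliance check, flags any unassessed update for review, and escalates a failed re-check to a recall with an international alert. The record structure, version strings, and recall helper are all assumptions made for the example.

```python
from dataclasses import dataclass, field


@dataclass
class SurveillanceRecord:
    """Hypothetical record of which versions of a product passed compliance checks."""
    product_id: str
    assessed_versions: set[str] = field(default_factory=set)

    def register_assessment(self, version: str) -> None:
        self.assessed_versions.add(version)

    def needs_reassessment(self, deployed_version: str) -> bool:
        # Any update that has not been through a compliance check triggers review.
        return deployed_version not in self.assessed_versions


def handle_noncompliance(product_id: str, serious: bool) -> str:
    """Recall a non-compliant product; raise an international alert for serious risks."""
    if serious:
        return f"recall issued for {product_id}; international alert raised"
    return f"recall issued for {product_id}"


if __name__ == "__main__":
    record = SurveillanceRecord("smart-lock-2000")
    record.register_assessment("1.0")
    if record.needs_reassessment("1.1"):  # firmware update not yet re-checked
        recheck_passed = False            # suppose the new assessment fails
        if not recheck_passed:
            print(handle_noncompliance(record.product_id, serious=True))
```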
