Big Tech’s Influence On EU AI Standards Raises Concerns
Big Tech companies hold considerable sway over the development of EU standards for AI, according to a report by the campaign group Corporate Europe Observatory (CEO).
In a Rush? Here are the Quick Facts!
- Over half of JTC21 members represent corporate or consultancy interests, not civil society.
- Corporations prioritize light AI regulations, undermining safeguards for fundamental rights.
- Civil society and academia struggle to participate in AI standard-setting due to barriers.
The report raises concerns about the inclusivity and fairness of the standard-setting process, which underpins the EU’s newly approved AI Act.
The AI Act, which became law in August 2024, adopts a risk-based approach, categorising AI systems by risk levels. While high-risk systems, such as those in healthcare and judicial applications, face stricter requirements, specific guidelines remain undefined, as noted by the CEO.
Over half (55%) of the 143 members of the Joint Technical Committee on AI (JTC21), established by European standardization bodies CEN and CENELEC, represent corporate or consultancy interests.
The report criticizes standard-setting organizations such as CEN and CENELEC for being private bodies that lack democratic accountability. These organizations often charge high fees and operate with limited transparency, making it difficult for civil society to participate.
Civil society also faces logistical and financial barriers to participation: only 23 representatives (16%) in Corporate Europe Observatory’s sample came from academia, and just 13 (9%) represented civil society.
While the stakeholder organizations designated under Annex III of the EU’s standardisation regulation advocate for societal interests, they lack voting rights and are under-resourced compared with corporate participants. Oracle, a major tech corporation, has publicly lauded its involvement in AI standardisation, claiming its efforts ensure “ethical, trustworthy, and accessible” standards, as reported by the CEO.
However, the report argues that corporations like Oracle, which has ties to global surveillance systems, prioritize light regulations that align with their interests rather than addressing fundamental rights. Similar involvement by companies like Microsoft, Amazon, and Google raises fears of weakened safeguards within the AI Act.
Bram Vranken, a researcher at CEO, expressed concern about this delegation of public policymaking.
“The European Commission’s decision to delegate public policymaking on AI to a private body is deeply problematic. For the first time, standard setting is being used to implement requirements related to fundamental rights, fairness, trustworthiness and bias,” he said, as reported by Euro News.
CEN and CENELEC have not disclosed who participates in their AI standards development. CEO’s requests for participant lists from the JTC21 committee went unanswered, and the committee’s Code of Conduct requires members to keep participants’ identities and affiliations confidential.
CEO described these rules as overly restrictive, preventing open discussion about committee membership.
In response to the report, the European Commission stated that harmonised standards would undergo assessment to ensure they align with the objectives of the AI Act. Member States and the European Parliament also retain the right to challenge these standards, reported Euro News.
Ultimately, legislators have delegated the task of operationalising these standards to private organisations. Civil society groups and independent experts, often underfunded and outnumbered, are struggling to counterbalance corporate dominance.
This imbalance risks undermining protections against discrimination, privacy violations, and other fundamental rights, leaving Europe’s AI governance largely shaped by industry interests.