Google Lifts Ban On AI Use For Weapons And Surveillance Technologies
Alphabet, Google’s parent company, has reversed its promise not to use AI for developing weapons or surveillance tools.
In a Rush? Here are the Quick Facts!
- Google updated its AI ethics guidelines, removing harm-related restrictions, just before its earnings report.
- AI head Demis Hassabis emphasized national security and global AI competition as key factors.
- Experts warn that Google’s updated guidelines could lead to more autonomous weapons development.
On Tuesday, just before reporting lower-than-expected earnings, the company updated its AI ethics guidelines, removing references to avoiding technologies that could cause harm, as reported by The Guardian.
Google’s AI head, Demis Hassabis, explained that the guidelines were being revised to adapt to a changing world, with AI now being seen as crucial to protecting “national security.”
In a blog post, Hassabis and senior vice-president James Manyika emphasized that as global AI competition intensifies, the company believes “democracies should lead in AI development,” guided by principles of “freedom, equality, and respect for human rights.”
WIRED highlighted that Google shared updates to its AI principles in a note added to the top of a 2018 blog post introducing the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.
Al Jazeera reported that Google first introduced its AI principles in 2018 following employee protests over the company’s involvement in the U.S. Department of Defense’s Project Maven, which explored using AI to help the military identify targets for drone strikes.
In response to the backlash, which led to employee resignations and thousands of petition signatures, Google decided not to renew its Pentagon contract. Later that year, Google also chose not to compete for a $10 billion cloud computing contract with the Pentagon, citing concerns that the project might not align with its AI principles, as noted by Al Jazeera.
The update to Google’s ethics policy also follows Alphabet Inc. CEO Sundar Pichai’s attendance, alongside tech leaders such as Amazon’s Jeff Bezos and Meta’s Mark Zuckerberg, at the January 20 inauguration of U.S. President Donald Trump.
However, in Tuesday’s announcement, Google revised its AI commitments. The updated webpage no longer lists specific prohibited uses for its AI projects, instead giving the company more flexibility to explore sensitive applications.
The revised document now emphasizes that Google will maintain “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Additionally, the company states its intention to “mitigate unintended or harmful outcomes.”
Experts warn that AI could soon be widely deployed on the battlefield, and concerns are mounting over its use, particularly in autonomous weapons systems.
“For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever,” said Anna Bacciarelli, senior AI researcher at Human Rights Watch, as reported by the BBC.
Bacciarelli also noted that the “unilateral” decision highlights “why voluntary principles are not an adequate substitute for regulation and binding law.”