February 5, 2025

Google Removes AI Guidelines Against Surveillance and Weapon Use

Google has removed its previous pledge not to develop artificial intelligence (AI) applications for surveillance or weapons, sparking concerns about the tech giant’s ethical stance. Its parent company, Alphabet, had earlier vowed never to create technologies that “cause or are likely to cause overall harm.” However, the company’s guidelines have now been revised, omitting this commitment.

The previous principles specifically stated that Google would not develop AI applications “that gather or use information for surveillance violating internationally accepted norms.” The company has now replaced this with a pledge to apply “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”

Google first published its AI principles in 2018. In a recent blog post, senior vice president James Manyika and Sir Demis Hassabis, head of Google DeepMind, explained that the update was necessary due to the evolving role of AI in society.

“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,” they wrote. “It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself.”

Google, which previously faced criticism for its $1.2 billion cloud computing and AI contract with the Israeli government, stressed the importance of collaboration with governments and organisations to develop responsible AI.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” Manyika and Hassabis added. “And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
