
Google has officially removed a key pledge from its AI principles, no longer ruling out the use of artificial intelligence (AI) for military or surveillance purposes.
The tech giant’s parent company, Alphabet, updated its AI policies just before releasing its latest financial results, as reported by the BBC and Firstpost.
Previously, Google’s 2018 AI principles committed to avoiding AI applications that could cause harm or be used for mass surveillance. However, a new blog post by Google executives James Manyika and Demis Hassabis suggests a shift in priorities, stating that democratic nations should lead AI development to “support national security.”
Google’s AI ethics
Google originally pledged in its 2018 AI principles that it would not work on AI projects contributing to weapons development. This decision followed Project Maven, a controversial contract with the US Department of Defense that used AI to analyse drone footage. The backlash from employees led to resignations and a petition signed by thousands of staff members, ultimately forcing Google to drop the project.
However, the company now argues that the AI landscape has changed significantly. In the updated blog post, Manyika and Hassabis described AI as a “general-purpose technology” comparable to the internet and mobile phones.
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” they wrote in the statement published on Google’s official blog.
Major investment in AI despite market dip
The timing of this announcement coincided with Alphabet’s financial report, which fell short of market expectations. Despite a 10 per cent increase in digital advertising revenue, largely influenced by US election spending, the company’s share price dropped.
To counteract concerns, Alphabet revealed plans to invest $75 billion (€69.7 billion) in AI this year – 29 per cent more than analysts had forecast. The investment will focus on AI infrastructure, research, and commercial applications, including Gemini, Google’s flagship AI model, which powers the AI-generated summaries in search results and features on Google Pixel devices.
Is AI potentially dangerous?
Google’s decision to drop its ban on military and surveillance uses of AI has reignited concerns about ethical AI development. While the company insists it remains committed to responsible AI systems, the policy shift suggests a growing alignment between AI development and national security interests.
Amid increasing global competition in AI, particularly between democratic nations and rivals such as China, home of AI firm DeepSeek, the debate over AI governance is expected to intensify in the coming years.