In August 2025, a cross-party group of 60 UK lawmakers signed an open letter alleging that Google DeepMind violated its AI safety commitments by releasing its latest model, Gemini 2.5 Pro, without timely and sufficient safety documentation.
The core of the complaint: at a 2024 summit held in Seoul, DeepMind, along with other leading AI firms, pledged under the Frontier AI Safety Commitments to publicly report details about model capabilities, risk assessments, and whether external parties (including government agencies or independent testers) took part in evaluations.
Yet when Gemini 2.5 Pro launched in March 2025, DeepMind initially released only a brief model card. The full safety evaluation and detailed documentation appeared only weeks later, and even then without clarity on which external bodies were involved in the testing. The lawmakers say this amounts to a troubling breach of trust with governments and the public.
The 2024 safety pledge was built around transparency, giving governments, researchers, and the public confidence that frontier AI models are developed responsibly. Delayed or insufficient disclosure undermines that confidence.
Lawmakers warn that deploying powerful AI models without proven, thorough safety testing, or while withholding evaluation details, could lead to misuse, unintended harms, or an erosion of public trust.
If a major player like DeepMind treats safety commitments as optional, it may encourage others to cut corners, weakening AI safety norms and regulations worldwide.
In the open letter, the UK lawmakers asked DeepMind to:
Clarify its definition of deployment (i.e., the point at which safety obligations take effect)
Commit to publishing a full safety evaluation report before or at the time of any future model release
Fully disclose the names of the external testers and independent evaluators involved in safety audits, along with the testing timelines, for transparency
In essence, shift from vague voluntary commitments to concrete, public-facing accountability.
Global AI Governance: As frontier AI models reach global scale, transparency and safety standards in one country influence norms worldwide. What happens in the UK sets a benchmark.
Public Trust in AI: Incidents like this shape public perception. Continued trust in AI depends on companies being seen as responsible and transparent.
Regulatory Pressure: If voluntary pledges fail, pressure grows on governments to enact formal AI laws and regulations, possibly stricter than current guidelines.
The Risk of a “Race to Release”: The pressure to ship advanced AI quickly (for public consumption or commercial gain) may outpace safety practices unless accountability becomes standard.
A spokesperson for DeepMind said the company stands by its “transparent testing and reporting processes” and maintains that it followed the original Frontier AI Safety Commitments. According to the company, Gemini 2.5 Pro underwent “rigorous safety checks”, including the involvement of third-party and government-associated testers.
Critics, however, argue that the delayed and limited reporting fails to meet the spirit, if not the letter, of responsible AI deployment.
Will DeepMind revise its internal processes so that future model releases come with immediate, detailed safety documentation?
Will other AI companies follow suit, or will they treat “safety pledges” as optional?
Will governments push for legally binding AI regulations rather than voluntary promises?
Will public pressure for greater transparency build, causing a shift in how frontier AI is developed and released?
The open letter from 60 UK lawmakers to Google DeepMind highlights a pivotal moment in the evolution of AI governance. As AI becomes more powerful and widespread, transparency, accountability, and public trust are becoming non-negotiable.
Whether this controversy results in stronger industry standards or fades into the noise will depend on how seriously companies, DeepMind included, treat safety commitments going forward.
For the broader AI community and the public alike, this fight isn’t just about one model; it’s about defining what responsible AI development looks like for generations to come.