
AI Security Experts Call for Urgent Action on Risks from Advanced AI Models
2025-04-30
Author: Liam
Urgent Call for Enhanced AI Security Evaluations
As artificial intelligence evolves at a breakneck pace, leading experts are stressing the pressing need for rigorous assessment of the security and safety risks of deploying advanced AI systems. At the RSAC Conference 2025, representatives from Google DeepMind, Nvidia, and the UK AI Security Institute voiced serious concerns about the current state of AI model evaluations.
A Race Against Rapid Development
Jade Leung, CTO of the UK AI Security Institute, highlighted the myriad unknowns surrounding agentic AI systems. She pointed out that current safety evaluations are failing to keep pace with the technology, describing evaluation as an "evolving science" that many underestimate. "While some companies are making substantial investments in this area, we need an order of magnitude more effort to ensure safety," she emphasized.
The Complexity of AI Systems
Daniel Rohrer, VP of Software Product Security at Nvidia, echoed this concern, noting that as AI systems grow more intricate, organizations must adapt their evaluation strategies accordingly. Agentic AI and complex models present unique challenges, demanding close scrutiny to predict how they will behave under real-world conditions. "Understanding the various functionalities of a general-purpose AI is crucial in controlling its output effectively," Rohrer explained.
The Unpredictable Nature of AI Development
John "Four" Flynn of Google DeepMind concurred, stating that security teams must continuously re-evaluate AI models because predictions made during pre-training often fail to capture a model's real-world performance. Flynn noted significant discrepancies between how models perform against simulated threats and how they behave in actual applications.
Championing Collaboration for Global Standards
The panelists collectively emphasized the importance of international cooperation in sharing intelligence on AI risks. Leung called for a universal framework to better define AI capabilities and their associated risks, particularly when threats transcend national borders.
The Evolving Threat Landscape
A recurring theme at the conference was the increasingly sophisticated use of AI by malicious actors, particularly for cyberattacks. Flynn acknowledged that AI's coding proficiency is advancing rapidly, predicting that this year will bring groundbreaking capabilities in malware creation.
Looking Ahead: The AGI Debate
The panel revealed a divide of opinion over the implications of artificial general intelligence (AGI) for security. Flynn believes AGI may emerge by 2030, while Rohrer stressed the importance of establishing robust frameworks to manage its risks before it eventually arrives. "Our focus should be on understanding and influencing these capabilities as they develop," Rohrer noted.
As this landscape continues to transform rapidly, the need for comprehensive evaluations and international standards in AI security has never been more critical. Increased collaboration and shared understanding will be essential to harness the benefits of AI while safeguarding against its potential dangers.