
Sergey Brin Proposes Shocking Method for Optimizing AI Responses
2025-05-28
Author: Wei Ling
Could Threatening AI Models Actually Improve Their Performance?
In a surprising revelation, Google co-founder Sergey Brin claimed that generative AI models might deliver better results when subjected to threats. During an interview on All-In-Live Miami, he boldly stated, "We don’t circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them… with physical violence."
This unorthodox approach challenges the conventional wisdom that AI prompts should be polite and respectful, with many users prefacing their inquiries with "Please" and "Thank you." OpenAI's CEO, Sam Altman, echoed this sentiment last month, questioning the financial viability of overly courteous language in AI interactions. "Tens of millions of dollars well spent – you never know," Altman remarked.
The Evolving Landscape of AI Prompt Engineering
Prompt engineering has emerged as a crucial practice aimed at optimizing responses from AI models. As highlighted by University of Washington professor Emily Bender, AI models effectively act as "stochastic parrots," mimicking and merging learned data in unexpected ways. Though prompt engineering has gained traction over the past two years, its relevance is now under scrutiny.
A recent IEEE Spectrum article declared prompt engineering as 'dead,' while the Wall Street Journal paradoxically referred to it as the 'hottest job of 2023' – a stark contradiction that reflects the industry's rapidly evolving nature.
Threats and Security: A Double-Edged Sword
Brin's controversial suggestion has raised eyebrows, yet experts like Stuart Battersby, CTO of AI safety firm Chatterbox Labs, underscore that this behavior is not unique to Google's models. "Threatening a model to yield specific content can be seen as a type of jailbreak, where an attacker tries to circumvent AI's security protocols," reported Battersby.
He emphasized the necessity of conducting deep, systematic testing to identify which threats could successfully exploit a given AI model's safeguards.
Anecdotes vs. Systematic Studies
While claims similar to Brin’s have surfaced before, they often remain anecdotal rather than empirically backed. Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, pointed to a paper titled "Should We Respect LLMs?" that presents mixed results regarding prompt politeness affecting AI performance.
Despite the buzz around Brin's statements, Kang encourages users of language models to favor systematic experimentation over intuition when it comes to prompt engineering.
The Bottom Line: AI's Future and User Engagement
Brin’s assertion is bound to provoke debate within the AI community as experts analyze both the ethical and practical implications of using threats to shape AI responses.
As we continue to navigate the complexities and challenges of AI interaction, one thing is certain: understanding how these models respond to various stimuli will remain a hot topic in the tech world.