- Telling the AI that it’s an expert at something sends it down a completely different path, and not always a better one
- By imposing a persona, the prompt can constrain how the AI reasons, reducing output quality
- The best prompts explain the task to the AI and give it all the context and tools it needs
New research has claimed that asking AI to ‘act like an expert’ does not actually improve the reliability of the result, despite being a widely used prompt reinforcer.
More specifically, it may help with alignment-style tasks such as writing, tone and structure guidance, but it likely hurts knowledge tasks such as math and coding.
According to the data, these so-called expert personas underperform on knowledge benchmarks, likely because they push the AI into an instruction-following mode rather than factual recall.
Stop over-engineering your AI prompts
“We specifically advise against making (system) requests for maximum performance by exploiting biases, as this may have unexpected side effects, reinforce societal biases, and poison training data obtained with such prompts,” says the paper, written by researchers affiliated with the University of Southern California (USC).
Separate research similarly found that while persona prompting can help shape tone and style, it does nothing to add factual capabilities to a model.
Instead, what matters is prompt length and precision. A comprehensively designed prompt gives the AI all the context it needs to act autonomously and generate higher-quality output.
The paper introduces a new solution called PRISM (Persona Routing via Intent-based Self-Modeling), whereby the AI generates responses both with and without a persona and compares which is better. The AI then learns when to apply personas in the future, falling back on the base model’s behavior when personas hurt output quality.
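The routing idea can be illustrated with a toy sketch. This is not the paper’s actual PRISM implementation; all function names and the win-rate heuristic here are hypothetical, standing in for however the real system scores persona versus base responses and learns from the comparison:

```python
# Hypothetical sketch of PRISM-style persona routing (names and heuristic
# assumed, not taken from the paper). The idea: track how often a persona
# response beat the base response for each task type, and only apply the
# persona where its historical win rate is high enough.
from collections import defaultdict

def record_outcome(history, task_type, persona_won):
    """Log one with-vs-without comparison for a task type."""
    wins, total = history[task_type]
    history[task_type] = (wins + int(persona_won), total + 1)

def use_persona(history, task_type, threshold=0.5):
    """Route to the persona only if it has historically helped this task type."""
    wins, total = history[task_type]
    if total == 0:
        return True  # no data yet: try the persona and learn from the result
    return wins / total >= threshold

# Toy run mirroring the research: personas help style tasks, hurt knowledge tasks.
history = defaultdict(lambda: (0, 0))
for task, persona_won in [("writing", True), ("writing", True),
                          ("math", False), ("math", False)]:
    record_outcome(history, task, persona_won)

print(use_persona(history, "writing"))  # True  -- persona kept for style work
print(use_persona(history, "math"))     # False -- falls back to the base model
```

In this sketch the router degrades gracefully: with no history it still tries the persona, so it keeps learning, while established losers like knowledge tasks get routed straight to the base model.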
Adding to the complexity of prompt construction, the researchers also uncovered differences between model types, noting that reasoning models benefit more from context length, while instruction-aligned models may be the most sensitive to personas.
In short, it seems that model developers are already doing the work needed to ensure generative AI gives us the best output, and that we should simply give chatbots a task and the relevant context without dictating how they should go about generating a response.



