Telling an AI model that it’s an expert programmer makes it a worse programmer

https://www.theregister.com/headlines.atom Hits: 56
Summary

Many people start their work with AI by prompting the model to imagine it is an expert at the task they want it to perform, a technique that boffins have found may be futile. Persona-based prompting, which involves using directives such as "You're an expert machine learning programmer" in a model prompt, dates back to 2023, when researchers began to explore how role-playing instructions influenced AI models' output. It's now common to find online prompting guides that include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

But academics who have studied this approach report that it does not always produce superior results. In a pre-print paper titled "Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM," researchers affiliated with the University of Southern California (USC) find that the effect of persona-based prompting is task-dependent, which they say explains the mixed results. For alignment-dependent tasks, like writing, role-playing, and safety, personas do improve model performance. For pretraining-dependent tasks, like math and coding, the technique produces worse results.

The reason appears to be that telling a model it's an expert in a field does not actually impart any expertise: no new facts are added to what the model learned in training. In fact, an expert persona hinders the model's ability to retrieve facts from its pretraining data. The researchers used the Measuring Massive Multitask Language Understanding (MMLU) benchmark, a standard means of evaluating LLM performance, to test persona-based prompting and found that "when the LLM is asked to decide between multiple-choice answers, the expert persona underperforms the base model consistently across all four subject categories (overall accuracy: 68.0 percent vs. 71.6 percent base model)."

A possible explanation is that persona prefixes activate the ...

First seen: 2026-03-24 01:14

Last seen: 2026-03-26 09:05