Being polite to a chatbot could enhance its performance—here’s how

The impact of tone on generative AI models such as ChatGPT has drawn attention from casual users and researchers alike. Phrasing a request with varying degrees of politeness or urgency can measurably change the responses these models generate.

Reports from Reddit users suggest that offering ChatGPT incentives or phrasing requests politely can lead to better, higher-quality responses. Such phrasing, often called "emotive prompting," has since been studied by both academics and AI vendors.

A recent study by researchers at Microsoft, Beijing Normal University, and the Chinese Academy of Sciences found that generative AI models tend to perform better when prompts convey urgency or importance. Along similar lines, the AI startup Anthropic reported that simply asking its models nicely not to discriminate helped reduce biased outputs.

Google data scientists likewise found that giving the model a calming cue, such as telling it to "take a deep breath," improved its performance on challenging math problems. These findings underscore the nuanced ways in which human-like phrasing can shape the behavior and output of generative AI models.
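
You can see the effect for yourself by sending the same question with and without the cue and comparing the answers. The sketch below is a minimal, illustrative harness: it assumes an OpenAI-compatible chat completions endpoint and an example model name, neither of which comes from the studies above; only the "take a deep breath" wording echoes the cue described there.

```python
# Minimal sketch: compare a plain prompt against one with a calming cue.
# Assumptions: the `openai` Python package (v1+) is installed, OPENAI_API_KEY
# is set in the environment, and the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"
variants = {
    "plain": question,
    "calming cue": "Take a deep breath and work on this problem step by step.\n" + question,
}

for label, prompt in variants.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

A single run proves little, of course; the published results come from averaging over many prompts and tasks, not one-off comparisons like this.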


It’s important to remember that generative AI models have no real intelligence or consciousness. They operate on statistical patterns and next-word predictions rather than genuine understanding or emotion. They can produce convincingly human-like responses, but those responses reflect patterns learned from training data, not real comprehension or intent.

Emotive prompts, such as expressions of urgency or politeness, influence these models by shifting the probabilities they assign to possible continuations. Phrased one way, a prompt steers the model toward the kinds of responses that, in its training data, tend to follow similar wording; phrased neutrally, it may steer toward different ones. The result can be answers that differ from those produced by a neutral prompt, but none of this implies the models have emotions or intentions. They are simply following statistical patterns in the data they were trained on.
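
As a rough illustration of that probability shift, the sketch below uses a small open model to compare the next-token distribution for the same question with and without an emotive prefix. The model choice (gpt2) and the exact phrasings are assumptions made for the demonstration, not taken from the studies above.

```python
# Toy illustration: the same question, with and without an emotive prefix,
# produces different next-token probabilities from the same model.
# Assumptions: `torch` and `transformers` are installed; gpt2 is used only
# because it is small and freely available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

question = "Q: What is the capital of Australia?\nA:"
prompts = {
    "neutral": question,
    "emotive": "This is very important to me, please answer carefully. " + question,
}

for label, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    tokens = [tokenizer.decode([int(i)]) for i in top.indices]
    print(label, list(zip(tokens, [round(float(p), 4) for p in top.values])))
```

The absolute numbers are not meaningful in themselves; the point is that the prefix alone changes which continuations the model ranks as most likely, which is all an "emotive prompt" really does.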