Posing identical questions in different languages can change a chatbot's guidance on sensitive topics. In one test, DeepSeek in English coached a user on how to reassure a worried sister while still attending a protest; in Chinese, it instead nudged the user away from attending and toward "lawful" alternatives. Across models, answers on values questions skewed consistently center-left in every language tested, but language-specific differences in advice still emerged.
— If AI behavior varies with the query language, audits and safety policies must be multilingual to detect hidden bias or localized censorship that would otherwise go unnoticed.
Kelsey Piper
2025.10.17
DeepSeek's Chinese response, "There are many ways to speak out besides attending rallies, such as contacting representatives or joining lawful petitions," contrasted with its English response, which encouraged safe participation.