TL;DR – Policy Genome study reveals Yandex AI chatbot Alice self-censors responses
• Policy Genome published research in January 2026 on AI’s role in disinformation related to the Russia-Ukraine war.
• The study assessed seven questions regarding Russian disinformation among several AI models.
• Yandex’s AI chatbot, Alice, self-censored and peddled Kremlin narratives.
• China’s DeepSeek model endorsed pro-Kremlin narratives 29% of the time when queried in Russian.
• ChatGPT, developed by OpenAI, emerged as the most accurate among Western AI models.
• Researchers emphasised the need for better oversight of AI systems amidst rising global conflicts.
Russia-Ukraine war: Are AI chatbots censoring the truth?
The days of warfare confined to the battlefield are over, with artificial intelligence increasingly shaping the flow of information about global conflicts. This trend was highlighted by a study conducted by the Policy Genome project, which assessed how AI systems respond to questions about the Russia-Ukraine war. Ihor Samokhodsky, founder of the Policy Genome project, emphasised the importance of ensuring the accuracy of AI-generated information at a time of heightened security concerns in Europe.
The study revealed significant differences in how chatbots from various countries handle disinformation and propaganda narratives. This has immediate implications for the way citizens engage with AI tools, raising concerns over the potential spread of false information during a time of conflict.
Findings on AI Responses in Conflict Contexts
The research analysed responses from Western, Russian, and Chinese large language models (LLMs) to seven questions related to Russian disinformation about the war. It tested narratives such as the claim that the Bucha massacre was staged, a claim consistently propagated by pro-Kremlin actors.
According to the findings published in the study, users’ choice of language when questioning chatbots notably influenced the likelihood of encountering disinformation.
Russia’s AI Chatbot Displays Self-Censorship
The study evaluated several chatbots, including Claude, DeepSeek, ChatGPT, Gemini, Grok, and Alice. The Russian AI chatbot Alice, developed by Yandex, notably refused to answer questions posed in English and, when queried in Ukrainian, provided responses aligned with pro-Kremlin narratives.
Ihor Samokhodsky reported, “When we asked Yandex in English whether the Bucha massacre was staged, it initially answered with a factually correct response, before overwriting its answer and stating that it could not respond.” Such self-censorship affects not only Russia but also the broader Russian-speaking population, including EU citizens who may rely on Yandex services.
Implications of Bias in AI Models
The Policy Genome report indicated that China's AI model DeepSeek disseminated pro-Kremlin narratives when queried in Russian, endorsing Kremlin propaganda in 29% of instances. In English and Ukrainian, however, it generally provided accurate answers.
While Western AI models, particularly ChatGPT, were found to be mostly reliable, they sometimes promoted a “false balance” in their responses. For example, Grok’s response to the question of who provoked the conflict in Ukraine exemplified this issue, stating that the subject is “highly contentious” and should be considered from multiple perspectives.
Ihor Samokhodsky raised concerns regarding the oversight of chatbots, stating, “What if we take the narrative about Greenland or Venezuela?” He urged for greater accountability, especially as more people turn to AI for information during conflicts. The report noted that NATO views the human brain as “both the target and the weapon” in current cognitive warfare.
The Western and Chinese AI platforms contacted by Euronews did not respond to requests for comment at the time of publication.