Dutch regulator issues AI election advice warning

Oct 22, 2025 - 15:00

Over half of all chatbot suggestions lean towards two dominant political blocs, a data watchdog has found

The Dutch data protection authority (AP) has warned voters not to rely on AI chatbots for advice ahead of national elections, claiming that the tools provide unreliable information and could steer users toward two major opposition parties.

The chatbots' advice disproportionately favored two front-running blocs, the right-wing Party for Freedom (PVV) and the left-wing GroenLinks-PvdA alliance, which together accounted for 56% of recommendations, a concentration that contrasts with the highly fragmented 15-party Dutch parliament, the regulator said. Opinion polls project the two blocs could secure just over a third of the vote in the October 29 election, it added.

According to the report, some parties, including the center-right CDA, “are almost never mentioned, even when the user’s input exactly matches the positions of one of these parties.”

“Chatbots may seem like clever tools, but as a voting aid, they consistently fail,” the watchdog’s vice-chair, Monique Verdier, stated, describing their operation as “unclear and difficult to verify.”

She said the technology risked steering voters toward a party that did not necessarily reflect their political views.

“We therefore warn against using AI chatbots for voting advice,” Verdier added.

The agency tested four major chatbots, which it did not name, and found they sometimes advised voting for one of the two major parties even when explicitly fed the campaign platform of a smaller party.

The snap election in the Netherlands was triggered months ago by the collapse of the right-wing coalition after the PVV, led by MP Geert Wilders, exited. The vote is widely seen as a contest between the formation of a new, all-conservative government and a more centrist or center-right coalition.

An international study coordinated by the European Broadcasting Union and the BBC found that major AI assistants, including ChatGPT and Google’s Gemini, distorted news content in nearly half of their responses. The research analyzed more than 3,000 AI-generated answers in 14 languages and concluded that 45% contained “at least one significant issue” when addressing news-related queries.

OpenAI and Microsoft have previously acknowledged that so-called “hallucinations” – cases in which an AI system generates incorrect or misleading information – remain an issue they are working to address.