Government must be crystal clear about the use of genAI

Tribune Editorial Staff
September 10, 2025

THE HAGUE--How can residents be sure that the government is communicating with them in a human way, when more and more generative AI is being used by the government? The outgoing State Secretary of Digitalisation and Kingdom Relations insists on transparency and human control.

The Standing Committee on Digital Affairs is concerned about the influence of generative AI on the government's communication with residents and put a number of questions on the subject to outgoing State Secretary for Digitalisation Van Marum (BBB) in response to the Government-wide position on generative AI. How are residents supposed to know whether a government message comes from a human or from an AI model? And what does it do to trust in the government if the government uses more and more generative AI?

In answering the questions, Van Marum points to the legal obligations surrounding generative AI. The European AI Regulation states that AI-generated content must be recognizable as such. This obligation entered into force on 2 August 2025. The Dutch guide to generative AI for the government also requires transparency about the use of generative AI.

If AI is used in government communication, a human employee must then examine the generated content, whether it is a letter, a video or a graph in an appendix. AI systems make mistakes, and governments and their employees must be aware of this, Van Marum writes. Editorial control should prevent inaccuracies from remaining in the generated content.

In addition, all AI-generated content must make it clear that AI has been put to work. "If the many pilots and experiments that are currently taking place show that generative AI can be used responsibly for direct contact with citizens, I will investigate how it can be made clear that a text has been created with the help of generative AI," Van Marum writes.

Even if a (government) organization is not itself a provider of an AI system but only uses it, from August 2026 it will be mandatory to make clear that content has been generated or manipulated, for example by stating in a caption to an image that it was created by AI. The transparency obligations also apply to chatbots: users need to know that they are talking to an AI system and not to a human.

The government itself will always remain ultimately responsible for all government communication to citizens and businesses. Van Marum writes: "As far as the responsibility for the result is concerned: generative AI is a tool and cannot bear any responsibilities."
