CEE Digital Democracy Watch, alongside industry, civil society, academic, and independent experts, took part in the online Plenary to help develop the first Code of Practice for general-purpose AI models under the AI Act. The Code of Practice aims to facilitate the proper application of the AI Act’s rules for general-purpose AI models, including transparency and copyright-related rules, the systemic risk taxonomy, and risk assessment and mitigation measures.
We also submitted feedback on the first draft of the General-Purpose AI Code of Practice. Our main insights concerned the framing of systemic risks: terms such as “large-scale persuasion” and “homogenisation of knowledge”, listed in Section 6.1 under the category of “persuasion and manipulation”, are subjective and lack precise definitions. Such vague language could grant excessive power to AI providers in content moderation, undermine public dialogue, and create a “freezing effect” on political discourse.