Forum Issues Policy Framework for Ensuring Democratic Control of AI in the Information Space

Mr. Michael Bąk, Executive Director of the Forum on Information and Democracy

The Paris-based Forum on Information and Democracy, an international organization working to provide democratic safeguards for the global information and communication space, has issued a new policy framework outlining standards, accountability and governance mechanisms, and incentives for ethical AI development, all aimed at ensuring democratic control of artificial intelligence (AI).

Entitled “AI as a Public Good: Ensuring Democratic Control of AI in the Information Space”, the 141-page policy framework was released on February 28, 2024. It contains over 200 recommendations targeted at governments, AI companies and other relevant stakeholders, designed to mitigate the destructive impact AI can have on political processes.

The policy framework takes a comprehensive approach, calling for safe and inclusive AI systems, accountability mechanisms and incentives for ethical AI, and governance and oversight mechanisms.

The recommendations were developed by an international Policy Working Group of 14 members drawn from diverse disciplines and from 13 countries across all continents.

The Group was co-chaired by Laura Schertel Mendes, a lawyer and Professor of Civil Law at the University of Brasilia, a federal public university in Brazil; and Jonathan Stray, a journalist, computer scientist, and Senior Scientist at the Center for Human-Compatible Artificial Intelligence (CHAI), a research center at the University of California, Berkeley in the United States.

Other members of the Group are Rachel Adams, Linda Bonyo, Marta Cantero Gamito, Alistair Knott, Syed Nazakat, Alice Oh, Alejandro Pisanty, Gabriela Ramos, Achim Rettinger, Edward Santow, Suzanne Vergnolle and Claes de Vreese.

According to the Forum on Information and Democracy, an international entity founded by 11 independent organizations from different backgrounds and regions, the Group worked through an inclusive and consultative process over a period of six months, receiving inputs from over 150 experts worldwide.

Key recommendations contained in the policy framework include:

• Fostering the creation of a tailored certification system for AI companies inspired by the success of the Fair Trade certification system.
• Establishing standards governing content authenticity and provenance, including for author authentication.
• Implementing a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
• Providing users with an easy, user-friendly way to choose alternative recommender systems that do not optimize for engagement but instead rank content in support of positive individual and societal outcomes, such as reliable information, bridging content or diversity of information.
• Setting up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems.

Michael Bąk, Executive Director of the Forum on Information and Democracy, said: “Democracies must stop allowing tech companies to dictate the trajectory of technology, to capture the policy narrative and to set the agendas. Solutions exist to build a global information and communication space conducive to democracy, that creates value for people not only as consumers, but first and foremost as citizens. We are presenting these solutions today. They call for a comprehensive framework encouraging companies developing and deploying AI to implement democratic procedures, suggesting measures to incentivize an ethical development and use of AI and setting a framework for accountability, governance and oversight.”

In a section of the policy framework titled “Defending Democracy in the Frontier of Artificial Intelligence”, he argued that “Our democratic institutions have a responsibility to shape and guide the evolution of AI in a responsible direction, one that conforms to the shared values of our democracies, respects the agency of people everywhere, and strengthens our fundamental human rights, including that of the right to reliable information.”

Mr. Bąk said the Forum on Information and Democracy is focused on preventing, limiting, and mitigating the stress placed on democracies by unrestrained technology, including artificial intelligence.

According to him, “Artificial intelligence presents an unprecedented transformation in how we create, disseminate and consume information. It decides what you see and what I see; and that we don’t see the same things. These systems enable anyone to easily create and disseminate information, yet they are often biased, discriminate against specific groups, or hallucinate. AI systems can also easily be abused by malicious actors that seek to deceive citizens, influence political processes and sow doubt on the facts that form the bedrock of democratic discourse.”

He said the recommendations in the policy framework aim to pre-empt and prevent such harms and to steer technological innovation, as a public good, in a direction that serves the public interest.

He stressed that “We must not commit the same errors as in the past, where social media and tech companies decided the rules of the game, set the agendas, determined which harms mattered (and where), and captured the policy narratives. This resulted in too much corporate apology and, sadly, substantial harm to our communities, our democratic institutions and to our agency as citizens.”

Mr. Bąk expressed the belief that artificial intelligence can take a different, more enlightened and meaningful path, one guided by democratic oversight and constantly assessed and improved through civic leadership and inclusive participation.

He urged stakeholders to put in place inclusive frameworks and mechanisms that allow citizens to ensure AI systems are developed and deployed in the interests of a diverse world, adding that “this can only happen with transparency, accountability, and democratic oversight.”

Mr. Bąk explained that the policy framework presents recommendations to achieve these goals, saying “this can be done while encouraging thriving innovation.”