The Freedom Online Coalition (FOC), a partnership of 42 governments committed to advancing internet freedom and protecting human rights in the digital era, has reaffirmed its commitment to a human-centric, safe, secure, trustworthy and ethical Artificial Intelligence (AI) future grounded in human rights, accountability, and the rule of law.
In a joint statement on Artificial Intelligence and Human Rights, the Coalition acknowledged that while AI offers transformative potential for sustainable development and public service delivery, it also introduces unprecedented risks when implemented without adequate safeguards. The Coalition said that since its 2020 Joint Statement, it has tracked a sharp acceleration in the deployment of AI technologies, some of which are weaponised to entrench discrimination, suppress dissent, and manipulate public discourse.
It stated that AI systems are now deeply integrated into state and corporate infrastructures, often with limited transparency or accountability. These systems, it noted, affect core human rights, such as privacy, equality, and access to information, and pose growing threats to the integrity of democratic institutions. The Coalition warned that when AI is misused through unlawful surveillance, disinformation campaigns, or biased decision-making, it erodes civic trust and endangers already marginalised communities.
The FOC emphasised the urgent need for inclusive, human rights-based AI governance frameworks rooted in international law. These frameworks, it said, must address the lived experiences of vulnerable and at-risk groups, such as women and girls, linguistic and cultural minorities, and climate-affected populations, who disproportionately bear the brunt of algorithmic harms and systemic exclusion.
The Coalition welcomed progress made at the international level, including the UN General Assembly Resolution 78/265 on Trustworthy AI and the Council of Europe’s Framework Convention on AI and Human Rights, Democracy and the Rule of Law. It noted that these developments reflect a growing global consensus that AI governance must be democratic, transparent, and accountable, not dominated by authoritarian regimes or purely commercial interests.
Recognising the shared responsibility of public and private actors, the FOC stressed that governments must implement safeguards for high-risk AI applications through mechanisms like human rights impact assessments, independent oversight, and access to effective redress. The Coalition also said private sector actors must conduct meaningful human rights due diligence across the AI lifecycle, including transparent documentation of system capabilities and risks, diverse data inputs, meaningful stakeholder consultation, and commitments to prevent discriminatory outcomes.
In addition, the FOC warned against the proliferation and export of AI systems likely to be misused for repression or surveillance, urging all actors to refrain from facilitating technologies that contribute to human rights violations or shrink civic space. It said governments and developers must take proactive steps to ensure algorithmic transparency, resist manipulative political microtargeting, and support safeguards against content manipulation.
Another area of concern that the Coalition pointed out is the environmental footprint of AI, particularly the energy-intensive nature of training large models, which disproportionately affects climate-vulnerable communities. The FOC called for collaboration between AI governance structures and climate-focused institutions like the Intergovernmental Panel on Climate Change (IPCC) to ensure that environmental sustainability is factored into AI innovation and deployment.
The Coalition also drew attention to the growing threat AI poses to independent journalism, civil society actors, and media pluralism, particularly through disinformation, deepfakes, and algorithmic content suppression. It highlighted the urgent need for digital and media literacy, AI education in local languages, and support for journalistic integrity as critical pillars for resilient democracies.
The FOC further called on national governments and multilateral institutions to strengthen legal and institutional safeguards that enable individuals and communities to seek remedies for AI-related harms, whether caused directly or through structural effects such as deskilling or social exclusion. It stressed that processes like the Global Dialogue on AI Governance and the UN Independent International Scientific Panel on AI must embed human rights at their core and meaningfully involve underrepresented voices across the Global South and affected communities.
The joint statement also advocated for the development of locally relevant AI models that reflect and protect cultural and linguistic diversity, while stressing that access to and benefits from AI must be equitably distributed to avoid deepening existing digital divides. The Coalition said that by centring human rights, transparency, and democratic accountability, the global community can unlock the potential of AI to improve lives while minimising its harms.