UNESCO, Thomson Reuters Foundation Publish New Report Highlighting Gaps in AI Governance Globally

Ms Lidia Arthur Brito, UNESCO’s Assistant Director-General for Natural Sciences and Social and Human Sciences

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has published a new report highlighting significant gaps in how companies around the world govern and manage artificial intelligence (AI) and warning that rapid adoption of the technology is outpacing accountability, transparency, and ethical safeguards.

Titled “Responsible AI in Practice: 2025 Global Insights from the AI Company Data Initiative,” the 94-page report published in March 2026 by UNESCO, in collaboration with the Thomson Reuters Foundation, is based on what Antonio Zappulla, the CEO of the Thomson Reuters Foundation, described as “the world’s largest study assessing corporate AI adoption globally”.

It relies on an analysis of 2,972 companies across 11 sectors and five regions and draws on more than 100,000 data points to provide what the researchers described as the most comprehensive global snapshot of corporate AI practices to date.

According to the report, AI is being rapidly embedded into corporate operations, products, and services, but governance and disclosure mechanisms are “not evolving at the same speed,” thereby creating a widening accountability gap.

The researchers reported that this gap leaves investors and the public with limited insight into how AI systems are deployed, how risks are managed, and who is responsible when systems fail. Companies often communicate high-level strategies and principles but provide little evidence of how these are implemented in practice.

The study identifies several troubling trends across industries. Commitment to governance frameworks is low, with only a small proportion of companies aligning their AI strategies with formal governance systems. Accountability structures are also limited: while some firms report board-level oversight, detailed operational controls and monitoring mechanisms are often absent.

Protections for workers are likewise weak, with companies failing to demonstrate adequate safeguards for employees affected by AI-driven changes, and data governance is poor, with a majority of firms lacking clear policies on training data quality and third-party data use.

The report also noted that ethical considerations such as human rights and environmental impacts are frequently sidelined in favour of compliance-focused concerns like data privacy.

One of the most striking findings of the study is the lack of meaningful disclosure on ethical and human rights impacts.

According to the data presented, only a small percentage of companies report conducting human rights or ethical impact assessments, while most focus primarily on privacy and compliance risks.

Similarly, independent oversight remains rare: very few companies disclose the existence of external ethics advisory boards or third-party audits of AI systems, raising concerns about the credibility of internal governance claims.

UNESCO is warning that the speed of AI adoption is exceeding the pace of regulatory development.

UNESCO’s Assistant Director-General for Natural Sciences and Social and Human Sciences, Ms Lidia Arthur Brito, said in a foreword to the report: “The data confirms both the immense momentum of AI adoption and the stark reality of the current governance gap. Much like the broader ecosystem, companies across industries are embedding new AI capabilities faster than they are formalising accountability, internal controls, and oversight. This lag creates risks, ranging from operational failures and embedded biases to severe reputational harm.”

The report underscores the importance of UNESCO’s global standards on AI ethics, which aim to guide governments and companies in balancing innovation with human rights and societal well-being.

It calls on businesses, particularly large multinational corporations, to take the lead in developing transparent and accountable AI systems, while also supporting smaller firms that may lack the resources to implement robust governance frameworks.

The report concludes that the central challenge facing AI governance is no longer awareness but implementation, urging companies to move beyond broad commitments and demonstrate how accountability works in real-world conditions, including how systems are monitored, corrected, or withdrawn when necessary.

As AI continues to shape economies and societies, the findings highlight an urgent need for stronger oversight, clearer standards, and greater transparency to ensure that technological innovation serves the public interest rather than undermining it.