Groups Examine Governments’ National AI Strategies from Human Rights Perspective

Charles Bradley, ED, Global Partners Digital

Global Partners Digital (GPD) and Stanford’s Global Digital Policy Incubator have released a report that examines governments’ National Artificial Intelligence (AI) Strategies from a human rights perspective.

The report, titled “National Artificial Intelligence Strategies and Human Rights: A Review,” examines existing strategies adopted by governments and regional organisations since 2017, assessing the extent to which human rights considerations have been incorporated. It concludes with a series of recommendations for policymakers looking to develop or revise AI strategies in the future.

As AI applications are embedded in different areas of life – from healthcare to labour, criminal justice to retail – governments are increasingly conscious of the potential impacts of AI. At the same time, they seek to take advantage of the economic opportunities offered by this rapidly developing sector. One way countries are addressing AI policy is by developing comprehensive, cross-governmental strategies – National AI Strategies – outlining the actions the government will take. Almost 30 countries have developed AI strategies since 2017, when Canada became the first country to adopt one.

As countries begin to develop strategies around AI, it becomes critical that they also consider the potential harms and opportunities as they relate to human rights, and prepare themselves to protect and promote human rights in the context of this new technology.

Eileen Donahoe, Executive Director of the Global Digital Policy Incubator at Stanford University, one of the partners on the report, said: “The key takeaway message of this report is that human rights principles should be embedded into National AI Strategies at the outset. A human rights-based approach to AI is the best way for governments to protect citizens from potential harms as they arise, but also to capture the benefits for society.”

The report finds that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or a concrete assessment of how various AI applications impact human rights. In all but a few cases, the strategies also lack depth or specificity on how human rights should be protected in the context of AI – in contrast to their level of specificity on other issues, such as economic competitiveness or innovation.

It provides recommendations, falling under six broad themes, to help governments develop human rights-based national AI strategies.

The recommendations are:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights – and how to mitigate the risks associated with those impacts – should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regard to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should examine the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether these processes need revision in light of the particular nature of AI as a technology, and whether those administering them need capacity-building to handle complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process.

Dr. Megan Metzger, Associate Director of Research for the Global Digital Policy Incubator, and co-author of the report said: “We hope that our report will help governments in better addressing human rights questions in relation to AI in their national strategies, and provide others with guidance on how to evaluate these strategies as they are released.”