In October 2023, Secretary-General António Guterres convened the AI Advisory Body with the aim “to undertake analysis and advance recommendations for the international governance of AI.”

In December 2023, they published their interim report, Governing AI for Humanity, which outlines the principles that global governance of AI should be based on.

Currently, they are inviting individuals, groups, and organisations to provide feedback and recommendations which will help them structure the final report ahead of the Summit of the Future in the summer of 2024.

I think this is a unique opportunity to help shape the UN’s vision, discourse, and future recommendations on AI, global governance, and global development, so if you haven’t heard about this before and are interested, please submit your inputs through this form by 31st March 2024.

A few examples of what the UN vision on AI might shape:

  • the international narrative on values and expectations for the global governance of AI
  • the UN development agenda after the SDGs
  • country-specific recommendations on the use and regulation of AI
  • UN members’ engagement with current governance initiatives (e.g. the AI Safety Summit, the U.S. Executive Order)
  • the deployment of AI for the SDGs

If you would like to work on this together or discuss other potential strategies for action, please contact me at joanna.wiaterek@gmail.com.

The Global Majority must be welcomed and given an active position at the AI table. The urgent question is how best to facilitate that. Recommendations are being shaped right now, and the UN will inevitably have a strong influence on the long-term narrative. Let’s help ensure it is of the highest quality!


 

Comments

After reviewing the report, my comments are as follows:

 

Opportunities and Enablers:

Democratizing access to AI capabilities and infrastructure: expanding access to AI tools, datasets, compute resources, and educational opportunities, especially for underrepresented groups and regions.

Providing incentives for AI applications focused on social good, sustainability, and inclusive growth is important. The UN could highlight exemplary AI projects aligned with the SDGs.

 

Risks and Challenges:

Include intrinsic safety as one of the principles in AI design and risk assessment:

Intrinsic safety is a key principle that should be integrated into the design, development and deployment of AI systems from the ground up to mitigate risks. This means building in safeguards, constraints and fail-safe mechanisms that prevent AI systems from causing unintended harm, even in the case of failures or misuse.
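To make the idea of "building in" safeguards a little more concrete, here is a minimal, hypothetical Python sketch of a wrapper that checks inputs and outputs against a policy and falls back to a safe default whenever a check or the model itself fails. The names (SafeModelWrapper, violates_policy, SAFE_FALLBACK) and the toy checks are my own illustrative assumptions, not anything proposed in the report.

```python
# Illustrative sketch only: names and checks are hypothetical, not from the report.
from dataclasses import dataclass
from typing import Callable

SAFE_FALLBACK = "I can't help with that request."

@dataclass
class SafeModelWrapper:
    """Wraps an arbitrary text model with built-in (intrinsic) safeguards."""
    model: Callable[[str], str]             # underlying model: prompt -> response
    violates_policy: Callable[[str], bool]  # constraint check applied to inputs and outputs
    max_output_chars: int = 2000            # hard limit as a simple fail-safe constraint

    def generate(self, prompt: str) -> str:
        # Refuse clearly disallowed inputs before the model ever runs.
        if self.violates_policy(prompt):
            return SAFE_FALLBACK
        try:
            response = self.model(prompt)
        except Exception:
            # Fail safe, not silent: any internal failure degrades to the fallback.
            return SAFE_FALLBACK
        # Check the output against the same constraints and enforce hard limits.
        if self.violates_policy(response) or len(response) > self.max_output_chars:
            return SAFE_FALLBACK
        return response

# Example usage with toy stand-ins for the model and the policy check.
if __name__ == "__main__":
    toy_model = lambda p: f"Echo: {p}"
    toy_check = lambda text: "forbidden" in text.lower()
    wrapper = SafeModelWrapper(model=toy_model, violates_policy=toy_check)
    print(wrapper.generate("Hello"))            # -> "Echo: Hello"
    print(wrapper.generate("forbidden topic"))  # -> safe fallback
```

The point of the sketch is only that the safeguards sit inside the system's own control flow rather than being bolted on afterwards, which is the sense of "intrinsic" intended above.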

 

Guiding principles for the formation of new global governance institutions for AI:

Enforcement mechanisms: clearly define the authority and mechanisms these institutions would use to enforce their decisions or policies.

 

Institutional Functions that an international governance regime for AI should carry out: 

Drawing on the FDA's Adverse Event Reporting System (AERS), I recommend establishing a similar system as a tool for a global AI safety monitoring program.
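Purely as an illustration of what structured incident reports for such a system might contain, here is a minimal, hypothetical Python sketch of an AI "adverse event" record and a toy registry. The field names, severity scale, and class names are my own assumptions for the sake of the example, not a proposed standard or anything taken from the FDA system.

```python
# Hypothetical sketch of a structured AI adverse-event record and registry,
# loosely inspired by the idea of adverse event reporting; fields are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import List
import json

@dataclass
class AIAdverseEvent:
    event_id: str
    reported_on: date
    system_name: str                 # the AI system involved
    deployer: str                    # organisation operating the system
    country: str
    severity: str                    # e.g. "low" / "moderate" / "severe" (assumed scale)
    description: str                 # free-text account of the harm or near-miss
    affected_groups: List[str] = field(default_factory=list)

class AdverseEventRegistry:
    """Toy in-memory registry; a real system would need vetting, access control, etc."""
    def __init__(self) -> None:
        self._events: List[AIAdverseEvent] = []

    def submit(self, event: AIAdverseEvent) -> None:
        self._events.append(event)

    def export_json(self) -> str:
        return json.dumps([asdict(e) for e in self._events], default=str, indent=2)

# Example usage with made-up data.
if __name__ == "__main__":
    registry = AdverseEventRegistry()
    registry.submit(AIAdverseEvent(
        event_id="2024-0001",
        reported_on=date(2024, 3, 1),
        system_name="ExampleTriageModel",
        deployer="Example Health Ministry",
        country="Exampleland",
        severity="moderate",
        description="Model systematically deprioritised patients from rural clinics.",
        affected_groups=["rural patients"],
    ))
    print(registry.export_json())
```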

I would also share this on LessWrong if you haven't already!
