A Turning Point for Global Artificial Intelligence Governance: Understanding the United Nations’ Draft Resolution on Artificial Intelligence and Its Importance


On 18 August 2025, the United Nations General Assembly adopted draft resolution A/79/L.118, setting the terms of reference for the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance. The resolution establishes the first universal framework for analysing the risks, potential, and governance needs of artificial intelligence (AI). It builds on the Pact for the Future (resolution 79/1) and the Global Digital Compact and, importantly, restricts its application to the peaceful uses of AI.

The move is historic. For decades, the UN has led global discourse on disarmament, human rights, and climate change. Now it seeks to leverage its convening power in a new field: AI. In doing so, the UN acknowledges a reality long cautioned by scholars and policy officials: regulating AI cannot be left to fragmented national policies or corporate self-regulation.


Why This Resolution Matters

AI is no longer experimental. It underpins everything from financial systems and healthcare diagnostics to hiring decisions, digital policing, and judicial sentencing. Tools such as ChatGPT, autonomous driving systems, and algorithmic trading platforms show AI’s ability to scale across borders in seconds. With such reach, local regulation alone cannot contain global risks.

The current regulatory landscape is uneven:

  • The European Union leads with the AI Act, a risk-based legal framework. While it sets comprehensive standards for high-risk AI systems, it struggles with rapid technological change, creating delays in applying rules to emerging AI innovations.
  • The United States relies on a patchwork of sectoral laws and voluntary guidelines. This encourages innovation but leaves gaps in accountability, particularly in AI-driven decision-making in employment, healthcare, and law enforcement.
  • China enforces centralised, state-driven regulations, prioritising control and alignment with political goals. While this allows swift implementation, it raises concerns about surveillance, censorship, and individual rights.
  • Africa and Latin America are exploring national strategies, but most countries lack resources for robust oversight, and there is limited technical expertise to enforce compliance effectively.

This fragmentation produces “AI governance gaps”: conflicting standards, regulatory arbitrage, and uneven protection for citizens. The UN’s initiative attempts to provide a global baseline — not to replace national efforts, but to coordinate them and encourage shared principles, standards, and ethical guidelines.


The Independent International Scientific Panel on AI

The proposed Panel would consist of 40 independent experts, serving in their individual capacities and elected for three-year terms. Geographic, gender, and disciplinary balance would be central to its composition.


Its mandate would involve:

  • Providing annual, evidence-based scientific assessments of AI risks, opportunities, and impacts. These reports would integrate research across computer science, law, ethics, economics, and the social sciences, furnishing policymakers with an empirical evidence base.

  • Issuing policy-relevant but non-prescriptive briefs that remain neutral while flagging possible regulatory and ethical considerations. For example, the Panel might examine the use of AI in healthcare or criminal justice, noting risks of bias or disparities in access without mandating a single regulatory answer.

  • Consulting external experts and establishing working groups on highly technical topics, such as generative AI, autonomous systems, or AI in public administration, with the flexibility to address emerging technologies.

  • Electing two Co-Chairs, one from a developed country and one from a developing country, to reflect geographic balance, inclusivity, and diverse perspectives in leadership.


This structure draws explicit inspiration from the Intergovernmental Panel on Climate Change (IPCC). Just as the IPCC has shaped climate negotiations with its authoritative science, the AI Panel could provide a trusted reference point for governments navigating AI’s complexity.

Safeguards such as disclosure of conflicts of interest and ongoing independence requirements are crucial. AI is a field dominated by powerful corporate actors with vested interests; without such safeguards, the Panel risks capture and the erosion of global trust in its recommendations.


The Global Dialogue on AI Governance

The second pillar of the resolution is the Global Dialogue on AI Governance, a platform for states, civil society, private-sector actors, and academia. Its purpose is to foster inclusive, transparent, multi-stakeholder discussion of how AI can be harnessed responsibly.

The Dialogue’s agenda includes:

  • Building trustworthy, human-centred AI systems: This requires AI that is safe, reliable, transparent, and accountable. Trustworthy AI includes mechanisms for auditing, reporting, and monitoring systems for errors, bias, and unintended consequences. For instance, predictive policing tools must be tested for racial bias before deployment.
  • Addressing capacity gaps: Developing countries often lack resources, technical infrastructure, and regulatory expertise. Capacity-building may include training for regulators and legal professionals, investments in high-performance computing, and partnerships with international experts. The aim is to avoid leaving countries dependent on foreign standards or technologies.
  • Exploring ethical, cultural, and linguistic dimensions of AI: AI affects societies differently. Language models, facial recognition, and cultural recommendation algorithms may inadvertently perpetuate bias. The Dialogue will discuss contextualised approaches that respect cultural diversity, local ethics, and linguistic inclusivity.
  • Encouraging interoperability across governance systems: Different countries may adopt varying AI rules. Interoperability ensures global systems can operate together, reduce regulatory conflicts, and promote fair competition. This might include shared frameworks for transparency reporting or cross-border AI audits.
  • Ensuring human rights compliance, transparency, accountability, and oversight: AI must uphold principles such as due process, freedom of expression, and non-discrimination. The Dialogue will guide legal and operational standards for AI use, ensuring human control over critical decisions in healthcare, law enforcement, and finance.
  • Promoting open-source AI models and open data: Access to AI innovations should not be restricted to a few corporations or countries. Open-source tools and data allow broader participation, innovation, and equitable distribution of benefits. However, the Dialogue will also address safeguards to prevent misuse, such as deepfakes or automated cyberattacks.

The Dialogue will meet annually, alternating between Geneva and New York, alongside existing UN events. It will launch in 2025 during the General Assembly’s high-level week and subsequently align with the ITU’s AI for Good Summit (2026) and the UN’s Science, Technology, and Innovation Forum (2027).


Comparisons with Regional Approaches

The UN’s plan must be understood against regional governance trends:

  • European Union: The AI Act is the world’s most comprehensive regulation, but its bureaucratic complexity slows adaptation to fast-changing technologies. It demonstrates a high standard of ethics and risk management, but may be difficult to implement consistently across all sectors.
  • United States: Innovation thrives under a fragmented regime, but liability gaps remain — evident in cases like the 2019 Tesla Autopilot crash, which exposed weaknesses in accountability and enforcement mechanisms in AI-related accidents.
  • China: Rapid, centralised regulation enables control and swift adoption but raises human rights concerns, as seen in AI-driven mass surveillance and the social credit system, which illustrate the trade-offs between state control and individual freedoms.
  • Africa: The African Union is drafting a Continental AI Strategy, but progress is slowed by infrastructure deficits, skills shortages, and resource limitations. The Vodacom Tanzania chatbot lawsuit ($4.3M) highlights the potential for AI to cause real-world harm even in developing contexts.

The UN approach differs in three ways: inclusivity, neutrality, and universality. Unlike regional frameworks that reflect political or economic blocs, this resolution aspires to give equal voice to both innovators and late adopters.


Legal Principles Underpinning the Resolution

Though forward-looking, the draft resolution is grounded in traditional legal principles:

  1. Corporate Liability – Extends long-standing doctrines of product liability and negligence to AI. For example, an AI system that misdiagnoses a patient could expose the developer and deploying institution to legal responsibility.
  2. Transparency and Due Process – Ensures individuals can understand, challenge, or appeal AI-driven decisions. This principle aligns with constitutional guarantees of procedural fairness and accountability.
  3. Equality and Non-Discrimination – AI bias must be addressed through anti-discrimination laws. For instance, hiring algorithms must be evaluated for gender or racial bias to comply with employment law.
  4. Privacy and Data Protection – Reinforces obligations in line with human rights law and instruments like the GDPR, protecting personal data and requiring informed consent for AI data use.
  5. International Law Principles – The Dialogue aims to ensure AI governance aligns with human rights treaties, sustainable development objectives, and the principle of sovereign equality of states, promoting cooperation without undermining national sovereignty.


Challenges Ahead

The resolution is ambitious, but several challenges loom:

  • Speed vs. Process: AI evolves in months, while UN processes move in years. The Panel must maintain flexibility to provide timely guidance.
  • Geopolitical Tensions: Rivalries among AI powers, notably the US, EU, and China, could obstruct consensus or delay implementation.
  • Inclusivity vs. Influence: Developing countries may participate symbolically unless empowered with resources, representation, and technical knowledge.
  • Funding and Independence: Reliance on voluntary contributions risks conflicts of interest. Transparent reporting mechanisms will be critical.
  • Enforcement: As with most UN bodies, the Panel and Dialogue wield moral and persuasive power, not legally binding authority, requiring diplomatic skill to achieve tangible outcomes.


Conclusion

If put into practice, A/79/L.118 could do for AI what the IPCC did for climate change: create a scientific evidence base and a forum for open worldwide debate. Its success will hinge on implementation: whether it can deliver authoritative knowledge, bridge divides, and build consensus in a polarised world.


This decision is historic because it recognises that AI is not just a question of algorithms or innovation, but of humanity's shared future. With AI capable of either deepening inequality or accelerating progress toward the Sustainable Development Goals, the UN initiative comes at a critical and opportune moment.


It will not solve every governance question, but it begins a process toward something larger: a global social contract for AI that ensures technology serves people, not the other way around.


References

·      European Union. (2025, February 19). Artificial Intelligence Act: First regulation on artificial intelligence. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

·      Le Monde. (2025, August 2). EU Artificial Intelligence Act takes effect, sparking new Europe-US clash. https://www.lemonde.fr/en/economy/article/2025/08/02/eu-artificial-intelligence-regulation-takes-effect-sparking-new-europe-us-clash_6744002_19.html

·      Intergovernmental Panel on Climate Change. (n.d.). Intergovernmental Panel on Climate Change. https://www.ipcc.ch/

·      Reuters. (2025, August 25). Tesla rejected $60 million settlement before losing $243 million Autopilot verdict. https://www.reuters.com/legal/litigation/tesla-rejected-60-million-settlement-before-losing-243-million-autopilot-verdict-2025-08-25/

·      The Guardian. (2025, August 1). Jury orders Tesla to pay more than $200 million to plaintiffs in deadly 2019 Autopilot crash. https://www.theguardian.com/technology/2025/aug/01/tesla-fatal-autopilot-crash-verdict

·      United Nations. (2024, September 22). Global Digital Compact. https://www.un.org/global-digital-compact/en

·      United Nations. (2024, September 22). Resolution A/RES/79/1: The Pact for the Future. https://docs.un.org/en/A/RES/79/1

·      United Nations. (2025, August 18). Draft resolution A/79/L.118: Terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on Artificial Intelligence and the Global Dialogue on Artificial Intelligence Governance. https://docs.un.org/en/A/79/L.118
