This is the input of the ‘Finnish Information Policy Working Group’, a collaboration of information policy actors from all Finnish parliamentary parties, to the UN roadmap for the
intergovernmental process and consultations to identify the terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on Artificial Intelligence (AI) and the Global Dialogue on AI Governance.
https://www.un.org/global-digital-compact/en/ai
1. Mandate of the Panel
The Independent International Scientific Panel on AI should serve as a neutral, expert body providing decision-makers with evidence-based analysis and foresight on AI. Its mandate would be to assess the state of the art in AI and its impacts, risks, and opportunities across society, the economy, and the environment.
Additionally, AI’s transformative effects on labor markets, taxation, and social security systems should be a core part of the Panel’s assessments. AI-driven automation is expected to shift the balance of value creation, impacting traditional tax structures and labor market participation. The Panel should analyze how AI-induced productivity changes influence job availability, income distribution, and taxation models, ensuring that policymakers have evidence-based strategies to address these shifts.
In practice, this means the Panel would synthesize the best available scientific research on AI development (including technical capabilities and trends) as well as the technology’s social, ethical, and economic implications. The Panel should inform global policy without being policy-prescriptive. Instead, it would offer a “state of science” on AI – for example, clarifying areas of scientific consensus and uncertainty – thereby equipping governments and stakeholders with a common knowledge base for informed decisions.
Crucially, the Panel’s mandate should align with human-centered and sustainable AI principles. The Panel’s work should help countries understand how AI can be developed and deployed in ways that are ethical, inclusive, and aligned with international law, while accelerating progress toward all 17 Sustainable Development Goals. In sum, the Panel would be analogous to the IPCC (Intergovernmental Panel on Climate Change) for AI: multidisciplinary, independent, and focused on scientific consensus-building to bridge divides in understanding AI’s risks and benefits. By providing credible, globally legitimate assessments of AI’s impacts, the Panel can help forge a shared understanding among nations and stakeholders – a vital foundation for coordinated AI governance. This mandate balances AI ethics, governance, sustainability, and security considerations, ensuring the Panel addresses issues from algorithmic bias and safety to AI’s role in sustainable development and global inequality.
2. Size, Composition, and Governance Structure
The Panel should be large enough to encompass diverse expertise but small enough to be manageable and decisive. A core panel of a few dozen eminent experts (e.g. 20–30 members) could be appointed, supported by a broader network of contributing researchers in various working groups. The composition must be multidisciplinary – including computer and data scientists, ethicists, legal and policy scholars, economists, sociologists, environmental scientists, political scientists, and experts from other relevant fields – reflecting AI’s far-reaching impact. It must also be geographically diverse and gender-balanced so that all regions and perspectives are represented. The European Union, for example, has stressed that the Panel should consist of scientists from a broad range of disciplines with balanced geographical and gender representation, covering the technical, social, economic, environmental, and cultural dimensions of AI. Such diversity will bolster the Panel’s legitimacy and its ability to consider geopolitical and cultural contexts in AI development.
In terms of governance, the Panel should be established within the UN system but with strong guarantees of scientific independence. It would likely report to the UN (for instance, to the Secretary-General and General Assembly), yet its assessments should be free from political influence or veto. One possible structure is to have a small executive board or bureau elected from the Panel’s members to steer its work (similar to the IPCC Bureau), and a dedicated secretariat to coordinate research and logistics. This secretariat could be housed in the new UN AI Office (as recommended in the UN’s Governing AI for Humanity report) or a relevant UN agency to leverage existing staff and resources. Integrating the Panel into existing UN structures (e.g. drawing on UNESCO for ethics, ITU for technical standards, and OHCHR for human rights) will create synergies and avoid duplication.
The Panel’s governance should also include clear conflict-of-interest policies and transparency rules, given the influence of private sector and national interests in AI. While members serve in their personal capacity (not as state or corporate representatives), observer arrangements can ensure multi-stakeholder input (e.g. allowing industry, civil society, and UN agencies to observe or advise on non-sensitive aspects). Overall, this structure balances the scientific rigor and independence of the Panel with the need for inclusive, globally trusted governance.
3. Nomination and Selection Process
The nomination and selection of Panel experts should be transparent, merit-based, and inclusive. A good approach would be a multi-source nomination process: UN Member States, academic institutions, scientific societies, industry bodies, and civil society organizations could all be invited to nominate qualified experts. Additionally, an open call could allow individual experts to apply, ensuring emerging talents from underrepresented groups have a chance. To maintain the Panel’s credibility, nominees should have demonstrated expertise and a strong scientific or technical background in AI or related fields (including its societal impacts). They must also commit to act in the public interest, with ethical integrity and independence (serving in their personal capacity, not as delegates).
Given the ethical challenges of AI, including bias in datasets, opaque decision-making processes, and the societal consequences of autonomous systems, the Panel should integrate a dedicated AI ethics subcommittee. This group would be responsible for examining fairness, transparency, and accountability in AI systems, with a particular focus on human rights considerations. The subcommittee should include experts in law, philosophy, and governance to ensure that AI development aligns with fundamental human values.
A neutral committee or mechanism should carry out the selection. For example, the UN Secretary-General might appoint a small selection panel composed of respected figures from the global scientific community (potentially including representatives from existing international science bodies) to evaluate nominations. This committee would aim to achieve a balanced panel across disciplines, geography, gender, and stakeholder backgrounds. It should apply clear criteria – e.g. scientific excellence, experience in interdisciplinary AI issues, and knowledge of governance or ethics – and consider the need for representation from developing countries (perhaps allocating a certain number of seats per region). Geopolitical balance is key to ensuring that all countries trust the Panel’s composition; no single country or bloc should dominate.
Once a shortlist is formed, there could be a consultation with Member States for any objections or feedback, but without turning the process into a political negotiation over seats. The Secretary-General could make the final appointments, or the members could be endorsed by the General Assembly, underscoring UN ownership while still respecting the independent, expert nature of the Panel. Members might serve staggered terms (e.g. 2–3 years), renewable once, to allow refreshment of expertise over time. A rotation system ensures new perspectives (for instance, as AI technology evolves rapidly, the Panel can bring in new experts in cutting-edge areas). Throughout this process, openness and fairness should be emphasized – for instance, publishing the selection criteria and the list of appointed experts. This will instil confidence that the Panel comprises a genuinely independent, scientific, and multidisciplinary group of experts, as some Member States advocate.
4. Types of Assessments and Frequency
The Panel should deliver regular assessments of AI’s impacts, risks, and opportunities grounded in rigorous evidence. These assessments would take multiple forms:
Periodic Comprehensive Reports: Much like climate assessments, the Panel could produce a major assessment report every two years that provides a holistic update on AI. This report would survey the latest developments in AI capabilities and evaluate societal impacts (e.g. on employment, education, inequality), environmental effects (such as energy use), and emerging risks (from algorithmic bias to security threats and existential risks). It would also highlight opportunities for leveraging AI for sustainable development and human well-being. Such reports should be widely peer-reviewed and present consensus findings with transparent uncertainty ranges.
Thematic or Sectoral Studies: The Panel can issue thematic assessments focused on specific issues or sectors (e.g., AI and healthcare, AI and climate change) between comprehensive reports. These targeted reports can dive deeper into particular risks or opportunities, providing timely input to relevant international discussions.
Rapid Response Briefs: Given the fast pace of AI, the Panel should have the agility to release brief updates on new developments (e.g. a breakthrough in generative AI). While maintaining scientific rigor, such briefs could be produced relatively quickly, ensuring policymakers worldwide are kept informed of the latest critical findings.
In terms of frequency, at minimum the Panel should aim to produce an annual report or summary of the state of AI. The European Union has suggested at least one major report per year. Comprehensive assessment cycles (spanning a few years) would allow deeper analysis and consensus-building, while interim outputs keep information current.
All assessments should be evidence-based, peer-reviewed, and multidisciplinary, covering not only technical aspects but also social, economic, ethical, and geopolitical dimensions of AI. Over time, these assessments could become the definitive reference for the international community on AI – much as IPCC reports are for climate – and would be timed to feed into global policy processes.
5. Global Dialogue on AI Governance – Mandate
The Global Dialogue on AI Governance should act as an inclusive forum under UN auspices for international cooperation on AI policy. Its core mandate is to facilitate open, ongoing exchange among governments and all stakeholders on maximising AI’s benefits while mitigating its risks, thereby moving toward coherent and enforceable global governance approaches. In essence, this Dialogue will provide the space to “turn a patchwork of initiatives into a coherent approach” to AI governance that adheres to international law, human rights, and sustainable development goals.
Key elements of the mandate include:
- Knowledge Sharing and Best Practices: The Dialogue should enable countries to share policy approaches, regulatory experiences, and ethical frameworks for AI.
- Global Policy Convergence: It should work towards bridging gaps in AI governance by highlighting areas of emerging consensus (e.g., transparency or human oversight requirements) and discussing areas of divergence.
- Addressing Risks and Opportunities: The Dialogue’s mandate covers AI’s ethical issues (bias, privacy, autonomy), security (misuse, arms race concerns), and economic/developmental impacts. It should consider how AI governance can support inclusive and sustainable development.
- Agenda-Setting for Action: While not a decision-making body, the Dialogue can produce recommendations or highlight priority areas for formal action by the UN or other international bodies.
- Towards a Binding Decision-Making Framework: Recognizing the increasing role of AI in global security, economy, and human rights, the Dialogue should work towards the establishment of an internationally binding decision-making system in cooperation with nation-states. This system should ensure effective enforcement mechanisms while respecting national sovereignty, fostering international collaboration, and securing a future where AI serves humanity responsibly and ethically.
Importantly, the Dialogue must be multilateral and multi-stakeholder in nature. It is a UN-based platform that gives every country a seat at the table and also invites the private sector, academia, civil society, and technical experts. This ensures that state priorities and international norms guide the outcomes while benefiting from the perspectives of non-state actors. The ultimate goal is to build a shared global vision and a legally robust cooperative approach to AI governance, aligning efforts worldwide so AI is developed and used in a manner that is safe, ethical, and beneficial for all humanity.
6. Outcomes the Dialogue Should Achieve
As a process-focused forum, the Global Dialogue will yield outcomes that are more about frameworks and shared understandings than binding decisions. Nevertheless, it should strive for concrete and valuable results, such as:
1. Consensus Principles or Guidelines: One key outcome could be the articulation of shared principles for AI governance, building on existing ones like the OECD AI Principles or UN human rights frameworks. These principles, while non-binding, could guide national policies and industry standards.
2. Best Practice Compendium: By sharing experiences, the Dialogue can produce compendium reports or toolkits of best practices, including regulatory approaches, audit methods, and risk assessment frameworks.
3. Policy Recommendations and Roadmaps: Discussions may crystallize into policy option recommendations for various stakeholders. For instance, the Dialogue could propose an international code of conduct on AI or a harmonization of certain technical standards.
4. Coordination of Initiatives: The Dialogue can improve coordination among the myriad AI governance initiatives globally, linking forums like the OECD, Global Partnership on AI, and UNESCO’s AI Ethics work. It might produce joint statements or partnership announcements.
5. Knowledge Sharing and Capacity-Building: The Dialogue should generate outcomes that help less-resourced stakeholders, such as training workshops or funding commitments announced as side benefits of Dialogue meetings.
6. Identification of Emerging Issues: By convening regularly, the Dialogue will serve as an early-warning system for new AI challenges or opportunities, guiding the focus of the Independent Scientific Panel’s research.
7. Mitigation of AI’s Security Risks: Given the increasing integration of AI in military and security applications, the Dialogue must prioritize discussions on autonomous weapon systems, cyber warfare, and AI-driven intelligence gathering. International coordination is necessary to prevent an AI arms race and establish safeguards against malicious AI use in global conflicts. The Panel should closely monitor developments in AI-powered military applications and offer evidence-based insights on risks and mitigation strategies.
Ultimately, the success of the Dialogue will be seen in greater convergence of AI governance efforts worldwide. It is a platform for building the “holistic and global approach” needed for AI governance, with outcomes that equip all stakeholders – especially governments – to craft policies that harness AI’s benefits while minimizing harm.
7. Involvement of Governments and Stakeholders
Ensuring broad and meaningful participation is essential for the Dialogue’s legitimacy and impact. Governments must be at the center of this process, since they hold regulatory authority and represent the public interest; at the same time, industry, academia, civil society, and technical experts must be deeply involved. The Dialogue’s design should therefore embody a multi-stakeholder ethos underpinned by intergovernmental leadership.
How to involve governments:
The Dialogue could be structured as a series of forums or conferences convened by the United Nations, guaranteeing that all Member States are invited on an equal footing. Certain segments (e.g. high-level sessions) could be reserved for government statements while working groups could be led or co-led by government representatives. Keeping a strong intergovernmental foundation will ensure states’ priorities remain central. This might be achieved through co-chairing arrangements (e.g. one from a developed and one from a developing country).
How to involve other stakeholders:
- Multi-stakeholder Advisory Group: A planning committee or advisory body that includes representatives from tech companies, academia, NGOs, and possibly youth/indigenous groups.
- Open Participation and Consultation: Allow open floor discussions, breakout sessions, and roundtables where all can speak.
- Roles and Formats: In some sessions, stakeholders might be featured as expert panelists, discussing technical or ethical dimensions.
- Inclusion and Accessibility: Special efforts to support participation from the Global South and marginalized groups, including travel support or hybrid event formats. The EU has emphasized promoting local languages and local content.
Coordination with existing stakeholder initiatives (e.g. Global Partnership on AI, IEEE, ISO, faith-based alliances) avoids duplication. In summary, governments provide the mandate and political weight, while stakeholders provide expertise and societal perspectives. Balancing both is crucial for effective AI governance.
8. Format of the Dialogue
The Global Dialogue on AI Governance should be designed for regular, agile, and inclusive engagement, rather than a one-off conference. In line with recommendations, a feasible format is to hold the Dialogue twice a year in conjunction with existing international gatherings, with a structure that maximizes participation and concrete outcomes.
Frequency and Timing:
Hosting the Dialogue twice a year keeps momentum. According to the Governing AI for Humanity report, these meetings could be organized “on the margins of existing meetings at the United Nations”. For example, one session might coincide with the UN General Assembly’s high-level week and another with a major tech or AI event.
Rotating Venues:
Rotating between New York (UN HQ), Geneva (home to ITU and the Human Rights Council), and Paris (UNESCO, OECD) has been suggested. Occasional meetings in other regions, such as Africa or Asia, would strengthen the inclusion of the Global South.
Hybrid Format:
To maximize inclusiveness, sessions should allow virtual access, as the EU has recommended having “provision for virtual formats”. Interactive tools (Q&A, remote interventions) make the Dialogue truly global.
Structured Agenda with Multiple Tracks:
Each Dialogue could span 1-2 days with parallel tracks (e.g. AI & Ethics, AI & Development, AI & Security), plus a plenary. Interactive workshop-style sessions should be encouraged over formal speeches.
Linkages with Other Forums:
One option is to schedule Dialogue events near the ITU’s AI for Good Summit, the Internet Governance Forum, or major AI expos to capitalize on existing participant presence. This approach leverages momentum and fosters alignment.
Secretariat and Preparation:
A small secretariat (possibly under the UN Tech Envoy or a proposed UN AI Office) could coordinate logistics, prepare background papers, and ensure synergy with UN agencies. The UN Interagency Working Group on AI could help set agendas. This ensures the Dialogue remains visible, relevant, and accessible, without creating duplicative bureaucracy.
9. Relationship between the Panel and the Dialogue
The Independent Scientific Panel on AI and the Global Dialogue are distinct mechanisms but should operate in a complementary, mutually reinforcing relationship. The Panel provides expert knowledge, while the Dialogue provides the forum for debate and coordination.
Distinct but Complementary Roles:
The Panel is an expert body focused on scientific assessment; the Dialogue is an inclusive platform for policy exchange. Keeping these functions separate ensures scientific work is not politicized and policy discussions remain well-informed. The EU and others emphasize they should have “distinct and separate, but complementary functions”.
How the Panel Informs the Dialogue:
The Panel’s reports and findings should feed directly into Dialogue meetings, offering a common factual basis. The Dialogue can allocate agenda time for the Panel to present its latest assessments, ensuring policymakers have up-to-date evidence on AI risks and opportunities.
How the Dialogue Supports the Panel:
The Dialogue’s discussions will highlight pressing concerns and questions from the international community, guiding the Panel’s future research topics. The Panel gains relevance and visibility when its findings are debated in a high-level forum, increasing the likelihood that its assessments shape actual policy.
Coordination Mechanisms:
A small coordination task force or shared secretariat could synchronize their schedules, exchange feedback, and orchestrate joint outreach. By creating a feedback loop – science informing policy, and policy needs guiding science – the two bodies can function akin to the IPCC and global climate negotiations, albeit less treaty-focused.
10. Leveraging Existing UN Initiatives and Support
To be effective, the Panel and Dialogue should build on existing UN efforts on AI and digital cooperation rather than starting from scratch. The UN system’s comparative advantage is its convening power and diverse specialized agencies.
Drawing on Existing Initiatives (Panel):
- UNESCO’s Recommendation on the Ethics of AI can serve as a foundation for ethical considerations.
- ITU leads AI for Good and develops technical standards. It also runs the Global Initiative on Virtual Worlds – Discovering the Citiverse; the Citiverse, as a user interface for AI, is relevant to human interaction with AI.
- WHO, FAO, UNDP, and other agencies cover AI’s health, agriculture, and development aspects.
- OHCHR examines human rights implications (privacy, discrimination).
- UNIDIR tracks AI in weapons systems.
The Panel could form liaison teams or advisory subgroups with these agencies. Its secretariat might include seconded staff from relevant UN bodies, ensuring knowledge sharing and avoiding duplication.
Drawing on Existing Initiatives (Dialogue):
- Leverage UN Forums: The Dialogue can incorporate debates at the Security Council or Human Rights Council on AI.
- Coordinate with non-UN efforts: OECD’s AI Policy Observatory, Global Partnership on AI, or regional initiatives (EU AI Act, Council of Europe) can feed findings into the Dialogue.
- In-house UN Dialogues: If UNESCO or ITU hold AI events, their outputs can be presented in the Dialogue to inform the broader audience.
Coordinated Support by the UN System:
- A UN AI Office (as proposed) could provide secretariat functions for both the Panel and the Dialogue.
- Funding might come from the regular UN budget, voluntary contributions, or a new Global AI Fund.
- High-level leadership (Secretary-General, Deputy SG, Tech Envoy) should champion the Panel and Dialogue as part of the UN’s digital cooperation agenda.
By embedding the Panel and Dialogue within the UN system’s existing AI activities, both can “hit the ground running” and benefit from established expertise, networks, and legitimacy. This increases coherence and reduces fragmentation in AI governance efforts.
11. Concluding Remarks
The twin proposals of an International AI Panel and a Global AI Governance Dialogue represent a landmark opportunity to shape AI’s trajectory for the global good. A few final points are worth emphasizing:
- Urgency and Adaptability: AI is advancing rapidly, so both mechanisms must be set up with urgency and be able to adapt. As the UN Secretary-General noted, “Every moment of delay in establishing international guardrails increases the risk for us all.”
- Inclusivity and Equity: Special attention is needed for developing countries and marginalized communities, ensuring AI does not widen the digital divide. AI must produce inclusive outcomes.
- Ethics and Human Rights as a North Star: Technical governance must be rooted in human rights and a human-centric approach. Sustainability is also key, given AI’s environmental footprint.
- Geopolitical Balance and Trust: In an era of low trust among major powers, AI governance could be an area of cautious collaboration. Neutral, UN-led initiatives offer a platform for constructive engagement among all countries.
- Not a Silver Bullet – but a Framework for Action: The Panel and Dialogue will not instantly solve all AI governance challenges, but they provide the processes and relationships through which solutions can emerge. Their success will be measured by real-world impact, such as more informed policies and reduced AI-related harms.
By implementing these proposals with ambition and care, the international community can ensure AI evolves in a direction that is safe, equitable, and beneficial for all. The overarching goal is that “humanity’s hand guides AI forward” rather than allowing technology to dictate the future.