In recent years, the international community has witnessed a surge in momentum toward the development of standards for artificial intelligence (AI) governance. The United Kingdom, home to a burgeoning AI industry, has positioned itself as a key player in these discussions. Against this backdrop, the AI Safety Summit, convened last month (Nov. 1-2) by Prime Minister Rishi Sunak, marked a personal win for the embattled British leader, who wants the U.K. to play a role in shaping the trajectory of global AI governance.
Domestically, the summit was an opportunity for Prime Minister Sunak to pitch himself as an influential global statesman in the hopes of rallying support for himself and the governing Conservative Party ahead of the looming elections. It also allowed him to demonstrate the U.K.'s ability to act as a consensus builder for global AI governance, a role that neither the United States nor the European Union has been able to fill due to their respective political constraints. Looking ahead, London aims to dilute Brussels' disproportionate influence over the global AI governance conversation, a goal incidentally shared by Washington. The November AI Safety Summit thus reflected the U.K.'s determination, again shared by the U.S., to avoid a repeat of the EU's General Data Protection Regulation (GDPR), which became the worldwide regulatory standard for cross-border personal data safeguards. None of the non-European AI powers wants to see the same happen with AI rules and regulations crafted without their input.
The AI Safety Summit was attended by major world leaders and industry representatives, including a rare Chinese presence, underlining the urgency of addressing the risks posed by AI and lending an overarching sense of legitimacy to the conference. While the Chinese delegation was not headed by its most senior statesmen, Beijing's participation was nonetheless invaluable, as China is an essential stakeholder in setting a baseline for AI governance standards. Prime Minister Sunak thus succeeded, as he had hoped, in gathering the nations, companies, and thought leaders at the forefront of AI advancements.
However, the event’s broader significance needs to be understood in context. Notably, it coincided with the release of the Biden administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, adding a layer of complexity to the evolving global governance landscape. While the summit produced a tangible result in the form of the “Bletchley Declaration,” the document lacks specific policy goals and includes language that is modest compared to more comprehensive initiatives like the EU AI Act or the White House’s executive order. Still, the declaration does provide a useful foundation for future AI governance discussions. The summit document highlights key principles that could form the foundation for broader consensus. For example, the declaration centers around three pillars:
Global Opportunities: An affirmation of AI’s potential to positively transform and enhance human wellbeing, peace, and prosperity on a global scale.
Inclusive Development and Cooperation: An acknowledgment of the need for the safe development of AI, its transformative opportunities for public services, science, sustainability, and human rights, and a call for inclusive, collaborative efforts in AI development.
Addressing Risks and Ensuring Safety: The recognition of significant risks associated with AI, especially at the frontier, and the urgency to address issues such as human rights, transparency, fairness, accountability, and safety, along with the commitment to international cooperation, risk-based policies, and scientific research to ensure responsible AI development and deployment.
The EU’s AI Act, intended to compel companies with AI products in the European market to adhere to comprehensive rules, and the U.S. executive order, designed as a model for global regulation, overshadowed the summit. These initiatives, with their specificity and enforceability, are likely to shape the future of AI governance more significantly than the U.K.-hosted summit’s inclusive but less impactful discussions.
The ongoing debate over AI's short-term risks, such as disinformation and biased outcomes, versus its potential existential threats underscored the complexity of aligning diverse perspectives. While the Nov. 1-2 summit aimed to address longer-term risks and provide a platform for diverse voices, it struggled to deliver tangible outcomes.
Two follow-up conferences planned for 2024, hosted by South Korea and France, signify a continued effort to advance these discussions. As they unfold, the expected passage of the EU's AI Act and the implementation of President Joe Biden's executive order on AI safety and security will likely have more far-reaching implications for the global direction of governance in this space than the results of the Bletchley Summit. However, countries such as India, the U.S., and the U.K. will continue to advance their own proactive approaches to AI governance in hopes of diluting the EU's ability to single-handedly shape global standards around European priorities.
In the Middle East, Saudi Arabia and the United Arab Emirates stand out in discussions on global AI governance. Saudi Arabia, a burgeoning G20 economy, has actively supported the G20's endeavors on the topic, presenting the perspectives of the grouping's non-Western members. This aligns with Riyadh's domestic initiatives to enhance its AI capabilities, particularly in developing large language models in both Arabic and English. Meanwhile, the UAE, which already serves as a regional technology hub, has demonstrated greater engagement in championing the technology industry and embracing diverse viewpoints on AI governance.
While the U.K. AI Safety Summit marked a diplomatic milestone and a definite step in the right direction, its impact is contingent on the ability of the global community to navigate the multiparty landscape of AI governance, where enforceable regulations and comprehensive initiatives will play a defining role.
Mohammed Soliman is the director of MEI’s Strategic Technologies and Cyber Security Program, and a Manager at McLarty Associates’ Middle East and North Africa Practice. His work focuses on the intersection of technology, geopolitics, and business in the Middle East and North Africa.