October 2, 2024
The challenge of governing AI for humanity
Agreeing on a global governance model for artificial intelligence is becoming imperative given the magnitude of the social and economic challenges posed by this technological shift and the current fragmentation of regulations in this field. But will we be capable of achieving it?
AI is revolutionizing the way we live and work, offering unprecedented opportunities for economic and social progress. However, the rapid advances in this field also bring complex challenges that require coordinated global attention. So far, the implementation of disparate AI regulations in different regions of the world has resulted in inconsistencies in norms and standards. This lack of harmonization not only creates obstacles for innovation and international trade but also complicates the effective management of risks associated with AI deployment, such as misinformation, data privacy, algorithmic bias, and the potential for massive job loss.
Given this scenario, there is an urgent need to establish a global AI governance system that addresses the social and economic challenges posed by this emerging technology. In this context, the recently released report Governing AI for Humanity, prepared by the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence, provides a comprehensive analysis of AI’s opportunities and risks. It offers recommendations for its international governance, emphasizing the need to establish regulatory, ethical, and collaborative frameworks to ensure that the development and implementation of AI benefit all humanity, mitigate potential risks, and address emerging inequalities.
Towards a new governance model
Among other mechanisms to ensure that all of humanity benefits from AI advances, the report recommends establishing an international scientific panel on AI that brings together experts from diverse disciplines and backgrounds. This panel would collaborate with global organizations and initiatives to gather, analyze, and promote research, publishing an annual report on AI capabilities, opportunities, risks, and uncertainties. By highlighting areas of consensus and identifying topics requiring further study, this panel could enhance transparency and inform political debates and decision-making. It could also conduct research on specific issues, such as the use of AI to discover new materials or treat neglected diseases.
The report also recommends launching an intergovernmental and multi-stakeholder political dialogue on AI governance, to be held twice a year alongside existing United Nations meetings. Its purpose would be to share best practices in AI governance that promote development while upholding the respect, protection, and fulfillment of all human rights. This includes leveraging opportunities and managing risks, as well as fostering common understandings regarding the implementation of AI governance measures by public and private sector developers and users to improve international interoperability in this area.
Thirdly, recognizing that many countries need better access to essential AI resources—such as computing power, inclusive and representative training datasets, skilled talent, and a global data framework—the report also recommends establishing a Global Fund for AI. Governed by an independent body and funded through both monetary and in-kind contributions from public and private sources, this fund would support data sharing, the construction of digital infrastructure, the promotion of local AI ecosystems, and the encouragement of entrepreneurship.
Another proposal is the creation of an AI capacity-development network to expand global access to talent and expertise and advance the Sustainable Development Goals (SDGs). This network would connect a set of capacity-building centers, affiliated with the United Nations, that would provide key stakeholders with expert knowledge, computing capacity, and AI training data.
To ensure standardization, regulatory alignment, and coordinated approaches to ethics and security, the report also recommends establishing an AI standards exchange and a global framework for AI data governance. These initiatives would build on the work of UN agencies and other international efforts, promoting interoperability and cross-border collaboration.
Finally, to achieve effective coordination of all these mechanisms, the report proposes establishing an AI office that reports directly to the United Nations Secretary-General. This office, described in the report as “light and agile,” would act as a nerve center, connecting and integrating various institutional initiatives. By linking efforts led by regional organizations and other stakeholders, it could reduce the costs of cooperation and streamline collective action.
Obstacles and opposing forces to global AI governance
The initiative is commendable. Even so, we must recognize the significant obstacles and opposing forces that may hinder the implementation of the model proposed in the report.
Firstly, there are divergent national interests. Each country has its own priorities and strategies regarding AI development and use. Technological powers such as the United States and China, along with the European Union, seek to maintain or achieve supremacy in this field, which may lead to reluctance to cede sovereignty or adopt regulations that they perceive as limiting their technological advancement.
This situation is further complicated by the issue of regulatory sovereignty and autonomy. Governments may be hesitant to accept a global regulatory framework that interferes with their national autonomy. The idea of an international entity having influence over internal policies can generate resistance, especially in strategic areas like AI.
Geopolitical competition and distrust between nations also hinder international cooperation. If countries suspect that others will not comply with regulations or might gain unfair advantages, they are less likely to commit to a global model. A lack of trust can make it difficult to build strong and effective agreements.
National security considerations add another layer of complexity. AI has sensitive military and security applications. Countries may resist sharing information or submitting to regulations that could compromise their defense or internal security capabilities, limiting international cooperation in these areas.
Moreover, the rapid technological advancement of AI often outpaces lawmakers’ ability to regulate it effectively. This gap can make regulations quickly obsolete or ineffective in addressing new challenges. The speed of innovation requires flexible and adaptive regulatory frameworks, which is difficult to achieve on a global level.
Private sector interests also play a significant role. Large technology corporations may oppose regulations they view as restrictive to innovation or harmful to their business models. Their economic power and lobbying capabilities can make it difficult to implement stricter regulations.
There are also concerns about innovation and flexibility. As we are seeing with the EU AI Act, some argue that strict regulations could inhibit innovation and reduce economic competitiveness. This fear may lead to resistance against regulations perceived as too restrictive, especially in emerging industries where flexibility is key to development.
Finally, the fear of losing competitive advantages may make leading countries in AI research and development reluctant to adopt a global model. They may fear that international regulations will level the playing field, allowing other countries to catch up or surpass them, affecting their dominant position in the global market.
The path forward
The good news is that these challenges are not insurmountable, but they require a realistic and collaborative approach. To begin with, it is essential for global actors to recognize that the risks of unregulated AI transcend borders and affect all of humanity, and that international cooperation, though difficult, is necessary to address inherently global problems.
That is why the report concludes with an urgent call for all involved actors to work together to build that global AI governance model. Among other benefits, this model should contribute to ensuring that technological advances propel us toward an inclusive, equitable, and sustainable future of work, rather than leading us to a dystopian scenario. The key lies in proactively addressing the changes, establishing solid ethical and regulatory frameworks, and ensuring that the benefits of AI are fairly distributed across society.
We must understand that effective AI governance is not just a technological or economic issue, but a human imperative that requires vision, cooperation, and commitment to shared values of dignity, justice, and well-being for all. Recognizing and addressing the obstacles and opposing forces is an essential part of this process.
Whether we are capable is not a question of possibility but of will. Only through a collective and determined effort can we overcome the barriers and build a future where AI truly serves humanity. The path to global AI governance will undoubtedly be complex, but it is a challenge we must face together, with conviction and cooperation. By aligning our efforts, embracing shared values, and acting with foresight, we can ensure that AI becomes a force for good, one that empowers all of humanity and leads us to a future marked by inclusion, equity, and sustainability.
Photo: Niels Huenuerfuest