SEOUL: At the conclusion of a global summit in Seoul, more than a dozen countries and major tech firms pledged to work together to address the risks posed by artificial intelligence.
AI safety emerged as a primary focus during the two-day gathering. In a joint declaration, over two dozen nations, including the United States and France, committed to working together to counter threats posed by advanced AI technologies, acknowledging the existence of “severe risks.”
These risks include scenarios in which AI systems help non-state actors develop or use chemical or biological weapons, as well as AI models capable of evading human oversight, whether by circumventing safeguards, manipulating their operators, or adapting autonomously.
Tech giants including OpenAI and Google DeepMind also committed to sharing their risk assessment methodologies and vowed not to deploy AI systems whose risks exceed predefined thresholds. The summit, co-hosted by South Korea and Britain, aimed to build on the consensus reached at the inaugural AI safety summit the previous year.
UK Technology Secretary Michelle Donelan emphasized the need to match the rapid pace of AI development with effective risk management strategies, highlighting the importance of societal resilience to AI-related risks. Separately, a group of tech companies, including Samsung Electronics and IBM, pledged to develop AI responsibly through the Seoul AI Business Pledge.
IBM’s Chief Privacy and Trust Officer, Christina Montgomery, stressed the importance of using AI responsibly and putting safeguards in place to prevent misuse.