US-based AI companies OpenAI, Anthropic, and Cohere have engaged in unofficial diplomacy with Chinese AI specialists, amid shared concern over how the powerful technology could spread misinformation and threaten social cohesion.
According to multiple people with direct knowledge of the talks, two meetings were held in Geneva in July and October last year, attended by scientists and policy experts from the American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.
According to attendees, the discussions allowed both sides to debate the risks of the emerging technology and to encourage investment in AI safety research. They added that the ultimate goal was to identify a scientific path forward for safely developing more sophisticated AI.
“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one participant in the talks. “And if they agree, it makes it much easier to bring the others along.”
The previously unreported discussions are rare evidence of Sino-US cooperation amid the two powers’ race for supremacy in advanced technologies such as AI and quantum computing. Washington has restricted US exports of the high-performance chips, made by companies such as Nvidia, that are needed to develop sophisticated AI software.
However, given the technology’s potential existential dangers to humanity, AI safety has become a point of common concern among developers in both countries. An anonymous negotiator said the White House and the UK and Chinese governments were all informed about the planned Geneva discussions.
The Chinese embassy in the UK stated, “China supports efforts to discuss AI governance and develop needful frameworks, norms, and standards based on broad consensus.”
“China stands ready to carry out communication, exchange and practical cooperation with various parties on global AI governance, and ensure that AI develops in a way that advances human civilization.”
The meetings were arranged by the Shaikh Group, a private mediation organisation known for bringing together influential actors in regions of conflict, most notably the Middle East.
“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks, and opportunities attendant with the wide deployment of AI models that are shared across the globe,” said Salman Shaikh, the group’s CEO.
“Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.”
Participants in the talks said that Chinese AI companies such as Baidu, Tencent, and ByteDance were not present. Google DeepMind was briefed on the details of the talks but did not attend either.
During the sessions, AI specialists from both sides discussed areas for technical cooperation, as well as more concrete policy proposals that fed into the UN Security Council’s meeting on AI in July 2023 and the UK’s AI safety summit in November last year.
According to the negotiator, the success of the sessions has led to plans for further talks focused on scientific and technical proposals for aligning AI systems with the legal codes, norms, and values of each society. Calls have been growing for the great powers to cooperate in managing the rise of AI.
In November, Chinese AI researchers joined Western academics in calling for tighter controls on the technology, signing a statement warning that advanced AI would pose an “existential risk to humanity” in the coming decades.
The group, which included Andrew Yao, one of China’s most renowned computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and the allocation of 30% of developers’ research budgets to AI safety.