Towards the development of “safe AI”? Sixteen of the world’s leading artificial intelligence (AI) companies, whose representatives are meeting this Tuesday, May 21, in Seoul, have made new commitments to ensure the safe development of the technology, the British government announced. “These commitments ensure that the world’s leading AI companies will be transparent and accountable for their plans to develop safe AI,” British Prime Minister Rishi Sunak said in a statement released by the UK Department for Science, Innovation and Technology.

The agreement, signed notably by OpenAI (ChatGPT), Google DeepMind and Anthropic, builds on the consensus reached at the first global AI safety “summit,” held last year at Bletchley Park in the United Kingdom. This second “summit,” in Seoul, is jointly organized by the South Korean and British governments. AI companies that have not yet made public how they assess the safety of the technologies they develop have committed to doing so.


This includes determining which risks are “deemed intolerable” and what companies will do to ensure these thresholds are not crossed, the press release explains. In the most extreme circumstances, companies also commit to “not develop or deploy a model or system” if mitigation measures fail to keep risks below the set thresholds. These thresholds will be defined ahead of the next AI “summit,” to be held in France in 2025.

The companies that have accepted these safety rules also include the American technology giants Microsoft, Amazon, IBM and Meta, France’s Mistral AI and China’s Zhipu.ai.

The runaway success of ChatGPT shortly after its 2022 release sparked a rush in the generative AI field, with tech companies around the world investing billions of dollars into developing their own models. Generative AI models can produce text, photos, audio, and even videos from simple prompts. Their supporters present them as a breakthrough that will improve the lives of citizens and businesses around the world.

But human rights defenders and governments also fear their misuse in a wide range of situations, including to manipulate voters through fake news or “deepfake” photos and videos of political leaders. Many are demanding that international standards be established to govern the development and use of AI.

In addition to safety, the Seoul “summit” will look at how governments can help drive innovation (including AI research at universities) and how the technology could help solve problems such as climate change and poverty. The two-day meeting is being held partly online, with some sessions behind closed doors and others open to the public in the South Korean capital.