Politicians commit to collaborate to tackle AI safety, US launches safety institute | TechCrunch

The world's governments are locked in a race for dominance in AI, but today a few of them came together to say that they would prefer to collaborate when it comes to mitigating risk.

Speaking at the AI Safety Summit at Bletchley Park in England, the U.K. minister of technology, Michelle Donelan, announced a new policy paper, called the Bletchley Declaration, which aims to reach global consensus on how to tackle the risks that AI poses now and as it develops in the future. She also said that the summit would become a regular, recurring event: the next gathering is scheduled to be held in Korea in six months, she said, with another in France six months after that.

As with the tone of the conference itself, the document published today is relatively high level.

“To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible,” the paper notes. It also calls attention specifically to the kind of large language models being developed by companies like OpenAI, Meta and Google and the specific threats they might pose for misuse.

“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models,” it noted.

Alongside this, there were some concrete developments.

Gina Raimondo, the U.S. secretary of commerce, announced a new AI safety institute that would be housed within the Department of Commerce and specifically underneath the department’s National Institute of Standards and Technology (NIST).

The aim, she said, would be for this organization to work closely with the AI safety groups being set up by other governments, including the safety institute that the U.K. plans to establish.

“We have to get to work and between our institutes we have to get to work to [achieve] policy alignment across the globe,” Raimondo said.

Political leaders in today's opening plenary spanned not just representatives of the world's biggest economies but also a number speaking for developing countries, collectively the Global South.

The lineup included Wu Zhaohui, China's Vice Minister of Science and Technology; Vera Jourova, the European Commission Vice President for Values and Transparency; Rajeev Chandrasekhar, India's minister of state for Electronics and Information Technology; Omar Sultan al Olama, UAE Minister of State for Artificial Intelligence; and Bosun Tijani, technology minister in Nigeria. Collectively, they spoke of inclusivity and responsibility, but with so many question marks hanging over how those commitments get implemented, their dedication remains to be proven.

“I worry that a race to create powerful machines will outpace our ability to safeguard society,” said Ian Hogarth, a founder, investor and engineer who currently chairs the U.K. government’s task force on foundation AI models and who played a big part in putting this conference together. “No one in this room knows for sure how or if these next jumps in compute power will translate into benefits or harms. We’ve been trying to ground [concerns of risks] in empiricism and rigour [but] our current lack of understanding… is quite striking.

“History will judge our ability to stand up to this challenge. It will judge us by what we do and say over the next two days.”