
AI Safety Institute to set up in San Francisco

21/05/24

Mark Say Managing Editor


[Image: AI Safety Institute logo. Source: GOV.UK, Open Government Licence v3.0]

The UK Government’s AI Safety Institute is to open its first overseas office in San Francisco this summer.

Technology Secretary Michelle Donelan announced the plan, saying it will enable the UK to tap into the technology talent around the Bay Area in northern California and engage with the world’s largest AI labs.

A team of technical staff will be recruited and led by a research director.

The office will complement the work of the institute's London headquarters, which now has over 30 technical staff.

Pivotal for leadership

Donelan said: “This expansion represents British leadership in AI in action. It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety. 

“Since the prime minister and I founded the AI Safety Institute, it has grown from strength to strength and in just over a year, here in London, we have built the world’s leading government AI research team, attracting top talent from the UK and beyond. 

“Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety which we will discuss at the Seoul Summit this week.”

Testing models

The institute has also released a selection of recent results from safety testing of five publicly available advanced AI models, saying it is the first government-backed organisation in the world to unveil the results of its evaluations.

It assessed the models against four key risk areas, finding that several completed basic cyber security challenges but failed more advanced ones, that all tested models remained vulnerable to AI 'jailbreaks' that could produce uncensored content, and that none could complete more complex tasks without human oversight.

The AI Safety Institute was launched in November 2023 with the aim of evaluating the risks of frontier AI models, and was described as the first of its kind in the world.
