Ada Lovelace Institute urges evaluations of advanced AI

29/07/24

Mark Say, Managing Editor

The Ada Lovelace Institute has said the evaluation of advanced AI systems has become an urgent policy priority for governments around the world.

It has published a report on the issue, titled 'Under the radar?', responding to growing concerns around the need to ensure the safety of advanced AI systems, also known as foundation models.

This follows research it conducted to understand whether current evaluation methods are effective and reliable in ensuring that AI systems are safe.

The institute said that key findings include that evaluations can serve a useful purpose but are insufficient on their own for determining the safety of products and services built using these AI models, and need to be used alongside other governance tools.

Another is that existing methods like ‘red teaming’ and benchmarking have technical and practical limitations – and could be manipulated or ‘gamed’ by developers.

In addition, safety cannot be evaluated in a vacuum. Evaluations of models in laboratory settings can be useful but do not provide the full story.

Recommendations

The report provides a number of recommendations on the use of evaluations as a governance tool, including that evaluations should be context specific and that there should be investment in developing more robust evaluations, including in understanding how models operate.

There should also be support for an ecosystem of third-party evaluation through certification schemes, and regulation to incentivise companies to take evaluation seriously. The latter could include giving regulators powers to scrutinise models and their applications, along with the possibility of blocking a release if a model appears unsafe.

Andrew Strait, associate director at the Ada Lovelace Institute, said: “The field of AI evaluation is new and developing. Current methods can provide useful insights, but our research shows that there are significant challenges and limitations, including a lack of robust standards and practices.

“Governing AI is about more than just evaluating AI. Policymakers and regulators are right to see a role for evaluation methods, but should be wary about basing policy decisions on them alone.

“Evaluations are worth investing in, but they should be part of a broader AI governance toolkit. These emerging methods need to be used in tandem with other governance mechanisms, and crucially, supported by comprehensive legislation and robust enforcement.”
