
Centre for Data Ethics and Innovation works on AI assurance roadmap

26/04/21

Mark Say, Managing Editor


The Centre for Data Ethics and Innovation (CDEI) has begun work on a roadmap for AI assurance as part of the effort to ensure public trust in the technology.

It has outlined the plan in a series of blog posts covering the overall need for AI assurance, user needs, the types of assurance and the role of standards.

CDEI, which sits within the Department for Digital, Culture, Media and Sport, says that in the AI context, assurance tools and services are needed to provide trusted information on products’ fairness, safety, reliability and compliance with standards.

It highlights existing models of assurance – audits for compliance or bias, certification schemes, accreditation processes and impact assessments – and says there is a need to co-ordinate these within an AI assurance ecosystem.

In 2020, CDEI launched a work programme to assess how assurance approaches used in other sectors could be applied to AI, how mature and widely adopted these tools are in addressing compliance and ethical risks in AI, and what supporting role standards can play.

It has worked with a team of researchers at University College London, building on their recent survey of AI auditing approaches and involving four pilot projects to expand the evidence base.

The main output of the work will be an AI assurance roadmap setting out CDEI’s view of the current ecosystem, as well as how it should develop to support innovation and minimise the risks.

This will involve laying out the activities needed to build a mature assurance ecosystem, and identifying the roles and responsibilities of different stakeholders.

In turn, this should help to build a common understanding of the different types of AI assurance and how they contribute to responsible innovation, meet the needs of different users, and clarify the relationship between different kinds of standards.

The organisation has not set out a timescale for the roadmap, but says it is now engaging with stakeholders across the public sector, industry and academia to take in a wide range of views. It adds that the ecosystem must be built on a consensus that resolves tensions between user interests, clarifies standards and ensures regulatory regimes complement each other.

It also identifies the users as: government policymakers; regulators; executives considering whether to develop, buy or use AI systems; developers; frontline users; and the individuals affected.

Along with this it points to two broad groupings of assurance models: compliance assurance, which compares a service or product to an existing set of standards; and risk assurance, which asks open-ended questions about how a system works.

The two types can be mutually reinforcing in an assurance ecosystem: compliance assurance provides consistency, while risk assurance accommodates greater ambiguity and context specificity. Accordingly, compliance assurance requires unambiguous standards, while risk assurance does not.

“Going forward,” CDEI says, “an effective AI assurance ecosystem will play an enabling role, unlocking both the economic and social benefits of these technologies through the coordination of AI assurance tools and user responsibilities.

“Playing this crucial role in the development and deployment of AI, AI assurance is likely to become a significant economic activity in its own right and is an area in which the UK, with particular strengths in legal and professional services, has the potential to excel.”

Image from iStock, 3D Generator
