Poll shows public discomfort with AI in public services

RSA publishes report calling for early engagement with people in an effort to build confidence in the technology’s use

A majority of people are uncomfortable with the idea of artificial intelligence being used in decision making in public services, according to the results of a poll carried out for the Royal Society of Arts, Manufactures and Commerce (RSA).

The RSA has also urged in the report that the public should be engaged early about the increased use of AI, in an effort to build confidence in the responsible use of the technology.

A survey carried out for the RSA by polling firm YouGov, which took in responses from 2,000 people, showed that only small proportions were aware of the role of automated decision systems in public services: 9% for criminal justice, 14% for immigration, 18% for healthcare and 19% for social support.

When asked whether they supported its use, the figures were only a little higher – or, in one case, lower: 12% for criminal justice, 16% for immigration, 20% for healthcare and 17% for social support. In most service areas a majority of 50-60% said they opposed its use, with the rest unsure, although for healthcare the figure in opposition was 48%.

The main concerns about the increased use of AI are that it lacks the empathy required to make important decisions affecting people and communities (61%), that it reduces people’s responsibility and accountability (31%), and that there is a lack of oversight or government regulation of the decisions (26%).

Minority support

Even with various safeguards – such as the introduction of common principles and the right to request an explanation of a decision – only a third or fewer of respondents said these measures would increase their support for using AI in a range of public and private sector services.

When asked if they were comfortable with the idea of automated systems being used more over time as their accuracy and consistency improve, only 26% said they were comfortable to any degree, with 64% saying they were uncomfortable and the remainder being unsure.

The report, Artificial Intelligence: Real Public Engagement, says that while public attitudes to AI have not yet impeded progress, negative perceptions could begin to create barriers. If people feel victimised by the technology they may resist its use, even if it means they lose out on benefits.

This could be because they feel that decisions about how the technology is used are beyond their control, and they do not trust those making them. A stronger sense of engagement would be needed to overcome this distrust.

The report suggests this could be achieved through setting up citizens’ juries or reference panels, which would deliberate on how public authorities could use AI and provide input to impact statements or impact assessment frameworks. They could say whether they accept the use of an automated decision system or, if not, under what conditions they would.

Ethics forum

The RSA is testing this approach with its Forum for Ethical AI – under a programme run with AI company DeepMind – looking at what government agencies such as the Centre for Data Ethics and Innovation and the Information Commissioner’s Office could do to ensure transparency, fairness and accountability. It is aiming to produce conclusions later this month, then run workshops that will lead to a report being published towards the end of the year.

In an accompanying blogpost, Brhmie Balaram, a senior researcher at the RSA, says: “The RSA is holding a citizens’ jury because we believe that when it comes to controversial uses of AI, the public’s views and, crucially, their values can help steer governance in the best interests of society. We want the citizens to help determine under what conditions, if any, the use of automated decision systems is appropriate.

“These sorts of decisions about technology shouldn’t be left up to the state or corporates alone, but rather should be made with the public. Citizen voice should be embedded in ethical AI.”

Image: Detail of report cover