Robodebt is just one reason why we should be worried about AI

By Peter McDonald, Project Lead, Data for Good partnership at the Centre for Social Impact, Flinders University

Have you noticed the increasing number of automated decisions being made about you?

Many automated decisions are benign. The Australian Tax Office operates an automated system that gathers much of our income data and assesses our tax returns; the days of ATO staff manually checking each return are long gone.

Most of us hope for a decision that puts a few extra dollars into our pockets. We are surrounded by many similar systems that automatically generate decisions, but as we now know, some automated decision making processes can be devastating to an individual’s or family’s life.

There is a whole chapter in the Robodebt Inquiry which describes the impact of automated decision making on social service recipients. The Inquiry found that even well-intentioned staff were not enough to stop Robodebt from automatically issuing erroneous debt notices to recipients based on inaccurate calculations and questionable assumptions.

Such automated decision making wrecked lives. As Commissioner Catherine Holmes summed up in her report of the Royal Commission into the Robodebt Scheme, it was an "extraordinary saga" of "venality, incompetence and cowardice".

The lack of recourse for the people impacted by Robodebt made this failed approach to uncovering overpayments worse. There was no access to a legitimate complaints process for recipients. Alarmingly, as the Inquiry records, departmental officers receiving calls to the inquiries line were unable to explain how the debts had been calculated. There was no access to a human who could examine and explain the information on which automated decisions were made.

No avenue for justice existed.

Why do the lessons from Robodebt matter?

A leap forward in the data-led world is upon us. Artificial intelligence (AI) now generates content, forecasts and recommendations, and even makes decisions for us.

Robodebt was not an AI system. It completed data matching and automated decision making without the use of AI. AI adds a layer of sophistication to automated decision making, turbocharging the ability of industry and governments to gather and process huge amounts of data.

The coming together of AI and automated decision making creates a new and significant risk for marginalised people and communities, on a scale that even Robodebt didn’t achieve.

There is a developing evidence base suggesting that unregulated AI will amplify our society's existing racism, sexism and other biases, in ways that could further entrench disadvantage. Researchers in the US recently devised a test for AI bias and found that the systems' recognition processes were prejudiced against dark-skinned people.

The research concluded that AI will embed unfair practices into hiring, insurance, renting, and even education.

These new AI processes have the potential to entrench the discrimination and racism that many in our community already experience, especially if the software designers are not willing to grapple with these issues.

Many social service organisations work to address racism, sexism and other biases in our community. We don’t want to see this important work to make our society more equitable undone by the indiscriminate use of technology without appropriate levels of scrutiny and safeguards.

We cannot foresee the next frontier in the application or development of AI, but we should be concerned that there is no obvious avenue for independent review and recourse if the outcomes of that application are unfair or discriminatory. No legitimate third party exists to review decisions made by AI for those who find themselves trapped by poorly designed, biased technology.

Insufficient consideration has been given to the impact of AI paired with automated decision making, especially as it applies to marginalised people and communities. The Data for Good Project, a new partnership between the Centre for Social Impact Flinders and Uniting Communities, is exploring the increasing impact of data, AI and automated decision making on disadvantaged people and communities, and will work to help shine a light on this area.

We need clearer regulation so that there is a legal requirement to create proper complaint and recourse processes. We do not want to see unfair automated practices develop in hiring, renting or education. State and Commonwealth anti-discrimination legislation is the logical starting point for safeguards.

AI has the power to be a hugely positive force. But without a regulatory framework which pays attention to AI’s capacity to reinforce biases, it has the potential to entrench further disadvantage and marginalise people within our community.