Unchecked AI is a risk we can’t take

Like a high-performance sports car, AI is racing down the road, unhindered by any road rules.

Seeking to grab the steering wheel back, the Australian government has finally taken the first steps towards tighter regulation of artificial intelligence, with Industry and Science Minister Ed Husic releasing an interim response to the Safe and Responsible AI report that it commissioned last year.


Challenges in AI Risk Classification

While opening the door to further consultation, this response does not dampen serious concerns that future legislation may be too generous to the Googles and Microsofts of our world while overlooking the adverse effects on low-income people.

To date, there have been worrying signs that our regulators are failing to appreciate the unique risks AI poses to society, placing disadvantaged Australians in even greater jeopardy.

The interim response outlines mandatory safeguards only for “high risk” AI that negatively affects a person’s safety or fundamental rights, and voluntary standards for other applications of the technology.

This narrow view ignores the potential danger from other, “lower risk” AI, and stems from the government’s current risk framework, which classifies AI that causes lasting damage to people as a normal risk of doing business.

Causing lasting damage to people should not be a normal and acceptable risk of doing business.


Government Regulation of Artificial Intelligence and the Need for Oversight

Husic’s proposed new advisory body must be prepared to call out shortcomings in the government’s approach, particularly the validity of its risk framework if it does not adequately shield Australians from the immense potential harm that AI can inflict.

The proposed body should consider not just high-risk AI but all AI, and must include voices from people who are adversely affected by the technology.

The interim response is focused on high-risk AI and its impact on people. While the paper has widened the definition of high risk, concern remains that the definition of high-risk harm is too narrow to properly protect those most unfairly affected by the evolving power of AI. Unless we prioritise regulatory reform to enact the necessary safeguards, this will particularly threaten people already experiencing disadvantage, as the recent failures of Robodebt showed.

The significant dangers of having too many adverse outcomes categorised as low risk have been outlined in a submission to the government’s inquiry into safe and responsible AI by the Data for Good partnership at the Centre for Social Impact, Flinders University.

The department’s position so far has been inadequate: it treats impacts that are high, ongoing and difficult to reverse as medium risk rather than high risk, and on that basis suggests that the question of redress should be left to self-regulation.

This means that when a computer makes a decision that has a lasting detrimental impact on someone, the company or government responsible for that decision will see this as within the range of acceptable business risks.

This sends a dangerous signal to adopters of AI that high and ongoing impacts on Australians are an acceptable part of doing business, with no clear procedure for external redress.

The Robodebt Royal Commission highlighted the disasters that can flow from allowing automated decision making to operate unchecked, and the significant barriers that were put in the way of people who were negatively affected. The Commission noted that disadvantaged people were unfairly targeted and shouldered the burden of the Robodebt algorithm.

While self-management is appropriate for low-risk AI, high-risk AI cuts across many sectors and necessitates stringent oversight and legislation. The full protection of the law must be used to shield people from any damage that AI and automated decision making inflict. People who are discriminated against because of biased AI tools need the opportunity to seek a legal remedy.

Addressing Bias, Discrimination, and the Role of Legislation in Artificial Intelligence

Algorithmic bias is widely considered to be among the biggest dangers of AI. Datasets used to inform an algorithm can be flawed and lead to disproportionate impacts on vulnerable groups, such as First Nations people. Recent research concluded that AI will embed unfair practices into the hiring, insurance, and renting sectors, and even education.

Our legislation, particularly in discrimination law, needs firm amendments to safeguard against these risks. We should also consider redress mechanisms that are accessible, independent, and written into future legislation, consistent with the direction of other jurisdictions.

The speed at which AI is moving is exciting, but it is racing down the road freely without the necessary road rules to keep it in check. To manage the risk to Australians, government needs to both regulate and provide a method for redress. We all want to see the benefits of this new transformative technology used for social good.

This opinion piece, written by Peter R. McDonald, Project Lead for the Data for Good Partnership at CSI Flinders, was originally published in The Australian.