High-risk artificial intelligence that encourages self-harm and sows disinformation could be banned as the government moves to get on top of the technology, which some estimate could boost the economy by up to $4 trillion by early next decade.
As society wrestles with AI, the government has released two landmark papers laying the groundwork to regulate the technology, as reported in this masthead on Tuesday.
An Industry Department discussion paper acknowledged the immense productive benefit AI could bring, including through optimising building engineering, providing legal services and consolidating hospital patient data.
Consulting firm McKinsey calculated the technology could add between $1.1 trillion and $4 trillion to the Australian economy by the early 2030s, the report said.
But the paper – and a separate response to the consequences of generative AI like ChatGPT by the National Science and Technology Council – warned of equally profound risks.
The papers were published in the same week as the chief executive of OpenAI, the company behind ChatGPT, and other AI leaders issued a statement saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Industry and Science Minister Ed Husic said Labor wanted to grow public confidence in the government’s ability to protect citizens from harm.
“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment,” he said.
“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud.
“But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”
As part of an eight-week consultation to determine potential legislative changes, the government’s paper asks whether any high-risk AI applications or technologies should be banned completely.
It flags possible changes to consumer, corporate, criminal, online safety, administrative, copyright, intellectual property and privacy laws.
“There are many examples and concerns around AI being used for potentially harmful purposes, such as: generating deepfakes to influence democratic processes or cause other deceit, creating misinformation and disinformation, encouraging people to self-harm,” the discussion paper said.
“Inaccuracies from AI models can also create many problems. These include unwanted bias and misleading or entirely erroneous outputs such as ‘hallucinations’ from generative AI.”
Algorithmic bias represented one of the most significant threats, according to the discussion paper.
“These include: racial discrimination where AI has been used to predict recidivism, which disproportionately targets minority groups, educational grading algorithms favouring students in higher-performing schools, [and] recruitment algorithms prioritising male over female candidates.”