Human Rights Commission wants privacy laws adjusted for an AI future
By Segun Ayo
The Australian Human Rights Commission has called on the Australian government to modernise privacy and human rights laws to take into account the rise of artificial intelligence (AI) as one of 29 proposals put forward in its Human Rights and Technology discussion paper.
“We need to apply the foundational principles of our democracy, such as accountability and the rule of law, more effectively to the use and development of AI,” Human Rights commissioner Edward Santow wrote in his foreword in the discussion paper [PDF].
“Where there are problematic gaps in the law, we propose targeted reform. We focus most on areas where the risk of harm is particularly high. For example, the use of facial recognition warrants a regulatory response that addresses legitimate community concern about our privacy and other rights.
“Government should lead the way.”
One of the specific changes, the paper suggested, was for the Australian government to develop a national strategy that will protect human rights during the development of new and emerging technologies.
The commission said the strategy should set the national aim of promoting responsible innovation and protecting human rights; prioritise and resource national leadership on AI; promote laws, co-regulation, and self-regulation to allow for industry to be closely involved; and provide education and training for government, industry, and society.
“This national strategy should set a multi-faceted regulatory approach — including law, co-regulation and self-regulation — that protects human rights while also fostering technological innovation,” the paper stated.
The proposal comes off the back of the paper's finding that regulatory lag by the government has "contributed to a drift towards self-regulation in the technology sector" and has resulted in a weakening of existing human rights protections.
The commission also identified that public trust in many new technologies, including AI, is low.
“The majority of respondents to a national survey were uncomfortable with the Australian Government using AI to make automated decisions that affect them, and an international poll indicated that only 39% of respondents trusted their governments’ use of personal data,” the paper said.
“In Australia, community concern associated with practices such as Centrelink’s automated debt recovery program is emblematic of broader concerns about how new technologies are used by the public and private sectors.
“Building or re-building this community trust requires confidence that Australia’s regulatory framework will protect us from harms associated with new technologies.”
In addition, the paper stated that stakeholders have also expressed concern about a “power imbalance between the consumer and big tech companies”.
The discussion paper also proposed that the Australian government appoint an “appropriate” independent body to assess the efficacy of existing ethical frameworks for the protection and promotion of human rights, while also identifying opportunities to improve the operation of ethical frameworks.
Santow highlighted that the appointment of a new AI Safety Commissioner was another suggestion put forward in the discussion paper. He noted the commissioner would be responsible for monitoring the use of AI, preventing individual and community harm, promoting the protection of human rights, and helping existing regulators, government, and industry bodies respond to the rise of AI.
When it came to specific regulatory changes, the discussion paper noted there needs to be legislation to ensure deployed AI systems do not infringe on individual human rights; clearly state who is liable for AI systems; specify what actions can be taken when there is a serious invasion of privacy; and explain AI-informed decision-making.
Establishing a regulatory sandbox to test AI-informed decision-making systems for compliance with human rights should also be considered, the discussion paper said.
It further added that to ensure people with disabilities are able to equally access digital technologies, all levels of government should adopt a standard procurement policy, with the commission pointing out that “there is currently no whole of government approach to the provision and procurement of accessible goods, services, and facilities”.
“The Commission proposes the adoption of government-wide accessibility and accessible procurement standards. This would enhance accessibility for public sector employees and users of public services,” the paper said.
The commission said it also wants to see providers of tertiary and vocational education include the principles of human rights by design in relevant degrees and other courses in science, technology and engineering, and for professional accreditation bodies in engineering, science, and technology to consider introducing mandatory training on human rights by design as part of continuing professional development.
“The adoption of a ‘human rights by design’ strategy in government policies and procedures would be an important step in promoting accessible technology,” the paper said.
In addition to the proposals, the discussion paper examined how AI is being used to make decisions, pointing out that, on one hand, it is being used to improve diagnostics, personalise medical treatment, and prevent disease, while on the other, it is adversely affecting human rights, as in the case of the controversial robo-debt scheme, where the government eventually conceded parts of it were unlawful.
The discussion paper is part of the Human Rights and Technology Project that is being led by Santow. It was launched back in July 2018 and has since seen the release of an issues paper, white paper, and phase one consultation.
While the discussion paper only addressed the "most pressing" issues with the "widest implications for human rights" — regulation, accessibility, and AI-informed decision-making — the commission said there are other areas that could also "benefit from dedicated research, analysis and consultation. These areas include the future of work and the impact of automation on jobs, impact of connectivity, digital inclusion, regulation of social media content, and digital literacy education."
The discussion paper will now be subject to public consultation. The Australian Human Rights Commission is inviting submissions and responses to its proposals until 10 March 2020.
A final report will be released sometime in 2020, with implementation expected to take place across 2020-21.
The Australian Human Rights Commission, however, is not alone in examining issues surrounding the potential ethical questions in relation to AI.
The Commonwealth Scientific and Industrial Research Organisation earlier this year also highlighted the need for the development of artificial intelligence in Australia to be wrapped in a sufficient framework, to ensure nothing is imposed on citizens without appropriate ethical consideration.
The Australian National University is also currently undertaking a research project that focuses on designing Australian values into artificial intelligence systems.