The Office of the Australian Information Commissioner (OAIC) has decided that it will no longer invest in pursuing Clearview AI for its use of images of Australian faces in its controversial facial recognition software.
Background
Clearview AI, facial recognition software that was used and/or trialled by law enforcement agencies, including the Australian Federal Police, gained international infamy after it was discovered that its database of faces was built by scraping billions of images of people without their consent. According to the firm, the database Clearview AI uses now contains more than 50 billion faces.
In 2021, the OAIC ruled that the company was to halt the collection of these images and delete its current database within 90 days.
Originally, Clearview AI appealed the decision to the Administrative Appeals Tribunal. According to a freedom of information request last year, the appeal rested on the company’s assumption that it was not subject to Australian jurisdiction because it had configured its scraping tools not to collect images from Australia-based servers.
However, during another scrape in January 2023, the company did not have measures in place to prevent the collection of Australian facial images from social media and other platforms not hosted on Australian servers, meaning the appeal was likely to fail.
“All regulated entities, including organisations that fall within the jurisdiction of the Privacy Act by way of carrying on business in Australia, which engage in the practice of collecting, using or disclosing personal information in the context of artificial intelligence are required to comply with the Privacy Act,” said the OAIC.
In the end, Clearview AI backed down before the appeal was decided, and the original OAIC ruling stood.
OAIC gives up the chase
Now, despite no evidence that Clearview AI has complied with the ruling, the Aussie privacy watchdog announced on Wednesday (21 August) that it would no longer be pursuing the company to ensure compliance.
“I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States,” said privacy commissioner Carly Kind.
“Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”
Despite no longer pursuing the facial recognition firm, the OAIC maintains that Clearview AI’s actions were “troubling” and that the rise of generative AI only increases the risk of similar incidents threatening the privacy of Australians.
“The OAIC will soon be issuing guidance for entities seeking to develop and train generative AI models, including how the APPs apply to the collection and use of personal information. We will also issue guidance for entities using commercially available AI products, including chatbots,” said the OAIC.
“In the meantime, we reiterate that the determination against Clearview AI still stands.”