
Victorian child protection worker uses ChatGPT for protection report

Victoria’s child protection agency has been ordered to ban the use of AI tools after a case worker used ChatGPT to write a child’s protection report, resulting in sensitive data being submitted and a number of inaccuracies being generated.

user icon Daniel Croft
Thu, 26 Sep 2024

The Office of the Victorian Information Commissioner (OVIC) received reports of the incident in December last year after the Department of Families, Fairness and Housing (DFFH) discovered that a case worker was suspected of drafting a protection application report using ChatGPT.

The report was used in the Children’s Court in a case regarding a child who had changed families as a result of sexual offences.

Now, the OVIC has found the DFFH failed to “take reasonable steps” to protect the child’s personal data and ensure accurate reporting.


While the outcome for the child did not change, the OVIC determined that “a significant amount of personal and delicate information” was input into ChatGPT, meaning it was disclosed to OpenAI outside the control of the DFFH.

It also found that the report contained inaccurate data “which downplayed risks to the child in the case”.

“Of particular concern, the report described a child’s doll – which was reported to child protection as having been used by the child’s father for sexual purposes – as a notable strength of the parents’ efforts to support the child’s development needs with ‘age-appropriate toys’,” said Victorian information commissioner Sean Morrison.

Investigations highlighted several points that indicated that ChatGPT may have been used for the report. Eventually, the worker admitted to using ChatGPT for the report to “save time and to present work more professionally”. The worker never admitted to submitting sensitive data.

The OVIC further investigated the department and found 100 cases in which ChatGPT may have been used in drafting protection-related documents.

Additionally, between July and December 2023, almost 900 employees (almost 13 per cent of the department’s workforce) accessed ChatGPT.

Following the findings, the OVIC ordered the DFFH to effectively ban the use of generative AI tools like ChatGPT, requiring the department to block access to generative AI websites. The block will last two years, starting 5 November.

The OVIC has not ruled out the use of the technology entirely, but any future use would need to be shown to protect the safety of vulnerable children.

“The deputy commissioner believes there may be some specific use cases where the risk is less than others, but that child protection, by its nature, requires the very highest standards of care,” said the OVIC.

“Any application to vary the specified actions in relation to child protection staff, information, or activities would need to be accompanied by the highest standards of verifiable evidence.”

