Bad actors could use artificial intelligence to generate child sexual abuse content and other illicit material, an inquiry has heard.
The eSafety Commissioner is examining risks associated with generative AI, executive Morag Bond told a parliamentary committee looking at digital platforms.
"We're looking at the risk of generative AI acts that could potentially lead to class one content being created - primarily child sexual abuse material or pro-terror material," she said in a hearing on Tuesday.
"It's very much an issue for us."
Generative AI uses machine learning models, often neural networks inspired by the structure of the human brain, to produce content such as text, images and other forms of media after training on enormous datasets.
It has been used for years in technologies like chatbots, image generators and deepfake creators, but burst into the public consciousness at the beginning of 2023 thanks to the popularity of text generator ChatGPT.
The eSafety Commissioner released its position statement on generative AI in August and warned the dangers of the technology had already begun to emerge.
Users could enter prompts that create new, explicit images of real children or illicit images of children who do not exist.
Children who use chatbots as a safe space for sharing personal experiences could be met with harmful information.
For example, one user pretending to be a 13-year-old reportedly received advice from Snapchat's AI chatbot on how she could lie to her parents about meeting a 31-year-old man.
Generative AI can also be used in scam calls to manipulate people, convincingly imitating human conversation through highly personalised responses.
Generative AI models are also freely available to the public.
While advocates claim this promotes transparency and innovation, the eSafety Commissioner's report says it can encourage the proliferation of harmful material in the wrong hands.
Terrorist organisations, for example, could use large language models to convincingly imitate human conversation and manipulate others into committing cybercrime or fraud, or financing terrorist acts.
"AI generated content has the potential to influence public perceptions and values, including towards extremist ideologies," the report said.
However, the report did not universally pan the technology.
While generative AI has the potential to create harm, it can also be used to detect and prevent harm through content moderation tools on social media, and could help scale up online support services.