
Deepfake images harming Aussie kids and businesses



Deepfake images and videos are harming Australian children and businesses, and laws are urgently needed to prevent more people from being extorted or scammed through the technology, experts say.

The warnings came at the AI Leadership Summit in Melbourne on Monday, with speakers calling for strict technical and legal regulations to govern the use of generative AI.

Experts revealed criminal gangs were using the technology in sexual extortion attempts, while young girls were failing to report AI image-based abuse because they felt regulations could not protect them.

The summit, hosted by CEDA and the National AI Centre, heard from business, safety, privacy and university experts on artificial intelligence, just weeks after consultation closed on proposed mandatory AI rules.

Deepfake images were singled out as a major concern with generative AI technology and eSafety Commissioner Julie Inman Grant said the technology was already being exploited by criminal organisations.

“Criminal gangs out of Nigeria and West Africa (are) using face-swapping technology in video-conferencing calls to execute sophisticated sexual extortion schemes targeting young Australian men between the ages of 18 and 24,” she said.

“We’ve seen a four-fold increase in reports since 2018.”

Deepfake images should not just be a concern for individuals, she said, as “vishing” attacks that combined video conference calls with phishing attempts were increasingly targeting business executives.

Deepfake images were being used to bully and harass school children, ThatsMyFace chief executive Nadia Lee said, and the children were finding it hard to trust the remedies available to them.

Ms Lee said she recently spoke with a girl in year 7 who had been targeted by a year 12 student using AI-generated nude images.

“He had generated deepfake pornographic images of her, put it on Snapchat, it was live for 24 hours, everybody saw it and it was very traumatic for her,” she said.

“She was very hesitant (about reporting it) and ended up not going forward because she thought, ‘as far as I know, I’m the only victim of what he did and if he gets in trouble he will know that it’s from me’.”

Children and parents needed greater education about generative AI rules and potential resolutions, Ms Lee said, as well as higher levels of trust in the reporting system.

Laws governing AI technology should focus on its potential misuse first, IBM chief privacy and trust officer Christina Montgomery said, to limit harm to people, organisations and elections.

“(Deepfakes) are one of the most pressing challenges posed by generative AI, particularly given the potential for bad actors to use it to undermine democracy,” she said.

“Making the distribution of materially deceptive deepfake content related to elections illegal is one step that governments can take right now to help instil trust.”

The federal government recently completed its consultation into mandatory guardrails for AI and received more than 300 submissions.

An interim report from the Senate’s Adopting Artificial Intelligence inquiry recommended laws to restrict deepfake political ads be introduced before the 2029 federal election.


