eSafety Commissioner Julie Inman Grant has called on social media giants to meet higher standards for removing graphic content from their platforms.
In a statement released on Tuesday, Inman Grant named X, Facebook, Instagram, Snapchat, TikTok and YouTube as platforms being used to circulate “gore” content.
The commissioner noted the prolific circulation of “extreme violent material online, including recent assassinations and brutal murders, mass casualty events and conflict footage”.
It follows footage of the murders of Charlie Kirk and Iryna Zarutska in the United States being widely shared across social media, material Inman Grant said was being accessed by teenagers.
“My concern is not just how fast this material spreads, but how algorithms amplify it further,” she said. “Algorithms reward engagement, even when that is driven by shock, fear and outrage.”
“While most social media networks have policies that require the application of sensitive content labels or interstitials to blur gore rather than exposing innocent eyes to such visceral and damaging content, we have seen the major platforms fail to deploy these filters quickly or consistently.”
Inman Grant added that while artificial intelligence should be used to help identify and remove such content, platforms have instead chosen to strip back “content moderation policies”, allowing it to spread further.
“We expect the major platforms to do better,” she said.
The commissioner also noted that the virality of gore content has led to “dedicated” websites with “searchable libraries of content, follower tools, chat functions and recommendation loops”.
While Inman Grant can press the larger platforms for stronger action, gore websites often have complex hosting arrangements in “permissive jurisdictions”, making them harder to take down.
Alongside the statement, eSafety also released a survey which found almost three quarters of children had seen or heard content “associated with harm online”.
The survey questioned more than 3,000 Australians aged 10 to 17, 83 per cent of whom had seen such content by the time they were 16 or 17.
It also found that almost 20 per cent of those surveyed had encountered content that suggests “how a person can hurt or kill themselves on purpose”.