Australia Demands Tech Giants Tackle ‘Extremist’ Content

Australia’s online safety watchdog has issued warnings to six technology giants suspected of failing to take down violent extremist content.

Google; Meta, the owner of Facebook and Instagram; WhatsApp; Telegram; Reddit; and X, the social media platform formerly known as Twitter, are in the sights of the eSafety Commissioner today.

The companies have been issued legal notices giving them 49 days to explain how they are removing violent extremist content from their platforms, or they risk fines of up to $11 million.

eSafety Commissioner Julie Inman Grant says video of the horrific 2019 terror attack in the New Zealand city of Christchurch, which left 51 people dead, continues to circulate online.

“We remain concerned about how extremists weaponise technology like live-streaming, algorithms, recommender systems and other features to promote or share this hugely harmful material,” Inman Grant said.

“We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm.”

The regulator is also concerned about evidence that messaging and social media services are being exploited by extremists.

It wants to examine what technology giants are doing, or failing to do, to protect their users.

The federal government is concerned the risk of online terrorist recruitment and radicalisation remains high in Australia and worldwide.

Well-funded and highly resourced technology companies should be monitoring whether their products could be exploited by terrorists and other criminals.
