Tech companies consider giving up efforts to combat misinformation online in Australia | Social media | The Guardian

Created
Oct 5, 2025 5:39 AM
Tags
media influence, media literacy
Notes

Tech companies in Australia, including Meta and Google, are reconsidering their commitments to combat misinformation amid political controversies, highlighting the need for media education to address the complexities of discerning truth in the digital age.

Signatories to the Australian Code of Practice on Disinformation and Misinformation including Meta and Google have backed away from previous factchecking programs in Australia after the re-election of Donald Trump in the United States.

The tech industry is considering abandoning its obligations to combat online misinformation in Australia, claiming that regulation is too “politically charged and contentious”.

The digital platforms lobby group Digi launched the Australian Code of Practice on Disinformation and Misinformation in 2021, with signatories including Facebook (now Meta), Google, Microsoft and Twitter (now X).

Misinformation is false or deceptive information that may not be deliberately spread, while disinformation is information spread deliberately to cause confusion or undermine trust in governments or institutions, according to the Australian Communications and Media Authority.

Under the code, companies are required to offer tools for people to report misinformation and disinformation, and to release transparency reports annually in May on their efforts to tackle the scourge.

Digi released a discussion paper on Tuesday questioning whether misinformation should be included in the code going forward. The group said “recent experience” demonstrated “misinformation is a politically charged and contentious issue within the Australian community”.

Digi said misinformation is “subjective, as the concept of misinformation is fundamentally linked to people’s beliefs and value systems”.

The group pointed to 2022 research it conducted arguing there is no consensus in Australia on the meaning of the word “misinformation”, and people’s assessment on whether something amounted to misinformation on issues such as climate change “is sharply divided, according to their allegiance to different political parties”.

Single pieces of information only have limited power to change people’s attitudes, Digi said, adding that studies show people who interact with and share misinformation tend to already agree with the advocated political stance.

Tom Sulston, the head of policy at Digital Rights Watch, said it appeared the industry was giving combatting misinformation a “brush-off” as too difficult to police. But Sulston said misinformation remains a big problem that social media platforms profit from.

“One of the key causes of the spread of misinformation is the way that the social media companies choose to promote it because it is exciting to users, draws a lot of comments, creates engagement and increases their advertising revenue,” he said.

Technology platforms 'weapons of mass destruction to democracy', Nobel prize winner says – video

Timothy Graham, an associate professor at Queensland University of Technology, said the term “misinformation” has become politicised, and attempts to legislate or regulate truth in the social media age have generated problems.

“For a given piece of content, the problem is often just establishing what the proposition actually is, which we could then ‘verify’ against the facts,” he said. “People often don’t agree on what is actually being asserted, let alone how to evaluate its truthfulness.”

Sulston said regulating the algorithms that promote content to people would have a bigger impact on the spread of misinformation.

Signatories to the code including Meta and Google have backed away from previous factchecking programs in Australia after the re-election of Donald Trump in the United States and a push against the practice in favour of community-led notes systems that append context to posts, such as that used on X.

Sunita Bose, Digi’s managing director, said the outcome of the review would be informed by stakeholder submissions and was not being driven by the platforms’ recent moves.

The Albanese government abandoned plans to legislate a mandatory misinformation and disinformation code last year after widespread opposition to the bill.

Acma’s latest report on the voluntary code, covering 2024 and released in August, found fewer individual pieces of content were actioned for violating misinformation policies. However, the report stated a recent survey had found 74% of Australian adults were concerned about online misinformation.

X was removed from the code nearly two years ago, after stripping out tools to report misinformation and disinformation during the 2023 voice referendum.

Digi is consulting on the code until 3 November.