
Scott Morrison’s plan to unmask online trolls revived by states

08 September 2022

It might be fashionable these days to condemn Scott Morrison and all his works. But one of the former prime minister’s policies that was never enacted has been adapted by the states and now looks set to live on.

Last year, Morrison announced plans to address online trolling by empowering social media users to “unmask” anonymous accounts that disseminate offensive and defamatory material. That plan was a federal intervention in defamation law – a state responsibility – and it divided stakeholders.

Morrison was undeterred. Had he been re-elected, he intended to make it a priority for his next term in office.

That would have ended an injustice that arose because defamation law had not kept pace with the technology. But it would have done so by cutting across a long-term reform process that is being run by the states.

How remarkable that Morrison’s plan bears more than a passing resemblance to one of the main elements of the latest proposals from the states.

Their ideas are outlined in an exposure draft covering online defamation that was released on August 12 after a meeting in Melbourne of attorneys-general.

It deserves support because it gives life to the principle that liability should generally rest with those who are most responsible for defamatory statements.

The states have produced an elaborate scheme that, at its core, is designed to achieve the same purpose as Morrison’s anti-trolling bill: it seeks to unmask trolls so they can be sued.

Compare that to what Morrison announced last November. He would have forced big tech companies like Facebook and Twitter to implement a complaints process for those who believed they had been defamed by anonymous trolls.


If those companies did not comply, the Federal Court would have been given authority to compel them to identify the online trolls.

Morrison’s plan was never enacted but it is hard to avoid the conclusion that it influenced the exposure draft that was made public last month by the states.

Both schemes would address the injustice that came to light last year in the Dylan Voller case when the High Court decided that media companies were the publishers of defamatory remarks by others that had been posted on the media’s Facebook pages without their knowledge or consent. 

The Voller decision means everyone who has a Facebook page is at risk of being considered the publisher of defamatory remarks that are left on those pages without their knowledge or consent. That means community groups, businesses and government agencies are all potentially liable for the wrongdoing of third parties.

The Voller decision exposed the need for statutory intervention. Someone had to do something to ensure defamation law returned to its true purpose of targeting wrongdoers instead of extracting money from those who had no knowledge of the wrongdoing.

In the modern world, every troll with a telephone has the ability to post material on the internet and publish their poison.

Yet the Voller case shows that the legal meaning of the term “publish” now extends to those who have no knowledge of such wrongdoing.

Morrison’s anti-trolling bill and the plan backed by the states would both address this problem. There are differences of approach, but the bottom line is the same. Morrison’s scheme – and to a lesser extent that of the states – amounts to recognition that it is unfair to impose liability on anyone for the wrongdoing of others.

The difference is that the states would impose a few more conditions.

One of the country’s leading authorities on defamation law, Professor David Rolph, did not like Morrison’s scheme because he believed it would have introduced immunity from liability for the owners of social media pages.

Yet it was welcomed last November by James Chessell, managing director of publishing for Nine Entertainment.


Chessell believed it would “put responsibility for third-party comments made on social media pages with the person who made the comment, or with platforms if the platforms cannot identify the person”.


The states’ plan would do much the same. Their first option is a “safe harbour” defence that would confine the dispute to the complainant and the originator of the defamatory material.

It would be an automatic defence if the complainant knows the identity of the originator.

If the complainant does not have that information, the owner of a Facebook page or other internet intermediary would still have a complete defence if, with permission, the intermediary disclosed the originator’s identity or, failing that, blocked access to the defamatory material.

Less protection would be available under an alternative “innocent dissemination” defence. This option recognises that internet intermediaries should not be liable for third-party defamatory content when they are merely subordinate distributors without knowledge of what has been posted online.

But once intermediaries are put on notice by a complainant, the clock is ticking.

They would have 14 days to take reasonable steps to block access to the defamatory material or risk being sued.

Both options would ensure complainants have a remedy. But the safe harbour defence gives greater weight to the principle that people should not be liable for the misdeeds of others.

The alternative – the innocent dissemination defence – would erode that principle by favouring plaintiffs.

Even if the plaintiff knows the identity of the originator, intermediaries would be vulnerable.

A complainant could sue the person who posted the defamatory material, the internet intermediary or both.

The safe harbour defence is preferable. It adheres to principle, gives intermediaries an incentive to unmask trolls and, when trolls cannot be identified, it requires their poison to be blocked.

Published in The Australian newspaper