The Justice Department (DOJ) has unveiled a series of legislative proposals that would curtail broad legal protections for online platforms in an effort to push tech companies to address illicit material while moderating content responsibly.
The proposed reforms are the latest in the Trump administration’s ongoing clash with big tech companies such as Twitter and Facebook. President Donald Trump signed an executive order on May 28 directing federal agencies to develop regulations that protect users from unfair or deceptive content restriction practices employed by online platforms.
Trump had called for new regulations under Section 230 of the 1996 Communications Decency Act to limit liability protections for companies that engage in “censoring” or “political conduct.” He even went so far as to call for the revocation of the law in a statement on May 29.
Section 230 largely exempts online platforms from liability for content posted by their users, although they can be held liable for content that violates anti-sex trafficking or intellectual property laws.
The law allows companies to block or screen content “in good faith” if they consider it “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The protections, however, weren’t intended to apply to services that act more like publishers than online platforms, Attorney General William Barr said in a speech in May.
The DOJ said on Wednesday that its proposals would “update the outdated immunity for online platforms” under Section 230.
“When it comes to issues of public safety, the government is the one who must act on behalf of society at large. Law enforcement cannot delegate our obligations to protect the safety of the American people purely to the judgment of profit-seeking private firms. We must shape the incentives for companies to create a safer environment, which is what Section 230 was originally intended to do,” Barr said in a statement.
“Taken together, these reforms will ensure that Section 230 immunity incentivizes online platforms to be responsible actors,” he added.
The attorney general said the internet and the tech industry have evolved since Section 230 was adopted 25 years ago. At the time, it was used to protect websites that served as bulletin boards for third-party content and to shield companies from liability for removing content such as child pornography or human trafficking advertising, he said.
But now, he said, Section 230 has been interpreted so broadly that it has left online platforms “unaccountable for a variety of harms flowing from content on their platforms and with virtually unfettered discretion to censor third-party content with little transparency or accountability.”
The president and Barr have both cited Twitter, which recently added a “fact check” feature, as an example of an online platform playing the role of a publisher. Twitter had applied its fact-checking label to two of Trump’s tweets and invited readers to “get the facts” after Trump claimed that mail-in voting leads to voter fraud.
“The choices that Twitter makes when it chooses to suppress, edit, blacklist, shadowban are editorial decisions, pure and simple,” Trump said. “In those moments, Twitter ceases to be a neutral public platform and [becomes] an editor with a viewpoint, and I think we can say that about others also.”
Areas for Reform
The department had conducted a review of Section 230 over the past 10 months, following its broader review, announced in July 2019, of online platforms and their practices.
The review included holding a public workshop, facilitating an expert roundtable to discuss potential reforms, allowing for written submissions, and meeting with companies that expressed interest in discussing Section 230.
Following that review, the department determined that Section 230 was “ripe for reform” and developed four overarching recommendations: incentivizing online platforms to address illicit content, promoting open discourse and greater transparency, clarifying federal government enforcement capability, and promoting competition.
Some of the department’s recommendations include denying Section 230 immunity to “truly bad actors,” which it describes as “an online platform that purposefully facilitates or solicits third-party content or activity that would violate federal criminal law.”
The department also suggested carving out exemptions to immunity protection for platforms that are willfully blind to egregious content, including child exploitation and sexual abuse, terrorism, and cyber-stalking.
The reforms will also seek to encourage platforms to be more transparent and accountable to their users by clarifying the text and reviving the original purpose of the law.
They will also make it clear that Section 230 immunity does not apply to civil enforcement actions brought by the federal government.
“These reforms are targeted at platforms to make certain they are appropriately addressing illegal and exploitative content while continuing to preserve a vibrant, open, and competitive internet,” Barr said.
Bowen Xiao and Petr Svab contributed to this report.