You have probably heard lawmakers rage about Section 230 and threaten to revoke its protection for Internet companies unless those companies stop their alleged biases against the lawmakers’ respective bases. Today, I want to share some of my thoughts on the law, what has happened around it, and what I consider a failure by Internet companies, specifically Facebook and Twitter, to live up to their responsibilities. Frankly, it’s a highly complicated matter and I am no lawyer; I am simply laying out what I have read and thought for myself. Have a read, but form your own opinions. First off, have a look at what the text of Section 230 actually says. Here is the piece that matters most:
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B)any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1)
The term “information content provider” means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.

Source: Cornell
From a service provider’s perspective, what I understand from the text is that companies such as Facebook or Twitter cannot be held liable for user-generated content on their platforms, unless the companies themselves create the content in question. It is also very important that the text of Section 230 exempts service providers from liability for content moderation efforts, whether that means providing the technical means to restrict access to content (such as user account controls) or censoring and taking down the content itself.
This protection for service providers has been particularly helpful in growing the Internet. From a personal point of view, I have benefited greatly from content shared by third parties. I have learned a lot from things shared on Twitter by folks other than the original creators, and I believe others have too. Were Internet users made reluctant to share content by the removal of Section 230, the Net wouldn’t be as great as it is right now. From a company perspective, Section 230’s protection enables the building of platforms without heavy upfront investment in content moderation to ward off frivolous lawsuits. Imagine that you are an aspiring entrepreneur who wants to build a small forum dedicated to basketball, by yourself and with your modest savings, yet has to shoulder legal expenses because someone sues you for not taking down offensive comments.
While content sharing has its benefits, it also has downsides: it allows the worst actors, the dishonest and those with a harmful agenda, to inflict harm on others. In this case, it falls to service providers to moderate content. With millions, if not billions, of users, Twitter and Facebook are prime destinations for actors who want to disseminate false and harmful information. Bafflingly, even though the law specifically shields them from legal liability for content moderation and they have the resources to conduct it, Facebook and Twitter still fail to do their duty. For instance, when Trump posted false, dangerous and disparaging information on Twitter, the social network labeled his posts but kept them on the site because, according to Twitter, it was in the public interest to do so. If the point was to let the public know that the President of the United States lied, distributed propaganda and conducted online harassment, it would have been sufficient to simply say so and take down the harmful content. Twitter mostly didn’t remove what Trump said because it was afraid of the wrath of Republicans and didn’t want to lose a significant portion of its user base.
In many cases, Facebook didn’t live up to its duty to keep its platform safe for users, either. But in Facebook’s case, the reason remains to be seen: whether it is financially motivated, whether Mark Zuckerberg is concerned about political blowback, or whether he actually prioritizes what he considers “free speech”.
While Section 230 isn’t perfect and leaves much to be desired, calling for the law’s repeal would, in my mind, damage free speech and the First Amendment. What the law needs is an upgrade: revisions designed to solve its shortcomings. Sadly, what has transpired is nothing of the sort. Trump signed an executive order that essentially would strip Twitter of Section 230 protection because it labeled his Tweets as harmful and hid them. Senator Hawley introduced legislation that would require big tech companies to be content neutral, with neutrality determined by a panel of five FTC commissioners. If two commissioners deem a company’s content politically biased, it loses Section 230 protection. The problem is that FTC commissioners are political appointees, and as a consequence there is no guarantee their assessments would be unbiased. The legislation would create disastrous downstream effects.
Danielle Citron, a law professor at the University of Maryland, proposed a seemingly vague revision to Section 230, under which immunity would be available only to platforms that take “reasonable steps to prevent or address unlawful uses of its services.” The specific definition of “reasonable” would be left up to the courts. While such a suggestion has its upside, the problem again is that the judicial system in the US has become increasingly politicized. As of this writing, there is a huge battle over the appointment of a Supreme Court justice and a discussion of court packing. Politically appointed judges can no longer guarantee fair rulings.
In defense of big tech companies like Facebook and Twitter, moderating content for millions of users with different philosophies on complicated matters is no easy feat. It is labor intensive and expensive, and even with immense investment, it is highly challenging to cover the endless scenarios that can happen in real life. Moreover, political pressure is a legitimate threat to their businesses. With that being said, I still stand by my criticisms because:
- These companies still benefit from Section 230 protection, yet they fail in their responsibility to moderate content sufficiently.
- They have enough financial resources to invest more in content moderation, or to lobby for a fairer Congress.
With great power comes great responsibility. Facebook and Twitter have millions, if not billions, of users. They wield enormous power, yet they are failing us. I wish they fought as hard on this issue as they did on immigration.