Los Angeles, 1950. Eleazar Smith owned a bookstore where he sold books he had not read. Everyone knows that while one always expects good advice from a trusted bookseller, it is impossible to pretend you have read everything on your shelves before the display stands fill up again. Eleazar lived in the US during the Cold War and the rise of McCarthyism, a time when undemocratic techniques such as guilt by association, indiscriminate invasion of privacy, and baseless accusations were used to censor the general populace. Eleazar would likely have watched, on his black-and-white television, the hearings of the House Un-American Activities Committee, which launched thousands of investigations into allegedly Communist activities and produced the famous Hollywood blacklist.
The City of Los Angeles was no stranger to this climate and passed a city ordinance that “prohibited anyone from possessing obscene or offensive writing [in] … any business establishment where … books are sold or offered for sale.” Mr. Smith was therefore convicted of violating this ordinance when books deemed obscene were found at his establishment. The California municipal and superior courts upheld his criminal liability for mere possession of the obscene material, even though he had no knowledge of the books’ contents: willful intent and knowledge were not required to impose punishment. Smith appealed to the US Supreme Court (SCOTUS).
He was neither the first nor the last to invoke freedom of expression and freedom of the press to defend himself against the hostility of openly intrusive governments during those turbulent years. The Supreme Court agreed, ruling in his favor in 1959. The publication and distribution of books are protected by freedom of the press, and booksellers play a key role not only in publication but also in distribution; a regulation whose effect is to make the exercise of that freedom impossible is therefore not acceptable. Otherwise, according to SCOTUS, booksellers would in practice be afraid or reluctant to exercise those freedoms.
Were that not important enough, the ruling establishes the principle that brings us here today: if booksellers could be held criminally liable without knowledge of the contents of the books they sell, they would display in their shop windows only those works they had personally inspected, which would inevitably reduce the number of works available to the public. So we relieve them of liability. This is the Good Samaritan’s legal protection from civil and criminal liability: since Smith v. California, librarians and booksellers are not responsible for the books, creative works, or content they store, distribute, or sell.
At the birth of the commercial Internet, the Good Samaritan’s protection was extended to intermediary services by the famous Section 230 of the Communications Decency Act. Just as a librarian could not know what was in every book in his library, a communications service provider, a hosting company, a proxy, a search engine, or a blog network could not have the faintest idea of everything that flowed through its network, byte by byte.
Therefore, they could not be obliged to filter content, something technically close to impossible and financially costly. Of course, a complaint system had to be put in place for cases where content violated someone’s intellectual property. That is why it is so easy to have a video of a Mariah Carey song removed from YouTube, and so complicated to take down one showing your son being bullied by his classmates.
This principle was carried over into European legislation through the E-Commerce Directive and ended up in our Information Society Services Act (LSSI). I will leave the story of the Sinde law that came later for the old-timers to tell.
The famous DSA (the EU’s Digital Services Act), published last year, replaces and repeals that directive, and with it the LSSI. It maintains the Good Samaritan protection because there was no alternative, although it imposes a series of controls and audits that, however irksome, the big tech companies that base their entire business model on non-liability for content can certainly afford. And that is exactly what González v. Google showed once again at the public hearing before the US Supreme Court on Tuesday, February 21. As did Twitter v. Taamneh, argued in the same court on February 22.
Both cases stem from similar circumstances: the 2015 terrorist attacks in Paris, which killed, among others, an American student named Nohemi González; and the 2017 terrorist attack on an Istanbul nightclub that killed a Jordanian national named Nawras Alassaf. After the Paris attacks, González’s father sued Google for aiding and abetting a terrorist group, not only by allowing ISIS members to post videos on YouTube but also by letting its algorithms recommend those videos to other users.
In Twitter v. Taamneh, Alassaf’s relatives accused Twitter, Facebook, and Google of “collaborating” in the Istanbul attack by allowing ISIS propaganda to spread online. In González v. Google, SCOTUS will have to decide whether Section 230 protects platforms from liability not only for their users’ posts but also when their recommendation algorithms intervene in the distribution of that content. In Taamneh, the court will set Section 230 aside and examine whether an Internet platform can really be accused of complicity in terrorism.
The two cases have drawn dozens of amicus curiae briefs (filings by people who are not parties to the case but support one of the sides) from tech companies, civil rights groups, and even the authors of the law themselves, warning the court of the danger of reducing or removing the Good Samaritan’s protections, which, as the reader already knows, rest on freedom of expression and freedom of the press. Others, including conservative lawmakers, law enforcement advocates, children’s rights groups, and Frances Haugen, have taken the opposite stance, advocating a restrictive interpretation of Section 230. A third group has approached the court without taking sides, simply warning of the implications of reforming or repealing the law.
As we can see, the question is not an easy one. Big Tech benefits from a 64-year-old doctrine that was applied to technology under radically different conditions from today’s. Not to mention that the first Internet operators truly were unable to read exhaustively all the content they hosted and transported, and this too has changed radically. Facebook, Twitter, Google, and Instagram can “read” the content, and we know it because they select it and present it to us as an editor would.
Hence their fierce opposition to moderating content: doing so would mean acknowledging that they exercise editorial powers, which would jeopardize the Good Samaritan protection, and that is precisely what the González family is arguing. What we do know from the hearing so far is that the justices seemed puzzled by the arguments (Kagan and Kavanaugh acknowledged that the court may lack the requisite technical knowledge on the matter and that Congress would be better placed to legislate on it), but several of them suggested in their interventions redesigning the protections afforded by Section 230 by distinguishing between the content itself and what the platform does with it, which would open the door to liability where recommendation algorithms are used. If this position is accepted, much will change on the Internet.