Big Tech ‘Amplification’: What Does That Mean?
Lawmakers have spent years investigating how hate speech, misinformation and bullying on social media sites can cause real-world harm. Increasingly, they are pointing fingers at the algorithms powering sites like Facebook and Twitter: the software that determines what content users see and when.
Some lawmakers from both parties argue that social media sites become complicit when they promote hateful or violent posts. They have proposed bills that would strip companies of the legal shield that lets them fend off lawsuits over most of the content posted by their users, in cases where the platform amplified a harmful post’s reach.
The House Energy and Commerce Committee will hold a hearing on Wednesday to discuss several of the proposals. The hearing will also include testimony from Frances Haugen, the former Facebook employee who recently left the company and disclosed internal documents.
Removing the legal shield, known as Section 230, would mean a major change for the internet, because it has long enabled the vast scale of social media websites. Ms. Haugen said she supports changing Section 230, which is part of the Communications Decency Act, so that it would no longer cover certain decisions made by algorithms on tech platforms.
But what exactly counts as algorithmic amplification? And what exactly is the definition of harmful? The proposals give different answers to these crucial questions, and how they answer them could determine whether the courts find the bills constitutional.
Here’s how the bills address these thorny issues:
What is algorithmic amplification?
Algorithms are everywhere. At its most basic, an algorithm is a set of instructions that tells a computer how to do something. If a platform could be sued any time its algorithm did anything to a post, products that lawmakers are not trying to regulate might be swept in.
Some of the proposed legislation defines the behavior it wants to regulate in general terms. A bill sponsored by Senator Amy Klobuchar, Democrat of Minnesota, would open a platform up to lawsuits if it “promotes” health misinformation.
Ms. Klobuchar’s health misinformation bill would give platforms a pass if their algorithms promoted content in a “neutral” way. That could mean, for example, that platforms ranking posts chronologically would not have to worry about the law.
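To make that distinction concrete, here is a minimal, hypothetical sketch of the difference between a chronological feed and an engagement-driven one. The data model and function names are invented for illustration; no bill defines amplification in code, and real ranking systems are far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int           # seconds since epoch
    engagement_score: float  # hypothetical platform-computed signal,
                             # e.g. predicted clicks or shares

def rank_chronological(posts: list[Post]) -> list[Post]:
    """A 'neutral' feed: newest posts first, content ignored."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def rank_by_engagement(posts: list[Post]) -> list[Post]:
    """An amplifying feed: posts predicted to drive engagement rise
    to the top, regardless of when they were posted."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```

Under a definition like the one in Ms. Klobuchar’s bill, something like the first function could keep its legal shield while the second might not, which is why the exact statutory meaning of “neutral” matters so much.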
Other bills are more specific. Legislation from Representatives Anna G. Eshoo of California and Tom Malinowski of New Jersey, both Democrats, defines dangerous amplification as doing anything to “rank, order, promote, recommend, amplify or similarly alter the delivery or display of information.”
Another bill, written by House Democrats, specifies that platforms could be sued only when the amplification in question was driven by a user’s personal data.
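Extending the hypothetical sketch above, that trigger would turn on whether the ranking consults data about the individual user, as in this invented example (the `user_interests` profile is a stand-in for whatever a platform might infer from browsing history, likes or demographics):

```python
def rank_personalized(posts: list[Post], user_interests: set[str]) -> list[Post]:
    """Amplification driven by personal data: posts matching topics the
    platform has inferred about this user are boosted ahead of others."""
    def score(post: Post) -> tuple[int, float]:
        # Count how many of the user's inferred interests appear in the post,
        # breaking ties by the platform's engagement prediction.
        matches = sum(topic in post.text.lower() for topic in user_interests)
        return (matches, post.engagement_score)
    return sorted(posts, key=score, reverse=True)
```

On that reading, a feed like `rank_chronological` above would be untouched, while something like `rank_personalized`, because it consumes data about the particular user, could expose the platform to lawsuits.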
“These platforms are not passive bystanders; they are knowingly choosing profits over people, and our country is paying the price,” Frank Pallone Jr., the chairman of the Energy and Commerce Committee, said in a statement.
Mr. Pallone’s new bill includes exemptions for any business with five million or fewer monthly users. It also excludes posts that appear when a user searches for something, even if an algorithm ranks them, as well as web hosting and other companies that make up the internet’s backbone.
What content is harmful?
Lawmakers and others have pointed to a wide range of content they believe is linked to real-world harm. There are conspiracy theories, which can lead some followers to become violent. There are posts from terrorist groups, which could precede an attack, as one lawsuit filed against Facebook argued happened when a man was attacked by a member of Hamas. Other policymakers have expressed concern about targeted advertising that leads to housing discrimination.
For the most part, the bills in Congress address specific types of content. Ms. Klobuchar’s bill covers “health misinformation.” But it leaves it to the Department of Health and Human Services to decide exactly what that means.
“The coronavirus pandemic has shown us how dangerous misinformation can be, and it is our responsibility to take action,” Ms. Klobuchar said when announcing the proposal, which she co-sponsored with Senator Ben Ray Luján, Democrat of New Mexico.
The bill proposed by Ms. Eshoo and Mr. Malinowski takes a different approach. It applies only to the amplification of posts that violate three laws: two that prohibit civil rights violations and a third that prosecutes international terrorism.
Mr. Pallone’s bill is the newest, and it applies to any post that “materially contributed to a physical or severe emotional injury to any person.” That is a high legal bar: emotional distress would have to be accompanied by physical symptoms. But it could cover, for example, a teenager who sees posts on Instagram that lower her self-esteem so much that she tries to hurt herself.
What do the courts think?
Judges have been skeptical of the idea that platforms should lose their legal immunity when they amplify the reach of content.
In the case involving the attack for which Hamas claimed responsibility, most of the judges who heard it agreed with Facebook that its use of algorithms did not cost it the legal shield’s protection for user-generated content.
If Congress creates an exemption to the legal shield, and it stands up to legal scrutiny, the courts will have to follow its lead.
But if the bills become law, they will raise significant questions about whether they violate the First Amendment’s free-speech protections.
Courts have ruled that the government cannot condition a benefit for a person or company on its giving up speech that the Constitution would otherwise protect. So the tech industry or its allies could challenge the laws by arguing that Congress was seeking a backdoor way to limit free speech.
“The question is: Can the government directly ban algorithmic promotion?” said Jeff Kosseff, an associate professor of cybersecurity law at the United States Naval Academy. “It’s going to be difficult, especially if you’re trying to say you can’t amplify certain types of speech.”