
Social Media Giants’ Climate Misinformation Policies Leave Users ‘In the Dark’: Report

“Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception,” reads the report.

Weeks after the Intergovernmental Panel on Climate Change identified disinformation as a key driver of the planetary crisis, three advocacy groups published a report Wednesday ranking social media companies on their efforts to ensure users can get accurate data about the climate on their platforms—and found that major companies like Twitter and Facebook are failing to combat misinformation.

The report, titled In the Dark: How Social Media Companies’ Climate Disinformation Problem is Hidden from the Public and released by Friends of the Earth (FOE), Greenpeace, and online activist network Avaaz, detailed whether the companies have met 27 different benchmarks to stop the spread of anti-science misinformation and ensure transparency about how inaccurate data is analyzed.

“Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception,” reads the report. “In fact, they continue to allow these climate lies to pollute users’ feeds.”

The groups assessed five major social media platforms—Facebook, Twitter, YouTube, Pinterest, and TikTok—and found that the two best-performing companies, Pinterest and YouTube, scored 14 out of the 27 possible points.

As Common Dreams reported earlier this month, Pinterest has won praise from groups including FOE for establishing “clearly defined guidelines against false or misleading climate change information, including conspiracy theories, across content and ads.”


The company also garnered points in Wednesday’s report for being the only major social media platform to disclose the average time or number of views it allows a piece of scientifically inaccurate content before taking action against it, and for including “omission or cherry-picking” of data in its definition of mis- or disinformation.

Pinterest and YouTube were the only companies that won points for consulting with climate scientists to develop a climate mis- and disinformation policy.

The top-performing companies, however, joined the other firms in failing to articulate exactly how their misinformation policy is enforced and to detail how climate misinformation is prioritized for fact-checking.

“Social media companies are largely leaving the public in the dark about their efforts to combat the problem,” the report reads. “There is a gross lack of transparency, as these companies conceal much of the data about the prevalence of digital climate dis/misinformation and any internal measures taken to address its spread.”

Twitter was the worst-performing company, meeting only five of the 27 criteria.

“Twitter is not clear about how content is verified as dis/misinformation, nor explicit about engaging with climate experts to review dis/misinformation policies or flagged content,” reads the report. “Twitter’s total lack of reference to climate dis/misinformation, both in their policies and throughout their enforcement reports, earned them no points in either category.”

TikTok scored seven points, while Facebook garnered nine.

The report, using criteria developed by the Climate Disinformation Coalition, was released three weeks after NPR reported that inaccurate information about renewable energy sources has spread widely in Facebook groups and has been linked to local projects being slowed or shut down.

In rural Ohio, posts in two anti-wind power Facebook groups spread misinformation about wind turbines causing birth defects in horses, failing to reduce carbon emissions, and causing so-called “wind turbine syndrome” from low-frequency sounds—a supposed ailment that is not backed by scientific evidence. The posts increased “perceptions of human health and public safety risks related to wind” power, according to a study published last October in the journal Energy Research & Social Science.

As those false perceptions spread through the local community, NPR reported, the Ohio Power Siting Board rejected a wind farm proposal “citing geological concerns and the local opposition.”

Misinformation on social media “can really slow down the clean energy transition, and that has just as dire life and death consequences, not just in terms of climate change, but also in terms of air pollution, which overwhelmingly hits communities of color,” University of California, Santa Barbara professor Leah Stokes told NPR.

As the IPCC noted in its February report, “rhetoric and misinformation on climate change and the deliberate undermining of science have contributed to misperceptions of the scientific consensus, uncertainty, disregarded risk and urgency, and dissent.”

Wednesday’s report called on all social media companies to:

  • Establish, disclose, and enforce policies to reduce climate change dis- and misinformation;
  • Release in full the company’s current labeling, fact-checking, policy review, and algorithmic ranking systems related to climate change disinformation policies;
  • Disclose weekly reports on the scale and prevalence of climate change dis- and misinformation on the platform and mitigation efforts taken internally; and
  • Adopt privacy and data protection policies to protect individuals and communities who may be climate dis/misinformation targets.

“One of the key objectives of this report is to allow for fact-based deliberation, discussion, and debate to flourish in an information ecosystem that is healthy and fair, and that allows both citizens and policymakers to make decisions based on the best available data,” reads the report.

“We see a clear boundary between freedom of speech and freedom of reach,” it continues, “and believe that transparency on climate dis/misinformation and accountability for the actors who spread it is a precondition for a robust and constructive debate on climate change and the response to the climate crisis.”

Originally published on Common Dreams by Julia Conley and republished here.



Leaked Facebook Documents Reveal How Company Failed on Election Promise

CEO Mark Zuckerberg had repeatedly promised to stop recommending political groups to users to squelch the spread of misinformation

Leaked internal Facebook documents show that a combination of technical miscommunications and high-level decisions led to one of the social media giant’s biggest broken promises of the 2020 election—that it would stop recommending political groups to users.

The Markup first revealed on Jan. 19 that Facebook was continuing to recommend political groups—including some in which users advocated violence and storming the U.S. Capitol—in spite of multiple promises not to do so, including one made under oath to Congress.

The day the article ran, a Facebook team started investigating the “leakage,” according to documents provided by Frances Haugen to Congress and shared with The Markup, and the problem was escalated to the highest level to be “reviewed by Mark.” Over the course of the next week, Facebook employees identified several causes for the broken promise.

The company, according to work log entries in the leaked documents, was updating its list of designated political groups, which it refers to as civic groups, in real time. But the systems that recommend groups to users were cached on servers and users’ devices and only updated every 24 to 48 hours in some cases. The lag resulted in users receiving recommendations for groups that had recently been designated political, according to the logs.
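The logs describe a classic cache-staleness problem: the list of designated groups changes in real time, but recommendations are served from copies that refresh far less often. As an illustration only, not Facebook’s actual code, a minimal sketch of that failure mode might look like the following, with all names and structures hypothetical:

```python
import time

# Hypothetical illustration of the lag described above: the list of
# designated "civic" (political) groups is updated in real time, but the
# recommender reads from a cached copy refreshed only every 24-48 hours.
CACHE_TTL_SECONDS = 24 * 60 * 60  # low end of the reported refresh window

class CachedBlacklist:
    def __init__(self, fetch_latest):
        self._fetch_latest = fetch_latest  # callable returning the live set of civic group IDs
        self._cached = set()
        self._last_refresh = 0.0

    def get(self):
        # Serve the cached copy until the TTL expires, even if the live
        # list changed minutes ago -- the "leakage" described in the logs.
        now = time.time()
        if now - self._last_refresh > CACHE_TTL_SECONDS:
            self._cached = set(self._fetch_latest())
            self._last_refresh = now
        return self._cached

def recommend(candidate_group_ids, blacklist_cache):
    # Groups designated political after the last cache refresh slip through.
    blocked = blacklist_cache.get()
    return [g for g in candidate_group_ids if g not in blocked]
```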

That technical oversight was compounded by a decision Facebook officials made about how to determine whether or not a particular group was political in nature.

When The Markup examined group recommendations using data from our Citizen Browser project—a paid, nationwide panel of Facebook users who automatically supply us data from their Facebook feeds—we designated groups as political or not based on their names, about pages, rules, and posted content. We found 12 political groups among the top 100 groups most frequently recommended to our panelists. 

Facebook chose to define groups as political in a different way—by looking at the last seven days’ worth of content in a given group.

“Civic filter uses last 7 day content that is created/viewed in the group to determine if the group is civic or not,” according to a summary of the problem written by a Facebook employee working to solve the issue. 

As a result, the company was seeing a “12% churn” in its list of groups designated as political. If a group went seven days without posting content the company’s algorithms deemed political, it would be taken off the blacklist and could once again be recommended to users.

Almost 90 percent of the impressions—the number of times a recommendation was seen—on political groups that Facebook tallied while trying to solve the recommendation problem were a result of the day-to-day turnover on the civic group blacklist, according to the documents.
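The seven-day window itself explains that churn: classification depends on what a group has posted recently, not on what the group is. A minimal sketch of that logic, with a hypothetical classifier standing in for Facebook’s internal systems, shows how a quiet week is enough to drop a group off the list:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the seven-day "civic filter" described above: a
# group stays on the do-not-recommend list only if content from the last
# week is classified as political.
WINDOW = timedelta(days=7)

def is_civic(group_posts, classify_political, now=None):
    """group_posts: list of (timestamp, text) pairs for one group.
    classify_political: stand-in classifier returning True for political text."""
    now = now or datetime.utcnow()
    recent = [text for ts, text in group_posts if now - ts <= WINDOW]
    return any(classify_political(text) for text in recent)

# A group that posts nothing deemed political for seven days returns False,
# drops off the list (the "churn"), and becomes recommendable again.
```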

Facebook did not directly respond to questions for this story.

“We learned that some civic groups were recommended to users, and we looked into it,” Facebook spokesperson Leonard Lam wrote in an email to The Markup. “The issue stemmed from the filtering process after designation that allowed some Groups to remain in the recommendation pool and be visible to a small number of people when they should not have been. Since becoming aware of the issue, we worked quickly to update our processes, and we continue this work to improve our designation and filtering processes to make them as accurate and effective as possible.”

Social networking and misinformation researchers say that the company’s decision to classify groups as political based on seven days’ worth of content was always likely to fall short.

“They’re definitely going to be missing signals with that because groups are extremely dynamic,” said Jane Lytvynenko, a research fellow at the Harvard Shorenstein Center’s Technology and Social Change Project. “Looking at the last seven days, rather than groups as a whole and the stated intent of groups, is going to give you different results. It seems like maybe what they were trying to do is not cast too wide of a net with political groups.”

Many of the groups Facebook recommended to Citizen Browser users had overtly political names.

More than 19 percent of Citizen Browser panelists who voted for Donald Trump received recommendations for a group called Candace Owens for POTUS, 2024, for example. While Joe Biden voters were less likely to be nudged toward political groups, some received recommendations for groups like Lincoln Project Americans Protecting Democracy.

The internal Facebook investigation into the political recommendations confirmed these problems. By Jan. 25, six days after The Markup’s original article, a Facebook employee declared that the problem was “mitigated,” although root causes were still under investigation.

On Feb. 10, Facebook blamed the problem on “technical issues” in a letter it sent to U.S. Sen. Ed Markey, who had demanded an explanation.

In the early days after the company’s internal investigation, the issue appeared to have been resolved. Both Citizen Browser and Facebook’s internal data showed that recommendations for political groups had virtually disappeared.

But when The Markup reexamined Facebook’s recommendations in June, we discovered that the platform was once again nudging Citizen Browser users toward political groups, including some in which members explicitly advocated violence.

From February to June, just under one-third of Citizen Browser’s 2,315 panelists received recommendations to join a political group. That included groups with names like Progressive Democrats of Nevada, Michigan Republicans, Liberty lovers for Ted Cruz, and Bernie Sanders for President, 2020.

This article was originally published on The Markup By: Todd Feathers and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0).
