Tag Archives: Fake News

Social Media Giants’ Climate Misinformation Policies Leave Users ‘In the Dark’: Report

“Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception,” reads the report.

Weeks after the Intergovernmental Panel on Climate Change identified disinformation as a key driver of the planetary crisis, three advocacy groups published a report Wednesday ranking social media companies on their efforts to ensure users can get accurate data about the climate on their platforms—and found that major companies like Twitter and Facebook are failing to combat misinformation.

The report, titled In the Dark: How Social Media Companies’ Climate Disinformation Problem is Hidden from the Public and released by Friends of the Earth (FOE), Greenpeace, and online activist network Avaaz, detailed whether the companies have met 27 different benchmarks to stop the spread of anti-science misinformation and ensure transparency about how inaccurate data is analyzed.

“Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception,” reads the report. “In fact, they continue to allow these climate lies to pollute users’ feeds.”

The groups assessed five major social media platforms—Facebook, Twitter, YouTube, Pinterest, and TikTok—and found that the two best-performing companies, Pinterest and YouTube, scored 14 out of the 27 possible points.

As Common Dreams reported earlier this month, Pinterest has won praise from groups including FOE for establishing “clearly defined guidelines against false or misleading climate change information, including conspiracy theories, across content and ads.”

“One of the key objectives of this report is to allow for fact-based deliberation, discussion, and debate to flourish in an information ecosystem that is healthy and fair, and that allows both citizens and policymakers to make decisions based on the best available data.”

The company also garnered points in Wednesday’s report for being the only major social media platform to make clear the average time or number of views it allows a piece of scientifically inaccurate content before taking action to combat the misinformation, and for including “omission or cherry-picking” of data in its definition of mis- or disinformation.

Pinterest and YouTube were the only companies that won points for consulting with climate scientists to develop a climate mis- and disinformation policy.

The top-performing companies, however, joined the other firms in failing to articulate exactly how their misinformation policy is enforced and to detail how climate misinformation is prioritized for fact-checking.

“Social media companies are largely leaving the public in the dark about their efforts to combat the problem,” the report reads. “There is a gross lack of transparency, as these companies conceal much of the data about the prevalence of digital climate dis/misinformation and any internal measures taken to address its spread.”

Twitter was the worst-performing company, meeting only five of the 27 criteria.

“Twitter is not clear about how content is verified as dis/misinformation, nor explicit about engaging with climate experts to review dis/misinformation policies or flagged content,” reads the report. “Twitter’s total lack of reference to climate dis/misinformation, both in their policies and throughout their enforcement reports, earned them no points in either category.”

TikTok scored seven points, while Facebook garnered nine.

The report, using criteria developed by the Climate Disinformation Coalition, was released three weeks after NPR reported that inaccurate information about renewable energy sources has been disseminated widely in Facebook groups, and the spread has been linked to slowing progress on or shutting down local projects.

In rural Ohio, posts in two anti-wind power Facebook groups spread misinformation about wind turbines causing birth defects in horses, failing to reduce carbon emissions, and causing so-called “wind turbine syndrome” from low-frequency sounds—a supposed ailment that is not backed by scientific evidence. The posts increased “perceptions of human health and public safety risks related to wind” power, according to a study published last October in the journal Energy Research & Social Science.

As those false perceptions spread through the local community, NPR reported, the Ohio Power Siting Board rejected a wind farm proposal “citing geological concerns and the local opposition.”

Misinformation on social media “can really slow down the clean energy transition, and that has just as dire life and death consequences, not just in terms of climate change, but also in terms of air pollution, which overwhelmingly hits communities of color,” University of California, Santa Barbara professor Leah Stokes told NPR.

As the IPCC reported in its February report, “rhetoric and misinformation on climate change and the deliberate undermining of science have contributed to misperceptions of the scientific consensus, uncertainty, disregarded risk and urgency, and dissent.”

Wednesday’s report called on all social media companies to:

  • Establish, disclose, and enforce policies to reduce climate change dis- and misinformation;
  • Release in full the company’s current labeling, fact-checking, policy review, and algorithmic ranking systems related to climate change disinformation policies;
  • Disclose weekly reports on the scale and prevalence of climate change dis- and misinformation on the platform and mitigation efforts taken internally; and
  • Adopt privacy and data protection policies to protect individuals and communities who may be climate dis/misinformation targets.

“One of the key objectives of this report is to allow for fact-based deliberation, discussion, and debate to flourish in an information ecosystem that is healthy and fair, and that allows both citizens and policymakers to make decisions based on the best available data,” reads the report.

“We see a clear boundary between freedom of speech and freedom of reach,” it continues, “and believe that transparency on climate dis/misinformation and accountability for the actors who spread it is a precondition for a robust and constructive debate on climate change and the response to the climate crisis.”

Originally published on Common Dreams by Julia Conley and republished.


Related:

Check out Lynxotic on YouTube

Find books on Music, Movies & Entertainment and many other topics at Bookshop.org

Lynxotic may receive a small commission based on any purchases made by following links from this page

Facebook Isn’t Telling You How Popular Right-Wing Content Is on the Platform

Above: Photo Collage / Lynxotic

Facebook insists that mainstream news sites perform the best on its platform. But by other measures, sensationalist, partisan content reigns

In early November, Facebook published its Q3 Widely Viewed Content Report, the second in a series meant to rebut critics who said that its algorithms were boosting extremist and sensational content. The report declared that, among other things, the most popular informational content on Facebook came from sources like UNICEF, ABC News, or the CDC.

But data collected by The Markup suggests that, on the contrary, sensationalist news or viral content with little original reporting performs just as well as—and often better than—many mainstream sources when it comes to how often it’s seen by platform users.

Data from The Markup’s Citizen Browser project shows that during the period from July 1 to Sept. 30, 2021, outlets like The Daily Wire, The Western Journal, and BuzzFeed’s viral content arm were among the top-viewed domains in our sample. 

Citizen Browser is a national panel of paid Facebook users who automatically share their news feed data with The Markup.

To analyze the websites whose content performs the best on Facebook, we counted the total number of times that links from any domain appeared in our panelists’ news feeds—a metric known as “impressions”—over a three-month period (the same time covered by Facebook’s Q3 Widely Viewed Content Report). Facebook, by contrast, chose a different metric, calculating the “most-viewed” domains by tallying only the number of users who saw links, regardless of whether each user saw a link once or hundreds of times.

By our calculation, the top performing domains were those that surfaced in users’ feeds over and over—including some highly partisan, polarizing sites that effectively bombarded some Facebook users with content. 

These findings chime with recent revelations from Facebook whistleblower Frances Haugen, who has repeatedly said the company has a tendency to cherry-pick statistics to release to the press and the public. 

“They are very good at dancing with data,” Haugen told British lawmakers during a European tour.

When presented with The Markup’s findings and asked whether its own report’s statistics might be misleading or incomplete, Ariana Anthony, a spokesperson for Meta, Facebook’s parent company, said in an emailed statement, “The focus of the Widely Viewed Content Report is to show the content that is seen by the most people on Facebook, not the content that is posted most frequently. That said, we will continue to refine and improve these reports as we engage with academics, civil society groups, and researchers to identify the parts of these reports they find most valuable, which metrics need more context, and how we can best support greater understanding of content distribution on Facebook moving forward.”

Anthony did not directly respond to questions from The Markup on whether the company would release data on the total number of link views or the content that was seen most frequently on the platform.

The Battle Over Data

There are many ways to measure popularity on Facebook, and each tells a different story about the platform and what kind of content its algorithms favor. 

For years, the startup CrowdTangle’s “engagement” metric—essentially measuring a combination of how many likes, comments, and other interactions any domain’s posts garner—has been the most publicly visible way of measuring popularity. Facebook bought CrowdTangle in 2016 and, according to reporting in The New York Times, has since largely tried to downplay data showing that ultra-conservative commentators like The Daily Wire’s Ben Shapiro produce the most engaged-with content on the platform. 

Shortly after the end of the second quarter of this year, Facebook came out with its first transparency report, framed in the introduction as a way to “provide clarity” on “the most-viewed domains, links, Pages and posts on the platform during the quarter.” (More accurately, the Q2 report was the first publicly released transparency report, after a Q1 report was, The New York Times reported, suppressed for making the company look bad and only released later after details emerged.)

For the Q2 and Q3 reports, Facebook turned to a specific metric, known as “reach,” to quantify most-viewed domains. For any given domain, say youtube.com or twitter.com, reach represents the number of unique Facebook accounts that had at least one post containing a link to a tweet or a YouTube video in their news feeds during the quarter. On that basis, Facebook found that those domains, and other mainstream staples like Amazon, Spotify, and TikTok, had wide reach.

When applying this metric, The Markup found similar results in our Citizen Browser data, as detailed in depth in our methodology. But this calculation ignores a reality for a lot of Facebook users: bombardment with content from the same site.

Citizen Browser data shows, for instance, that from July through September of this year, articles from far-right news site Newsmax appeared in the feed of a 58-year-old woman in New Mexico 1,065 times—but under Facebook’s calculation of reach, this would count as one single unit. Similarly, a 37-year-old man in New Hampshire was shown 245 unique links to satirical posts from The Onion, which appeared in his feed more than 500 times—but again, he would have been counted just once by Facebook’s method.
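The gap between the two metrics can be sketched in a few lines of Python. The feed data below is invented for illustration; it is not Citizen Browser’s actual data format, just a minimal model of “reach” (unique users per domain) versus “impressions” (total appearances per domain):

```python
from collections import defaultdict

# Hypothetical feed data: one (user, domain) pair per link impression.
impressions = [
    ("user_a", "newsmax.com"),
    ("user_a", "newsmax.com"),
    ("user_a", "newsmax.com"),
    ("user_b", "newsmax.com"),
    ("user_b", "theonion.com"),
]

# "Reach" (Facebook's metric): unique users who saw at least one link
# from a domain, no matter how many times it appeared in their feed.
seen_by = defaultdict(set)
for user, domain in impressions:
    seen_by[domain].add(user)
reach_counts = {domain: len(users) for domain, users in seen_by.items()}

# "Impressions" (The Markup's metric): every appearance counts.
impression_counts = defaultdict(int)
for _, domain in impressions:
    impression_counts[domain] += 1

print(reach_counts["newsmax.com"])       # 2 unique users
print(impression_counts["newsmax.com"])  # 4 total appearances
```

A domain that bombards a few users with links ranks much higher by impressions than by reach, which is the pattern The Markup describes.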

When The Markup instead counted each appearance of a domain on a user’s feed during Q3—e.g., Newsmax as 1,065 instead of 1—we found that polarizing, partisan content jumped in the performance rankings. Indeed, the same trend is true of the domains in Facebook’s Q2 report, for which analysis can be found in our data repository on GitHub.

We found that outlets like The Daily Wire, BuzzFeed’s viral content arm, Fox News, and Yahoo News jumped in the popularity rankings when we used the impressions metric. Most striking, The Western Journal—which, similarly to The Daily Wire, does little original reporting and instead repackages stories to fit with right-wing narratives—improved its ranking by almost 200 places.

“To me these findings raise a number of questions,” said Jane Lytvynenko, senior research fellow at the Harvard Kennedy School Shorenstein Center. 

“Was Facebook’s research genuine, or was it part of an attempt to change the narrative around top 10 lists that were previously put out? It matters a lot whether a person sees a link one time or if they see it 20 times, and to not account for that in a report, to me, is misleading,” Lytvynenko said.

Using a narrow range of data to gauge popularity is suspect, said Alixandra Barasch, associate professor of marketing at NYU’s Stern School of Business.

“It just goes against everything we teach and know about advertising to focus on one [metric] rather than the other,” she said. 

In fact, when it comes to the core business model of selling space to advertisers, Facebook encourages them to consider yet another metric, “frequency”—how many times to show a post to each user on average—when trying to optimize brand messaging.

Data from Citizen Browser shows that domains seen with high frequency in the Facebook news feed are mostly news domains, since news websites tend to publish multiple articles over the course of a day or week. But Facebook’s own content report does not take this data into account.

“[This] clarifies the point that what we need is independent access for researchers to check the math,” said Justin Hendrix, co-author of a report on social media and polarization and editor at Tech Policy Press, after reviewing The Markup’s data.

This article was originally published on The Markup By: Corin Faife and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


Facebook has a misinformation problem, and is blocking access to data about how much there is and who is affected

Leaked internal documents suggest Facebook – which recently renamed itself Meta – is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform. 

Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.

As a researcher who studies social and civic media, I believe it’s critically important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation? These questions are the denominator problem and the distribution problem.

The COVID-19 misinformation study, “Facebook’s Algorithm: a Major Threat to Public Health”, published by public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation — 82 websites and 42 Facebook pages — had an estimated total reach of 3.8 billion views in a year.

At first glance, that’s a stunningly large number. But it’s important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to calculate the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.

Getting some perspective

One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook does not disclose that information.

Without knowing the denominator, a numerator doesn’t tell you very much. The Conversation U.S., CC BY-ND

Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If the 1.93 billion daily active users of Facebook see an average of 10 posts in their daily sessions – a very conservative estimate – the denominator for that 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of content on Facebook is posts by these suspect Facebook pages. 
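That back-of-the-envelope arithmetic can be reproduced directly. The inputs below are the article’s own estimates (daily active users, a conservative guess at posts seen per day, and Avaaz’s view count), not independently verified figures:

```python
# Inputs taken from the article's own estimates.
daily_active_users = 1.93e9   # Facebook daily active users
posts_seen_per_day = 10       # conservative assumption per user per day
days_per_year = 365

# Denominator: total pieces of content seen on Facebook in a year.
denominator = daily_active_users * posts_seen_per_day * days_per_year

# Numerator: Avaaz's estimate of yearly views of content
# from the flagged health-misinformation sources.
numerator = 3.8e9

share = numerator / denominator
print(f"denominator ≈ {denominator:.3e}")  # about 7.044 trillion
print(f"share ≈ {share:.4%}")              # about 0.05 percent
```

The result, roughly 0.05%, is the figure the article cites: a large-sounding numerator becomes a small proportion once the denominator is estimated.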

The 3.8 billion views figure encompasses all content published on these pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of a percent.

Is it worrying that there’s enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what’s shared on Facebook is not from the sites Avaaz warns about? Neither. 

Misinformation distribution

In addition to estimating a denominator, it’s also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine or who seek out “alternative health” information more likely to encounter this type of misinformation? 

Another social media study, focusing on extremist content on YouTube, offers a method for understanding the distribution of misinformation. An Anti-Defamation League team recruited a large, demographically diverse sample of 915 U.S. web users, collected their browser data, and oversampled two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases in a set of questions asked by the investigators. Oversampling means surveying a small subset of a population at a higher rate than its share of the population, in order to better record data about that subset.
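To illustrate why oversampling helps, here is a toy Python sketch. The population and proportions are invented for illustration, not the ADL study’s actual figures:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: 5% "heavy" YouTube users, 95% "typical".
population = ["heavy"] * 50 + ["typical"] * 950

# A simple random sample of 100 yields only a handful of heavy users --
# often too few to analyze as a subgroup.
srs = random.sample(population, 100)
print(srs.count("heavy"))  # small, around 5 in expectation

# Oversampling: deliberately draw heavy users beyond their population
# share so the subgroup is large enough to study, then reweight later.
heavy = [p for p in population if p == "heavy"]
typical = [p for p in population if p == "typical"]
oversampled = random.sample(heavy, 30) + random.sample(typical, 70)
print(oversampled.count("heavy"))  # exactly 30 by construction
```

The oversampled design guarantees enough members of the rare subgroup to measure their behavior, at the cost of needing statistical weights to recover population-level estimates.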

The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an alternative channel, during the months covered by the study. An important piece of context to note: A small group of people were responsible for most views of these videos. And more than 90% of views of extremist or “alternative” videos were by people who reported a high level of racial or gender resentment on the pre-study survey.

While roughly 1 in 10 people found extremist content on YouTube and 2 in 10 found content from right-wing provocateurs, most people who encountered such content “bounced off” it and went elsewhere. The group that found extremist content and sought more of it were people who presumably had an interest: people with strong racist and sexist attitudes. 

The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube’s algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn’t tell you how many people are consuming it. For that, you need to know the distribution as well.

Superspreaders or whack-a-mole?

A widely publicized study from the anti-hate speech advocacy group Center for Countering Digital Hate titled Pandemic Profiteers showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it’s critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percent of Facebook users encounter the sort of information shared in these groups? 

Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.

These types of studies raise the question, “If researchers can find this content, why can’t the social media platforms identify it and remove it?” The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting only a dozen accounts, explicitly advocates for the deplatforming of these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.

Consider Del Bigtree, one of the three most prominent spreaders of vaccination disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it’s that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It’s not 12 individuals and groups posting health misinformation online – it’s likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. It’s much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.

This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users are each encountering occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that’s actively seeking out and sharing this content, those warning labels are most likely useless.


Getting the right data

Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform. 

Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos hosted as a way to evaluate their quantitative skills. 

The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could independently verify that claim.

As the societal impacts of social media become more prominent, pressure on the big tech platforms to release more data about their users and their content is likely to increase. If those companies respond by increasing the amount of information that researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?

This article was originally published on The Conversation By Ethan Zuckerman and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0).


It’s time to face it: Politicians that propagate Disinformation for the Fossil Fuel Industry are Wrong and Evil, Period

If four years of the Former Guy taught us anything, it’s that we have no time left for evil, soulless greed run amok

Opinion & Analysis

Recent attempts by politicians beholden to the fossil fuel industry in Texas to use the collapse of the energy infrastructure during the recent weather disaster as an opportunity to bash and trash wind and solar energy are an example of an unfortunate, banal, and still common form of pure evil.

The deeper connections, easily seen lurking just beneath the surface, are rich and multilayered.

If this extreme weather disaster is one of many that are linked to climate change, a manifestation of dangers that climate scientists have been warning of for decades, the irony goes beyond just sick.

Wind and solar energy exist as an early and tentative positive step toward somehow stopping, or at least slowing down, the negative man-made climate change repercussions before it is too late.

The real reasons behind the Texas power grid collapse are related to traditional fossil fuel based energy sources and bad management of the energy infrastructure that can be traced back to an arrogant belief that Texas is better off without connections to the national system.

The local political response to this eminently preventable catastrophe was to bash and trash and blame the very technology that, ultimately, is part of a tentative start to actually begin to solve the bigger problem of man-made climate change.

…the time is gone to accept “two sides” to an argument that, by postponing any real solutions, will kill us all.

Just as the fossil fuel and auto industries’ use of disinformation and other tactics for over 50 years to prolong the internal combustion engine’s near monopoly was evil, the anti-sustainable-energy politics in Texas today is a continuation of that effort.

The time is gone to accept “two sides” to an argument that has one side trying, by attempting to postpone any real solutions, to kill us all, in the name of short term greed.

Under unique circumstances lending legitimacy to evil is too costly to condone

Looking at “both sides” of an issue is a practice based on a theory that “reasonable people” can disagree on diametrically opposed views. This idea is often suspended, however, by unreasonable people for their own reasons. That is sometimes called “war”.

Reasonable people, for example those who understand climate science and want to prevent the total destruction of the earth and the extinction of all its inhabitants, are often reluctant, by their very nature as caring individuals, to suspend this idea of “good people on both sides.”

“Now we need to understand that the “silence of one good man” can spell disaster for all good people. Each of us who remained passive as our impending disaster continued might have been the one “good man” who didn’t act, didn’t speak out, didn’t resist…”

Elayne Clift in Salon

Now is a time when huge changes are going to be forced by an external, highly powerful, and dangerous threat to our survival. The changes that are needed involve radically new ways of thinking and acting across many spheres of activity.


New technologies, such as the aforementioned wind turbines and solar collectors, new forms of transportation, new ways of looking at other causes of, and remedies to, the excessive expulsion of carbon into the atmosphere will be absolutely required.

The truth is that for these new ways of thinking and acting to take over in human commerce the old ways must be cancelled. With extreme prejudice.

The past and those that want to go back to it are a lost cause, unfortunately

Many, many “rich” people will be unhappy about this. And they will have politicians in their pockets who will gladly spread lies and disinformation to try to sustain the sick, evil gravy train of polluting, carbon-spewing systems as long as possible.

Sick and evil not because burning fossil fuels, and using them for a million different things, failed to benefit humanity in the short term, but because the short term is over.

The various arguments that somehow it is a good idea not to change and for the changes to slow down and not step on any toes as they gradually become “viable” have zero validity as of today (really as of 25 years ago but that’s water under the bridge).

Eventually the climate itself will kill them for their mistakes. Unfortunately it will also kill the rest of us if we allow them to continue to postpone positive change with lies and disinformation.

– D.L.

There must be an understanding among “reasonable” people, people who want to be part of an urgent crusade to literally save the world, that such points of view, and the people who espouse them, represent evil, plain and simple.

They will scream that reasonable people are “femi-nazis” and “eco-terrorists” and say and do whatever it takes to protect what’s left of a deadly status quo. But they are wrong.

Eventually the climate itself will kill them for their mistakes. Unfortunately it will also kill the rest of us if we allow them to continue to postpone positive change with lies and disinformation.

“Every one of these people is the banality of evil personified. Every one of them became what Arendt called a “leaf blowing in the whirlwind of time.” Now every one of them bears responsibility for what could lie ahead.”

Elayne Clift in Salon

This change in thinking about how to respond to this kind of evil will be a more important factor in the survival of humanity than all the technological advances combined.

“World War III is a guerrilla information war with no division between military and civilian participation.” – Marshall McLuhan (1970), Culture is Our Business, p. 66.


“Info-wars” were predicted as the battlefield of WWIII by Marshall McLuhan in 1970 and now we are in it and there must be an understanding of what is at stake.

When disinformation is used as a perennial weapon against positive, necessary change it is necessary to do more than disagree. It is necessary to expose the lies and, more importantly, the obvious sick and criminal motives for the lies. Over and over as often as necessary.



How Reliable is Coronavirus Data? Indications of Manipulation, not just in China

Many factors contribute to the haze of confusion surrounding the facts

The coronavirus pandemic is sowing confusion across the globe, not just medically but in the representation of facts versus hopes. Since 1918, most of the world has not experienced anything akin to a global outbreak of this magnitude. In order to navigate this novel landscape, quality, consistent, and factual information is essential. Unfortunately, many journalists are already getting the boot during the shutdown, making trusted reporting less available. And the stifling of facts does not stop there, as now scientific integrity may be in question too.

As harrowing as COVID-19 is, the disease is presenting new opportunities for scientists and medical researchers. Academically speaking, it is an irresistibly hot topic, and any significant contribution to its study could launch a career. Thus, there is a budding competitiveness amongst the scientific community, with many researchers rushing to uncover something (anything) about the coronavirus and get it published.

Read More: Words We Live By, a.k.a How Coronavirus has changed Language

According to a new post on Harvard Law’s Bill of Health website, a recent study from Stanford University epitomizes the chaotic drive for scientific corona-findings right now. The Stanford study is documented in an unpublished paper titled “COVID-19 Antibody Seroprevalence in Santa Clara County, California.” It describes a procedure whereby the scientists tested Santa Clara County residents for antibodies to SARS-CoV-2, the virus that causes COVID-19. The ultimate findings suggest that many more Santa Clara residents had the virus than sought treatment for it.

By extension, the scientists suppose that this conclusion could be true for other parts of the world as well. If it is, it could significantly alter the reported data as well as the global reaction to the virus.

Peers in the scientific community, however, express skepticism toward the Stanford study, citing dire flaws in its methodology. First, Stanford improperly selected its subjects for the tests. Rather than sampling random individuals from the Santa Clara area, the researchers recruited volunteers on Facebook, attracting people more likely to seek out testing in the first place and, therefore, more likely to have symptoms. Using social media also means they probably drew in a younger crowd, a demographic that is less at risk and thus less likely to be hospitalized or to report feeling sick.
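The distortion that this kind of self-selected sample produces can be illustrated with a quick simulation. Every number below is hypothetical, chosen only to show the mechanism, not to reflect the study’s actual figures:

```python
import random

random.seed(42)

# Hypothetical parameters -- illustration only, not the study's real numbers.
TRUE_PREVALENCE = 0.015      # assumed true infection rate in the population
POPULATION = 1_000_000

# Assume infected (often symptomatic) people are five times more likely
# to volunteer for testing than healthy people.
P_VOLUNTEER_INFECTED = 0.010
P_VOLUNTEER_HEALTHY = 0.002

volunteers = infected_volunteers = 0
for _ in range(POPULATION):
    infected = random.random() < TRUE_PREVALENCE
    p = P_VOLUNTEER_INFECTED if infected else P_VOLUNTEER_HEALTHY
    if random.random() < p:
        volunteers += 1
        infected_volunteers += infected

estimate = infected_volunteers / volunteers
print(f"true prevalence:      {TRUE_PREVALENCE:.1%}")
print(f"estimate from sample: {estimate:.1%}")  # several times the true rate
```

Because infected people volunteer at a higher rate, the naive estimate from the volunteer pool lands several times above the true prevalence, even though every individual test result is accurate.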

Critics also note some inconsistencies in the data itself, particularly the section that accounts for the risk of faulty equipment or inaccurate test results. Overall, the results are more than a little suspicious, depicting a possible example of scientists getting excited over this unprecedented natural phenomenon and jumping to conclusions.

Read More: “Wuhan Diary” reveals inside accounts of Coronavirus Lockdown During the Peak

Politics, ratings and money are putting pressure on journalists and scientists alike

Of course, the scientific community meets even greater discrepancies when findings get thrown into the blenders of media and politics. Even when the science remains rightfully impartial, different forces can twist or manipulate data to tell a different story.

“If refusing to mislead the public during a health crisis is insubordination, then I will wear that badge with honor,”

Rebekah Jones in an interview with Chris Cuomo of CNN

In Florida, for example, Department of Health scientist Rebekah Jones recently lost her job for refusing to skew data. In a statement to CBS, she said that the Department wanted her to “manually change data to drum up support for the plan to reopen.”

Jones’ job at the Department was to create Geographic Information Systems (maps) of Florida that topographically represented the spread of COVID-19 across the state. Her work was widely praised, and her departure comes, non-coincidentally, around the same time that Florida Governor Ron DeSantis is trying to reopen stores, restaurants, and barber shops across the Sunshine State.

In a leaked email, Jones warned other Florida Health Department workers to be wary of forthcoming data produced by the state, as it could easily be manipulated to serve corrupt agendas.

Science, by definition, is the objective study of what is. When warped to fit a subjective point of view, though, it becomes something very dangerous—a destructively deceitful force disguised as the truth. Nowadays, truth is an unfortunately delicate term, but it is a necessity to conquer our current circumstance. If we lose science as the impartial study of truth, then we lose the facts, and thus lose our grasp on reality.



Algorithms in Your Life: YouTube Claims it Pulled Bogus Propaganda but Google Algo not Designed for that

A Story that’s Getting Old: Lies and Deception are Flooding All Outlets on the Precipice of the 2020 Election Year

Over the past few months, false videos on YouTube posing as established American news outlets have garnered millions of views. Selling themselves as CNN or Fox News, these fake accounts present inflammatory and fabricated content to their viewers, effectively deceiving the American public by spreading misinformation.

The Google-owned YouTube says it has taken down as many of these videos as it can, but companies such as CNN insist that the website needs to do more to proactively inhibit such activity. After all, the source of the problem is rooted deep within the very fiber that keeps YouTube (and the current monopolized Internet as a whole) running.

What is really going on in the YouTube case is an exploitation of two fundamental aspects of the Internet. Namely, these fake accounts are taking advantage of the web’s free-ranging platform, and they are manipulating the data-based algorithms that keep the Internet efficiently feeding billions in ad revenue to platforms like Google Search, YouTube, Facebook, and Amazon.

The web’s “free” policy refers to the fact that anyone can post anything on the Internet, although “free” in this case is a deceiving concept. Long before the Internet was a global phenomenon, the system was built upon a somewhat libertarian foundation where all users had equal access and unrestricted contribution power to information. The potential fault in this model, however, is that there is little accountability or security. As we are seeing today, with so much unchecked info, lying becomes easy and the line between true and false blurs.

Algorithms are the Gatekeepers, Automation for Advertising Dollars

As for the algorithms, websites like YouTube, Google, Amazon, Facebook, and so on depend on formulas that learn more about you the more you use them. Though the term is gradually becoming more important, it is not yet widely understood: an algorithm is, at bottom, a set of instructions, and on these platforms those instructions are increasingly tuned by artificial intelligence.

The key point is that the companies mentioned above maintain total secrecy about the settings of their algorithms; however, the public results make it clear that in every case the algorithm is programmed to benefit advertisers, and thereby increase profits for the companies.

As per Wikipedia:

“In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. Algorithms are unambiguous specifications for performing calculation, data processing, automated reasoning, and other tasks.”

This is how YouTube recommends videos for you, Facebook shows you suggested posts, Amazon advertises things that fit your taste, and Google can anticipate your searches before you even type anything in. It is based mostly on your previous use: your activity provides data that these tech companies manipulate, own, and sell (which you unwittingly agreed to by clicking the ubiquitous “terms of service” agreement).
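A toy sketch can make this engagement-driven ranking concrete. The real rankers are proprietary and vastly more complex; every title, topic, and number below is invented purely for illustration:

```python
# Hypothetical watch history: topic -> number of videos viewed.
watch_history = {"politics": 12, "music": 3, "gaming": 1}

# Hypothetical candidate videos with an assumed per-view ad value.
candidates = [
    {"title": "Election deep dive", "topic": "politics", "ad_value": 0.8},
    {"title": "Guitar lesson",      "topic": "music",    "ad_value": 0.4},
    {"title": "Speedrun recap",     "topic": "gaming",   "ad_value": 0.2},
]

def score(video):
    # Predicted engagement (prior views of the same topic) weighted by ad
    # revenue: the ranking optimizes the platform's income, not accuracy.
    affinity = watch_history.get(video["topic"], 0)
    return affinity * video["ad_value"]

ranked = sorted(candidates, key=score, reverse=True)
print([v["title"] for v in ranked])
# -> ['Election deep dive', 'Guitar lesson', 'Speedrun recap']
```

Even in this tiny sketch, the feedback loop is visible: whatever you already watch most, weighted by what pays the platform best, floats to the top, regardless of whether it is true.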

However, as the access to Internet platforms, and therefore the ability to interact with others, has become a virtual monopoly controlled by the platforms, the ethics surrounding data rights and algorithms have become less clear.

Most Internet users have allowed access to their personal information in some way or another. Through “free” email accounts, social media, messages, pictures, purchases, and so on, your entire identity is encoded somewhere in the cybernetic ether, and you have little control over it.

The consequences of this go beyond just being offered offensive videos or unsolicited ads. The operators behind the bogus CNN accounts, for example, cleverly played YouTube’s algorithms so you would be redirected to their videos after watching legitimate news stories. Because the majority of people consume news through their computers, fake news and real news have become increasingly difficult to distinguish.

More and More Political Manipulators are Gaming the Algos

Moreover, these misleading accounts are not always coming from Internet trolls. Some are run by malicious enterprises or foreign governments trying to influence geopolitical processes. Such was the now-infamous case of Cambridge Analytica’s interference in the 2016 presidential election.

Cambridge Analytica—a British political consulting firm—marketed for the Trump campaign using people’s Facebook data. At the height of the campaign, the company allegedly consulted with Russian officials to assist in Trump’s eventual election.

Due to the algorithmic control of websites like Facebook, once Cambridge Analytica had information on a single user, it was able to acquire information on every person that that single user ever interacted with online. Via just a handful of connections, the company was able to quickly collect data on nearly the entire nation. Thus, even if you avoided all of Cambridge Analytica’s tricks, you could still be targeted through just a few degrees of separation.
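The mechanics resemble a simple breadth-first expansion over the social graph. A hypothetical miniature version (all names invented) shows how quickly reach compounds from a single consenting user:

```python
from collections import deque

# Tiny made-up friend graph -- real social graphs have billions of edges,
# but the same breadth-first expansion applies.
friends = {
    "seed": ["a", "b"],
    "a":    ["c", "d"],
    "b":    ["d", "e"],
    "c":    ["f"],
    "d":    ["g"],
    "e":    ["h"],
}

def reachable(start, max_hops):
    """Return every user whose data becomes visible within max_hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        user, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for friend in friends.get(user, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hops + 1))
    return seen

# One consenting user exposes the whole miniature network within three hops.
print(len(reachable("seed", 1)))  # 3
print(len(reachable("seed", 3)))  # 9
```

Because each hop multiplies the pool of exposed users, a few degrees of separation are enough to cover almost everyone, which is exactly why consent from one user effectively leaked data about many others.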

There is really no way of knowing who Facebook is sharing your data with or how they are using it. In fact, you don’t even know what your own data is, as most websites bar their users from accessing the very information that they provide. The only way to find out how you are being targeted is by consuming the suggested version of yourself that these tech companies feed back to you.

The situation is certainly eerie on a personal level, but it also transcends the individual to impact phenomena on a far greater scale. With the Trump administration as evidence, Cambridge Analytica’s approach obviously worked in some capacity. Likewise, businesses and organizations can manipulate data to promote their version of the world. Through the unrestricted world of the Internet, powerful users can alter history, conflate truths, and shape the American psyche into thinking whatever they deem real.

Certain sectors of the government have been working to try to fix this problem. Mark Zuckerberg has gone before Congress to answer for Facebook’s place in the Cambridge Analytica case. Likewise, the California Consumer Privacy Act will take effect on January 1st, giving people greater personal data rights in the Golden State.

Don’t expect the companies mentioned above, having a combined market value of more than $3 trillion, to cooperate or rein in this problem voluntarily. This algorithmic dictatorship benefits criminals like those that were behind the Trump election meddling, but most of all the system benefits the platforms themselves, at a level that is mind-bogglingly obscene. This system will change only when they are broken up or gone.

Data is the most profitable resource on the planet (recently topping oil), and it is because of our data inputs that Google and Facebook, among others, remain “free” websites. The real price for online “services” like search and social platforms is very high indeed and users are getting scammed out of more than they may realize.

Ultimately, like in politics and life itself, it is the masses, the users themselves in this case, that can decide if they want an algorithmic dictatorship, or if it is time to sweep away the current dysfunctional system and replace it with one where the price is not so steep.



This Week: Stories from the Climate Crisis, Tech, Tesla, Apple and more

Just in case you missed our recent coverage on the intersections of the Climate Crisis, Tech and Entertainment, we’ve compiled a list of articles for you to check out:

Graphic Collage / Lynxotic

Greta Thunberg: Climate Activist focused on Change now, not hopes for an Uncertain Future

Greta Thunberg is a sixteen-year-old Swedish girl who is rapidly becoming a flash point for those in the movement to raise awareness of the global emergency of global warming and climate change.

Photo / Adobe Stock

The Potential of Self-Driving Cars in Entertainment Media: First Foray

While it might be easy to imagine people in self-driving cars perpetually staring at their smart phones or laptops, there is the possibility that entertainment companies could collaborate with vehicle manufacturers to change the very design of vehicles and make car-riding a transmedia experience.

Photo / Apple

iOS 13 Tips: How to Use and Manage the new Share Menu for iPhone and iPadOS

The share menu can vary from app to app, many use it most often from within Safari or the Mail app, however, for this video, we chose the Apple News app as the operations are essentially the same.

Photo / Global Citizen / Ethan Judelson

Leonardo DiCaprio headlines Global Citizens Festival, continues fight to raise awareness of Climate Crisis

Leonardo DiCaprio has taken several stands against climate change over the years. The actor spearheaded the issue in his 2016 documentary “Before The Flood” and even used his long-awaited Oscar acceptance speech to talk about the importance of preserving our natural world. Evidently, the man is a passionate environmentalist.

Graphic Collage / Lynxotic

Tesla and Elon Musk are Smiling: Gas Pumps Out, Charging Stations In

The news here, however is that these are stations that have decided to abandon gas, oil and, presumably, gasoline-based auto maintenance for EV charging and convenience. This is a trend that, hopefully, will accelerate.

Photo / Magnolia Pictures

‘Scandalous’: National Enquirer sets the Standard for Questionable News Coverage

If one even notices the title of the film printed in smaller letters in the enormous tagline’s shadow, one might expect that “Scandalous” is a movie about conspiracy theories or some great national collusion that ties all of these pop-culture headlines together in some absurd way. However, beneath the title on the poster, seemingly hidden, is the film’s subtitle. It reads “The Untold Story Of The National Enquirer.”

Photo / Disney

5 New Trailers just Released: Check out the future fare from Sony, Disney and more

This week had a gaggle of new trailers hitting the street so we decided to choose five to showcase and feature in this post.

Photo / Warner Bros.

Eight Movies Out Now you might have missed

Just in case you missed our coverage of recent films, out now in theaters, we’ve compiled a graphic tour of a few noteworthy (or at least to be considered) titles among them.



‘Scandalous’: National Enquirer sets the Standard for Questionable News Coverage

https://movietrailers.apple.com/movies/magnolia_pictures/scandalous/scandalous-trailer-1b_h1080p.mov
official trailer for “scandalous”

“Scandalous” Documentary Film Reveals the Corrupt History Behind the National Enquirer, Entertains with a point about Fake News

The promotional poster for Magnolia Pictures and Mark Landsman’s new documentary shows off in giant bold letters the alluring tagline, “Sex, Drugs, and UFOs.” Billowing around the words are a bunch of newspaper front pages, each with an infamous headline such as “Flying Saucers Are Real,” “I Saw O.J. At The Murder Scene,” or “Elvis: The Untold Story.” 

If one even notices the title of the film printed in smaller letters in the enormous tagline’s shadow, one might expect that “Scandalous” is a movie about conspiracy theories or some great national collusion that ties all of these pop-culture headlines together in some absurd way. However, beneath the title on the poster, seemingly hidden, is the film’s subtitle. It reads “The Untold Story Of The National Enquirer.”

For sixty years, the National Enquirer has been an American news source reporting on the latest events in pop-culture gossip, catering its articles to the average everyday American who is voyeuristically intrigued by the lives of celebrities and public figures. As Landsman’s documentary shows, however, the National Enquirer toed an unsteady line between information and entertainment, using borderline unethical or illegal reporting techniques to get the full scoop, and then milking that scoop for all it’s worth in order to sell more copies.

Poster Photo / Magnolia Pictures

Living Squarely in a Gray Area and Embracing Ambiguity

Thus, despite the way the film is marketed on the poster, “Scandalous” is not about conspiracy theories, but rather about a single pseudo-news source that changed the game of reporting by promoting stories that were overblown and exaggerated for the American public.

It is actually a strangely relevant topic in today’s world. Obviously, the National Enquirer still exists—James Cohen of Hudson News recently purchased the company—and it probably still partakes in some of the ethical ambiguities covered in the film. On a larger scale, though, today’s political debates regarding fake news give “Scandalous” a timely twist. Did the National Enquirer ever explicitly produce fake news in their articles? Perhaps not. But did they ever overstate certain details and indulge in stories for the sake of gaining readers’ attention? Most certainly. Then again, what newspaper hasn’t?

There is somewhat of a paradox here, for when the National Enquirer bends the rules in order to get a story, it comes off as an egregious affront. At the same time, though, when a more esteemed news source such as The New York Times or the Washington Post goes undercover to retrieve information, they are usually applauded for exercising freedom of the press. Sometimes Steven Spielberg even commends them with an Oscar nominated movie starring Tom Hanks and Meryl Streep.

Perhaps it is the fact that the National Enquirer is not usually publishing stories that are pertinent to the American people’s safety or enlightenment. Exercising freedom of the press may be admired when it is for investigating an issue of national importance, but not so much when it is investigating a celebrity couple’s latest fight. Then, it just comes off as a paparazzi-like invasion of privacy.

Photo / Magnolia Pictures

For a cinephile, it is also hard to watch a film like “Scandalous” and not wonder where the documentary itself falls on that line between information and entertainment. Documentaries, existing somewhere between feature films and news reports, are neither entirely fictional nor restricted to objectivity. Typically, they are didactic in some way, but also artistic and meant to please the audience to a certain degree. While we are watching “Scandalous” criticize the National Enquirer’s techniques and rhetoric, we may find ourselves questioning what kinds of stylistic choices or intentional omissions Mark Landsman made when curating the film.

The National Enquirer’s history is not all black and white. In their questionable form of journalism, they actually ended up uncovering and reporting on some pertinent information over the years. Do these occasional revelations really justify the source’s tactics? On the other hand, though, do they really need to justify themselves? After all, they do claim to be a newspaper.

“Scandalous” might not be the fake news story that we were expecting right now, and despite the criticism it offers, it may not be entirely innocent or objective in its own right. Nevertheless, it is subtly timely. Enough so that we just might learn something pertinent about journalism, history, and ethics along the way. Or we might just choose to enjoy it as an interesting exposé about a fascinating news source that reported on some of the biggest stories in pop-culture across the second half of the twentieth century. When it comes to watching a documentary film, the choice is up to the viewer.

