Tag Archives: facebook

‘Declaration for the Future of the Internet’ Launched to Promote Open Web for All

The United States, the European Union, and dozens of other countries on Thursday launched a global Declaration for the Future of the Internet, vowing online protection of human rights, respect for net neutrality, and no government-imposed shutdowns. The pledge was applauded by progressive advocates for a more open and democratic web.


“Today, for the first time, like-minded countries from all over the world are setting out a shared vision for the future of the internet, to make sure that the values we hold true offline are also protected online, to make the internet a safe place and trusted space for everyone, and to ensure that the internet serves our individual freedom,” European Commission President Ursula von der Leyen said in a statement.

“Because the future of the internet,” she said, “is also the future of democracy, of humankind.”

The unveiling of the three-page document came months after President Joe Biden's Summit for Democracy, at which his administration was reportedly mulling the launch of an Alliance for the Future of the Internet. It also comes amid swelling scrutiny of the power of Big Tech corporations and continued attacks on online access imposed by authoritarian regimes.

The nonbinding declaration references a rise in "the spread of disinformation and cybercrimes," user privacy concerns as vast troves of personal data are collected online, and platforms that "have enabled an increase in the spread of illegal or harmful content."

It further promotes the internet operating “as a single, decentralized network of networks—with global reach and governed through the multistakeholder approach, whereby governments and relevant authorities partner with academics, civil society, the private sector, technical community and others.”

Signed by over 55 nations—including all the E.U. member states, the U.K., and Ukraine—the document states in part:

We affirm our commitment to promote and sustain an internet that: is open, free, global, interoperable, reliable, and secure and to ensure that the internet reinforces democratic principles and human rights and fundamental freedoms; offers opportunities for collaborative research and commerce; is developed, governed, and deployed in an inclusive way so that unserved and underserved communities, particularly those coming online for the first time, can navigate it safely and with personal data privacy and protections in place; and is governed by multistakeholder processes. In short, an internet that can deliver on the promise of connecting humankind and helping societies and democracies to thrive.

The declaration won plaudits from U.S.-based digital rights group Free Press, whose co-CEO Craig Aaron said it “points to a vision of the internet that puts people first” and that, “if acted upon… would ensure that people everywhere can connect, communicate, organize, and create new and amazing things that will benefit the entire world—not entrench the power of unaccountable billionaires and oligarchs.”

“We’re encouraged by the declaration’s strong statements of support for net neutrality, affordable and inclusive internet access, and data-privacy protections, and its decisive stance against the spread of hate and disinformation,” he added.

Aaron called on the U.S. to “take the necessary steps to live up to these ideals—protecting the free flow of information online, safeguarding our privacy, ending unlawful surveillance, and making broadband affordable and available to everyone.”

The Center for Democracy & Technology also welcomed the declaration, describing it in a Twitter thread as “an important commitment by nations around the world to uphold human rights online and off, advance democratic ideals, and promote an open Internet.”

While it “hit on the right priorities” including protection of personal data privacy and a commitment to a multistakeholder internet governance process, the group called on each signatory to “review their own laws and policies against admirable standards articulated in the Declaration.”

“For the Declaration to have any persuasive power,” said the group, “the U.S. and other nations need to get their own houses in order.”

Jennifer Brody, U.S. advocacy manager at Access Now, also greeted the document with a tepid welcome.

“Of course we support calls in the declaration, like refraining from shutting down the internet and reinvigorating an inclusive approach to internet governance, but we have seen so many global principles and statements come and go without meaningful progress,” she said. “The burden is on the Biden administration and allies to do more than talk the talk.”

Originally published on Common Dreams and republished under a Creative Commons license (CC BY-NC-ND 3.0).


Social Media Giants’ Climate Misinformation Policies Leave Users ‘In the Dark’: Report

“Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception,” reads the report.

Weeks after the Intergovernmental Panel on Climate Change identified disinformation as a key driver of the planetary crisis, three advocacy groups published a report Wednesday ranking social media companies on their efforts to ensure users can get accurate data about the climate on their platforms—and found that major companies like Twitter and Facebook are failing to combat misinformation.

The report, titled In the Dark: How Social Media Companies' Climate Disinformation Problem is Hidden from the Public, and released by Friends of the Earth (FOE), Greenpeace, and the online activist network Avaaz, detailed whether the companies have met 27 different benchmarks to stop the spread of anti-science misinformation and ensure transparency about how inaccurate data is analyzed.

"Despite half of U.S. and U.K. adults getting their news from social media, social media companies have not taken the steps necessary to fight industry-backed deception," reads the report. "In fact, they continue to allow these climate lies to pollute users' feeds."

The groups assessed five major social media platforms—Facebook, Twitter, YouTube, Pinterest, and TikTok—and found that the two best-performing companies, Pinterest and YouTube, scored 14 out of the 27 possible points.

As Common Dreams reported earlier this month, Pinterest has won praise from groups including FOE for establishing “clearly defined guidelines against false or misleading climate change information, including conspiracy theories, across content and ads.”


The company also garnered points in Wednesday's report for being the only major social media platform to specify the average time or number of views it allows for a piece of scientifically inaccurate content before taking action to combat the misinformation, and for including "omission or cherry-picking" of data in its definition of mis- or disinformation.

Pinterest and YouTube were the only companies that won points for consulting with climate scientists to develop a climate mis- and disinformation policy.

The top-performing companies, however, joined the other firms in failing to articulate exactly how their misinformation policy is enforced and to detail how climate misinformation is prioritized for fact-checking.

“Social media companies are largely leaving the public in the dark about their efforts to combat the problem,” the report reads. “There is a gross lack of transparency, as these companies conceal much of the data about the prevalence of digital climate dis/misinformation and any internal measures taken to address its spread.”

Twitter was the worst-performing company, meeting only five of the 27 criteria.

“Twitter is not clear about how content is verified as dis/misinformation, nor explicit about engaging with climate experts to review dis/misinformation policies or flagged content,” reads the report. “Twitter’s total lack of reference to climate dis/misinformation, both in their policies and throughout their enforcement reports, earned them no points in either category.”

TikTok scored seven points, while Facebook garnered nine.

The report, using criteria developed by the Climate Disinformation Coalition, was released three weeks after NPR reported that inaccurate information about renewable energy sources has been disseminated widely in Facebook groups, and the spread has been linked to slowing progress on or shutting down local projects.

In rural Ohio, posts in two anti-wind power Facebook groups spread misinformation about wind turbines causing birth defects in horses, failing to reduce carbon emissions, and causing so-called “wind turbine syndrome” from low-frequency sounds—a supposed ailment that is not backed by scientific evidence. The posts increased “perceptions of human health and public safety risks related to wind” power, according to a study published last October in the journal Energy Research & Social Science.

As those false perceptions spread through the local community, NPR reported, the Ohio Power Siting Board rejected a wind farm proposal "citing geological concerns and the local opposition."

Misinformation on social media “can really slow down the clean energy transition, and that has just as dire life and death consequences, not just in terms of climate change, but also in terms of air pollution, which overwhelmingly hits communities of color,” University of California, Santa Barbara professor Leah Stokes told NPR.

As the IPCC noted in its February report, "rhetoric and misinformation on climate change and the deliberate undermining of science have contributed to misperceptions of the scientific consensus, uncertainty, disregarded risk and urgency, and dissent."

Wednesday’s report called on all social media companies to:

  • Establish, disclose, and enforce policies to reduce climate change dis- and misinformation;
  • Release in full the company’s current labeling, fact-checking, policy review, and algorithmic ranking systems related to climate change disinformation policies;
  • Disclose weekly reports on the scale and prevalence of climate change dis- and misinformation on the platform and mitigation efforts taken internally; and
  • Adopt privacy and data protection policies to protect individuals and communities who may be climate dis/misinformation targets.

“One of the key objectives of this report is to allow for fact-based deliberation, discussion, and debate to flourish in an information ecosystem that is healthy and fair, and that allows both citizens and policymakers to make decisions based on the best available data,” reads the report.

“We see a clear boundary between freedom of speech and freedom of reach,” it continues, “and believe that transparency on climate dis/misinformation and accountability for the actors who spread it is a precondition for a robust and constructive debate on climate change and the response to the climate crisis.”

Originally published on Common Dreams by JULIA CONLEY and republished under a Creative Commons license (CC BY-NC-ND 3.0).



Congressional Chair Asks Google and Apple to Help Stop Fraud Against U.S. Taxpayers on Telegram

Above: Photo Collage / Lynxotic / Apple / Telegram

The chairman of a congressional subcommittee has asked Apple and Google to help stop fraud against U.S. taxpayers on Telegram, a fast-growing messaging service distributed via their smartphone app stores. The request from the head of the House Select Subcommittee on the Coronavirus Crisis came after ProPublica reports last July and in January revealed how cybercriminals were using Telegram to sell and trade stolen identities and methods for filing fake unemployment insurance claims.

Rep. James E. Clyburn, D-S.C., who chairs the subcommittee (which is part of the House Committee on Oversight and Reform), cited ProPublica’s reporting in March 23 letters to the CEOs of Apple and Alphabet, Google’s parent company. The letters pointed out that enabling fraud against American taxpayers is inconsistent with Apple’s and Google’s policies for their respective app stores, which forbid apps that facilitate or promote illegal activities.

“There is substantial evidence that Telegram has not complied with these requirements by allowing its application to be used as a central platform for the facilitation of fraud against vital pandemic relief programs,” Clyburn wrote. He asked whether Apple and Alphabet “may be able to play a constructive role in combating this Telegram-facilitated fraud against the American public.”

Clyburn also requested that Apple and Google provide “all communications” between the companies and Telegram “related to fraud or other unlawful conduct on the Telegram platform, including fraud against pandemic relief programs” as well as what “policies and practices” the companies have implemented to monitor whether applications disseminated through their app stores are being used to “facilitate fraud” and “disseminate coronavirus misinformation.” He gave the companies until April 7 to provide the records.

Apple, which runs the iOS app store for its iPhones, did not reply to a request for comment. Google, which runs the Google Play app store for its Android devices, also did not respond.

The two companies' app stores are vital distribution channels for messaging services such as Telegram, which markets itself as one of the world's 10 most downloaded apps. The company has previously acknowledged the importance of complying with Apple's and Google's app store policies. "Telegram — like all mobile apps — has to follow rules set by Apple and Google in order to remain available to users on iOS and Android," Telegram CEO Pavel Durov wrote in a September blog post. He noted that, should Apple's and Google's app stores stop supporting Telegram in a given locale, the move would prevent software updates to the messaging service and ultimately neuter it.

By appealing to the two smartphone makers directly, Clyburn is increasing pressure on Telegram to take his concerns seriously. His letter noted that “Telegram’s very brief terms of service only prohibit users from ‘scam[ming]’ other Telegram users, appearing to permit the use of the platform to conspire to commit fraud against others.” He faulted Telegram for letting its users disseminate playbooks for defrauding state unemployment insurance systems on its platform and said its failure to stop that activity may have enabled large-scale fraud.

Clyburn wrote to Durov in December asking whether Telegram has “undertaken any serious efforts to prevent its platform from being used to enable large-scale fraud” against pandemic relief programs. Telegram “refused to engage” with the subcommittee, a spokesperson for Clyburn told ProPublica in January. (Since then, the app was briefly banned in Brazil for failing to respond to judicial orders to freeze accounts spreading disinformation. Brazil’s Supreme Court reversed the ban after Telegram finally responded to the requests.)

Telegram said in a statement to ProPublica that it’s working to expand its terms of service and moderation efforts to “explicitly restrict and more effectively combat” misuse of its messaging platform, “such as encouraging fraud.” Telegram also said that it has always “actively moderated harmful content” and banned millions of chats and accounts for violating its terms of service, which prohibit users from scamming each other, promoting violence or posting illegal pornographic content.

But ProPublica found that the company’s moderation efforts can amount to little more than a game of whack-a-mole. After a ProPublica inquiry last July, Telegram shut some public channels on its app in which users advertised methods for filing fake unemployment insurance claims using stolen identities. But various fraud tutorials are still openly advertised on the platform. Accounts that sell stolen identities can also pop back up after they’re shut down; the users behind them simply recycle their old account names with a small variation and are back in business within days.

The limited interventions are a reflection of Telegram’s hands-off approach to policing content on its messenger app, which is central to its business model. Durov asserted in his September blog post that “Telegram gives its users more freedom of speech than any other popular mobile application.” He reiterated that commitment in March, saying that Telegram users’ “right to privacy is sacred. Now — more than ever.”

The approach has helped Telegram grow and become a crucial communication tool in authoritarian regimes. Russia banned Telegram in 2018 for refusing to hand over encryption keys that would allow authorities to access user data, only to withdraw the ban two years later at least in part because users were able to get around it. More recently, Telegram has been credited as a rare place where Russians can find uncensored news about the invasion of Ukraine.

But the company's iron-clad commitment to privacy also attracts cybercriminals looking to make money. After the COVID-19 pandemic prompted Congress to authorize hundreds of billions of dollars in small-business loans and extra aid to workers who lost their jobs, Telegram lit up with channels offering methods to defraud the programs. The scale of the fraud is not yet known, but it could stretch into the tens if not hundreds of billions of dollars. Its sheer size prompted the Department of Justice to announce, on March 10, the appointment of a chief prosecutor to focus on the most egregious cases of pandemic fraud, including identity theft by criminal syndicates.

Article first published on ProPublica by Cezary Podkul and republished under a Creative Commons License (CC BY-NC-ND 3.0)


Consumer Rights Groups Applaud EU Passage of Law to Rein in Tech Titans

Above: Photo Collage / Lynxotic / Adobe Stock

The new law “will put an end to some of the most harmful practices of Big Tech and narrow the power imbalance between people and online platforms.”

Digital and consumer rights advocates on Friday hailed a landmark European Union law aimed at curbing Big Tech’s monopolistic behavior.


Negotiators from the European Parliament and European Council agreed late Thursday on the language of the Digital Markets Act (DMA), which aims to prevent major tech companies from anti-competitive practices by threatening large fines or possible breakup.

Ursula Pachl, deputy director-general at the European Consumer Organization (BEUC), an umbrella advocacy group, said in a statement that “this is a big moment for consumers and businesses who have suffered from Big Tech’s harmful practices.”

“This legislation will rebalance digital markets, increase consumer choice, and put an end to many of the worst practices that Big Tech has engaged in over the years,” she added. “It is a landmark law for the E.U.’s digital transformation.”

Cédric O, the French minister of state with responsibility for digital, said in a statement that “the European Union has had to impose record fines over the past 10 years for certain harmful business practices by very large digital players. The DMA will directly ban these practices and create a fairer and more competitive economic space for new players and European businesses.”

“These rules are key to stimulating and unlocking digital markets, enhancing consumer choice, enabling better value sharing in the digital economy, and boosting innovation,” he added.

Andreas Schwab, a member of the European Parliament from Germany, said that “the Digital Markets Act puts an end to the ever-increasing dominance of Big Tech companies. From now on, Big Tech companies must show that they also allow for fair competition on the internet. The new rules will help enforce that basic principle.”

BEUC’s Pachl offered examples of the new law’s benefits:

Google must stop promoting its own local, travel, or job services over those of competitors in Google Search results, while Apple will be unable to force users to use its payment service for app purchases. Consumers will also be able to collectively enforce their rights if a company breaks the rules in the Digital Markets Act.

Companies are also barred from pre-installing certain software and reusing certain private data collected “during a service for the purposes of another service.”

The DMA applies to companies deemed both “platforms” and “gatekeepers”—those with market capitalization greater than €75 billion ($82.4 billion), 45 million or more monthly end-users, and at least 10,000 E.U. business users. Companies that violate the law can be fined up to 10% of their total annual worldwide turnover, with repeat offenders subject to a doubling of the penalty.
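To make those criteria concrete, here is a minimal Python sketch of how the thresholds and penalty rules combine. It is based only on the figures summarized above, not on the DMA's full legal text (which contains additional tests and definitions), and the example company is made up:

```python
# Illustrative sketch only: encodes the gatekeeper thresholds and fine
# rules as summarized in this article, not the DMA's full legal text.

from dataclasses import dataclass

@dataclass
class Company:
    name: str
    market_cap_eur: float     # market capitalization in euros
    monthly_end_users: int    # monthly end-users in the EU
    eu_business_users: int    # EU business users

def is_gatekeeper(c: Company) -> bool:
    """True if the company clears all three thresholds reported above:
    €75B market cap, 45 million monthly end-users, 10,000 EU business users."""
    return (c.market_cap_eur > 75e9
            and c.monthly_end_users >= 45_000_000
            and c.eu_business_users >= 10_000)

def max_fine_eur(annual_worldwide_turnover_eur: float,
                 repeat_offender: bool = False) -> float:
    """Fines can reach 10% of total annual worldwide turnover,
    doubled for repeat offenders, per the article's summary."""
    rate = 0.20 if repeat_offender else 0.10
    return rate * annual_worldwide_turnover_eur

# Example with hypothetical numbers:
acme = Company("ExampleCorp", market_cap_eur=90e9,
               monthly_end_users=50_000_000, eu_business_users=12_000)
assert is_gatekeeper(acme)
print(max_fine_eur(100e9))  # up to €10B for a first offense
```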


Diego Naranjo, head of policy at the advocacy group European Digital Rights (EDRi), said in a statement that “the DMA will put an end to some of the most harmful practices of Big Tech and narrow the power imbalance between people and online platforms. If correctly implemented, the new agreement will empower individuals to choose more freely the type of online experience and society we want to build in the digital era.”

To ensure effective implementation, BEUC’s Pachl called on E.U. member states to “now also provide the [European] Commission with the necessary enforcement resources to step in the moment there is foul play.”

EDRi senior policy adviser Jan Penfrat said that while “the DMA is a major step towards limiting the tremendous market power that today’s gatekeeper tech firms have,” policymakers “must now make sure that the new obligations not to reuse personal data and the prohibition of using sensitive data for surveillance advertising are respected and properly enforced by the European Commission.”

“Only then will the change be felt by people who depend on digital services every day,” he added.

Originally published on Common Dreams by BRETT WILKINS and republished under Creative Commons (CC BY-NC-ND 3.0).


The Hidden Link between Corporate Greed and Inflation: Video by Robert Reich

Not new, perhaps, but getting worse by the day

In a new video, Robert Reich, former secretary of labor and accomplished author, takes on the root of the phenomena we are all experiencing on a daily basis: incredibly high gas prices, crazy energy prices, more out-of-pocket at the grocery store, and what sure looks like price gouging and price hikes on almost everything. In other words: inflation.

Naturally, with all of this being so obvious to you and me, there's no shortage of folks to explain the purported causes, from media outlets like The Washington Post to Biden administration officials and pundits from left, right, and center.

One explanation you will seldom hear, however, is that much of the pain we are experiencing is due to monopoly power: the inequality growing out of the economic concentration of the American economy, and the ever-increasing concentration of financial and market power in a relative handful of big corporations.

This perspective is not only refreshingly direct, but it actually has a remedy attached, unlike the usual reasons given, such as economic policy, government spending, or irresponsible actions by the federal government and the Federal Reserve. While all of these are certainly good candidates for finger-pointing, they generally come with only one suggested remedy: higher interest rates.

“How can this structural problem be fixed? Fighting corporate concentration with more aggressive antitrust enforcement. Biden has asked the Federal Trade Commission to investigate oil companies, and he’s appointed experienced antitrust lawyers to both the FTC and the Justice Department.”

– Robert Reich

The idea that corporate greed, in the form of massive corporate profits that keep rising in spite of supply chain disruptions and other issues, could be at the root of the problem, and that aggressive use of antitrust law might be an appropriate response to the deeper structural issue, is spot on.

A real change via antitrust might help reinstate tough competition, weed out greedy businesses, and even slow the increasing consolidation of the economy. The concept comes across as a welcome revelation, or at least beats a job- and economy-crushing series of Paul Volcker-style (huge) interest rate hikes.

There's an even bigger challenge on the horizon, however: the sheer size of the biggest tech firms, which make the companies mentioned in the video (Coke, Pepsi, Procter & Gamble, meat conglomerates, and the pharmaceutical industry) seem tiny by comparison. As the Wall Street Journal has noted, behemoths such as Facebook, Amazon, and Microsoft surged during the pandemic.

This is evidence of even less competition than in the sectors mentioned and presented in the video; and yes, the energy sector, consumer goods, and food prices all show little competition, a situation that is getting worse.

In a recent New York Times article, "Economists Pin More Blame on Tech for Rising Inequality," the author, Steve Lohr, argues that, above and beyond the horrors outlined in "The Hidden Link Between Corporate Greed and Inflation," there's an automation factor at work, concentrating the already ludicrous levels of unending power faster and more efficiently. Great.

At least we have Mark Zuckerberg, from a recent YouTube interview with Lex Fridman, with his sunny personality shining through, saying, "what if playing with your friends is the point [of life]?" and further, "I think over time, as we get more technology, the physical world is becoming less of a percent of the real world, and I think that opens up a lot of opportunities for people, because you can work in different places and stay closer to people who are in different places, removing barriers of geography." At least, then, there's that. Thanks, Mark.

The video text also reads well on the page. The charts, graphics, and charismatic voice of Robert Reich make the video worth watching, but here is the full text, in case you prefer to read:

Inflation! Inflation! Everyone’s talking about it, but ignoring one of its biggest causes: corporate concentration.

Now, prices are undeniably rising. In response, the Fed is about to slow the economy — even though we’re still at least 4 million jobs short of where we were before the pandemic, and millions of American workers won’t get the raises they deserve. Republicans haven’t wasted any time hammering Biden and Democratic lawmakers about inflation. Don’t fall for their fear mongering.

Everybody’s ignoring the deeper structural reason for price increases: the concentration of the American economy into the hands of a few corporate giants with the power to raise prices.

If the market were actually competitive, corporations would keep their prices as low as possible as they competed for customers. Even if some of their costs increased, they would do everything they could to avoid passing them on to consumers in the form of higher prices, for fear of losing business to competitors.

But that’s the opposite of what we’re seeing. Corporations are raising prices even as they rake in record profits. Corporate profit margins hit record highs last year. You see, these corporations have so much market power they can raise prices with impunity.

So the underlying problem isn’t inflation per se. It’s a lack of competition. Corporations are using the excuse of inflation to raise prices and make fatter profits.

Take the energy sector. Only a few entities have access to the land and pipelines that control the oil and gas powering most of the world. They took a hit during the pandemic as most people stayed home. But they are more than making up for it now, limiting supply and ratcheting up prices.

Or look at consumer goods. In April 2021, Procter & Gamble raised prices on staples like diapers and toilet paper, citing increased costs in raw materials and transportation. But P&G has been making huge profits. After some of its price increases went into effect, it reported an almost 25% profit margin. Looking to buy your diapers elsewhere? Good luck. The market is dominated by P&G and Kimberly-Clark, which—NOT entirely coincidentally—raised its prices at the same time.

Another example: in April 2021, PepsiCo raised prices, blaming higher costs for ingredients, freight, and labor. It then recorded $3 billion in operating profits through September. How did it get away with this without losing customers? Pepsi has only one major competitor, Coca-Cola, which promptly raised its own prices. Coca-Cola recorded $10 billion in revenues in the third quarter of 2021, up 16% from the previous year.

Food prices are soaring, but half of that is from meat, which costs 15% more than last year. There are only four major meat processing companies in America, which are all raising their prices and enjoying record profits. Get the picture?

The underlying problem is not inflation. It’s corporate power. Since the 1980s, when the U.S. government all but abandoned antitrust enforcement, two-thirds of all American industries have become more concentrated. Most are now dominated by a handful of corporations that coordinate prices and production. This is true of: banks, broadband, pharmaceutical companies, airlines, meatpackers, and yes, soda.

Corporations in all these industries could easily absorb higher costs — including long overdue wage increases — without passing them on to consumers in the form of higher prices. But they aren’t. Instead, they’re using their massive profits to line the pockets of major investors and executives — while both consumers and workers get shafted.

How can this structural problem be fixed? Fighting corporate concentration with more aggressive antitrust enforcement. Biden has asked the Federal Trade Commission to investigate oil companies, and he’s appointed experienced antitrust lawyers to both the FTC and the Justice Department.

So don’t fall for Republicans’ fear mongering about inflation. The real culprit here is corporate power.



Is Momentum Shifting Toward a Ban on Behavioral Advertising?

Above: Photo / Adobe Stock

Data-driven personalized ads are the lifeblood of the internet. To a growing number of lawmakers, they’re also nefarious

Earlier this month, the European Parliament passed sweeping new rules aimed at limiting how companies and websites can track people online to target them with advertisements.

Targeted advertising based on people’s online behavior has long been the business model that underwrites the internet. It allows advertisers to use the mass of personal data collected by Meta, Google, and other tech companies as people browse the web to serve ads to users by sorting them into tens of thousands of hyperspecific categories.

But behavioral advertising is also controversial. Critics argue that the practice enables discrimination, potentially offering economic opportunities only to certain groups of people. They also say serving people ads based on what big tech companies assume they're interested in potentially leaves people vulnerable to scams, fraud, and disinformation. Notoriously, the consulting firm Cambridge Analytica used personal data gleaned from Facebook profiles to target certain Americans with pro-Trump messages and certain Britons with pro-Brexit ads.

The 2016 U.S. presidential election and the Brexit vote, according to Jan Penfrat, a senior policy adviser at European digital rights group EDRi, were "wake-up calls" to the European Union to crack down. Lawmakers in the U.S. are also looking into ways to regulate behavioral advertising.

What Will the European Parliament’s New Regulations Do?

There’s been a long back and forth about how much to crack down on targeted advertising in the Digital Services Act (DSA), the EU’s big legislative package aimed at regulating Big Tech.

Everything from a total ban on behavioral advertising to more modest changes around ad transparency has at some point been on the table. 

On Jan. 19, the Parliament approved its final position on the bill. Included is a ban on targeted advertising to minors, a ban on tracking sensitive categories like religion, political affiliation, or sexual orientation, and a requirement for websites to provide “other fair and reasonable options” for access if users opt out of their data being tracked for targeted advertising. 

The bill also includes a ban on so-called "dark patterns": design choices that steer people into decisions they may not have made under normal conditions, such as the endless clicks it takes to opt out of being tracked by cookies on many websites.

That measure is critical, according to Alexandre de Streel, the academic director of the think tank Centre on Regulation in Europe, because of how tech companies responded to the General Data Protection Regulation (GDPR), the EU’s 2016 tech regulation. 

In a study on online advertising for the Parliament’s crucial Committee on the Internal Market and Consumer Protection, de Streel and nearly a dozen other experts documented how “dark patterns” had become a major tool used by websites and platforms to persuade users to provide consent for sharing their data. Their recommendations for the DSA—which included more robust enforcement of the GDPR, stricter rules about obtaining consent, and the dark patterns ban—were included in the final bill.

"We are going in the right direction if we better enforce the GDPR and add these amendments on 'dark patterns,'" de Streel told The Markup.

German member of European Parliament Patrick Breyer joined with more than 20 other MEPs and more than 50 public and private organizations last year to form the Tracking Free Ads Coalition. Though its push for a total ban on targeted advertising failed, the coalition was behind many of the more stringent restrictions. Breyer told The Markup the new rules were “a major achievement.”

“The Parliament stopped short of prohibiting surveillance advertising, but giving people a true choice [of whether to be targeted] is a major step forward, and I think the vast majority of people will use this option,” he said.

The EU will address digital political advertising in a separate bill that could potentially be more stringent around targeting and using personal data.

Despite passing the European Parliament, the DSA is far from settled. Due to the EU’s unique law-making process, the legislation must now be negotiated with the European Commission and the bloc’s 27 countries. The member states, as represented by the European Council, have adopted an official position considerably less aggressive—opting for only improved transparency on targeted advertising—and, according to Breyer, are “traditionally very open to [industry] lobbying.”

Whether the DSA’s wins against targeted advertising survive this process “will depend to a large degree on public pressure,” said Breyer. 

How Has Big Tech Responded?

So far, Big Tech companies have publicly trodden lightly in response to the European push to limit targeted advertising.

In response to The Markup’s request for comment, Google spokesperson Karl Ryan said that Google supports the DSA and that it shares “the goal of MEPs to continue to make the internet safer for everyone….” 

“We will now take some time to analyze the final Parliament text to understand how it could impact us and our different users,” he said. 

Meta did not respond to a request for comment.

But privately, over the last two years, Google, Facebook, Amazon, Apple, and Microsoft have ramped up lobbying efforts in Brussels, spending more than $20 million in 2020.

The advertising industry, meanwhile, has been public in its opposition. In a statement on the recent vote, Interactive Advertising Bureau Europe director of public policy Greg Mroczkowski urged policymakers to reconsider.

“The use of personal data in advertising is already tightly regulated by existing legislation,” Mroczkowski said, apparently referencing the GDPR, which regulates data privacy in the EU generally. He further noted that the new rules “risk undermining” existing law and “the entire ad-supported digital economy.”

On Wednesday, the Belgian Data Protection Authority found IAB Europe—which developed and administered the system for companies to obtain consent for behavioral advertising while complying with GDPR—in violation of that law. In particular, the authority found that the pop-ups that ask for people's consent to process their data as they visit websites failed to meet GDPR's standards for transparency and consent. The pop-ups posed "great risks to the fundamental rights" of Europeans, the ruling said. The authority ordered IAB Europe to delete data collected under its Transparency and Consent Framework and gave it six months to comply.

“This decision is momentous,” Johnny Ryan, a senior fellow at the Irish Council for Civil Liberties, told The Markup. “It means that digital rights are real. And there is a significance for the United States, too, because the IAB has introduced the same consent spam for the CCPA and CPRA [California Consumer Privacy Act and California Privacy Rights Act].”

In a statement to TechCrunch, IAB Europe said it "reject[s] the finding that we are a data controller" in the context of its consent framework and is "considering all options with respect to a legal challenge." Further, it said it is working on an "action plan to be executed within the prescribed six months" to bring it into GDPR compliance.

Google and Meta may be preparing for whichever way the wind is blowing. 

Google is developing a supposedly less-invasive targeted advertising system, which stores general topics of interest in a user’s browser while excluding sensitive categories like race. Meta is testing a protocol to target users without using tracking cookies. 

A handful of European companies, like internet security company Avast, search engine DuckDuckGo (which is a contributor to The Markup), and publisher Axel Springer, see tighter rules around data privacy as a means to push the industry toward contextual ads, or tech that matches ads to a website's content, and thereby break the Google-Meta duopoly over online advertising.

What’s Happening in the U.S.?

On Jan. 18, Reps. Anna Eshoo (D-CA) and Jan Schakowsky (D-IL) and Sen. Cory Booker (D-NJ) introduced legislation to Congress to prohibit advertisers from using personal data to target advertisements—particularly using data about a person’s race, gender, and religion. Exceptions would be made for “broad” location information and contextual advertising. 

“The hoarding of people’s personal data not only abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence,” Booker said in a statement announcing the bill in January.

While there is bipartisan desire to rein in Big Tech, there is no consensus on how to do it. The bill most likely to pass the divided Congress is designed to stop Amazon, Apple, Google, and other tech giants from privileging their own products. Congressional action on targeted advertising does not appear likely.

Still, it is possible the Federal Trade Commission will take action.

Last summer, President Biden issued an executive order directing the FTC to use its rulemaking authority to curtail “unfair data collection and surveillance practices.” In December, the FTC sought public comment for a petition by nonprofit Accountable Tech to develop new data privacy rules.

Meanwhile, many U.S. digital rights activists, such as nonprofit Electronic Frontier Foundation, are hopeful that new rules in Europe will force changes globally, as occurred after the GDPR. “The EU Parliament’s position, if it becomes law, could change the rules of the game for all platforms,” wrote EFF’s international policy director Christopher Schmon.

It’s still early days, but many see the tide turning against targeted advertising. These types of conversations, according to Penfrat at EDRi, were unthinkable a few years ago.

“The fact that a ban on surveillance-based advertising has been brought into the mainstream is a huge success,” he said.

This article was originally published on The Markup By: Harrison Jacobs and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



Why It’s So Hard to Regulate Algorithms

Photo: Adobe

Governments increasingly use algorithms to do everything from assigning benefits to doling out punishment—but attempts to regulate them have been unsuccessful

In 2018, the New York City Council created a task force to study the city’s use of automated decision systems (ADS). The concern: Algorithms, not just in New York but around the country, were increasingly being employed by government agencies to do everything from informing criminal sentencing and detecting unemployment fraud to prioritizing child abuse cases and distributing health benefits. And lawmakers, let alone the people governed by the automated decisions, knew little about how the calculations were being made. 

Rare glimpses into how these algorithms were performing were not comforting: In several states, algorithms used to determine how much help residents will receive from home health aides have automatically cut benefits for thousands. Police departments across the country use the PredPol software to predict where future crimes will occur, but the program disproportionately sends police to Black and Hispanic neighborhoods. And in Michigan, an algorithm designed to detect fraudulent unemployment claims famously improperly flagged thousands of applicants, forcing residents who should have received assistance to lose their homes and file for bankruptcy.

New York City's law was the first in the country aimed at shedding light on how government agencies use artificial intelligence to make decisions about people and policies.

At the time, the creation of the task force was heralded as a “watershed” moment that would usher in a new era of oversight. And indeed, in the four years since, a steady stream of reporting about the harms caused by high-stakes algorithms has prompted lawmakers across the country to introduce nearly 40 bills designed to study or regulate government agencies’ use of ADS, according to The Markup’s review of state legislation. 

The bills range from proposals to create study groups to requiring agencies to audit algorithms for bias before purchasing systems from vendors. But the dozens of reforms proposed have shared a common fate: They have largely either died immediately upon introduction or expired in committees after brief hearings, according to The Markup’s review.

In New York City, that initial working group took two years to make a set of broad, nonbinding recommendations for further research and oversight. One task force member described the endeavor as a “waste.” The group could not even agree on a definition for automated decision systems, and several of its members, at the time and since, have said they did not believe city agencies and officials had bought into the process.

Elsewhere, nearly all proposals to study or regulate algorithms have failed to pass. Bills to create study groups to examine the use of algorithms failed in Massachusetts, New York state, California, Hawaii, and Virginia. Bills requiring audits of algorithms or prohibiting algorithmic discrimination have died in California, Maryland, New Jersey, and Washington state. In several cases—California, New Jersey, Massachusetts, Michigan, and Vermont—ADS oversight or study bills remain pending in the legislature, but their prospects this session are slim, according to sponsors and advocates in those states.

The only state bill to pass so far, Vermont’s, created a task force whose recommendations—to form a permanent AI commission and adopt regulations—have so far been ignored, state representative Brian Cina told The Markup. 

The Markup interviewed lawmakers and lobbyists and reviewed written and oral testimony on dozens of ADS bills to examine why legislatures have failed to regulate these tools.

We found two key through lines: Lawmakers and the public lack fundamental access to information about what algorithms their agencies are using, how they’re designed, and how significantly they influence decisions. In many of the states The Markup examined, lawmakers and activists said state agencies had rebuffed their attempts to gather basic information, such as the names of tools being used.

Meanwhile, Big Tech and government contractors have successfully derailed legislation by arguing that proposals are too broad—in some cases claiming they would prevent public officials from using calculators and spreadsheets—and that requiring agencies to examine whether an ADS system is discriminatory would kill innovation and increase the price of government procurement.

Lawmakers Struggled to Figure Out What Algorithms Were Even in Use

One of the biggest challenges lawmakers have faced when seeking to regulate ADS tools is simply knowing what they are and what they do.

Following its task force’s landmark report, New York City conducted a subsequent survey of city agencies. It resulted in a list of only 16 automated decision systems across nine agencies, which members of the task force told The Markup they suspect is a severe underestimation.

“We don’t actually know where government entities or businesses use these systems, so it’s hard to make [regulations] more concrete,” said Julia Stoyanovich, a New York University computer science professor and task force member.

In 2018, Vermont became the first state to create its own ADS study group. At the conclusion of its work in 2020, the group reported that “there are examples of where state and local governments have used artificial intelligence applications, but in general the Task Force has not identified many of these applications.”

“Just because nothing popped up in a few weeks of testimony doesn’t mean that they don’t exist,” said Cina. “It’s not like we asked every single state agency to look at every single thing they use.”

In February, he introduced a bill that would have required the state to develop basic standards for agency use of ADS systems. It has sat in committee without a hearing since then.

In 2019, the Hawaii Senate passed a resolution requesting that the state convene a task force to study agency use of artificial intelligence systems, but the resolution was nonbinding and no task force convened, according to the Hawaii Legislative Reference Bureau. Legislators tried to pass a binding resolution again the next year, but it failed.

Legislators and advocacy groups who authored ADS bills in California, Maryland, Massachusetts, Michigan, New York, and Washington told The Markup that they have no clear understanding of the extent to which their state agencies use ADS tools. 

Advocacy groups like the Electronic Privacy Information Center (EPIC) that have attempted to survey government agencies regarding their use of ADS systems say they routinely receive incomplete information.

“The results we’re getting are straight-up non-responses or truly pulling teeth about every little thing,” said Ben Winters, who leads EPIC’s AI and Human Rights Project.

In Washington, after an ADS regulation bill failed in 2020, the legislature created a study group tasked with making recommendations for future legislation. The ACLU of Washington proposed that the group should survey state agencies to gather more information about the tools they were using, but the study group rejected the idea, according to public minutes from the group’s meetings.

“We thought it was a simple ask,” said Jennifer Lee, the technology and liberty project manager for the ACLU of Washington. “One of the barriers we kept getting when talking to lawmakers about regulating ADS is they didn’t have an understanding of how prevalent the issue was. They kept asking, ‘What kind of systems are being used across Washington state?’ ”


Lawmakers Say Corporate Influence a Hurdle

Washington's most recent bill has stalled in committee, but an updated version will likely be reintroduced this year now that the study group has completed its final report, said state senator Bob Hasegawa, the bill's sponsor.

The legislation would have required any state agency seeking to implement an ADS system to produce an algorithmic accountability report disclosing the name and purpose of the system, what data it would use, and whether the system had been independently tested for biases, among other requirements.

The bill would also have banned the use of ADS tools that are discriminatory and required that anyone affected by an algorithmic decision be notified and have a right to appeal that decision.

“The big obstacle is corporate influence in our governmental processes,” said Hasegawa. “Washington is a pretty high-tech state and so corporate high tech has a lot of influence in our systems here. That’s where most of the pushback has been coming from because the impacted communities are pretty much unanimous that this needs to be fixed.”

California’s bill, which is similar, is still pending in committee. It encourages, but does not require, vendors seeking to sell ADS tools to government agencies to submit an ADS impact report along with their bid, which would include similar disclosures to those required by Washington’s bill.

It would also require the state’s Department of Technology to post the impact reports for active systems on its website.

Led by the California Chamber of Commerce, 26 industry groups—from big tech representatives like the Internet Association and TechNet to organizations representing banks, insurance companies, and medical device makers—signed on to a letter opposing the bill.

“There are a lot of business interests here, and they have the ears of a lot of legislators,” said Vinhcent Le, legal counsel at the nonprofit Greenlining Institute, who helped author the bill.

Originally, the Greenlining Institute and other supporters sought to regulate ADS in the private sector as well as the public but quickly encountered pushback. 

“When we narrowed it to just government AI systems we thought it would make it easier,” Le said. “The argument [from industry] switched to ‘This is going to cost California taxpayers millions more.’ That cost angle, that innovation angle, that anti-business angle is something that legislators are concerned about.”

The California Chamber of Commerce declined an interview request for this story but provided a copy of the letter signed by dozens of industry groups opposing the bill. The letter states that the bill would “discourage participation in the state procurement process” because the bill encourages vendors to complete an impact assessment for their tools. The letter said the suggestion, which is not a requirement, was too burdensome. The chamber also argued that the bill’s definition of automated decision systems was too broad.

Industry lobbyists have repeatedly criticized legislation in recent years for overly broad definitions of automated decision systems despite the fact that the definitions mirror those used in internationally recognized AI ethics frameworks, regulations in Canada, and proposed regulations in the European Union.

During a committee hearing on Washington’s bill, James McMahan, policy director for the Washington Association of Sheriffs and Police Chiefs, told legislators he believed the bill would apply to “most if not all” of the state crime lab’s operations, including DNA, fingerprint, and firearm analysis.

Internet Association lobbyist Vicki Christophersen, testifying at the same hearing, suggested that the bill would prohibit the use of red light cameras. The Internet Association did not respond to an interview request.

“It’s a funny talking point,” Le said. “We actually had to put in language to say this doesn’t include a calculator or spreadsheet.”

Maryland’s bill, which died in committee, would also have required agencies to produce reports detailing the basic purpose and functions of ADS tools and would have prohibited the use of discriminatory systems.

“We’re not telling you you can’t do it [use ADS],” said Delegate Terri Hill, who sponsored the Maryland bill. “We’re just saying identify what your biases are up front and identify if they’re consistent with the state’s overarching goals and with this purpose.”

The Maryland Tech Council, an industry group representing small and large technology firms in the state, opposed the bill, arguing that the prohibitions against discrimination were premature and would hurt innovation in the state, according to written and oral testimony the group provided.

“The ability to adequately evaluate whether or not there is bias is an emerging area, and we would say that, on behalf of the tech council, putting in place this at this time is jumping ahead of where we are,” Pam Kasemeyer, the council’s lobbyist, said during a March committee hearing on the bill. “It almost stops the desire for companies to continue to try to develop and refine these out of fear that they’re going to be viewed as discriminatory.”

Limited Success in the Private Sector

There have been fewer attempts by state and local legislatures to regulate private companies’ use of ADS systems—such as those The Markup has exposed in the tenant screening and car insurance industries—but in recent years, those measures have been marginally more successful.

The New York City Council passed a bill that would require private companies to conduct bias audits of algorithmic hiring tools before using them. The tools are used by many employers to screen job candidates without the use of a human interviewer.

The legislation, which was enacted in January but does not take effect until 2023, has been panned by some of its early supporters, however, for being too weak.

Illinois also enacted a state law in 2019 that requires private employers to notify job candidates when they’re being evaluated by algorithmic hiring tools. And in 2021, the legislature amended the law to require employers who use such tools to report demographic data about job candidates to a state agency to be analyzed for evidence of biased decisions. 

This year the Colorado legislature also passed a law, which will take effect in 2023, that will create a framework for evaluating insurance underwriting algorithms and ban the use of discriminatory algorithms in the industry. 

This article was originally published on The Markup By: Todd Feathers and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



From metaverse to DAOs, a guide to 2021’s tech buzzwords

  • From ‘metaverse’ to ‘NFT’ – here’s a wrap-up of the key buzzwords that shaped 2021 in the tech industry.
  • These subjects were the talk of the town in 2021, as the tech industry transitions into a new age.
  • A DAO tried to buy a rare copy of the U.S. Constitution, whilst NFTs took the art world by storm.

This year, tech CEOs drew inspiration from a 1990s sci-fi novel, Reddit investors’ lexicon seeped into the mainstream as “diamond hands” and “apes” shook Wall Street, and something called a DAO tried to buy a rare copy of the U.S. Constitution.

If you’re still drawing a blank as 2021 wraps up, here’s a short glossary:

Metaverse

The metaverse broadly refers to shared, immersive digital environments which people can move between and may access via virtual reality or augmented reality headsets or computer screens.

Some tech CEOs are betting it will be the successor to the mobile internet. The term was coined in the dystopian novel “Snow Crash” three decades ago. This year CEOs of tech companies from Microsoft to Match Group have discussed their roles in building the metaverse. In October, Facebook renamed itself Meta to reflect its new metaverse focus.

Web3

Web3 is used to describe a potential next phase of the internet: a decentralized internet run on the record-keeping technology blockchain.

This model, where users would have ownership stakes in platforms and applications, would differ from today’s internet, known as Web2, where a few major tech giants like Facebook and Alphabet’s Google control the platforms.

Social audio

Tech companies waxed lyrical this year about tools for live audio conversations, rushing to release features after the buzzy, once invite-only app Clubhouse saw an initial surge amid COVID-19 lockdowns.

NFT

Non-fungible tokens, which exploded in popularity this year, are a type of digital asset that exists on a blockchain, a record of transactions kept on networked computers.

In March, a work by American artist Beeple sold for nearly $70 million at Christie’s, the first ever sale by a major auction house of art that does not exist in physical form.

Decentralization 

Decentralizing, or the transfer of power and operations from central authorities like companies or governments to the hands of users, emerged as a key theme in the tech industry.

Such shifts could affect everything from how industries and markets are organized to functions like content moderation on platforms. Twitter, for example, is investing in a project to build a decentralized common standard for social networks, dubbed Bluesky.

DAO

A decentralized autonomous organization (DAO) is generally an internet community owned by its members and run on blockchain technology. DAOs use smart contracts, pieces of code that establish the group’s rules and automatically execute decisions.
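
To make the idea concrete, here is a toy sketch of a DAO-style rule. Real DAOs run as smart contracts on a blockchain, often written in languages like Solidity; this Python illustration only mimics the logic of rules that execute automatically, and all member names and thresholds are invented:

```python
# Toy illustration of the DAO idea: a rule that executes automatically
# once enough members vote yes. Not a real smart contract.

class ToyDAO:
    def __init__(self, members, approval_threshold=0.5):
        self.members = set(members)
        self.threshold = approval_threshold

    def propose_and_vote(self, proposal, yes_votes):
        """Execute the proposal automatically once enough members vote yes."""
        share = len(set(yes_votes) & self.members) / len(self.members)
        if share > self.threshold:
            return f"executed: {proposal}"
        return f"rejected: {proposal} ({share:.0%} approval)"

dao = ToyDAO(members={"alice", "bob", "carol", "dan"})
print(dao.propose_and_vote("bid on a rare copy of the U.S. Constitution",
                           yes_votes={"alice", "bob", "carol"}))
# -> executed: bid on a rare copy of the U.S. Constitution
```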

In recent months, crowd-funded crypto-group ConstitutionDAO tried and failed to buy a rare copy of the U.S. Constitution in an auction held by Sotheby’s. 

Stonks

This deliberate misspelling of “stocks,” which originated with an internet meme, made headlines as online traders congregating in forums like Reddit’s WallStreetBets drove up stocks including GameStop and AMC. The lingo of these traders, calling themselves “apes” or praising the “diamond hands” who held positions during big market swings, became mainstream.

GameFi

GameFi is a broad term for the trend of gamers earning cryptocurrency by playing video games, through mechanisms like receiving financial tokens for winning battles in the popular game Axie Infinity.

Altcoin

The term covers all cryptocurrencies aside from Bitcoin, ranging from Ethereum, which aims to be the backbone of a future financial system, to Dogecoin, a digital currency originally created as a joke and popularized by Tesla CEO Elon Musk.

FSD Beta

Tesla released a test version of its upgraded Full Self-Driving (FSD) software – a system of driving-assistance features like automatically changing lanes and making turns – to the wider public this year.

The name of the much-scrutinized software has itself been contentious, with regulators and users saying it misrepresents its capabilities as it still requires driver attention.

Fabs

“Fabs,” short for semiconductor fabrication plants, entered the mainstream lexicon this year as a shortage of chips from fabs was blamed for the global shortage of everything from cars to gadgets.

Net zero

A term, popularized this year by the COP26 U.N. climate talks in Glasgow, for saying a country, company, or product contributes no net greenhouse gas emissions. That’s usually accomplished by cutting emissions, such as fossil fuel use, and balancing any remaining emissions with efforts to soak up carbon, like planting trees. Critics say any emissions are unacceptable.

Originally published on World Economic Forum and republished under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License.


Leaked Facebook Documents Reveal How Company Failed on Election Promise

CEO Mark Zuckerberg had repeatedly promised to stop recommending political groups to users to squelch the spread of misinformation

Leaked internal Facebook documents show that a combination of technical miscommunications and high-level decisions led to one of the social media giant’s biggest broken promises of the 2020 election—that it would stop recommending political groups to users.

The Markup first revealed on Jan. 19 that Facebook was continuing to recommend political groups—including some in which users advocated violence and storming the U.S. Capitol—in spite of multiple promises not to do so, including one made under oath to Congress.

The day the article ran, a Facebook team started investigating the “leakage,” according to documents provided by Frances Haugen to Congress and shared with The Markup, and the problem was escalated to the highest level to be “reviewed by Mark.” Over the course of the next week, Facebook employees identified several causes for the broken promise.

The company, according to work log entries in the leaked documents, was updating its list of designated political groups, which it refers to as civic groups, in real time. But the systems that recommend groups to users were cached on servers and users’ devices and only updated every 24 to 48 hours in some cases. The lag resulted in users receiving recommendations for groups that had recently been designated political, according to the logs.
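
As a rough illustration of how that kind of lag arises, consider the following sketch. It is not Facebook’s code—the TTL, group names, and data structures are invented—but it shows how a recommendation list cached for 24 to 48 hours can keep serving a group that was designated political after the cache was built:

```python
import time

# Hypothetical sketch of the lag described above. The designation list
# updates in real time, but recommendation candidates are cached and
# refreshed only periodically.

CACHE_TTL_SECONDS = 24 * 60 * 60  # cached recommendations live ~24-48 hours

ALL_GROUPS = ["Gardening Tips", "Hypothetical Political Group", "Sourdough Bakers"]
civic_blocklist = set()   # updated in real time as groups are designated
_cached_candidates = None
_cache_built_at = 0.0

def refresh_cache():
    """Rebuild the candidate list, excluding currently designated groups."""
    global _cached_candidates, _cache_built_at
    _cached_candidates = [g for g in ALL_GROUPS if g not in civic_blocklist]
    _cache_built_at = time.time()

def recommend():
    """Serve from the cache; refresh only when the TTL has lapsed."""
    if _cached_candidates is None or time.time() - _cache_built_at > CACHE_TTL_SECONDS:
        refresh_cache()
    return _cached_candidates

refresh_cache()
civic_blocklist.add("Hypothetical Political Group")  # designated after caching
print(recommend())  # stale cache still includes the newly designated group
```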

That technical oversight was compounded by a decision Facebook officials made about how to determine whether or not a particular group was political in nature.

When The Markup examined group recommendations using data from our Citizen Browser project—a paid, nationwide panel of Facebook users who automatically supply us data from their Facebook feeds—we designated groups as political or not based on their names, about pages, rules, and posted content. We found 12 political groups among the top 100 groups most frequently recommended to our panelists. 

Facebook chose to define groups as political in a different way—by looking at the last seven days’ worth of content in a given group.

“Civic filter uses last 7 day content that is created/viewed in the group to determine if the group is civic or not,” according to a summary of the problem written by a Facebook employee working to solve the issue. 

As a result, the company was seeing a “12% churn” in its list of groups designated as political. If a group went seven days without posting content the company’s algorithms deemed political, it would be taken off the blacklist and could once again be recommended to users.
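
A minimal sketch of how such a seven-day rolling window behaves, with invented dates and a hypothetical is_civic helper (the leaked logs describe the filter only at this level of detail):

```python
from datetime import date, timedelta

# Hypothetical sketch of a seven-day "civic filter": a group counts as civic
# only if it had political content in the last 7 days, so a briefly quiet
# political group churns off the blocklist.

def is_civic(political_post_dates, today):
    """True if the group had at least one political post in the past 7 days."""
    window_start = today - timedelta(days=7)
    return any(window_start <= d <= today for d in political_post_dates)

posts = [date(2021, 1, 10), date(2021, 1, 12)]  # a group's political posts
print(is_civic(posts, today=date(2021, 1, 15)))  # True: blocked from recommendations
print(is_civic(posts, today=date(2021, 1, 25)))  # False: recommendable again
```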

Almost 90 percent of the impressions—the number of times a recommendation was seen—on political groups that Facebook tallied while trying to solve the recommendation problem were a result of the day-to-day turnover on the civic group blacklist, according to the documents.

Facebook did not directly respond to questions for this story.

“We learned that some civic groups were recommended to users, and we looked into it,” Facebook spokesperson Leonard Lam wrote in an email to The Markup. “The issue stemmed from the filtering process after designation that allowed some Groups to remain in the recommendation pool and be visible to a small number of people when they should not have been. Since becoming aware of the issue, we worked quickly to update our processes, and we continue this work to improve our designation and filtering processes to make them as accurate and effective as possible.”

Social networking and misinformation researchers say that the company’s decision to classify groups as political based on seven days’ worth of content was always likely to fall short.

“They’re definitely going to be missing signals with that because groups are extremely dynamic,” said Jane Lytvynenko, a research fellow at the Harvard Shorenstein Center’s Technology and Social Change Project. “Looking at the last seven days, rather than groups as a whole and the stated intent of groups, is going to give you different results. It seems like maybe what they were trying to do is not cast too wide of a net with political groups.”

Many of the groups Facebook recommended to Citizen Browser users had overtly political names.

More than 19 percent of Citizen Browser panelists who voted for Donald Trump received recommendations for a group called Candace Owens for POTUS, 2024, for example. While Joe Biden voters were less likely to be nudged toward political groups, some received recommendations for groups like Lincoln Project Americans Protecting Democracy.

The internal Facebook investigation into the political recommendations confirmed these problems. By Jan. 25, six days after The Markup’s original article, a Facebook employee declared that the problem was “mitigated,” although root causes were still under investigation.

On Feb. 10, Facebook blamed the problem on “technical issues” in a letter it sent to U.S. Sen. Ed Markey, who had demanded an explanation.

In the early days after the company’s internal investigation, the issue appeared to have been resolved. Both Citizen Browser and Facebook’s internal data showed that recommendations for political groups had virtually disappeared.

But when The Markup reexamined Facebook’s recommendations in June, we discovered that the platform was once again nudging Citizen Browser users toward political groups, including some in which members explicitly advocated violence.

From February to June, just under one-third of Citizen Browser’s 2,315 panelists received recommendations to join a political group. That included groups with names like Progressive Democrats of Nevada, Michigan Republicans, Liberty lovers for Ted Cruz, and Bernie Sanders for President, 2020.

This article was originally published on The Markup by Todd Feathers and republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0).


Facebook Isn’t Telling You How Popular Right-Wing Content Is on the Platform


Facebook insists that mainstream news sites perform the best on its platform. But by other measures, sensationalist, partisan content reigns

In early November, Facebook published its Q3 Widely Viewed Content Report, the second in a series meant to rebut critics who said that its algorithms were boosting extremist and sensational content. The report declared that, among other things, the most popular informational content on Facebook came from sources like UNICEF, ABC News, or the CDC.

But data collected by The Markup suggests that, on the contrary, sensationalist news or viral content with little original reporting performs just as well as—and often better than—many mainstream sources when it comes to how often it’s seen by platform users.

Data from The Markup’s Citizen Browser project shows that during the period from July 1 to Sept. 30, 2021, outlets like The Daily Wire, The Western Journal, and BuzzFeed’s viral content arm were among the top-viewed domains in our sample. 

Citizen Browser is a national panel of paid Facebook users who automatically share their news feed data with The Markup.

To analyze the websites whose content performs the best on Facebook, we counted the total number of times that links from any domain appeared in our panelists’ news feeds—a metric known as “impressions”—over a three-month period (the same time covered by Facebook’s Q3 Widely Viewed Content Report). Facebook, by contrast, chose a different metric, calculating the “most-viewed” domains by tallying only the number of users who saw links, regardless of whether each user saw a link once or hundreds of times.
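
The difference between the two metrics is easy to see in a toy calculation. The sketch below, with invented users and domains, computes both counts the way the two definitions describe—impressions count every appearance, while reach counts each user at most once per domain:

```python
from collections import Counter

# Invented feed events: one (panelist, domain) pair per link appearance.
feed_events = [
    ("user_a", "newsmax.com"), ("user_a", "newsmax.com"), ("user_a", "newsmax.com"),
    ("user_b", "unicef.org"),
    ("user_c", "newsmax.com"),
]

# Impressions (the metric The Markup used): count every appearance.
impressions = Counter(domain for _user, domain in feed_events)

# Reach (the metric Facebook chose): count each user once per domain,
# however many times they saw it.
reach = Counter(domain for _user, domain in set(feed_events))

print(impressions)  # Counter({'newsmax.com': 4, 'unicef.org': 1})
print(reach)        # Counter({'newsmax.com': 2, 'unicef.org': 1})
```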

By our calculation, the top performing domains were those that surfaced in users’ feeds over and over—including some highly partisan, polarizing sites that effectively bombarded some Facebook users with content. 

These findings chime with recent revelations from Facebook whistleblower Frances Haugen, who has repeatedly said the company has a tendency to cherry-pick statistics to release to the press and the public. 

“They are very good at dancing with data,” Haugen told British lawmakers during a European tour.

When presented with The Markup’s findings and asked whether its own report’s statistics might be misleading or incomplete, Ariana Anthony, a spokesperson for Meta, Facebook’s parent company, said in an emailed statement, “The focus of the Widely Viewed Content Report is to show the content that is seen by the most people on Facebook, not the content that is posted most frequently. That said, we will continue to refine and improve these reports as we engage with academics, civil society groups, and researchers to identify the parts of these reports they find most valuable, which metrics need more context, and how we can best support greater understanding of content distribution on Facebook moving forward.”

Anthony did not directly respond to questions from The Markup on whether the company would release data on the total number of link views or the content that was seen most frequently on the platform.

The Battle Over Data

There are many ways to measure popularity on Facebook, and each tells a different story about the platform and what kind of content its algorithms favor. 

For years, the startup CrowdTangle’s “engagement” metric—essentially measuring a combination of how many likes, comments, and other interactions any domain’s posts garner—has been the most publicly visible way of measuring popularity. Facebook bought CrowdTangle in 2016 and, according to reporting in The New York Times, has since largely tried to downplay data showing that ultra-conservative commentators like The Daily Wire’s Ben Shapiro produce the most engaged-with content on the platform. 

Shortly after the end of the second quarter of this year, Facebook came out with its first transparency report, framed in the introduction as a way to “provide clarity” on “the most-viewed domains, links, Pages and posts on the platform during the quarter.” (More accurately, the Q2 report was the first publicly released transparency report; a Q1 report was suppressed for making the company look bad, The New York Times reported, and released only later, after details emerged.)

For the Q2 and Q3 reports, Facebook turned to a specific metric, known as “reach,” to quantify most-viewed domains. For any given domain, say youtube.com or twitter.com, reach represents the number of unique Facebook accounts that had at least one post containing a link to a tweet or a YouTube video in their news feeds during the quarter. On that basis, Facebook found that those domains, and other mainstream staples like Amazon, Spotify, and TikTok, had wide reach.

When applying this metric, The Markup found similar results in our Citizen Browser data, as detailed in depth in our methodology. But this calculation ignores a reality for a lot of Facebook users: bombardment with content from the same site.

Citizen Browser data shows, for instance, that from July through September of this year, articles from far-right news site Newsmax appeared in the feed of a 58-year-old woman in New Mexico 1,065 times—but under Facebook’s calculation of reach, this would count as one single unit. Similarly, a 37-year-old man in New Hampshire was shown 245 unique links to satirical posts from The Onion, which appeared in his feed more than 500 times—but again, he would have been counted just once by Facebook’s method.

When The Markup instead counted each appearance of a domain on a user’s feed during Q3—e.g., Newsmax as 1,065 instead of 1—we found that polarizing, partisan content jumped in the performance rankings. Indeed, the same trend is true of the domains in Facebook’s Q2 report, for which analysis can be found in our data repository on GitHub.

We found that outlets like The Daily Wire, BuzzFeed’s viral content arm, Fox News, and Yahoo News jumped in the popularity rankings when we used the impressions metric. Most striking, The Western Journal—which, similarly to The Daily Wire, does little original reporting and instead repackages stories to fit with right-wing narratives—improved its ranking by almost 200 places.

“To me these findings raise a number of questions,” said Jane Lytvynenko, senior research fellow at the Harvard Kennedy School Shorenstein Center. 

“Was Facebook’s research genuine, or was it part of an attempt to change the narrative around top 10 lists that were previously put out? It matters a lot whether a person sees a link one time or if they see it 20 times, and to not account for that in a report, to me, is misleading,” Lytvynenko said.

Using a narrow range of data to gauge popularity is suspect, said Alixandra Barasch, associate professor of marketing at NYU’s Stern School of Business.

“It just goes against everything we teach and know about advertising to focus on one [metric] rather than the other,” she said. 

In fact, when it comes to the core business model of selling space to advertisers, Facebook encourages them to consider yet another metric, “frequency”—how many times to show a post to each user on average—when trying to optimize brand messaging.
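
Frequency ties the two earlier counts together: it is simply impressions divided by reach. A short continuation of the sketch above, reusing the same invented figures:

```python
# Frequency = impressions / reach: how many times, on average, each reached
# user saw a domain. Counts reuse the invented numbers from the sketch above.
impressions = {"newsmax.com": 4, "unicef.org": 1}
reach = {"newsmax.com": 2, "unicef.org": 1}
frequency = {d: impressions[d] / reach[d] for d in impressions}
print(frequency)  # {'newsmax.com': 2.0, 'unicef.org': 1.0}
```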

Data from Citizen Browser shows that domains seen with high frequency in the Facebook news feed are mostly news domains, since news websites tend to publish multiple articles over the course of a day or week. But Facebook’s own content report does not take this data into account.

“[This] clarifies the point that what we need is independent access for researchers to check the math,” said Justin Hendrix, co-author of a report on social media and polarization and editor at Tech Policy Press, after reviewing The Markup’s data.

This article was originally published on The Markup by Corin Faife and republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


Facebook has a misinformation problem, and is blocking access to data about how much there is and who is affected

Leaked internal documents suggest Facebook – which recently renamed itself Meta – is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform. 

Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were significantly more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.

As a researcher who studies social and civic media, I believe it’s critically important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter misinformation, and are certain users especially likely to be affected by misinformation? These questions are the denominator problem and the distribution problem.

The COVID-19 misinformation study, “Facebook’s Algorithm: a Major Threat to Public Health”, published by public interest advocacy group Avaaz in August 2020, reported that sources that frequently shared health misinformation — 82 websites and 42 Facebook pages — had an estimated total reach of 3.8 billion views in a year.

At first glance, that’s a stunningly large number. But it’s important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to calculate the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.

Getting some perspective

One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of information from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of information does the average Facebook user encounter in a year? Facebook does not disclose that information.


Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If the 1.93 billion daily active users of Facebook see an average of 10 posts in their daily sessions – a very conservative estimate – the denominator for that 3.8 billion pieces of information per year is 7.044 trillion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means roughly 0.05% of content on Facebook is posts by these suspect Facebook pages. 
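
The arithmetic can be checked in a few lines of Python, using the same figures as above:

```python
# Reproducing the back-of-the-envelope arithmetic above.
daily_active_users = 1.93e9   # Facebook daily active users
posts_seen_per_day = 10       # conservative estimate per user
days_per_year = 365

denominator = daily_active_users * posts_seen_per_day * days_per_year
numerator = 3.8e9             # yearly views of the flagged sources

print(f"{denominator:.3e}")              # 7.044e+12 content views per year
print(f"{numerator / denominator:.3%}")  # ~0.054% of all views
```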

The 3.8 billion views figure encompasses all content published on these pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of a percent.

Is it worrying that there’s enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what’s shared on Facebook is not from the sites Avaaz warns about? Neither. 

Misinformation distribution

In addition to estimating a denominator, it’s also important to consider the distribution of this information. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine or who seek out “alternative health” information more likely to encounter this type of misinformation? 

Another social media study focusing on extremist content on YouTube offers a method for understanding the distribution of misinformation. Using browser data from 915 web users, an Anti-Defamation League team recruited a large, demographically diverse sample of U.S. web users and oversampled two groups: heavy users of YouTube, and individuals who showed strong negative racial or gender biases in a set of questions asked by the investigators. Oversampling means surveying a subset of a population at a higher rate than its share of the population, to better record data about that subset.

The researchers found that 9.2% of participants viewed at least one video from an extremist channel, and 22.1% viewed at least one video from an alternative channel, during the months covered by the study. An important piece of context to note: A small group of people were responsible for most views of these videos. And more than 90% of views of extremist or “alternative” videos were by people who reported a high level of racial or gender resentment on the pre-study survey.

While roughly 1 in 10 people found extremist content on YouTube and 2 in 10 found content from right-wing provocateurs, most people who encountered such content “bounced off” it and went elsewhere. The group that found extremist content and sought more of it were people who presumably had an interest: people with strong racist and sexist attitudes. 

The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube’s algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn’t tell you how many people are consuming it. For that, you need to know the distribution as well.
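
The sketch below illustrates, with invented view logs, how encounter rate and concentration can tell very different stories about the same data: a few heavy viewers can account for most views even when only a small share of a panel sees the content at all.

```python
from collections import Counter

# Invented view logs: one entry per extremist-video view, labeled by viewer.
views = ["u1"] * 40 + ["u2"] * 30 + ["u3"] * 5 + ["u4", "u5", "u6"]
view_counts = Counter(views)
total_views = sum(view_counts.values())          # 78 views in all

# Encounter rate: what fraction of a 100-person panel saw at least one video.
panel_size = 100
encounter_rate = len(view_counts) / panel_size   # 6% of panelists

# Concentration: share of all views from the two heaviest viewers.
top_two_share = sum(n for _u, n in view_counts.most_common(2)) / total_views

print(f"encounter rate: {encounter_rate:.0%}")       # 6%
print(f"top-2 viewers' share: {top_two_share:.0%}")  # 90%
```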

Superspreaders or whack-a-mole?

A widely publicized study from the anti-hate speech advocacy group Center for Countering Digital Hate titled Pandemic Profiteers showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it’s critical to ask about denominators: How many anti-vaccine groups are hosted on Facebook? And what percent of Facebook users encounter the sort of information shared in these groups? 

Without information about denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical misinformation on Facebook as a whole.

These types of studies raise the question, “If researchers can find this content, why can’t the social media platforms identify it and remove it?” The Pandemic Profiteers study, which implies that Facebook could solve 70% of the medical misinformation problem by deleting only a dozen accounts, explicitly advocates for the deplatforming of these dealers of disinformation. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.

Consider Del Bigtree, one of the three most prominent spreaders of vaccination disinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it’s that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It’s not 12 individuals and groups posting health misinformation online – it’s likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. It’s much harder to ban thousands of Facebook users than it is to ban 12 anti-vaccine celebrities.

This is why questions of denominator and distribution are critical to understanding misinformation online. Denominator and distribution allow researchers to ask how common or rare behaviors are online, and who engages in those behaviors. If millions of users are each encountering occasional bits of medical misinformation, warning labels might be an effective intervention. But if medical misinformation is consumed mostly by a smaller group that’s actively seeking out and sharing this content, those warning labels are most likely useless.


Getting the right data

Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform. 

Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but this is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator, either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos hosted to evaluate their quantitative skills.

The leaders of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more convincing if researchers could independently verify that claim.

As the societal impacts of social media become more prominent, pressure on the big tech platforms to release more data about their users and their content is likely to increase. If those companies respond by increasing the amount of information that researchers can access, look very closely: Will they let researchers study the denominator and the distribution of content online? And if not, are they afraid of what researchers will find?

This article was originally published on The Conversation by Ethan Zuckerman and republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND 4.0).


‘Pivotal Moment’ as Facebook Ditches ‘Dangerous’ Facial Recognition System


Digital rights advocates on Tuesday welcomed Facebook’s announcement that it plans to jettison its facial recognition system, which critics contend is dangerous and often inaccurate technology abused by governments and corporations to violate people’s privacy and other rights.

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation (EFF) who last month called facial recognition technology “a special menace to privacy, racial justice, free expression, and information security,” commended the new Facebook policy.

“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” he said. “Corporate use of face surveillance is very dangerous to people’s privacy.”

The social networking giant first introduced facial recognition software in late 2010 as a feature to help users identify and “tag” friends without the need to comb through photos. The company subsequently amassed one of the world’s largest digital photo archives, which was largely compiled through the system. Facebook says over one billion of those photos will be deleted, although the company will keep DeepFace, the advanced algorithm that powers the facial recognition system.

In a blog post, Jerome Pesenti, the vice president of artificial intelligence at Meta—the new name of Facebook’s parent company following a rebranding last week that was widely condemned as a ploy to distract from recent damning whistleblower revelations—described the policy change as “one of the largest shifts in facial recognition usage in the technology’s history.”

“The many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole,” he wrote.

The New York Times reports:

Facial recognition technology, which has advanced in accuracy and power in recent years, has increasingly been the focus of debate because of how it can be misused by governments, law enforcement, and companies. In China, authorities use the capabilities to track and control the Uighurs, a largely Muslim minority. In the United States, law enforcement has turned to the software to aid policing, leading to fears of overreach and mistaken arrests.

Concerns over actual and potential misuse of facial recognition systems have prompted bans on the technology in over a dozen U.S. locales, beginning with San Francisco in 2019 and subsequently proliferating from Portland, Maine to Portland, Oregon.

Caitlin Seeley George, campaign director at Fight for the Future, was among the online privacy campaigners who welcomed Facebook’s move. In a statement, she said that “facial recognition is one of the most dangerous and politically toxic technologies ever created. Even Facebook knows that.”

Seeley George continued:

From misidentifying Black and Brown people (which has already led to wrongful arrests) to making it impossible to move through our lives without being constantly surveilled, we cannot trust governments, law enforcement, or private companies with this kind of invasive surveillance.

“Even as algorithms improve, facial recognition will only be more dangerous,” she argued. “This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft.”

Seeley George says the “only logical action” for lawmakers and companies to take is banning facial recognition.

Amid applause for the company’s announcement, some critics took exception to Facebook’s retention of DeepFace, as well as its consideration of “potential future applications” for facial recognition technology.

Originally published on Common Dreams by Brett Wilkins and republished under a Creative Commons license (CC BY-NC-ND 3.0).


‘Don’t Be Fooled’: Critics of Facebook Say Name Change Can’t Hide Company’s Harm


“Changing their name doesn’t change reality: Facebook is destroying our democracy and is the world’s leading peddler of disinformation and hate.”

Tech ethicists and branding professionals on Thursday said consumers should not be hoodwinked by Facebook’s name change, which numerous observers compared to earlier efforts by tobacco and fossil fuel companies to distract attention from their societal harms.

Facebook co-founder and CEO Mark Zuckerberg announced the Meta rechristening during Facebook Connect, the company’s annual virtual and augmented reality conference, explaining that “we are a company that builds technology to connect people and the metaverse is the next frontier, just like social networking was when we got started.”

“Some of you might be wondering why we’re doing this right now,” he added. “The answer is that I believe that we’re put on this Earth to create. I believe that technology can make our lives better.”

Many critics found Zuckerberg’s explanation unconvincing at best and, at worst, disingenuous.

“Changing their name doesn’t change reality: Facebook is destroying our democracy and is the world’s leading peddler of disinformation and hate,” the watchdog group Real Facebook Oversight Board said in a statement. “Their meaningless name change should not distract from the investigation, regulation, and real, independent oversight needed to hold Facebook accountable.”

Vahid Razavi, founder of the advocacy group Ethics in Tech, told Common Dreams: “Don’t be fooled. Nothing changes here. This is just a publicity stunt hatched by Facebook’s PR department to deflect attention as Zuckerberg squirms” over the negative press from recent whistleblower revelations.

Former Facebook employees turned whistleblowers say the company’s profit-seeking algorithms—and its executives, who know their insidious impacts—are responsible for the mass dissemination of harmful content, including hate speech and political, climate, and Covid-19 misinformation.

Siva Vaidhyanathan, a media studies professor at the University of Virginia and author of the book Antisocial Media, told Time that “the Facebook of today has never been the end game for Zuckerberg.”

“He’s always wanted his company to be the operating system of our lives that can socially engineer how we live and what we know,” Vaidhyanathan continued, adding that the new name is “not going to change his vision for his company—he’s never let anybody on the outside change his mind.”

Zuckerberg, he said, “wants to take the dynamic of algorithmic guidance out of our phones and off of our computers and build that system into our lives and our consciousness, so our eyeglasses become our screens, and our hands become the mouse.”

Some observers compared Facebook’s attempt to rebrand itself to what they called similar efforts by Big Tobacco and fossil fuel corporations.

“It didn’t do anything,” Laurel Sutton, co-founder of the branding agency Catchword, told Time. “People still knew that Altria was Philip Morris and they didn’t rehabilitate their reputation simply because they changed the name.” 

“There’s no name that’s going to rehabilitate the behavior that they’ve displayed so far,” Sutton said of the social media giant. “Maybe put that time and energy into rehabilitating their morals and ethics and business decisions rather than just trying to slap a new name on something.”

Originally published on Common Dreams by Brett Wilkins and republished under a Creative Commons license (CC BY-NC-ND 3.0).


What is the metaverse? 2 media and information experts explain


The metaverse is a network of always-on virtual environments in which many people can interact with one another and digital objects while operating virtual representations – or avatars – of themselves. Think of a combination of immersive virtual reality, a massively multiplayer online role-playing game and the web. 

The metaverse is a concept from science fiction that many people in the technology industry envision as the successor to today’s internet. It’s only a vision at this point, but technology companies like Facebook are aiming to make it the setting for many online activities, including work, play, studying and shopping.

Metaverse is a portmanteau of meta, meaning transcendent, and verse, from universe. Sci-fi novelist Neal Stephenson coined the term in his 1992 novel “Snow Crash” to describe the virtual world in which the protagonist, Hiro Protagonist, socializes, shops and vanquishes real-world enemies through his avatar. The concept predates “Snow Crash” and was popularized as “cyberspace” in William Gibson’s groundbreaking 1984 novel “Neuromancer.”

There are three key aspects of the metaverse: presence, interoperability and standardization. 

Presence is the feeling of actually being in a virtual space, with virtual others. Decades of research have shown that this sense of embodiment improves the quality of online interactions. This sense of presence is achieved through virtual reality technologies such as head-mounted displays.

Interoperability means being able to seamlessly travel between virtual spaces with the same virtual assets, such as avatars and digital items. ReadyPlayerMe allows people to create an avatar that they can use in hundreds of different virtual worlds, including in Zoom meetings through apps like Animaze. Meanwhile, blockchain technologies such as cryptocurrencies and nonfungible tokens facilitate the transfer of digital goods across virtual borders.

Standardization is what enables interoperability of platforms and services across the metaverse. As with all mass-media technologies – from the printing press to texting – common technological standards are essential for widespread adoption. International organizations such as the Open Metaverse Interoperability Group define these standards. 

Why the metaverse matters

If the metaverse does become the successor to the internet, who builds it, and how, is extremely important to the future of the economy and society as a whole. Facebook is aiming to play a leading role in shaping the metaverse, in part by investing heavily in virtual reality. Facebook CEO Mark Zuckerberg explained in an interview his view that the metaverse spans non-immersive platforms like today’s social media as well as immersive 3D media technologies such as virtual reality, and that it will be for work as well as play. Hollywood, meanwhile, has embraced the metaverse in movies like ‘Ready Player One.’

The metaverse might one day resemble the flashy fictional Oasis of Ernest Cline’s “Ready Player One,” but until then you can turn to games like Fortnite and Roblox, virtual reality social media platforms like VRChat and AltspaceVR, and virtual work environments like Immersed for a taste of the immersive and connected metaverse experience. As these siloed spaces converge and become increasingly interoperable, watch for a truly singular metaverse to emerge.

Originally published on The Conversation by Rabindra Ratan and Yiming Lei and republished under a Creative Commons license (CC BY-ND 4.0).


Bloomberg: Facebook Changes Name to Meta in Embrace of Virtual Reality

Facebook Inc. has rebranded itself as Meta, most likely as a means to separate its corporate identity from the social network that has been tied to a myriad of ugly controversies. The name change is meant to highlight the company’s shift to virtual reality and the metaverse.

CEO Zuckerberg spoke at Facebook’s Connect virtual conference and commented on the name change: “From now on, we’re going to be metaverse-first, not Facebook-first.”

The name change does not affect the company’s share data or corporate structure; however, the company will start trading under the new ticker, MVRS, on December 1.

Needless to say, Twitter comments and memes instantly rolled in after the rebrand announcement.


How to Avoid Being Scammed by Fake Job Ads


As ProPublica has reported, cybercriminals are flooding the internet with fake job ads and even bogus company hiring websites whose purpose is to steal your identity and use it to commit fraud. It’s a good reminder that you should vet potential employers as closely as they vet you.


Here are ten tips on how to spot such scams:

1. Beware of abnormally high salaries

One of the ways criminals entice people is by advertising unusually generous pay. If the salary being offered in a job ad is way above what you see in other ads for similar positions, be wary. You can get an idea of average weekly earnings by industry using the Quarterly Census of Employment and Wages or check out salary calculators on websites such as Glassdoor.

2. Don’t accept jobs you didn’t apply for

Sometimes cybercriminals obtain the contact information of people who have submitted their résumés to job-seeking websites and then email them to say they are preapproved for a job. These are bogus messages whose main purpose is to get people to share additional information, which the scammers will use to commit fraud. The emails may also include malware that can infect your computer. Ignore such messages and don’t open any attachments.

3. Be wary of job ads touting the need to verify your identity at the outset

Ads that demand you share your driver’s license or Social Security number as part of an initial application, or very soon after, are a significant red flag. Legitimate employers rarely request such information until much later in the hiring process.

4. Take the text of the job ad and put it in Google

Cybercriminals sometimes reuse the same job ads over and over, posting them on LinkedIn, Facebook and other online platforms with only slight modifications. If you spot an ad that features virtually identical language to that used by various employers all over the country, it could be a scam.

5. Research the identity of the person posting the ad

Cybercriminals are creating fake profiles on LinkedIn and Facebook meant to resemble individuals at real companies who are posting job ads. One clue: a person claiming to work for a company in the U.S. while showing check-ins at locations in other countries. When in doubt, contact the companies directly to ask if they’re actually recruiting for the positions. If they’re not, report the suspect profiles to LinkedIn and Facebook.

6. Check the spelling and domains of company names

When you vet companies, be aware that cybercriminals sometimes steer potential applicants to fake websites they’ve created that mimic the sites of real companies — except that, say, an extra letter has been added to the company’s name. When job applicants can’t spell a company’s name right in a cover letter, recruiters are apt to toss those applications in the trash. Do the same with any companies that seemingly can’t spell their own names.
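
For readers who want to automate that check, here is a minimal, hypothetical heuristic using the Python standard library’s difflib; the 0.85 similarity cutoff is an arbitrary choice, and this is no substitute for verifying a company’s real domain directly.

```python
import difflib

# Hypothetical heuristic for spotting lookalike domains. The cutoff is
# arbitrary; always confirm the company's real domain independently.
def looks_like(candidate: str, official: str, cutoff: float = 0.85) -> bool:
    """Flag domains suspiciously similar to, but not identical to, the real one."""
    ratio = difflib.SequenceMatcher(None, candidate.lower(), official.lower()).ratio()
    return candidate.lower() != official.lower() and ratio >= cutoff

print(looks_like("exammple.com", "example.com"))   # True: one extra letter
print(looks_like("example.com", "example.com"))    # False: exact match
print(looks_like("unrelated.org", "example.com"))  # False: not similar
```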

7. Avoid text-only interviews

The pandemic has made it necessary for many employers to conduct job interviews remotely via services like Zoom. But be cautious of hiring managers who insist on communicating only by email or text or using messaging platforms such as Telegram to conduct interviews. Sooner or later, a real employer will want to see and interact with a recruit, whether through a video call or in person. Cybercriminals typically don’t want you to hear their voices or see their faces, since it raises the chances you’ll realize they’re not who they say they are.

8. Don’t give out your credit card or phone account login

A real employer doesn’t need to know your credit card number, credit score or phone account login to process your job application. Cybercriminals sometimes ask for such information up front to commandeer your phone and finances, often under the pretense of needing to set you up with a company phone plan or purchase equipment you’ll need to do your job (see next item).

9. Don’t buy things on behalf of a potential employer

Beware of companies that, before you’re hired, offer to send you a check to purchase a computer or other equipment. It’s a variation on an old scam that involves criminals asking marks to send their own money to some third party with the promise that they will reimburse the marks. Inevitably, the reimbursement doesn’t come through, and the mark is left holding the bag.

10. If something feels suspicious, investigate — or walk away

If at any point in the job application or interview stage something feels wrong to you, don’t ignore the feeling. Ask yourself if you see any of the warning signs outlined above. Or pause and ask a trusted friend or relative for a reality check.

Originally published on ProPublica by Cezary Podkul and republished under a Creative Commons license (CC BY-NC-ND 3.0).


Profits Before People: ‘The Facebook Papers’ Expose Tech Giant Greed


“This industry is rotten at its core,” said one critic, “and the clearest proof of that is what it’s doing to our children.”

Internal documents dubbed “The Facebook Papers” were published widely Monday by an international consortium of news outlets that jointly obtained the redacted materials recently made available to the U.S. Congress by company whistleblower Frances Haugen.

The papers were shared among 17 U.S. outlets as well as a separate group of news agencies in Europe, with all the journalists involved sharing the same publication date but performing their own reporting based on the documents.

According to the Financial Times, the “thousands of pages of leaked documents paint a damaging picture of a company that has prioritized growth” over other concerns. And the Washington Post concluded that the choices made by founder and CEO Mark Zuckerberg, as detailed in the revelations, “led to disastrous outcomes” for the social media giant and its users.

From an overview of the documents and the reporting project by the Associated Press:

The papers themselves are redacted versions of disclosures that Haugen has made over several months to the Securities and Exchange Commission, alleging Facebook was prioritizing profits over safety and hiding its own research from investors and the public.

These complaints cover a range of topics, from its efforts to continue growing its audience, to how its platforms might harm children, to its alleged role in inciting political violence. The same redacted versions of those filings are being provided to members of Congress as part of its investigation. And that process continues as Haugen’s legal team goes through the process of redacting the SEC filings by removing the names of Facebook users and lower-level employees and turns them over to Congress.

One key revelation highlighted by the Financial Times was that Facebook has been perplexed by its own algorithms; another was that the company “fiddled while the Capitol burned” during the January 6th insurrection staged by loyalists to former President Donald Trump trying to halt the certification of last year’s election.

CNN warned that the totality of what’s contained in the documents “may be the biggest crisis in the company’s history,” but critics have long said that at the heart of the company’s problem is the business model upon which it was built and the mentality that governs it from the top, namely Zuckerberg himself.

On Friday, following reporting based on a second former employee of the company coming forward after Haugen, Free Press Action co-CEO Jessica J. González said “the latest whistleblower revelations confirm what many of us have been sounding the alarm about for years.”

“Facebook is not fit to govern itself,” said González. “The social-media giant is already trying to minimize the value and impact of these whistleblower exposés, including Frances Haugen’s. The information these brave individuals have brought forth is of immense importance to the public and we are grateful that these and other truth-tellers are stepping up.”

While Zuckerberg has testified multiple times before Congress, González said nothing has changed. “It’s time for Congress and the Biden administration to investigate a Facebook business model that profits from spreading the most extreme hate and disinformation,” she said. “It’s time for immediate action to hold the company accountable for the many harms it’s inflicted on our democracy.”

With Haugen set to testify before the U.K. Parliament on Monday, activists in London staged protests against Facebook and Zuckerberg, making clear that the giant social media company should be seen as a global problem.

Flora Rebello Arduini, senior campaigner with a corporate accountability group, was part of a team that erected a large cardboard display of Zuckerberg “surfing a wave of cash” outside of Parliament with a flag that read, “I know we harm kids, but I don’t care”—a rip on a video Zuckerberg posted of himself earlier this year riding a hydrofoil while holding an American flag.

While Zuckerberg refused an invitation to testify in the U.K. about the company’s activities, including the way it manipulates and potentially harms young users on the platform, critics like Arduini said the giant tech company must be held to account.

“Kids don’t stand a chance against the multibillion dollar Facebook machine, primed to feed them content that causes severe harm to mental and physical well being,” she said. “This industry is rotten at its core and the clearest proof of that is what it’s doing to our children. Lawmakers must urgently step in and pull the tech giants into line.”

“Right now, Mark [Zuckerberg] is unaccountable,” Haugen told the Guardian in an interview ahead of her testimony. “He has all the control. He has no oversight, and he has not demonstrated that he is willing to govern the company at the level that is necessary for public safety.”

Correction: This article has been updated to more accurately reflect the context of the comments made by Jessica González of Free Press, who responded to the revelations of a second whistleblower, not those of Frances Haugen.

Originally published on Common Dreams by Jon Queally and republished under a Creative Commons license (CC BY-NC-ND 3.0).


Congress Wants Amazon to Prove Bezos Didn’t Give Perjured Testimony


While still CEO of Amazon, Jeff Bezos testified in Congress by video conference on July 29, 2020. Now, at least five members of a congressional committee allege that he and other executives may have lied under oath and misled lawmakers.

In a press release by the House Judiciary Antitrust Subcommittee, the lawmakers state that they are giving Amazon a “Final Chance to Correct the Record Following a Series of Misleading Testimony and Statements.”

Current Amazon CEO Andy Jassy, who succeeded Bezos in July, is being asked to respond to the discrepancies, including information found by The Markup and published in a recent article.


After Docs ‘Show What We Feared’ About Amazon’s Monopoly Power, Warren Says ‘Break It Up’

Leaked documents reveal the e-commerce company’s private-brands team in India “secretly exploited internal data” to copy products from other sellers and rigged search results.

U.S. Sen. Elizabeth Warren on Wednesday renewed her call to break up Amazon after internal documents obtained by Reuters revealed that the e-commerce giant engaged in anti-competitive behavior in India that it has long denied, including in testimonies from company leaders to Congress.

“These documents show what we feared about Amazon’s monopoly power—that the company is willing and able to rig its platform to benefit its bottom line while stiffing small businesses and entrepreneurs,” tweeted Warren (D-Mass.). “This is one of the many reasons we need to break it up.”

Warren is a vocal advocate of breaking up tech giants, including but not limited to Amazon. The company faces investigations into alleged anti-competitive behavior in the United States as well as in Europe and India, and the investigative report may ramp up such probes.

Aditya Kalra and Steve Stecklow report that “thousands of pages of internal Amazon documents examined by Reuters—including emails, strategy papers, and business plans—show the company ran a systematic campaign of creating knockoffs and manipulating search results to boost its own product lines in India, one of the company’s largest growth markets.”

“The documents reveal how Amazon’s private-brands team in India secretly exploited internal data from Amazon.in to copy products sold by other companies, and then offered them on its platform,” according to the reporters. “The employees also stoked sales of Amazon private-brand products by rigging Amazon’s search results.”

As Reuters notes:

In sworn testimony before the U.S. Congress in 2020, Amazon founder Jeff Bezos explained that the e-commerce giant prohibits its employees from using the data on individual sellers to help its private-label business. And, in 2019, another Amazon executive testified that the company does not use such data to create its own private-label products or alter its search results to favor them.

But the internal documents seen by Reuters show for the first time that, at least in India, manipulating search results to favor Amazon’s own products, as well as copying other sellers’ goods, were part of a formal, clandestine strategy at Amazon—and that high-level executives were told about it. The documents show that two executives reviewed the India strategy—senior vice presidents Diego Piacentini, who has since left the company, and Russell Grandinetti, who currently runs Amazon’s international consumer business.

While neither Piacentini nor Grandinetti responded to Reuters’ requests for comment, Amazon provided a written response that did not address the reporters’ questions.

“As Reuters hasn’t shared the documents or their provenance with us, we are unable to confirm the veracity or otherwise of the information and claims as stated,” Amazon said. “We believe these claims are factually incorrect and unsubstantiated.”

“We display search results based on relevance to the customer’s search query, irrespective of whether such products have private brands offered by sellers or not,” the company said, adding that it “strictly prohibits the use or sharing of nonpublic, seller-specific data for the benefit of any seller, including sellers of private brands.”

Warren was not alone in calling for the breakup of Amazon following the report.

“This is not shocking. But it is appalling,” the American Economic Liberties Project said in a series of tweets. “Independent businesses have sounded the alarm for years—providing evidence that Amazon stole their intellectual property.”

“We said back in 2020 that a perjury referral was in order—and it still is,” the group added, highlighting testimony from Bezos and Nate Sutton, Amazon’s associate general counsel. “But Amazon will remain an anti-business behemoth, flagrantly breaking the law and daring policymakers to stop them.”

Highlighting a report from a trio of its experts, Economic Liberties added that “it’s time to break Amazon up.”

Originally published on Common Dreams by JESSICA CORBETT and republished under a Creative Commons license (CC BY-NC-ND 3.0).

‘System Is Blinking Red’: Experts Condemn Facebook’s Profit-Seeking Algorithms

Above: Photo Collage / Lynxotic

“How many more insurrections have to happen before we hold Facebook to account?” one group asked after whistleblower Frances Haugen said the corporation is unwilling to confront hate speech and disinformation.

Following whistleblower Frances Haugen’s Sunday night allegation that Facebook’s refusal to combat dangerous lies and hateful content on its platforms is driven by profit, social media experts denounced the corporation for embracing a business model that encourages violence and endangers democracy—and urged the federal government to take action.

Haugen, who copied a “trove of private Facebook research” before she resigned from the social media company in May, told CBS’s Scott Pelley during a “60 Minutes” interview that the tech giant took some steps to limit misinformation ahead of the 2020 election because it understood that then-President Donald Trump’s incessant lies about voter fraud posed a serious threat. Many of the safety measures that Facebook implemented, however, were temporary, she added.

“As soon as the election was over,” Haugen said, “they turned them back off or they changed the settings back to what they were before to prioritize growth over safety. And that really feels like a betrayal of democracy to me.”

Facebook officials claim that some of the anti-misinformation systems remained in place, but in the interregnum between Election Day and President Joe Biden’s inauguration, far-right extremists used the social networking site to organize the deadly January 6 coup attempt—something acknowledged by an internal task force’s report on Facebook’s failure to neutralize “Stop the Steal” activity on its platforms.

There is, according to Haugen, a simple explanation for why executives at the company refuse to do more to mitigate harmful social media behavior: “Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money,” she said.

“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen told Pelley. “And Facebook, over and over again, chose to optimize for its own interests, like making more money.”

Haugen—who first revealed her identity on Sunday after having secretly shared internal documents with federal regulators, reported on in the Wall Street Journal’s series, “The Facebook Files”—also said the corporation is lying to the public about how effective it is at curbing hate speech and disinformation, arguing that “Facebook has demonstrated it cannot operate independently.”

In the wake of Haugen’s bombshell interview, social media experts condemned Facebook for prioritizing “profits above all else.”

“Facebook runs on a hate-and-lie-for-profit business model that amplifies all sorts of toxicity on its platforms,” Jessica J. González, co-CEO of Free Press, said Monday in a statement. “Thanks to this brave whistleblower, we now have further proof that Facebook’s executives—all the way up to CEO Mark Zuckerberg and COO Sheryl Sandberg—routinely chose profits over public safety.”

González, co-founder of Ya Basta Facebook and the Change the Terms coalition, added that Facebook executives “designed the company’s algorithms to put engagement, growth, and profits above all else, even allowing lies about the 2020 election results to spread to millions in advance of the white-nationalist assault on the U.S. Capitol.”

Longtime critics of Facebook argued that the “new revelations” about the company demand immediate federal intervention.

“How many more insurrections have to happen before we hold Facebook to account?” the Real Facebook Oversight Board, a coalition of civil rights leaders and academics, asked in a statement released after Haugen’s interview aired. “The system is blinking red, and without real, meaningful, independent, and robust oversight and investigation of Facebook, more lives will be lost.”

“The goal,” added the group, “is no longer to save Facebook—Facebook is beyond hope. The goal now is to save democracy.”

Free Press summarized the Journal’s key findings on Facebook, which we now know stem from internal documents provided by Haugen:

Facebook exempted high-profile users from some or all of its rules; Instagram is harmful to millions of young users; Facebook’s 2018 algorithm change promotes objectionable or harmful content; Facebook’s tools were used to sow doubt about Covid-19 vaccines; and globally, Facebook is used to incite violence against ethnic minorities and facilitat[e] action against political dissent. 

Shireen Mitchell, founder of Stop Online Violence Against Women, praised Haugen for exposing Facebook’s “amplification and use of hate to keep users on the platform engaged.”

Facebook has “weaponized… data in harmful ways against users,” Mitchell continued, and failed to consider the negative effects of “hate-filled rhetoric” even after the Myanmar military used Facebook to launch a genocide in 2018.

González argued that Haugen “turned evidence of this gross negligence over to the government at great personal risk, and now we need the government to respond with decisive action to hold the company responsible for protecting public safety.”

“The government must demand full transparency on how Facebook collects, processes, and shares our data, and enact civil rights and privacy policies to protect the public from Facebook’s toxic business model,” said González.

“Facebook must also act swiftly to remedy the harms it is continuing to inflict on the public at large,” she added. “It must end special protections for powerful politicians, ban white supremacists and dangerous conspiracy theorists, and institute wholesale changes to strengthen content moderation in English and other languages—and we need this all now.”

According to Carole Cadwalladr, a journalist at The Guardian and co-founder of the Real Facebook Oversight Board, “Facebook is a rogue state, lying to regulators, investors, and its own oversight board.”

“What we are seeing today is a market failure with profound, devastating global consequences,” she said. “Executives and board members must be held to account. There is evidence to suggest that their behavior was not just immoral but also criminal.”

Shoshana Zuboff, professor emeritus at Harvard Business School and author of The Age of Surveillance Capitalism, argued that “even as we feel outrage toward Mr. Zuckerberg and his corporation, the cause of this crisis is not a single company, not even one as powerful as Facebook.”

“The cause is the economic institution of surveillance capitalism,” said Zuboff. “The economic logic of these systems, the data operations that feed them, and the markets that support them are not limited to Facebook.”

“The imperatives of surveillance economics determine the engineering of these operations—their products, objectives, and financial incentives—along with those of the other tech empires, their extensive ecosystems, and thousands of companies in diverse sectors far from Silicon Valley,” she continued. “The damage already done is intolerable. The damage that most certainly lies ahead is unthinkable.”

Zuboff added that the only “durable solution to this crisis” is to “undertake the work of interrupting and outlawing the dangerous operations of surveillance capitalism and its predictable social harms that assault human autonomy, splinter society, and undermine democracy.”

Haugen is scheduled to testify on Tuesday at a Senate subcommittee hearing on “Protecting Kids Online.”

Originally published on Common Dreams by KENNY STANCIL and republished under a Creative Commons license (CC BY-NC-ND 3.0).

In Scathing Senate Testimony, Whistleblower Warns Facebook a Threat to Children and Democracy

Above: Photo Collage / Lynxotic

Frances Haugen said the company’s leaders know how to make their platforms safer, “but won’t make the necessary changes because they have put their astronomical profits before people.”

Two days after a bombshell “60 Minutes” interview in which she accused Facebook of knowingly failing to stop the spread of dangerous lies and hateful content, whistleblower Frances Haugen testified Tuesday before U.S. senators, imploring Congress to hold the company and its CEO accountable for the many harms they cause.

Haugen—a former Facebook product manager—told the senators she went to work at the social media giant because she believed in its “potential to bring out the best in us.”

“But I’m here today because I believe Facebook’s products harm children, stoke division, and weaken our democracy,” she said during her opening testimony. “The company’s leadership knows how to make Facebook and Instagram safer, but won’t make the necessary changes because they have put their astronomical profits before people.”

“The documents I have provided to Congress prove that Facebook has repeatedly misled the public about what its own research reveals about the safety of children, the efficacy of its artificial intelligence systems, and its role in spreading divisive and extreme messages,” she continued. “I came forward because I believe that every human being deserves the dignity of truth.”

“I saw Facebook repeatedly encounter conflicts between its own profits and our safety,” Haugen added. “Facebook consistently resolved its conflicts in favor of its own profits.”

“In some cases, this dangerous online talk has led to actual violence that harms and even kills people,” she said.

Addressing Monday’s worldwide Facebook outage, Haugen said that “for more than five hours, Facebook wasn’t used to deepen divides, destabilize democracies, and make young girls and women feel bad about their bodies.”

“It also means that millions of small businesses weren’t able to reach potential customers, and countless photos of new babies weren’t joyously celebrated by family and friends around the world,” she added. “I believe in the potential of Facebook. We can have social media we enjoy that connects us without tearing apart our democracy, putting our children in danger, and sowing ethnic violence around the world. We can do better.”

Doing better will require Congress to act, because Facebook “won’t solve this crisis without your help,” Haugen told the senators, echoing experts and activists who continue to call for breaking up tech giants, banning the surveillance capitalist business model, and protecting rights and democracy online.

She added that “there is nobody currently holding Zuckerberg accountable but himself,” referring to Facebook co-founder and CEO Mark Zuckerberg.

Sen. Richard Blumenthal (D-Conn.)—chair of the Senate Consumer Protection, Product Safety, and Data Security Subcommittee—called on Zuckerberg to testify before the panel.

“Mark Zuckerberg ought to be looking at himself in the mirror today and yet rather than taking responsibility, and showing leadership, Mr. Zuckerberg is going sailing,” he said.

“Big Tech now faces a Big Tobacco, jaw-dropping moment of truth. It is documented proof that Facebook knows its products can be addictive and toxic to children,” Blumenthal continued.

“The damage to self-interest and self-worth inflicted by Facebook today will haunt a generation,” he added. “Feelings of inadequacy and insecurity, rejection, and self-hatred will impact this generation for years to come. Our children are the ones who are victims.”

Originally published on Common Dreams by BRETT WILKINS and republished under a Creative Commons license (CC BY-NC-ND 3.0).
