
The Real Meaning of Web3 is not yet understood

Is there a cookie (pie) big enough for us all?

Burning down a straw man before it’s built is a charade and a sham that predictably looks for dreams to kill. Just take a peek at the various articles predicting the end of web3 before it starts.

Recently there’s been a series of dust-ups and take-down attempts gunning for web3, crypto and anything decentralized or connected to buzzwords like DAOs, DeFi, etc.

As logically sound as these diatribes may appear, in every case there’s a fatal flaw that’s oddly never mentioned: at its essence, web3 is a desire, an aspiration and, above all, a proposed remedy to what’s wrong with all that is, internet-wise, in the present day.

Maybe it’s because the current wave of developers and venture capitalists at the forefront of, supposedly, building a new web are basing the entire enterprise on a build-it-and-they-will-come mentality.

But will they come? In the end it is the crowds that make the concert. Even something as terrible and flawed as Facebook couldn’t be stopped because the crowds, both fake and, later, real-ish, did come.

Regardless of the theoretical merits of an idea or movement, if the masses do not cooperate in creating critical mass for the idea, there will be no coming out party, ever.

This is the true hope that lies beneath. The dream that dare not speak its name is not based on logic, or realistic viability, it is based on a desire that can’t be stopped, a need that does not just die out because of flawed models of centralized, decentralized or any other wannabe structure of interaction.

Web2 is dying before our eyes. Something will replace it. The rumblings from beneath in a Chinese music app named after the sound a clock makes and even from this very medium are that peer to peer power is what will drive web3 into whatever it will become.

Decentralized? The fight over defining web3 is lost in space

Peer to peer power does not rise from the barrel of a gun or even from the blockchain. It comes from the rejection of hierarchical structures that strangle creativity, and more importantly, that have no place for broadly distributed communication and prosperity to flourish.

The zero-sum mindset that Elon Musk calls wrong and that leads to “morally questionable” acts is not built to last and the top-down economics of 1 Zuckerberg per each billion users is dead and dying fast.

Can there be a Robin Hood Parable 3.0: steal from nobody and give to everybody?

The infinite pie theory is the only one that fits and, according to Musk, it’s about the mindset; adopting it fully can take you all the way to the promised land.

The TikTok army of souls knows this and will only respond to a sustainable vision of remuneration that is, if not decentralized, at the very least widely distributed, peer-to-peer and devoid of outdated, vertical, top-heavy crap systems and platforms that do not work for the individual at a broad-based level.

The Sun is always shining, even at night

If a “small section of southeastern Utah” can power the energy needs of the entire USA via solar, at a minimal cost compared to setting fossilized forests ablaze, then why can’t the wealth benefits of that energy be distributed across the population in a more equitable way than Malthus and the zero-sum mafia would have you believe is inevitable?

The answer to that question, beyond the benefits of asking it, is beyond the scope here, but in the case of web3 coming about it is not possible to say that it will rise as nothing more than web2 in sheep’s clothing, as Professor Scott would have you believe.

Because it will take a revolution to change and tear down the mistakes of web2 (and some other outdated baggage along the way), and that revolution is already building in the need and desire of the population that “benefits”, or not, from the current system.

The technology that is abandoned will be the tech that cannot exist in a world where building pyramids of crap for the Pharaohs of Facebook will just not cut it anymore. And web3 already exists, not in structures built to corral and kill its spirit, but in the spirit of, and the need for, a change and a better way to make use of the network.

Check out Lynxotic on YouTube

Enjoy Lynxotic at Google News and Apple News on your iPhone, iPad or Mac.

Find books on Music, Movies & Entertainment and many other topics at Bookshop.org

Lynxotic may receive a small commission based on any purchases made by following links from this page

Why It’s So Hard to Regulate Algorithms


Governments increasingly use algorithms to do everything from assigning benefits to doling out punishment, but attempts to regulate them have been unsuccessful

In 2018, the New York City Council created a task force to study the city’s use of automated decision systems (ADS). The concern: Algorithms, not just in New York but around the country, were increasingly being employed by government agencies to do everything from informing criminal sentencing and detecting unemployment fraud to prioritizing child abuse cases and distributing health benefits. And lawmakers, let alone the people governed by the automated decisions, knew little about how the calculations were being made. 

Rare glimpses into how these algorithms were performing were not comforting: In several states, algorithms used to determine how much help residents will receive from home health aides have automatically cut benefits for thousands. Police departments across the country use the PredPol software to predict where future crimes will occur, but the program disproportionately sends police to Black and Hispanic neighborhoods. And in Michigan, an algorithm designed to detect fraudulent unemployment claims famously improperly flagged thousands of applicants, causing residents who should have received assistance to lose their homes and file for bankruptcy.


New York City’s law was the first legislation in the country aimed at shedding light on how government agencies use artificial intelligence to make decisions about people and policies.

At the time, the creation of the task force was heralded as a “watershed” moment that would usher in a new era of oversight. And indeed, in the four years since, a steady stream of reporting about the harms caused by high-stakes algorithms has prompted lawmakers across the country to introduce nearly 40 bills designed to study or regulate government agencies’ use of ADS, according to The Markup’s review of state legislation. 

The bills range from proposals to create study groups to requiring agencies to audit algorithms for bias before purchasing systems from vendors. But the dozens of reforms proposed have shared a common fate: They have largely either died immediately upon introduction or expired in committees after brief hearings, according to The Markup’s review.

In New York City, that initial working group took two years to make a set of broad, nonbinding recommendations for further research and oversight. One task force member described the endeavor as a “waste.” The group could not even agree on a definition for automated decision systems, and several of its members, at the time and since, have said they did not believe city agencies and officials had bought into the process.

Elsewhere, nearly all proposals to study or regulate algorithms have failed to pass. Bills to create study groups to examine the use of algorithms failed in Massachusetts, New York state, California, Hawaii, and Virginia. Bills requiring audits of algorithms or prohibiting algorithmic discrimination have died in California, Maryland, New Jersey, and Washington state. In several cases—California, New Jersey, Massachusetts, Michigan, and Vermont—ADS oversight or study bills remain pending in the legislature, but their prospects this session are slim, according to sponsors and advocates in those states.

The only state bill to pass so far, Vermont’s, created a task force whose recommendations—to form a permanent AI commission and adopt regulations—have so far been ignored, state representative Brian Cina told The Markup. 

The Markup interviewed lawmakers and lobbyists and reviewed written and oral testimony on dozens of ADS bills to examine why legislatures have failed to regulate these tools.

We found two key through lines: Lawmakers and the public lack fundamental access to information about what algorithms their agencies are using, how they’re designed, and how significantly they influence decisions. In many of the states The Markup examined, lawmakers and activists said state agencies had rebuffed their attempts to gather basic information, such as the names of tools being used.

Meanwhile, Big Tech and government contractors have successfully derailed legislation by arguing that proposals are too broad—in some cases claiming they would prevent public officials from using calculators and spreadsheets—and that requiring agencies to examine whether an ADS system is discriminatory would kill innovation and increase the price of government procurement.

Lawmakers Struggled to Figure Out What Algorithms Were Even in Use

One of the biggest challenges lawmakers have faced when seeking to regulate ADS tools is simply knowing what they are and what they do.

Following its task force’s landmark report, New York City conducted a subsequent survey of city agencies. It resulted in a list of only 16 automated decision systems across nine agencies, which members of the task force told The Markup they suspect is a severe underestimation.

“We don’t actually know where government entities or businesses use these systems, so it’s hard to make [regulations] more concrete,” said Julia Stoyanovich, a New York University computer science professor and task force member.

In 2018, Vermont became the first state to create its own ADS study group. At the conclusion of its work in 2020, the group reported that “there are examples of where state and local governments have used artificial intelligence applications, but in general the Task Force has not identified many of these applications.”

“Just because nothing popped up in a few weeks of testimony doesn’t mean that they don’t exist,” said Cina. “It’s not like we asked every single state agency to look at every single thing they use.”

In February, he introduced a bill that would have required the state to develop basic standards for agency use of ADS systems. It has sat in committee without a hearing since then.

In 2019, the Hawaii Senate passed a resolution requesting that the state convene a task force to study agency use of artificial intelligence systems, but the resolution was nonbinding and no task force convened, according to the Hawaii Legislative Reference Bureau. Legislators tried to pass a binding resolution again the next year, but it failed.

Legislators and advocacy groups who authored ADS bills in California, Maryland, Massachusetts, Michigan, New York, and Washington told The Markup that they have no clear understanding of the extent to which their state agencies use ADS tools. 

Advocacy groups like the Electronic Privacy Information Center (EPIC) that have attempted to survey government agencies regarding their use of ADS systems say they routinely receive incomplete information.

“The results we’re getting are straight-up non-responses or truly pulling teeth about every little thing,” said Ben Winters, who leads EPIC’s AI and Human Rights Project.

In Washington, after an ADS regulation bill failed in 2020, the legislature created a study group tasked with making recommendations for future legislation. The ACLU of Washington proposed that the group should survey state agencies to gather more information about the tools they were using, but the study group rejected the idea, according to public minutes from the group’s meetings.

“We thought it was a simple ask,” said Jennifer Lee, the technology and liberty project manager for the ACLU of Washington. “One of the barriers we kept getting when talking to lawmakers about regulating ADS is they didn’t have an understanding of how prevalent the issue was. They kept asking, ‘What kind of systems are being used across Washington state?’ ”


Lawmakers Say Corporate Influence a Hurdle

Washington’s most recent bill has stalled in committee, but an updated version will likely be reintroduced this year now that the study group has completed its final report, said state senator Bob Hasegawa, the bill’s sponsor.

The legislation would have required any state agency seeking to implement an ADS system to produce an algorithmic accountability report disclosing the name and purpose of the system, what data it would use, and whether the system had been independently tested for biases, among other requirements.

The bill would also have banned the use of ADS tools that are discriminatory and required that anyone affected by an algorithmic decision be notified and have a right to appeal that decision.

“The big obstacle is corporate influence in our governmental processes,” said Hasegawa. “Washington is a pretty high-tech state and so corporate high tech has a lot of influence in our systems here. That’s where most of the pushback has been coming from because the impacted communities are pretty much unanimous that this needs to be fixed.”

California’s bill, which is similar, is still pending in committee. It encourages, but does not require, vendors seeking to sell ADS tools to government agencies to submit an ADS impact report along with their bid, which would include similar disclosures to those required by Washington’s bill.

It would also require the state’s Department of Technology to post the impact reports for active systems on its website.

Led by the California Chamber of Commerce, 26 industry groups—from big tech representatives like the Internet Association and TechNet to organizations representing banks, insurance companies, and medical device makers—signed on to a letter opposing the bill.

“There are a lot of business interests here, and they have the ears of a lot of legislators,” said Vinhcent Le, legal counsel at the nonprofit Greenlining Institute, who helped author the bill.

Originally, the Greenlining Institute and other supporters sought to regulate ADS in the private sector as well as the public but quickly encountered pushback. 

“When we narrowed it to just government AI systems we thought it would make it easier,” Le said. “The argument [from industry] switched to ‘This is going to cost California taxpayers millions more.’ That cost angle, that innovation angle, that anti-business angle is something that legislators are concerned about.”

The California Chamber of Commerce declined an interview request for this story but provided a copy of the letter signed by dozens of industry groups opposing the bill. The letter states that the bill would “discourage participation in the state procurement process” because the bill encourages vendors to complete an impact assessment for their tools. The letter said the suggestion, which is not a requirement, was too burdensome. The chamber also argued that the bill’s definition of automated decision systems was too broad.

Industry lobbyists have repeatedly criticized legislation in recent years for overly broad definitions of automated decision systems despite the fact that the definitions mirror those used in internationally recognized AI ethics frameworks, regulations in Canada, and proposed regulations in the European Union.

During a committee hearing on Washington’s bill, James McMahan, policy director for the Washington Association of Sheriffs and Police Chiefs, told legislators he believed the bill would apply to “most if not all” of the state crime lab’s operations, including DNA, fingerprint, and firearm analysis.

Internet Association lobbyist Vicki Christophersen, testifying at the same hearing, suggested that the bill would prohibit the use of red light cameras. The Internet Association did not respond to an interview request.

“It’s a funny talking point,” Le said. “We actually had to put in language to say this doesn’t include a calculator or spreadsheet.”

Maryland’s bill, which died in committee, would also have required agencies to produce reports detailing the basic purpose and functions of ADS tools and would have prohibited the use of discriminatory systems.

“We’re not telling you you can’t do it [use ADS],” said Delegate Terri Hill, who sponsored the Maryland bill. “We’re just saying identify what your biases are up front and identify if they’re consistent with the state’s overarching goals and with this purpose.”

The Maryland Tech Council, an industry group representing small and large technology firms in the state, opposed the bill, arguing that the prohibitions against discrimination were premature and would hurt innovation in the state, according to written and oral testimony the group provided.

“The ability to adequately evaluate whether or not there is bias is an emerging area, and we would say that, on behalf of the tech council, putting in place this at this time is jumping ahead of where we are,” Pam Kasemeyer, the council’s lobbyist, said during a March committee hearing on the bill. “It almost stops the desire for companies to continue to try to develop and refine these out of fear that they’re going to be viewed as discriminatory.”

Limited Success in the Private Sector

There have been fewer attempts by state and local legislatures to regulate private companies’ use of ADS systems—such as those The Markup has exposed in the tenant screening and car insurance industries—but in recent years, those measures have been marginally more successful.

The New York City Council passed a bill that would require private companies to conduct bias audits of algorithmic hiring tools before using them. The tools are used by many employers to screen job candidates without the use of a human interviewer.

The legislation, which was enacted in January but does not take effect until 2023, has been panned by some of its early supporters, however, for being too weak.

Illinois also enacted a state law in 2019 that requires private employers to notify job candidates when they’re being evaluated by algorithmic hiring tools. And in 2021, the legislature amended the law to require employers who use such tools to report demographic data about job candidates to a state agency to be analyzed for evidence of biased decisions. 

This year the Colorado legislature also passed a law, which will take effect in 2023, that will create a framework for evaluating insurance underwriting algorithms and ban the use of discriminatory algorithms in the industry. 

This article was originally published on The Markup by Todd Feathers and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


Peter Thiel’s Origin Story


Thiel is getting a lot of likely unwanted press this week, and it looks like he deserves it…

A new feature book profile published in NYMag details the origins of Peter Thiel: his spectacular story, leading to what is to some a toxic, libertarian right-wing stance that has included support of Donald Trump and various other infamous acts and, more recently, a huge bankroll pushing his agenda through political lobbying in Washington. Not to mention his Roth IRA story of untaxed treasures worth billions.

The fascinating piece traces the biographical details, culled from the book, beginning around 1988, when Thiel was twenty and first arriving in Northern California.

The article shows how his eventual political perspectives were already emerging at that young age, then goes on to detail the entire story nearly to the present day, as chronicled in the new book:

The Contrarian: Peter Thiel and Silicon Valley’s Pursuit of Power

Above: “The Contrarian” – Release date September 21, 2021. Available to order on Bookshop and Amazon.

His ideology dominates Silicon Valley. It began to form when he was an angry young man.

In many ways the book’s release seems to dovetail perfectly with the building thread of details regarding how he rose from obscurity to become an obscenely wealthy Silicon Valley “god”, one that seems to seek inordinate influence over the direction of our common futures: not only in the tech arena, and not only through his association with Facebook’s beginnings and the origins of PayPal.

This character portrait is a must-read. It goes along with why it feels like we all need to follow the Trump saga to its conclusion, no matter how ignoble or tragic, or the trial of Elizabeth Holmes, for that matter: to get a sense of how runaway powers, whether obtained through force of will or just serendipity, can later grow so dangerous that their influence infects and affects us all.

