Tag Archives: computer science

How QR codes work and what makes them dangerous – a computer scientist explains

QR codes are visual patterns that store data smartphones can read. Photo: Adobe Stock

Scott Ruoti, University of Tennessee

Among the many changes brought about by the pandemic is the widespread use of QR codes, graphical representations of digital data that can be printed and later scanned by a smartphone or other device.

QR codes have a wide range of uses that help people avoid contact with objects and close interactions with other people, including for sharing restaurant menus, email list sign-ups, car and home sales information, and checking in and out of medical and professional appointments.

QR codes are a close cousin of the bar codes on product packaging that cashiers scan with infrared scanners to let the checkout computer know what products are being purchased.

Bar codes store information along one axis, horizontally. QR codes store information in both vertical and horizontal axes, which allows them to hold significantly more data. That extra amount of data is what makes QR codes so versatile.

Anatomy of a QR code

While Arabic numerals are easy for people to read, they are hard for a computer to recognize reliably. Bar codes encode alphanumeric data as a series of black and white lines of various widths. At the store, bar codes record the set of numbers that specify a product’s ID. Critically, data stored in bar codes is redundant. Even if part of the bar code is destroyed or obscured, it is still possible for a device to read the product ID.
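
One familiar form of this built-in checking is the UPC-A check digit: the 12th digit of a product’s bar code is computed from the first 11, so a scanner can immediately detect many misreads. A minimal sketch in Python (the check-digit rule is the standard UPC-A one; the function name is ours):

```python
def upc_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit from the first 11 digits."""
    odd = sum(int(d) for d in digits11[0::2])   # 1st, 3rd, 5th, ... digits
    even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, 6th, ... digits
    return (10 - (odd * 3 + even) % 10) % 10

# A full UPC is valid only if its last digit matches this computation.
print(upc_check_digit("03600029145"))  # 2, so 036000291452 scans as valid
```

If a digit is misread, the recomputed check digit almost always disagrees with the printed one, and the scanner rejects the read rather than charging for the wrong product.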

QR codes are designed to be scanned using a camera, such as those found on your smartphone. QR code scanning is built into many camera apps for Android and iOS. QR codes are most often used to store web links; however, they can store arbitrary data, such as text or images.

When you scan a QR code, the QR reader in your phone’s camera deciphers the code, and the resulting information triggers an action on your phone. If the QR code holds a URL, your phone will present you with the URL. Tap it, and your phone’s default browser will open the webpage.

QR codes are composed of several parts: data, position markers, quiet zone and optional logos.

The QR code anatomy: data (1), position markers (2), quiet zone (3) and optional logos (4). Scott Ruoti, CC BY-ND

The data in a QR code is a series of dots in a square grid. Each dot represents a one and each blank a zero in binary code, and the patterns encode sets of numbers, letters or both, including URLs. At its smallest this grid is 21 rows by 21 columns, and at its largest it is 177 rows by 177 columns. In most cases, QR codes use black squares on a white background, making the dots easy to distinguish. However, this is not a strict requirement, and QR codes can use any color or shape for the dots and background.
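
The grid sizes follow a simple rule: QR codes come in 40 standard “versions,” and each version adds four rows and four columns, so the side length in dots is 4 × version + 17. A quick illustration in Python (the function name is ours):

```python
def qr_grid_size(version: int) -> int:
    """Side length, in dots, of a QR code of the given version (1-40)."""
    if not 1 <= version <= 40:
        raise ValueError("QR code versions run from 1 to 40")
    return 4 * version + 17

print(qr_grid_size(1))   # 21  -> the smallest grid, 21 x 21
print(qr_grid_size(40))  # 177 -> the largest grid, 177 x 177
```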

Position markers are squares placed in a QR code’s top-left, top-right, and bottom-left corners. These markers let a smartphone camera or other device orient the QR code when scanning it. QR codes are surrounded by blank space, the quiet zone, to help the computer determine where the QR code begins and ends. QR codes can include an optional logo in the middle.

Like bar codes, QR codes are designed with data redundancy. Even if as much as 30% of the QR code is destroyed or difficult to read, the data can still be recovered. In fact, logos are not actually part of the QR code; they cover up some of the QR code’s data. However, due to the QR code’s redundancy, the data represented by these missing dots can be recovered by looking at the remaining visible dots.
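
QR codes achieve this with Reed-Solomon error correction, which is mathematically involved; a much simpler repetition code illustrates the same principle, that redundant copies let a reader recover the data even when part of the pattern is damaged (this sketch is an analogy, not the actual QR algorithm):

```python
from collections import Counter

def encode(bits, copies=5):
    """Store each bit several times, the way QR codes store extra
    error-correction data alongside the message."""
    return [b for b in bits for _ in range(copies)]

def decode(encoded, copies=5):
    """Recover each original bit by majority vote within its group,
    tolerating a minority of damaged dots per group."""
    return [Counter(encoded[i:i + copies]).most_common(1)[0][0]
            for i in range(0, len(encoded), copies)]

data = [1, 0, 1, 1, 0]
sent = encode(data)
sent[0] = 0  # "damage" a couple of dots, e.g. a logo covering them
sent[7] = 1
print(decode(sent) == data)  # True: the data survives the damage
```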

Are QR codes dangerous?

QR codes are not inherently dangerous. They are simply a way to store data. However, just as it can be hazardous to click links in emails, visiting URLs stored in QR codes can also be risky in several ways.

The QR code’s URL can take you to a phishing website that tries to trick you into entering your username or password for another website. The URL could instead take you to a legitimate website and trick that website into doing something harmful, such as giving an attacker access to your account; while such an attack requires a flaw in the website you are visiting, such vulnerabilities are common on the internet. Or the URL can take you to a malicious website that tricks another website you are logged into on the same device into taking an unauthorized action.

A malicious URL could open an application on your device and cause it to take some action. Maybe you’ve seen this behavior when you clicked a Zoom link, and the Zoom application opened and automatically joined a meeting. While such behavior is ordinarily benign, an attacker could use this to trick some apps into revealing your data.


It is critical that when you open a link in a QR code, you ensure that the URL is safe and comes from a trusted source. Just because the QR code has a logo you recognize doesn’t mean you should click on the URL it contains.

There is also a slight chance that the app used to scan the QR code could contain a vulnerability that allows malicious QR codes to take over your device. This attack would succeed by just scanning the QR code, even if you don’t click the link stored in it. To avoid this threat, you should use trusted apps provided by the device manufacturer to scan QR codes and avoid downloading custom QR code apps.

Scott Ruoti, Assistant Professor of Computer Science, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More from Lynxotic:


Check out Lynxotic on YouTube

Find books on Music, Movies & Entertainment and many other topics at our sister site: Cherrybooks on Bookshop.org

Lynxotic may receive a small commission based on any purchases made by following links from this page

Why It’s So Hard to Regulate Algorithms

Photo: Adobe

Governments increasingly use algorithms to do everything from assign benefits to dole out punishment—but attempts to regulate them have been unsuccessful

In 2018, the New York City Council created a task force to study the city’s use of automated decision systems (ADS). The concern: Algorithms, not just in New York but around the country, were increasingly being employed by government agencies to do everything from informing criminal sentencing and detecting unemployment fraud to prioritizing child abuse cases and distributing health benefits. And lawmakers, let alone the people governed by the automated decisions, knew little about how the calculations were being made. 

Rare glimpses into how these algorithms were performing were not comforting: In several states, algorithms used to determine how much help residents will receive from home health aides have automatically cut benefits for thousands. Police departments across the country use the PredPol software to predict where future crimes will occur, but the program disproportionately sends police to Black and Hispanic neighborhoods. And in Michigan, an algorithm designed to detect fraudulent unemployment claims famously improperly flagged thousands of applicants, forcing residents who should have received assistance to lose their homes and file for bankruptcy.


New York City’s law was the first in the country aimed at shedding light on how government agencies use artificial intelligence to make decisions about people and policies.

At the time, the creation of the task force was heralded as a “watershed” moment that would usher in a new era of oversight. And indeed, in the four years since, a steady stream of reporting about the harms caused by high-stakes algorithms has prompted lawmakers across the country to introduce nearly 40 bills designed to study or regulate government agencies’ use of ADS, according to The Markup’s review of state legislation. 

The bills range from proposals to create study groups to requirements that agencies audit algorithms for bias before purchasing systems from vendors. But the dozens of reforms proposed have shared a common fate: They have largely either died immediately upon introduction or expired in committees after brief hearings, according to The Markup’s review.

In New York City, that initial working group took two years to make a set of broad, nonbinding recommendations for further research and oversight. One task force member described the endeavor as a “waste.” The group could not even agree on a definition for automated decision systems, and several of its members, at the time and since, have said they did not believe city agencies and officials had bought into the process.

Elsewhere, nearly all proposals to study or regulate algorithms have failed to pass. Bills to create study groups to examine the use of algorithms failed in Massachusetts, New York state, California, Hawaii, and Virginia. Bills requiring audits of algorithms or prohibiting algorithmic discrimination have died in California, Maryland, New Jersey, and Washington state. In several cases—California, New Jersey, Massachusetts, Michigan, and Vermont—ADS oversight or study bills remain pending in the legislature, but their prospects this session are slim, according to sponsors and advocates in those states.

The only state bill to pass so far, Vermont’s, created a task force whose recommendations—to form a permanent AI commission and adopt regulations—have so far been ignored, state representative Brian Cina told The Markup. 

The Markup interviewed lawmakers and lobbyists and reviewed written and oral testimony on dozens of ADS bills to examine why legislatures have failed to regulate these tools.

We found two key through lines: Lawmakers and the public lack fundamental access to information about what algorithms their agencies are using, how they’re designed, and how significantly they influence decisions. In many of the states The Markup examined, lawmakers and activists said state agencies had rebuffed their attempts to gather basic information, such as the names of tools being used.

Meanwhile, Big Tech and government contractors have successfully derailed legislation by arguing that proposals are too broad—in some cases claiming they would prevent public officials from using calculators and spreadsheets—and that requiring agencies to examine whether an ADS system is discriminatory would kill innovation and increase the price of government procurement.

Lawmakers Struggled to Figure Out What Algorithms Were Even in Use

One of the biggest challenges lawmakers have faced when seeking to regulate ADS tools is simply knowing what they are and what they do.

Following its task force’s landmark report, New York City conducted a subsequent survey of city agencies. It resulted in a list of only 16 automated decision systems across nine agencies, which members of the task force told The Markup they suspect is a severe underestimation.

“We don’t actually know where government entities or businesses use these systems, so it’s hard to make [regulations] more concrete,” said Julia Stoyanovich, a New York University computer science professor and task force member.

In 2018, Vermont became the first state to create its own ADS study group. At the conclusion of its work in 2020, the group reported that “there are examples of where state and local governments have used artificial intelligence applications, but in general the Task Force has not identified many of these applications.”

“Just because nothing popped up in a few weeks of testimony doesn’t mean that they don’t exist,” said Cina. “It’s not like we asked every single state agency to look at every single thing they use.”

In February, he introduced a bill that would have required the state to develop basic standards for agency use of ADS systems. It has sat in committee without a hearing since then.

In 2019, the Hawaii Senate passed a resolution requesting that the state convene a task force to study agency use of artificial intelligence systems, but the resolution was nonbinding and no task force convened, according to the Hawaii Legislative Reference Bureau. Legislators tried to pass a binding resolution again the next year, but it failed.

Legislators and advocacy groups who authored ADS bills in California, Maryland, Massachusetts, Michigan, New York, and Washington told The Markup that they have no clear understanding of the extent to which their state agencies use ADS tools. 

Advocacy groups like the Electronic Privacy Information Center (EPIC) that have attempted to survey government agencies regarding their use of ADS systems say they routinely receive incomplete information.

“The results we’re getting are straight-up non-responses or truly pulling teeth about every little thing,” said Ben Winters, who leads EPIC’s AI and Human Rights Project.

In Washington, after an ADS regulation bill failed in 2020, the legislature created a study group tasked with making recommendations for future legislation. The ACLU of Washington proposed that the group should survey state agencies to gather more information about the tools they were using, but the study group rejected the idea, according to public minutes from the group’s meetings.

“We thought it was a simple ask,” said Jennifer Lee, the technology and liberty project manager for the ACLU of Washington. “One of the barriers we kept getting when talking to lawmakers about regulating ADS is they didn’t have an understanding of how prevalent the issue was. They kept asking, ‘What kind of systems are being used across Washington state?’ ”


Lawmakers Say Corporate Influence a Hurdle

Washington’s most recent bill has stalled in committee, but an updated version will likely be reintroduced this year now that the study group has completed its final report, said state senator Bob Hasegawa, the bill’s sponsor.

The legislation would have required any state agency seeking to implement an ADS system to produce an algorithmic accountability report disclosing the name and purpose of the system, what data it would use, and whether the system had been independently tested for biases, among other requirements.

The bill would also have banned the use of ADS tools that are discriminatory and required that anyone affected by an algorithmic decision be notified and have a right to appeal that decision.

“The big obstacle is corporate influence in our governmental processes,” said Hasegawa. “Washington is a pretty high-tech state and so corporate high tech has a lot of influence in our systems here. That’s where most of the pushback has been coming from because the impacted communities are pretty much unanimous that this needs to be fixed.”

California’s bill, which is similar, is still pending in committee. It encourages, but does not require, vendors seeking to sell ADS tools to government agencies to submit an ADS impact report along with their bid, which would include similar disclosures to those required by Washington’s bill.

It would also require the state’s Department of Technology to post the impact reports for active systems on its website.

Led by the California Chamber of Commerce, 26 industry groups—from big tech representatives like the Internet Association and TechNet to organizations representing banks, insurance companies, and medical device makers—signed on to a letter opposing the bill.

“There are a lot of business interests here, and they have the ears of a lot of legislators,” said Vinhcent Le, legal counsel at the nonprofit Greenlining Institute, who helped author the bill.

Originally, the Greenlining Institute and other supporters sought to regulate ADS in the private sector as well as the public but quickly encountered pushback. 

“When we narrowed it to just government AI systems we thought it would make it easier,” Le said. “The argument [from industry] switched to ‘This is going to cost California taxpayers millions more.’ That cost angle, that innovation angle, that anti-business angle is something that legislators are concerned about.”

The California Chamber of Commerce declined an interview request for this story but provided a copy of the letter signed by dozens of industry groups opposing the bill. The letter states that the bill would “discourage participation in the state procurement process” because the bill encourages vendors to complete an impact assessment for their tools. The letter said the suggestion, which is not a requirement, was too burdensome. The chamber also argued that the bill’s definition of automated decision systems was too broad.

Industry lobbyists have repeatedly criticized legislation in recent years for overly broad definitions of automated decision systems despite the fact that the definitions mirror those used in internationally recognized AI ethics frameworks, regulations in Canada, and proposed regulations in the European Union.

During a committee hearing on Washington’s bill, James McMahan, policy director for the Washington Association of Sheriffs and Police Chiefs, told legislators he believed the bill would apply to “most if not all” of the state crime lab’s operations, including DNA, fingerprint, and firearm analysis.

Internet Association lobbyist Vicki Christophersen, testifying at the same hearing, suggested that the bill would prohibit the use of red light cameras. The Internet Association did not respond to an interview request.

“It’s a funny talking point,” Le said. “We actually had to put in language to say this doesn’t include a calculator or spreadsheet.”

Maryland’s bill, which died in committee, would also have required agencies to produce reports detailing the basic purpose and functions of ADS tools and would have prohibited the use of discriminatory systems.

“We’re not telling you you can’t do it [use ADS],” said Delegate Terri Hill, who sponsored the Maryland bill. “We’re just saying identify what your biases are up front and identify if they’re consistent with the state’s overarching goals and with this purpose.”

The Maryland Tech Council, an industry group representing small and large technology firms in the state, opposed the bill, arguing that the prohibitions against discrimination were premature and would hurt innovation in the state, according to written and oral testimony the group provided.

“The ability to adequately evaluate whether or not there is bias is an emerging area, and we would say that, on behalf of the tech council, putting in place this at this time is jumping ahead of where we are,” Pam Kasemeyer, the council’s lobbyist, said during a March committee hearing on the bill. “It almost stops the desire for companies to continue to try to develop and refine these out of fear that they’re going to be viewed as discriminatory.”

Limited Success in the Private Sector

There have been fewer attempts by state and local legislatures to regulate private companies’ use of ADS systems—such as those The Markup has exposed in the tenant screening and car insurance industries—but in recent years, those measures have been marginally more successful.

The New York City Council passed a bill that would require private companies to conduct bias audits of algorithmic hiring tools before using them. The tools are used by many employers to screen job candidates without the use of a human interviewer.

The legislation, which was enacted in January but does not take effect until 2023, has been panned by some of its early supporters, however, for being too weak.

Illinois also enacted a state law in 2019 that requires private employers to notify job candidates when they’re being evaluated by algorithmic hiring tools. And in 2021, the legislature amended the law to require employers who use such tools to report demographic data about job candidates to a state agency to be analyzed for evidence of biased decisions. 

This year the Colorado legislature also passed a law, which will take effect in 2023, that will create a framework for evaluating insurance underwriting algorithms and ban the use of discriminatory algorithms in the industry. 

This article was originally published on The Markup by Todd Feathers and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



Algorithms define our lives, the Metaverse is already our home and Dark Patterns follow us everywhere

Photo: Adobe Stock

What is the metaverse?

I can’t link to a particular article explaining it, because most of what’s out there is misleading. The truth is that nobody knows. The term comes from various science fiction sources; the most recent, and least accurate, is “Ready Player One.”

The general idea in that book-and-film example is a future scenario where many people, particularly the young, spend endless hours logged into a shared, game-like virtual reality where they can create a unique identity via 3D avatars and interact in a realistic, yet magical, virtual environment.

There are many individuals and companies, such as Facebook, advocating a link from the current online “world” to this type of “enhanced” 3D interactive “metaverse.” They even use the term and try to define its meaning based on their “vision” for the future of social media and the internet.

Zuckerberg monopolizing the Metaverse before it even exists?

The problem is, they are almost certainly wrong in this prediction. The metaverse is already here, albeit in a very primitive form; where it will lead and what it will eventually become is completely open, and up to all who inhabit it now and going forward.

Elon Musk once said “We are all already cyborgs,” referring to the way cell phones (and, for Tesla owners, the onboard computer in their cars) extend our senses in a nearly continuous manner. We really can’t live the digital lives most of us currently lead without our technological enhancements via hand-held (for now) computing.

This progression, from the primitive early internet and web to the current, still primitive, phase of work-from-home and Zoom business and education, is a continuous extension of our “world” into an artificial, computer-aided meta-universe that is slowly becoming more responsive to our unspoken needs and wants.

“Our electric global networks now begin to simulate the condition of our central nervous system. But a conscious computer would still be one that was an extension of our consciousness, as a telescope is an extension of our eyes, or as a ventriloquist’s dummy is an extension of the ventriloquist.”

Marshall McLuhan, from “Understanding Media,” pg. 388

What are “Dark Patterns”?

Another recently coined term, “dark patterns,” refers to the ways software designers use user interfaces to influence behavior and elicit a desired outcome, such as clicking a “buy” button.

Another way to imagine it is as the digital equivalent of grocery store layouts that put staples like milk and eggs as far from the entrance as possible to encourage impulse buying, while filling the checkout aisles with candy and other low-cost, high-margin goodies.

“We drive into the future using only our rearview mirror”

Marshall McLuhan

The disconnect in this analogy is that people intuitively believe digital dark patterns are less powerful and have less impact because they operate in cyberspace, when in fact the ability to manipulate behavior is much, much greater in the digital realm.

The “Dead Internet Conspiracy Theory” is just reality bumping into the truth

A recent article in The Atlantic noted the existence of the theory and concluded that, though it had a ring of truth, the very fact that the theory could be found on an obscure web page meant that the internet is not dead, and therefore the theory is invalid.

Nothing could be further from the truth. The rise of Dark Patterns, even as the devices we use and the sites we surf to and exist inside of (like Facebook) are evolving, and the endless self-inflating systems and algorithms that surround us are literally killing the internet and destroying our digital lives.

Infanticide would be a more accurate term, perhaps, since we are all baby cyborgs of the pre-metaverse and have barely had a chance to live, while these powers expand endlessly into a death-machine for our extended consciousness.

The internet is currently on life-support, because the one thing that it is innately predisposed toward, the enhancement and amplification of human interconnected communication, is at odds with the corporate goals of the gatekeepers, mainly Amazon, Facebook and Google.

Free and open communication, coupled with ever evolving and improving upgrades to the software of our lives, is nearly extinct, before it has even begun, due to this infinite conflict of interest.

Algorithms define our lives, the Metaverse is already our home and Dark Patterns follow us everywhere

The above, a dramatically described and yet painfully obvious truth, is what has even the US government, in the form of the FTC and its chair, Lina Khan, looking at antitrust remedies for the economic devastation that has been caused by the dead internet paradox.

And it has inspired legions of blockchain and coding resistance fighters to start the long process of launching Web3 and other independent ways to connect humans using computers that are in our pockets, in our living rooms, and perhaps soon, implanted in our bodies.

Another example is Pi, a new and upcoming cryptocurrency based on a future where a billion people will mine and share the proceeds equitably using cell phones. Since they would all be connected via the mining software, realizing this goal would automatically create an alternative network for a billion people worldwide, one without gatekeepers to block people from freely interacting with each other.

Oddly, it is the dim realization that the internet is, in fact, already dead in its current form, that will lead to the changes that will ultimately bring about a digital communication revolution, one that will make WWW1 look like a mistake from a primitive and misguided time.

Anything, and anyone, that can wake us up to what we lack, and what we are missing, in our digital worlds and our lives – in the pre-metaverse – is a hero of the future and must be praised as such. Starting now.



Digging Deep Can Pay Off: Netflix Suggestion Engine Can Be Challenging

Finding ‘The Professor and the Madman’ was an exception to the often frustrating process


How many times have you searched or browsed the various suggestions prepared for you by the Netflix algorithm, only to get lost in confusion? Perhaps it’s a little like a self-driving car or a spell-checker: when it works, you feel magically guided to your destination (or spelling), but when it doesn’t, you are likely in trouble.

Choosing the newest or the most watched is not foolproof either. Often, when a better movie rises organically into the Netflix top ten, it’s an older film that people discovered all at once, for some reason, rather than a new release or “original” production.

Such was the case when, after a series of unwatchably depressing choices, I stumbled on “The Professor and the Madman.”

In this time of mandatory streaming, big screen production values are more important than ever

The film is based on a beloved book of the same name by British writer Simon Winchester, first published in England in 1998 as “The Surgeon of Crowthorne: A Tale of Murder, Madness and the Love of Words.” For the USA and Canada, the title was changed to “The Professor and the Madman: A Tale of Murder, Insanity, and the Making of the Oxford English Dictionary.”

Unlike many featured Netflix titles, which come across as budget-conscious, direct-to-streaming productions, the first thing noticeable in the opening sequence is that this is a “real movie” with a serious cinematic presentation. It only gets better from there.

Above: Photo / Netflix

Starring Mel Gibson, Sean Penn, Natalie Dormer, Eddie Marsan, Jennifer Ehle, Jeremy Irvine, David O’Hara, Ioan Gruffudd, Stephen Dillane, Laurence Fox, and Steve Coogan, the film offers a rare combination of megastar talent in a setting that is age appropriate (the lead characters are both late in life as the drama unfolds) and impeccable ensemble acting.

Read more: Netflix excites with 71 Movies to be released during 2021

Unlike so many films that seem half based on a calculation in the production budget (for example, “An Imperfect Murder” and “The Midnight Sky,” which appear to reduce the number of characters and screen time to improve the chance of recouping costs and producing profit rather than out of any artistic or aesthetic inspiration), “The Professor and the Madman” is a full cinematic experience that translates to any screen.

https://youtu.be/DxTAGf6-Av8
