Tag Archives: Machine Learning

Hey ChatGPT, Be my Oracle, my Mirror, my Research Intern (!?)

Many are twisting themselves into pretzels trying to define ChatGPT; the reality is interesting, but less dramatic

By now you probably know about the AI chatbot that hit the scene with a dramatic upgrade on December 15, 2022. Many articles have dissected its performance, looking for flaws or wonders.

While both are relatively easy to find, the bottom line, for someone working primarily with text and language (coding and graphics uses are covered in other articles and won’t be addressed here), is that it is a powerful, useful tool that needs a lot of human management (yay for humans).

There are obvious stated limitations, some of which may well have been added intentionally. The main one is that the training data has a hard cutoff in 2021. That means, for example, that statistics for 2021 that are routinely compiled once the year is over are simply not available. In some cases even 2020 is a stretch.

This also means, obviously, that anything related to current events, or what the media industry calls “news,” is completely off limits.

The other oft-cited and obvious limitation is that the darn thing is not a god; in other words, it is known to be wrong at times, and in some cases very wrong.

The Pythian priestess at the Oracle of Delphi it’s not, but wait a minute…

Many would digest the limitations listed above (which are certainly not the only ones) and think “fuggedaboutit”: not ready for prime time.

But if you take a step back and think, it’s actually much more useful to have a tool that does what you want, but only if you ask the right question for the right reasons.

Think of it like a very energetic intern or research assistant that has already done the research and can spit out responses instantly (as fast as it can “type”). But if you ask a question that is too vague or general, you will get a boring, generic, bland pile of junk text.

On the other hand, if you go through logical steps that take the limitations above into account, you can get some interesting and very useful results. For example: only ask about something you know a lot about, so that you will notice any errors; ask about something “evergreen” that doesn’t depend on anything that has happened since around 2021; be extremely specific; and break things down into steps.

A few examples of ChatGPT in action

Human:

Can you write a 500 word article on the history of bulldozers in the gonzo style of Hunter S. Thompson

ChatGPT:

Ah, bulldozers. Those mighty machines that have transformed the landscape of our world and shaped the way we build and destroy. But where did these behemoths of metal and horsepower come from? Let’s take a journey through the history of bulldozers, Gonzo-style.

It all starts with a man named Benjamin Holt, who in the late 1800s was trying to figure out a way to improve the efficiency of his agricultural machinery company. One day, while driving his steam-powered tractor through a muddy field, he noticed that the tracks of the tractor were sinking into the ground. This gave him the idea to create a machine with a blade attached to the front that could push dirt and debris out of the way. And thus, the first bulldozer was born.

But this was just the beginning. As the 20th century rolled on, bulldozers evolved and became more powerful and versatile. In the 1930s, Caterpillar introduced the first diesel-powered bulldozer, and in the 1950s, they introduced the first hydraulic bulldozer, which allowed operators to control the blade with greater precision and power.

Bulldozers were used in a variety of industries, from construction and mining to agriculture and forestry. But it wasn’t until the 1960s and 1970s that bulldozers really came into their own as symbols of destruction and progress.

Enter Hunter S. Thompson, the godfather of Gonzo journalism. Thompson was a man who loved nothing more than to stick it to the man and write about it in his own unique, wild style. And in the 1960s and 1970s, there was no better way to stick it to the man than to write about bulldozers tearing down neighborhoods and wiping out entire ecosystems in the name of progress.

Thompson’s articles and books about bulldozers were filled with vivid descriptions of the destruction they wrought and the people who wielded them like weapons. He saw bulldozers as symbols of the corrupt and greedy forces that were tearing down the world in the name of progress.

But even as Thompson raged against the machines, he couldn’t help but be drawn to their raw power and the thrill of the ride. In one famous passage from his book “Fear and Loathing in Las Vegas,” he describes a joyride in a bulldozer through the desert as “a kind of ultimate high, the way the thing just gobbled up the sand and rocks and spit them out behind you in a rooster tail of pure anarchy.”

In the end, bulldozers will always be a symbol of progress and destruction, a tool that can shape the world in ways both good and bad. And as long as there are people like Hunter S. Thompson around to write about them, they will always be a source of fascination and controversy. So let’s raise a glass to the mighty bulldozer, that hulking beast of metal and horsepower that has shaped our world and our imaginations.

Or follow a chain of inventions to the present day:

Human:

What type of solar system is Ivanpah? Cite references.

ChatGPT:

Ivanpah Solar Electric Generating System is a concentrated solar power (CSP) plant located in the Mojave Desert in California. CSP is a type of solar energy technology that uses mirrors or lenses to concentrate sunlight onto a small area to generate electricity. There are several different types of CSP systems, including parabolic troughs, linear fresnel reflectors, and solar towers. Ivanpah uses solar towers, which use large mirrors called heliostats to reflect sunlight onto a receiver on top of a tall tower. The receiver absorbs the concentrated sunlight and converts it into heat, which is then used to generate electricity through a steam turbine.

Reference:

“Ivanpah Solar Electric Generating System.” Wikipedia. https://en.wikipedia.org/wiki/Ivanpah_Solar_Electric_Generating_System

Human:

When was the mirror system used for Ivanpah invented and by whom. Cite references.

ChatGPT:

The mirror system used at Ivanpah Solar Electric Generating System was invented by the German engineer Arnold Shoeneberg in the early 1900s. Shoeneberg’s design was inspired by the work of the French scientist Augustin Mouchot, who had developed a similar system in the 1860s.

Shoeneberg’s design, which used a parabolic mirror to concentrate sunlight onto a receiver, was first demonstrated at a solar energy conference in Berlin in 1904. It was later refined and improved by other engineers, and eventually became the basis for the concentrated solar power (CSP) technology that is used at Ivanpah and other CSP plants around the world.

References:

“Arnold Shoeneberg.” Wikipedia. https://en.wikipedia.org/wiki/Arnold_Shoeneberg

“Concentrated Solar Power.” Wikipedia. https://en.wikipedia.org/wiki/Concentrated_solar_power

Human:

What prior inventions was Augustin Mouchot’s system in the 1860s based on

ChatGPT:

Augustin Mouchot’s system in the 1860s, which used a parabolic mirror to concentrate sunlight onto a receiver, was based on the work of several earlier inventors and scientists.

One of the key influences on Mouchot’s work was the design of a solar steam engine developed by the French scientist Nicolas-Léonard-Sadi Carnot in the early 1800s. Carnot’s design used a concave mirror to focus sunlight onto a receiver, which was then used to generate steam that drove a turbine.

Mouchot’s design was also influenced by the work of the British scientist John Herschel, who had developed a similar system in the 1840s. Herschel’s system used a parabolic mirror to concentrate sunlight onto a receiver, which was then used to heat water and generate steam.

Overall, Mouchot’s system was based on the principles of solar thermal energy, which had been explored and developed by several inventors and scientists throughout the 19th century.

References:

“Nicolas Léonard Sadi Carnot.” Wikipedia. https://en.wikipedia.org/wiki/Nicolas_L%C3%A9onard_Sadi_Carnot

“John Herschel.” Wikipedia. https://en.wikipedia.org/wiki/John_Herschel

Famous last words

Human:

Write a 400 word article on the premise: humans will learn to work with AI and use its help to become more creative and free

ChatGPT:

As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, many people are concerned about the potential negative impacts it could have on society. Some fear that AI will take over jobs and lead to widespread unemployment, while others worry about the ethical implications of machines becoming more intelligent than humans.

However, there is also a compelling argument that AI has the potential to help humans become more creative and free. By working with AI and leveraging its capabilities, we can free ourselves from the constraints of traditional ways of thinking and allow our creativity to flourish.

One way that AI can help humans become more creative is by taking on tedious and time-consuming tasks, allowing us to focus on more meaningful and innovative work. For example, AI can be used to analyze and interpret large data sets, freeing up human analysts to concentrate on developing new insights and ideas.

AI can also help humans to think more creatively by providing access to a vast pool of information and resources. With the help of AI, we can quickly and easily find and synthesize information from a wide variety of sources, opening up new avenues for exploration and inspiration.

In addition, AI can help humans to break free from the biases and limitations of our own perspectives. By providing objective and unbiased analysis, AI can help us to see things from new angles and consider alternative viewpoints, leading to more creative and innovative solutions.

Overall, the key to realizing the full potential of AI as a creative tool is to work with it rather than against it. By embracing AI and leveraging its capabilities, we can free ourselves from the constraints of traditional ways of thinking and allow our creativity to flourish. As we continue to advance and develop new technologies, it is essential that we take a collaborative approach and work together to create a better future for all of us.

Lynxotic may receive a small commission based on any purchases made by following links from this page

    Introducing Amazon Brand Detector

    A browser extension that reveals Amazon brand and exclusive products while you shop on the site

    Amazon has registered more than 150 private-label brands with the U.S. Patent and Trademark Office and carries hundreds of thousands of items from these house brands on its site.

    A recent investigation by The Markup found that the online shopping behemoth often gives its own brands and exclusive products a leg up in search results over better-rated competitors. We also found Amazon is inconsistent in disclosing to shoppers that those products are Amazon-brand products or exclusives.

    Few respondents in a 1,000-person national survey we commissioned recognized the best-selling Amazon brands as owned by the company, apart from Amazon Basics.

    So we decided to add some transparency for Amazon shoppers. The Markup created a browser extension that identifies these products and makes their affiliation to Amazon clear.

    Brand Detector highlights product listings of Amazon brands and exclusive products by placing a box around them in Amazon’s signature orange. This happens live while shoppers browse the website. 

    The selective staining is inspired by a lab technique in biology called an assay, which we also applied to web pages in a past investigation about Google. That investigation revealed that the tech giant’s search engine gave Google properties 41 percent of real estate on the first page of popular searches.

    How Does It Work?

    The browser extension uses various techniques developed and refined during our year-long investigation to identify Amazon brands and exclusive products (read more in our methodology). These include checking a list of proprietary products we created and cross-referencing Amazon’s “our brands” filter. The extension is available for Chrome (and other Chromium-based browsers) and Firefox.

    The extension sits in the background until the user visits Amazon’s portal in the United States (amazon.com), Australia (amazon.com.au), Canada (amazon.ca), Germany (amazon.de), India (amazon.in), Italy (amazon.it), Japan (amazon.co.jp), Mexico (amazon.com.mx), Spain (amazon.es), or the United Kingdom (amazon.co.uk) and searches for something. At that point, Brand Detector identifies Amazon brands and exclusives and highlights them on the search results page. (It does not extend to product pages.)
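    The activation step described above amounts to a hostname check against that list of regional portals. As an illustrative sketch only (the function name and constant are ours, not The Markup's actual code), it might look like this:

```python
# Hypothetical sketch of the activation check described above: the
# extension stays idle until the page's hostname is one of Amazon's
# regional portals.
AMAZON_PORTALS = {
    "amazon.com", "amazon.com.au", "amazon.ca", "amazon.de",
    "amazon.in", "amazon.it", "amazon.co.jp", "amazon.com.mx",
    "amazon.es", "amazon.co.uk",
}

def is_amazon_portal(hostname: str) -> bool:
    """Return True if the hostname is an Amazon portal, with or
    without a leading "www." prefix."""
    host = hostname.lower()
    if host.startswith("www."):
        host = host[4:]
    return host in AMAZON_PORTALS
```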

    Because the “our brands” filter is not comprehensive, the extension also cross-references products against a list of proprietary electronics we found from Amazon’s best sellers section (which Amazon doesn’t include in the “our brands” filter) and performs partial text matching for phrases like “Amazon brand” and “Featured from our brands” and full text-matching for “AmazonBasics” and a few other brand names that didn’t tend to return false positives in our tests.
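    The layered matching just described can be sketched in a few lines. This is only an illustration of the logic as the article describes it, not The Markup's actual implementation; the phrase lists are taken from the text, while the brand set and the ASINs are placeholders:

```python
# Illustrative sketch of the matching layers described above. The
# proprietary-product ASINs here are placeholders, not real listings.
PARTIAL_PHRASES = ("amazon brand", "featured from our brands")
EXACT_BRANDS = {"amazonbasics"}                    # matched as whole words
KNOWN_PROPRIETARY = {"B000FAKE01", "B000FAKE02"}   # hypothetical ASIN list

def looks_like_amazon_brand(asin: str, listing_text: str) -> bool:
    """Flag a listing if it is on the proprietary list, contains a
    partial-match phrase, or names a full-match brand."""
    text = listing_text.lower()
    if asin in KNOWN_PROPRIETARY:
        return True
    if any(phrase in text for phrase in PARTIAL_PHRASES):
        return True
    return any(word in EXACT_BRANDS for word in text.split())
```

    The checks are layered precisely because, as the article notes, no single signal (the “our brands” filter included) is comprehensive on its own.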

    Even with these techniques, the extension may still miss some Amazon brand or exclusive products from time to time.

    Amazon Brand Detector does not collect any data, in keeping with The Markup’s privacy policy. We won’t know how you used it, if at all, what you searched for or what you end up buying. 

    The extension only works on desktop browsers, not mobile apps.

    Cross-Extension Compatibility

    The extension can work in conjunction with other extensions, such as Fakespot, which affixes a letter grade to any Amazon product based on the authenticity of reviews for that product. Users can use these extensions together to find Amazon brands and exclusive products and their Fakespot grades.

    The extension also works with full-page screenshot extensions, like “Awesome Screenshot & Screen Recorder.” You can use these to capture an entire search page stained by the extension.

    The Markup is not affiliated with these extensions, nor do we endorse them.

    Try It Out:

    Enhance your Amazon shopping by knowing which products are from Amazon’s own brands and exclusives.

    This article was originally published on The Markup By: Leon Yin and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


    Find books on Music, Movies & Entertainment and many other topics at our sister site: Cherrybooks on Bookshop.org

    iPadOS 15 Preview: Get Ready for AI and Machine Learning that will Blow You Away

    Boring? Are you kidding me? Time to look under the hood…

    Somewhere in the land of media herding there was a familiar refrain. iOS 15 and iPadOS 15 are “boring”. Apparently the idea behind this is that there is no single feature that changes the entire experience of the iPhone or iPad – no “killer app” or killer upgrade.

    The “boring” crowd have focused on things like “you can banish your ex from memories in iOS 15”. I saw a slew of articles with a variation on that title.

    The biggest problem with this attitude, which must have been initiated by someone who has not actually been hands-on with any of the new iOS software (still available only as a non-public beta), is that it’s not true. (A public beta is expected in July, but it is not recommended unless you are a developer testing on “non-critical” devices.)

    Why? Because there are so many killer upgrades that the avalanche of amazing new features and improvements is overwhelming. This article will attempt to illustrate that by focusing on just one feature inside one built-in app: Memories, inside the Photos app.

    First, a short digression. We have been testing on several devices, including a MacBook Pro 15” from 2017, an original first-generation iPad Pro (2015) and an iPhone XS Max from 2018. None of these machines have the new Apple Silicon chips, and for that reason they can only produce the upgraded features that don’t require them.

    That makes the improvements that are possible without buying any new hardware even more amazing. Stunningly, of the three devices we upgraded, the MacBook Pro was the most stable right out of the gate. Any beta software will have bugs, glitches and the occasional crash, but that does not prevent one from testing out the new features.

    The iPad Pro, as a non-technical observation, almost appears as if its screen resolution has been increased. That is obviously not possible, but, as you will read below, the impression could be part of a stunning emphasis on increased beauty, sensuality and luxurious feel in the new suite of OSs.

    Memory movies on iPad OS15 are an amazing example of how AI and machine learning are evolving

    For those not familiar with “Memories” they are auto-generated film clips that can be found in the “For You” tab in your photos app on iPhone and iPad. While you are sleeping this feature scans everything in your photos library and uses artificial intelligence, machine learning and neural networks to choose and edit the clips, as the name says, for you.

    One unconfirmed but almost certain technical backdrop to this is that the learning improves even between updates to the OS. Not only that, but all Apple devices on earth are “cooperating” to help each other learn. That’s a powerful force spread across more than 1.65 billion devices.

    This feature was added in iOS 12 but started to function on a much higher level in iOS 14. If you had tested and used the feature over the last few years, as we have, you’d have noticed that the AI’s ability to “see” and select photos and videos to include was limited and, at times, comical. Not anymore.

    Much of the data that clues the software in as to which photos belong together comes from the embedded metadata. The date, time and location information tells the AI that you took a group of images or videos on a particular day in a particular place.
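    As a rough illustration of that kind of metadata grouping (the field names and logic here are assumptions for illustration; Apple's actual pipeline is not public), photos taken on the same day in the same place can be clustered into a candidate “memory”:

```python
from collections import defaultdict
from datetime import datetime

# Hedged sketch: cluster photos into candidate "memories" by capture
# day and place label, the kind of metadata grouping described above.
# (Illustrative only; not Apple's actual implementation.)
def group_by_day_and_place(photos):
    """photos: iterable of (iso_timestamp, place) pairs."""
    groups = defaultdict(list)
    for stamp, place in photos:
        day = datetime.fromisoformat(stamp).date()
        groups[(day, place)].append(stamp)
    return dict(groups)
```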

    The difference in iPadOS 15 (and on iPhone too, of course) is in the more difficult tasks, such as recognizing the subjective quality of one photo versus another (humans often take several photos of the same scene to try to capture the best of the bunch) or, more importantly, recognizing who and what are the subject of a photo.

    All of this began to get interesting in iPadOS 14, when many groups of photos and videos were already being chosen, edited and enhanced by the software to a fairly impressive level.

    AI and aesthetics collide, and the result is a joy to witness

    Something that is starting to become a thread, and a definitive direction that Apple is taking, particularly with the iPad Pro series, is, true to the name, a Pro level of visual production and manipulation throughout the OS.

    Center Stage, for example, along with many other video- and photo-related upgrades, was among the big features of the newest generation of iPad Pro. Those are great, but they require a new iPad along with the OS upgrade.

    When it comes to the memory movie clips, what we found is that even on the oldest iPad Pro, from 2015, the software’s evolution, driven by the AI’s constant learning, is already a huge step forward: it does everything it was already doing, only much better.

    Apple’s upgrade took that and gave it an additional kick up a notch with something the company is known for: good taste.

    What has changed specifically?

    In iPadOS and iOS 14 there were a few things that felt awkward about the way movies were created. The biggest shortfall was in the software’s ability to deal with various aspect ratios.

    These days when we shoot photos and videos with an iPhone it is tempting and, at times, wonderful to use the vertical orientation. Other times, for landscapes and other scenes we might prefer a traditional film aspect or even use the panorama feature to get an ultra-wide screen “cinema-scope” style.

    Until now this was dealt with very poorly by the software. Mostly the photos would constantly zoom in (the so-called “Ken Burns” effect), and if shown without zooming, a vertical portrait shot would have ugly side bars (like a vertical letterbox effect).

    The zooming, and most of the effects in general, destroyed the resolution, and therefore the quality, of many photos by enlarging them.

    Additionally, the effects that were added, while cute and fun, were not much more than novelties, and not what a human editor would likely use. All of this and more gave the whole process a novelty feel that was nice to have, but many never even bothered to look at the movies the software created for them.

    That’s about to be over.

    A whole new array of options for the AI to use while trying to entertain

    In iPadOS 15, as can be seen in the photos and videos in this article, the way the software solves the aspect-ratio issue described above is genius and, dare I say it, beautiful.

    In a collaboration between the AI and the software itself, it now has a new bag of tricks to use and, boy, does it work. One fantastic feature is the letterbox generator for wide-screen photos in any aspect ratio.

    How this works is that it takes the iPad’s aspect ratio, uses the photo in its original form at 100% full resolution, and then adds a letterbox. But this is not the usual plain black bars we are all familiar with: the software and AI are able to see and analyze the photo and create a custom gradient letterbox in any shade or color.
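    One plausible way to derive such a gradient (purely a guess at the technique; Apple has not published how this works) is to average the pixels along the photo's top and bottom edges and blend between those two colors:

```python
# Speculative sketch of a gradient letterbox: average the photo's top
# and bottom edge rows of (r, g, b) pixels, then interpolate between
# the two resulting colors. (Illustrative; not Apple's implementation.)
def edge_average(row):
    """Average a row of (r, g, b) pixels into one color."""
    n = len(row)
    return tuple(sum(px[i] for px in row) // n for i in range(3))

def letterbox_gradient(top_row, bottom_row, steps):
    """Blend from the top-edge color to the bottom-edge color.
    steps must be at least 2."""
    top, bottom = edge_average(top_row), edge_average(bottom_row)
    return [
        tuple(top[i] + (bottom[i] - top[i]) * s // (steps - 1)
              for i in range(3))
        for s in range(steps)
    ]
```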

    Photos in clip above courtesy of The 2021 International Portrait Photographer of the Year
    Copyright © 2021. www.internationalportraitphotographer.com

    The effect is often astoundingly tasteful and often makes the original photo look even better. We tested it on award-winning photos (video above) and the result is, basically, art. Our own merely “nice” photos, chosen 100% by the AI and software, look amazing as well.

    Actually, all the photos and videos in the clips generated from the library look much better than I had remembered. That turns out to be because the software and AI now do automated color grading on all the photos and videos in all the generated memories!

    Color grading, also known as color correction, especially for video, has traditionally required an expensive expert and high-end software (and hardware) to enhance and color-match photos and clips that were often taken at different times and places, under varying lighting conditions, and sometimes with different cameras.

    AI and machine learning software on iPadOS 15 (and iOS 15) now has a virtual colorist actively adjusting, enhancing and color-matching your shots while you sleep. That is basically insane, and it’s probably why it appeared that the photos, and even the iPad itself, had been upgraded.
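    For a sense of what automated color matching means in practice, here is one classic, minimal technique: shift a channel's statistics toward a reference shot's. This illustrates the general idea only; Apple's actual method is not public.

```python
import statistics

# Minimal sketch of automated color matching: shift one channel of a
# shot so its mean and spread match the same channel of a reference
# shot. (Illustrative of the general technique; not Apple's method.)
def match_channel(values, ref_values):
    """Remap `values` to have the mean and spread of `ref_values`."""
    m = statistics.mean(values)
    s = statistics.pstdev(values) or 1.0      # guard flat channels
    rm = statistics.mean(ref_values)
    rs = statistics.pstdev(ref_values)
    return [rm + (v - m) * rs / s for v in values]
```

    Applied independently to each color channel of every photo and clip in a memory, even this crude version nudges disparate shots toward a common look.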

    Ok, I could go on and on about that one feature, but let’s move to some more features. There are also new effects that are added that vary with each memory (there are a lot more clips being generated, including various versions of the same idea to choose from).

    In the experiments so far the effects are clearly better and more subtle than in iOS 14. Again, in many cases I found myself saying the word “beautiful” when I tried to find an adjective to describe the results.

    For shots with a vertical bias there’s a vertical geometric split-screen effect, often with a thin black border, and it has a kind of ’60s-on-steroids feel, with the bars sliding in and out and resizing into place.

    Another effect not seen in iOS 14 is a kind of circular rotation – great for landscapes – it’s not a common effect probably because it is computationally complex, but for the AI, it’s a snap. Sometimes this effect has a kind of blur-dissolve added which makes it fun and, again, still tasteful.

    The effects are not only better and drawn from a larger bag; they also appear to evolve and adapt to the content. That is to say, the speed and depth of each changes with the music combined with the photo and video content.

    Oh, and the music. OMG. Each clip has 6 songs pre-selected and the entire clip adapts, in real time (!), when you change the song, showing you various styles and looks that match. Apparently Apple Music is also connected if you have a subscription.

    As a matter of fact, it is hard to be certain, as we have not had more than a few hours to test this, but nearly everything appears to be “live” and constantly evolving in real time. In order to “freeze” a version of a memory you have to “favorite” it (with the typical heart symbol) and then “add to memories” in order to edit it (change the name, choose more images, or remove anything that is not to your liking).

    There is so much more not yet mentioned here: this article could probably be a book

    The AI is also getting creative with names and “concepts” for the clips. For example, if you had lunch (or took photos) over the years in the same city (for me it was Knoxville, TN) it might look at the coincidence that you tended to take photos around midday in that town and then create a memory clip called “Lunch in Knoxville over the Years”. Or for example the clip at the head of this article: “Golden Hour Over The Years”.

    This is an early and primitive foretaste of the literary ambitions of AI. In the new Photos app in iOS 15 it is beginning to “think” about when, where and why humans take photos and videos, and then to conceive a story that fits the behavior it is witnessing.

    Other titles go beyond the basic “Amsterdam in 2016” and start to use the understanding, and the visual ability to “see” what is in the photo, to create a clip like: “Playing in the Snow at Christmas”. Snow? Does it know it’s cold? Maybe just that it’s white and happens in the northern hemisphere in December. This is just the very beginning of something that will evolve, hourly, from now on. I can’t wait.

    Enjoy Lynxotic at Apple News on your iPhone, iPad or Mac.

    Digging Deep Can Pay Off: Netflix Suggestion Engine Can Be Challenging

    Finding ‘The Professor and the Madman’ was an exception to the often frustrating process

    How many times have you searched or browsed the various suggestions prepared for you by the Netflix algorithm, only to get lost in confusion? Perhaps it’s a little like a self-driving car or a spell-checker: when it works, you feel magically guided to your destination (or spelling), but when it doesn’t, you are likely in trouble.

    Choosing the newest or the most watched is no fail-safe either. Often, when a better movie rises organically into the Netflix top ten, it’s an older film that people discovered all at once, for some reason, rather than a new release or “original” production.

    Such was the case when, after a series of unwatchably depressing choices, I stumbled on “The Professor and the Madman”.

    In this time of mandatory streaming, big screen production values are more important than ever

    The film is based on a beloved book of the same name, originally published as “The Surgeon of Crowthorne: A Tale of Murder, Madness and the Love of Words” by British writer Simon Winchester, first published in England in 1998. For the USA and Canada the title was changed to “The Professor and the Madman: A Tale of Murder, Insanity, and the Making of the Oxford English Dictionary”.

    Unlike many featured Netflix titles, which come across as budget-conscious direct-to-streaming productions, the first thing noticeable in the opening sequence is that this is a “real movie” with a serious cinematic presentation. It only gets better from there.

    Starring Mel Gibson, Sean Penn, Natalie Dormer, Eddie Marsan, Jennifer Ehle, Jeremy Irvine, David O’Hara, Ioan Gruffudd, Stephen Dillane, Laurence Fox, and Steve Coogan, there’s a rare combination of megastar acting talent in a setting that is both age-appropriate (the lead characters are both late in life as the drama unfolds) and produced with absolutely impeccable ensemble acting.

    Unlike so many films whose concept appears to be half based on a calculation in the production budget (for example, “An Imperfect Murder” and “The Midnight Sky”, which seem to reduce the number of characters and the screen time as a way to produce something with a higher chance of recouping costs and turning a profit, rather than from any artistic or aesthetic inspiration), “The Professor and the Madman” is a full cinematic experience that translates to any screen.

    https://youtu.be/DxTAGf6-Av8
