Should Google Block all ChatGPT Generated Content?

Most agree that it’s possible to identify AI-generated content, but what about partial and hybrid work?

There’s a gold rush of sorts in the peripheral app market, with an explosion of tools offering various ChatGPT-related services. On one side are AI detectors, meant to identify “non-human” text; at the other end of the spectrum are rephrasing tools that help “cheaters” in schools, or perhaps just blog writers, disguise AI origins with minimal (human) work.

Many assume that Google does, or will, block or penalize any AI-generated content in search results. At the same time, companies like CNET and, as recently announced, Buzzfeed, along with undoubtedly many others, are already publishing or developing content written wholly or partially by AI.

Of course, many publishers have been using computer-generated content for years in areas like stock market and weather updates, where human-written text would be too expensive given the high volume of ever-changing data points.

It is also questionable whether the various apps even function as advertised. For example, if a rephrasing app can fool a public AI detector, can it also fool Google’s proprietary software?

The bigger issue is: why?

Derivative content and low-quality information have been staples of the online experience since the first bulletin board chat rooms of yore. Even if it were possible, is it necessary or clearly desirable to filter all content that has an AI component out of Google search results, for example?

Could this also be seen as an anticompetitive move to reduce the value of a competitor’s software? And what percentage of AI “infection” should be considered taboo? 90%? How about 47%? And if those percentages are calculated using tools that themselves have an error rate of 10% or more, what is really happening?
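To make those numbers concrete, here is a rough back-of-the-envelope sketch. The 10% false positive and false negative rates, and the share figures, are assumptions for illustration, not benchmarks of any real detector; the point is simply how detector error distorts a reported “percentage of AI content”:

# Illustrative only: assumed 10% false positive and false negative rates,
# not measurements of any real detector.

def measured_ai_share(true_share, false_positive_rate=0.10, false_negative_rate=0.10):
    """Fraction of passages a detector would flag as AI, given the true AI share."""
    flagged_ai = true_share * (1 - false_negative_rate)      # real AI text it catches
    flagged_human = (1 - true_share) * false_positive_rate   # human text it misflags
    return flagged_ai + flagged_human

for true_share in (0.0, 0.1, 0.47, 0.9):
    print(f"true AI share {true_share:.0%} -> detector reports {measured_ai_share(true_share):.0%}")

# true AI share 0% -> detector reports 10%
# true AI share 10% -> detector reports 18%
# true AI share 47% -> detector reports 48%
# true AI share 90% -> detector reports 82%

In other words, under these assumed error rates a fully human-written site could still register as 10% “AI,” which is exactly why any threshold-based penalty built on such tools would rest on shaky ground.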

Will Google reward humans for not using AI assistance, at least not directly, in content creation or publishing? If not, how will those humans, laboring at a disadvantage, be able to compete with those who do take the assistance?

ChatGPT and other AI tools are already in circulation and growing

The output of ChatGPT is usually somewhat bland and, depending on the subject matter, often gleaned from obvious sources; it is designed to avoid misinformation by steering clear of unusual, “untrusted” sources.

This creates text that reads as “safe” and, while generally highly readable and grammatically accurate, is not particularly creative, at least not by human standards. My human prediction is that, with the horse already out of the barn, so to speak, no sensible filter could keep AI-generated text or AI-influenced content from existing side by side with “pure” human-generated content.

As Google has indicated in tweets, content is content: if it is well written and helpful for humans to read and use, it is just as good for consumption, with or without AI assistance. The company points out that what it hopes to filter is mainly content manufactured specifically to trick search engines, not all AI-assisted content, even if blocking it all were possible. Simple, right?

No chatbots were harmed in the making of this content

All that’s left, then, is the question of the best uses for AI in the writing and content-publishing process. Best not in the sense of having AI “fingerprints” or not, but rather in the quality and usefulness of the final product. Same as it ever was. Right?

