You are viewing a single comment's thread from:

RE: Grapes 🍇 : A Taste of History in Every Bite ! Health Benefits 🍇

in Hindwhale Community • 2 years ago (edited)

It's interesting. From these comment threads, I have now bookmarked four detection links, and I've played around with all of them a little. I'm not sure that any particular one is obviously better than the others (but they're all FAR better than the one I had before. ;-)

Basically, if any one of them says it was created by a human, that on its own doesn't seem to be a very strong claim. You really need two or three or four to agree. So, as with plagiarism, it's going to be time-consuming to check for this, and especially difficult to do at scale (without automation).
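To make the "two or three or four need to agree" idea concrete, here's a rough sketch of the voting logic in Python. The detector functions and scores are placeholders (I've only ever used these sites through their web forms, so the numbers would be copied in by hand); treat it as an illustration of how the cross-check could work, not a working integration:

```python
# Hypothetical sketch: combine verdicts from several AI-text detectors.
# The score values are assumed to be entered by hand from each website;
# nothing here calls a real detector API.

def looks_ai_generated(scores: list[float], threshold: float = 50.0) -> bool:
    """Treat the text as suspect only when a majority of detectors agree.

    `scores` holds one "percent AI" figure per detector (0-100).
    """
    votes_for_ai = sum(1 for score in scores if score >= threshold)
    return votes_for_ai > len(scores) / 2

# Example: scores transcribed by hand from three detector websites.
manual_scores = [97.25, 99.69, 53.0]
print(looks_ai_generated(manual_scores))  # True - the majority say "AI"
```

A single "human" verdict only flips the answer if it breaks the majority, which matches the point above that one tool on its own isn't a strong claim.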

So if English isn't the author's strength, an impeccably written article in English from them is unlikely.

This is often a big indicator to me. If the writing style in the comments is different from the writing style in the top-level post, that grabs my attention.


I've now bookmarked this comment so that I've got easy access to those 4 bookmarks 😉

There's definitely a big advantage in being a native English speaker. Even before the author has replied to a comment, there's a soulless feeling to any article that is AI generated or plagiarised. This becomes even more obvious when you compare it to their early content, when they were most likely making an effort.

And like you say, it's time-consuming, and that time would be better spent reading content from authors who deserve the attention - especially when it's impossible to prove with 100% certainty... and the author inevitably denies it!

Perhaps few people will look into these cases in depth, but it is necessary to pay attention to them so that people don't abuse the content-creation tools some applications provide. I think there should be a general practice of flagging each confirmed AI case so that others avoid supporting them. That way we avoid wasting a lot of time deciding what to support, and we also avoid supporting content created by a computer.

I've checked the four detection links with ChatGPT-generated texts and my own posts:

AI Text Classifier marks all of them as AI generated ;-((
AI Content Detector doesn't work ;-((
ChatGPT Detector by ZeroGPT and GPT-2 Output Detector give accurate results (but the latter is only a partly functional demo version...)

My recommendation is clear: ChatGPT Detector by ZeroGPT is the most trustworthy tool for me. Maybe @the-gorilla can confirm or contradict...?

I've only done a very quick test (it's past my bedtime) and initial results are fairly consistent. A couple of them have character limits, and I tend to get more consistent results when analysing 3 or 4 paragraphs.

I took the "Some Definitions" section from my "lazy or unmotivated" article...

✅ AI Text Classifier said it's "Very Unlikely" to be AI Generated
✅ GPT-2 Output Detector gave me a 99.98% REAL rating
✅ AI Content Detector gave me a 100% human content rating
😕 ZeroGPT said it was 24.5% AI generated - one paragraph that I wrote and one which was quoted from another source.

If I then generate 3 paragraphs with the prompt "write 3 paragraphs about confused pandas":

✅ AI Text Classifier said it's "Likely" to be AI Generated
✅ GPT-2 Output Detector gave me a 99.69% FAKE rating
💩 AI Content Detector gave me a 47% human content rating
✅ ZeroGPT said it was 97.25% AI generated

My thoughts are that submitting 3 to 4 paragraphs of text will give a better result than an entire article (because the article could be 5 or 6 prompts combined, or the tool will just get confused by a large amount of data), and my brief test shows that none of the tools are terrible.
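For anyone who wants to automate that chunking, here's a small sketch. It assumes paragraphs are separated by blank lines, which is just my guess about how most posts are formatted, and the chunk size of 4 paragraphs is the rough figure from my test rather than anything official:

```python
# Sketch: slice an article into small, detector-sized chunks so that each
# submission stays within the tools' character limits.

def split_into_chunks(article: str, paragraphs_per_chunk: int = 4) -> list[str]:
    """Split on blank lines and group paragraphs into chunks of a few each."""
    paragraphs = [p.strip() for p in article.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i:i + paragraphs_per_chunk])
        for i in range(0, len(paragraphs), paragraphs_per_chunk)
    ]

# Each chunk would then be pasted into the detectors separately and the
# per-chunk results compared, rather than judging the whole article at once.
```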

I think that, like @remlaps suggests, a combination of tools and a bit of brain power should lead to a decent estimation as to whether something's AI generated or not.

a combination of tools and a bit of brain power should lead to a decent estimation as to whether something's AI generated or not.

One thing that worries me is the combination of these tools with language translation websites. I'm under the impression that translation websites use the same sort of statistical/word-frequency analysis as LLMs, so I'm not sure how the AI detection sites would react to a legitimate article after machine translation.

I tried taking an English article, using Google Translate to translate it to German, then Spanish, then back to English, and passed the result through the "GPT-2 Output Detector". The detector did alright with it, but I'm hoping to find time to run some more trials.
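For what it's worth, the round-trip experiment could be scripted roughly like this. The translate() helper is only a stand-in for whatever translation service gets used (I did my trial by hand in Google Translate), so this just shows the shape of the test:

```python
# Sketch of the round-trip translation experiment: EN -> DE -> ES -> EN,
# after which the result would be pasted into the GPT-2 Output Detector.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a real translation step (I used Google Translate by hand)."""
    raise NotImplementedError("stand-in for a translation service")

def round_trip(text: str, hops: tuple[str, ...] = ("de", "es")) -> str:
    """Translate English text through a chain of languages and back to English."""
    current, lang = text, "en"
    for hop in hops:
        current = translate(current, source=lang, target=hop)
        lang = hop
    return translate(current, source=lang, target="en")

# The round-tripped text then goes through the detector to see whether
# machine translation alone is enough to make human writing look "FAKE".
```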

Thank you very much - I'm really confused by this new problem...


I am as confused as you are, but I can't help it, so I will see how far it goes and in which direction. I can see that some of the newbies who have been handed the charge of moderating will get even more confused. And from the way they are reacting on this page, I imagine you will see the result sooner than you expect.


I can see that some of the newbies who have been handed the charge of moderating will get even more confused.

You were right @dove11, just check out this post and you will know. 😄
