Even though it’s only mid-November, it isn’t too early to think about next year.
As a ghostwriter who last week wrote about working on the business versus working in the business (read it here), I realize now is the time to start considering how I will innovate and continue to grow my business.
One way I mentioned in my post was to keep track of what artificial intelligence, or AI, is doing. That reminded me to check AI-detection programs to see if my writing is still seen as human.
I put parts of last week’s column—entirely written by me but with research help from AI—through ten different programs, sometimes more than once.
First, the unsurprising outcomes: Five programs correctly determined my copy was 100% human: Copyleaks, Grammarly, Quillbot, Undetectable, and Scribbr, although Scribbr seemed to be using Quillbot’s program.
ZeroGPT declared my writing was 99.1% human. Originality determined it was 98% human.
Then came the surprises.
Phrasly first computed my writing to be 100% human, but then I pasted a different part of the post, and it came back 83% human.
Decopy declared with 53% probability that my writing was AI generated. Huh?
Just Done really threw me. First, my copy showed 81% signs of AI generation, then 73%, then 70%. What?
Even weirder, Just Done offered to make my content 100% original. Wait, you're offering me the chance to take my human writing and make it 100% human?
When I took them up on the offer, I was asked to pay. I didn't.
The site Trade Press Services reported that it ran the U.S. Constitution through an unnamed AI detector, which determined the document was 98.53% likely to be AI-generated. The point: a human can recognize a famous human-written document; an AI detector can't.
I had to find out why there was such inconsistency. I found an answer on community.openai.com from 2023: “Even OpenAI recently took down their tool because it was not reliable enough.”
What’s the deal?
How AI detectors work
To start, a quick refresher on large language models, or LLMs. Basically, LLMs “learn” by being fed massive amounts of information and data so they can pick up basic and complex language patterns (in other words, so they can sound like humans). The largest LLMs are generative pre-trained transformers (GPTs), the technology that makes chatbots possible. Put in enough info and data, and LLMs start to sound more and more like humans.
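For anyone curious what “learning patterns” looks like in practice, here is a toy sketch. It is nothing like a real LLM (those use neural networks trained on enormous datasets), but it shows the basic idea of learning from examples by counting which word tends to follow which:

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that learns which word tends to
# follow which by counting pairs in a small sample of text. Real LLMs use
# neural networks trained on vastly more data, but the principle of
# learning patterns from examples is the same.
sample = "the writer edits the draft and the editor reviews the draft"
words = sample.split()

follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

# Ask the "model" what most often comes after "the"
print(follows["the"].most_common(2))  # e.g. [('draft', 2), ('writer', 1)]
```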
However, several online sources I found said that not only is there no universal algorithm used to “teach” these LLMs, there is also no transparency in how they work. According to the SEO firm Surfer, AI detectors use four techniques: classifiers, embeddings, perplexity, and burstiness.
Classifying text means sorting it into recognizable categories the LLMs have learned. Embedding means turning words into numbers (vectors) that show how meanings are related.
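To make “turning words into numbers” concrete, here is a small sketch. The vectors are invented for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions:

```python
import math

# Illustrative only: hand-made three-number "embeddings". Real embeddings
# are learned vectors with hundreds of dimensions.
embeddings = {
    "writer": [0.9, 0.1, 0.3],
    "author": [0.8, 0.2, 0.3],
    "banana": [0.1, 0.9, 0.7],
}

def cosine_similarity(a, b):
    """Higher value = more related meanings (vectors point the same way)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["writer"], embeddings["author"]))  # high (~0.99)
print(cosine_similarity(embeddings["writer"], embeddings["banana"]))  # lower (~0.36)
```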
Perplexity measures how unique a sentence is compared to what appears elsewhere in the data and info LLMs are fed. Burstiness measures how much sentences vary (or don't) in length and structure.
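Here is a rough sketch of the perplexity idea. It is a simplification (real detectors use a language model's own word probabilities, which I have simply made up here), but it shows the logic: the more predictable each word is, the lower the score, and the more machine-like the text looks:

```python
import math

# Simplified illustration of perplexity: assume we already know the
# probability a model assigned to each word in a sentence (these numbers
# are invented for the example). Predictable text -> high probabilities ->
# low perplexity; surprising text -> low probabilities -> high perplexity.
def perplexity(word_probabilities):
    log_sum = sum(math.log(p) for p in word_probabilities)
    return math.exp(-log_sum / len(word_probabilities))

predictable = [0.9, 0.8, 0.85, 0.9]   # a very "expected" sentence
surprising  = [0.2, 0.05, 0.1, 0.3]   # an unusual, human-quirky sentence

print(perplexity(predictable))  # low  (~1.2) -> looks more AI-like to a detector
print(perplexity(surprising))   # high (~7.6) -> looks more human
```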
The problems with each
Of course, there are problems with each. With classifying, if those categories are filled with incomplete, inaccurate, or inconsistent data, AI won’t be able to tell what’s factual, what’s partly true, and what’s completely false. Also, categories and systems can fall prey to bias, discrimination, lack of fairness, lack of consistency, and privacy issues.
Embeddings generalize text, so some specific words, details, and less dominant themes are lost or rarely presented. So, if there's an obscure fact, it will be harder for AI to find when a human asks for it. Also, embeddings can't always differentiate between meanings of the same word. For example, “cold” can refer to temperature or be a slang term to describe a person's behavior.
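A quick illustration of that limitation, assuming the simplest kind of embedding (a fixed lookup table): “cold” gets exactly the same numbers no matter the sentence, so the two senses blur together. Newer contextual models do better, but this shows why the problem exists:

```python
# Illustration only: with a static word-embedding lookup, "cold" maps to
# one fixed vector regardless of context, so the two senses below are
# indistinguishable to the model. (The vector is invented for the example.)
static_embeddings = {"cold": [0.4, 0.6, 0.1]}

sentence_1 = "the coffee went cold"   # temperature
sentence_2 = "his reply was cold"     # unfriendly behavior

vector_1 = static_embeddings["cold"]
vector_2 = static_embeddings["cold"]

print(vector_1 == vector_2)  # True: the model sees no difference in meaning
```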
One problem with perplexity is that the more often an LLM has seen the same sentence, the more likely a detector is to call it AI. This is also where AI creates false data (called “hallucinations”), such as fake court cases, or can't tell that the same misrepresentation, repeated over and over, is false.
Burstiness might be the best way to tell the difference between human writings and AI copy because, per Illinois State University, “We, as humans, do not tend to write sentences of exactly the same length. We’re taught in writing courses that we should vary our sentence structure and length both for rhetorical impact and to keep our writing from being monotonous.”
The problem is that machines don't know how to do this, which is why human writing is so much more dynamic than AI copy. But another problem is that some genres are monotonous. Illinois State mentions memos and policy documents. I would add research papers written in passive voice.
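If burstiness really does come down to how much sentence lengths vary, a crude version of the check might look like this. It is only a sketch; real detectors are more sophisticated:

```python
import statistics

# Minimal sketch: treat "burstiness" as the spread of sentence lengths.
# Human writing tends to mix short and long sentences; flat, uniform
# lengths look more machine-like (or like a monotonous genre such as a memo).
def sentence_lengths(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    return statistics.pstdev(sentence_lengths(text))  # higher = more varied

varied = "I waited. The detector mulled it over for what felt like an hour. Verdict: human."
uniform = "The tool scans the text. The tool checks the words. The tool gives a score."

print(burstiness(varied))   # larger spread in sentence lengths
print(burstiness(uniform))  # zero: every sentence is the same length
```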
In an article called “The Dark Side of AI Detectors: Why Accuracy is not Guaranteed,” Gerri Knilans quoted Surfer SEO writer Petar Marinkovic: “AI detectors don’t understand language as well as humans do. They only rely on historical data from their training sets to make predictions as confidently as possible.”
What ghostwriters can do
The answer seems obvious: stress the advantages of human writing, which include emotional depth, empathy, creativity, originality, context, cultural understanding, accountability, knowing what a fact is, ethical judgment, authentic voice, relatability, and incorporating lived experiences.
MIT has a series of recommendations aimed at students, but I think they apply to ghostwriters, too.
• Set clear policies and expectations: Tell clients if you're using AI and how, or that you absolutely won't use it. Some will be okay with AI; others won't. Some will applaud that you're keeping it real, so to speak; others might want you to speed up the process and use AI accordingly.
• Promote transparency and dialogue: Answer all questions clients have about AI. Explain your rationale for using or not using it. If you use it, cite it so you aren't accused of anything.
• Include an outline: A ghostwriter does this anyway, but in addition to determining what goes in which chapter, note what AI-generated material will appear and where.
I still recommend keeping all writing human because humans can still do so much that AI can't, AI writing is still average compared to what an individual human is capable of, and AI work can't be copyrighted. But if a ghostwriter serving Florida (or a client) wants to use AI, keep these suggestions in mind.
Feel free to check out my other posts related to ghostwriting at leebarnathan.com/blog.