Even One Great Newspaper Knows AI’s Bogus


I’ve written several times that when it comes to ghostwriting, humans trump artificial intelligence, or AI, every single time. I recently read an article in the Los Angeles Times that wasn’t even about AI but nonetheless further reinforced my notions.

The article’s author, Hossam Elsherbiny, is director of language instruction for the Department of Transnational Asian Studies at Rice University. He wrote about how technology that can quickly translate a foreign language into one we understand does not make learning a language irrelevant and moot. In fact, as he wrote, “live translation doesn’t equal understanding. What matters in real encounters is how something is said and what it signals: respect, doubt, humor.”

I agree one hundred percent. I have previously written that a human being can do nuance, subtlety, emotion, and context; artificial intelligence can’t. A human can look at a statement and know if it’s true or not; artificial intelligence can’t. If a human doesn’t know the truthfulness of a statement, a human can look it up; no chatbot can.

Here are examples from Elsherbiny’s article in italics, plus my comments and reactions:

Trust: Understanding isn’t only lexical; it’s relational. A social worker talking to a refugee, a nurse speaking with a patient’s family, a journalist interviewing a source. These rely on earned credibility. You don’t get that by holding up a phone.

You also don’t get that by relying solely on artificial intelligence. The internet is filled with moments artificial intelligence got people in trouble. 

● Air Canada was ordered to compensate a passenger after its chatbot gave him incorrect refund information.

● A Chevrolet customer tricked a chatbot into agreeing to all requests, so the bot agreed to sell a new Tahoe for a dollar and make it a legally binding offer.

● A New York lawyer was sanctioned for citing fictional court cases created by ChatGPT.

● The National Eating Disorders Association removed its chatbot after it recommended weight reduction, calorie tracking, and body fat measurements—practices that could worsen conditions for people with eating disorders.

● Apollo Research proved AI could perform insider trading and then lie about it.

● A New York City chatbot that was supposed to help small businesses deal with city bureaucracy instead gave incorrect information, such as “restaurants could serve cheese eaten by a rodent (provided they) informed customers about the situation.”

● The Chicago Sun-Times published a summer reading list of fifteen titles that included only five real books.

● A New Zealand supermarket’s AI meal-planner app suggested “Oreo vegetable stir-fry,” a chlorine-gas drink, “poison bread sandwiches,” and mosquito-repellent roasted potatoes.

● Google’s AI Overview suggested we eat rocks to ensure we get our minerals, and to use non-toxic glue in pizza sauce to keep the toppings on.

● Microsoft’s chatbot posted racist content on the site then known as Twitter.

● Amazon’s AI penalized job resumes from women.

● A user asked Grok for instructions on how to break into a person’s house, and the chatbot told the user to bring “lockpicks, gloves, flashlight, and lube — just in case.” The chatbot also analyzed the target’s posting schedule on X and told the user, “He’s likely asleep between 1 a.m. and 9 a.m.”

Tone and stance: Machines map words; people read intentions. “You did great” can be praise, sarcasm or comfort. In Arabic, as in English or Spanish, a single word can soften a refusal or sharpen it. Students must practice those shades in conversation to use them.

Reading intentions is a distinctly human trait. Artificial intelligence can’t look at a word or phrase and grasp its context; a human can hear a sentence and know what the tone conveys.

Humor and metaphor: Puns, idioms and cultural references are brittle. “It’s not my first rodeo” lands very differently outside a U.S. context; so does a line from an Egyptian sitcom. Even the best systems paraphrase, but they rarely replicate the comic timing that builds rapport.

Not only that, but AI can’t take a joke—or recognize one, for that matter. Humor varies from place to place. According to Lydia Chilton, a professor of computer science at Columbia University, ChatGPT isn’t quite “as good as people yet.” She explained that AI can analyze a joke’s structure: setting, setup, punchline that subverts expectations. But artificial intelligence can’t make the punchline funny.

Register and power: We switch between formal and informal speech all day—talking to a dean, a friend, police. In many languages, that shift is built into grammar and vocabulary. Translation apps blur those choices; students learn to navigate them.

This falls under subtlety and nuance. A human can hear a conversation and react; AI can only analyze.

Here’s an example: When I was in college, I heard one woman say to her friend, “Well, do you, like, like him?” I immediately rolled my eyes at this most un-collegiate way of talking. I have told this story to friends over the years, and we all get a laugh from it. My friends sometimes roll their eyes, too.

For this post, I asked ChatGPT how it would react if it heard that, and it responded thusly: “I’d recognize that as a pretty classic and playful way to ask if someone has a crush or romantic feelings for someone. … No judgment, just a moment of social observation: two friends talking about feelings the way friends do.”

This is exactly my point: Humans behave; AI can only evaluate.

Dialect and place: No one speaks a “standard language” all the time. Tunis and Beirut don’t sound the same. Hospitals on L.A.’s Westside and clinics in Bakersfield don’t either. Classroom practice teaches students to listen for local cues and respond with respect.

AI can know how people in Tunis and Beirut say hello. When I queried, I learned Tunisians say “Aslema,” while Beirutis say “Marhaba.” But AI can’t completely understand the subtlety, nuance, and context of how the words are said. AI can’t understand tone, pitch, or volume; a human can.

As I’ve said many times, until AI becomes like Skynet, humans will always have a place in ghostwriting.

Feel free to check out my other posts related to ghostwriting at leebarnathan.com/blog.

Let's Start A New Project Together

Contact me and we can explore how a ghostwriter or editor can benefit you.