I recently attended a monthly meeting with Ally Machate, who runs The Writers Ally, a book editing, publishing, and marketing firm. People ask her anything related to the book publishing and editing worlds. One guy asked about artificial intelligence, and in listening to Ally’s reply, something dawned on me about large language models, or LLMs.
They are like calculators, only with words.
I suddenly remembered the old (and quickly forgotten because it was quickly obsolete) Schoolhouse Rock segment entitled “Scooter Computer and Mr. Chips.”
“Some people assume that simply because a computer can gobble up all kinds of numbers and facts and figures and whatever data you happen to feed it, some people assume because a computer knows how to remember instructions and data, whatever it’s told, and deliver it back whenever you need it as quick as a wink, some people assume a computer can think. … I’m not equipped to be smart. I’m not equipped to think.”
I can’t believe I am the only one to ever make this connection, I thought, so I researched—and found I wasn’t.
In fact, several sources I found online credit OpenAI CEO Sam Altman with calling generative AI “a calculator for words.”
This has opened a debate. Critics say a true calculator does exactly what it’s programmed to do: kick out numbers based on the mathematical principles with which it has been programmed. That’s why every time a person punches in “23 x 25,” the answer comes back 575. No guessing.
AI, on the other hand, infers, guesses, hallucinates, persuades, educates, opines, undermines, and kicks out both reliable and unreliable information because it was trained by humans on data full of inconsistent principles, biases, opinions, and information.
“(T)he analogy of AI as a word calculator has been criticized for obscuring the troubling aspects of generative AI,” according to an article on Gigazine.net. “(C)alculators have no built-in biases and are not prone to errors or ethical dilemmas, but (with) generative AI (they are) unavoidable.”
This is a valid point. However, here are eight reasons AI is a word calculator: it’s not conscious, not sentient; not living, breathing, or thinking.
1. It predicts the next word, not the idea. At its core, the model calculates which word is statistically most likely to come next based on patterns in data. That is how AI can generate complete sentences: these LLMs have gobbled up enough data to string together words that form an understandable sentence, in any language for which the LLM has enough data.
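To make that concrete, here is a minimal sketch of that kind of prediction. It’s my own toy illustration with a made-up corpus, not how any real LLM is built, but the principle of counting patterns and picking the likeliest next word is the same:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text an LLM "gobbles up."
corpus = "the cat sat on the mat and the cat ate".split()

# Count which word follows which: pure pattern-counting, no understanding.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it followed "the" most often
```

Scale the corpus up to most of the internet and the sentences get remarkably fluent; the mechanism doesn’t change.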
2. It doesn’t understand meaning. A calculator cannot tell you the significance of the number three, but many humans have decided there is something magical and mystical about that number. These humans have decided that there is meaning in the number three.
What AI does is manipulate symbols that carry meaning for humans, but it doesn’t actually “know” what those symbols refer to in the real world.
3. It’s all math. What looks like words coming out of an AI program such as ChatGPT really emerges from numerical optimization, not understanding.
Text is first turned into vectors, which are long lists of numbers that encode statistical relationships between words and concepts. These vectors are combined and transformed using matrices and weights, which determine how strongly different pieces of information influence one another. When the model produces an output, it’s not choosing a sentence for its meaning; it’s selecting the next token based on which option has the highest probability given the current numerical state.
Those weights didn’t come from insight or comprehension. They were learned through numerical optimization—specifically, adjusting millions or billions of parameters via gradients to minimize error across vast amounts of training data. If a prediction is wrong, the math nudges the weights slightly so future predictions are more statistically aligned with observed patterns.
What looks like fluent language is really the surface effect of layered mathematical operations converging on likely sequences. Meaning, coherence, and style emerge from the math—but under the hood, it’s all vectors being multiplied, added, and optimized, step by step.
Remove the math and there’s no “there” there. There’s no “mind” left over once you strip away the calculations—just equations producing text.
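For the curious, here is a minimal numeric sketch of that pipeline, with invented toy numbers rather than anything from a real model: a context vector is multiplied against weights to produce scores, the scores become probabilities, and the highest-probability token wins.

```python
import math

# Hypothetical 3-number context vector (toy values, invented here).
context = [0.2, -1.0, 0.7]

# Hypothetical learned weights: one row of numbers per candidate token.
weights = {
    "cat":  [1.5, 0.3, -0.2],
    "mat":  [0.1, -0.8, 2.0],
    "math": [-0.5, 0.6, 0.4],
}

# Score each token with a dot product: multiply and add, nothing more.
scores = {tok: sum(c * w for c, w in zip(context, row))
          for tok, row in weights.items()}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(s) for s in scores.values())
probs = {tok: math.exp(s) / total for tok, s in scores.items()}

# The "choice" is simply the highest probability; no meaning involved.
print(max(probs, key=probs.get))  # "mat"
```

Real models do this across thousands of dimensions and billions of weights, but the operations are the same: multiply, add, normalize, pick.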
4. No beliefs, intentions, or goals. I’m reminded of Bill Clinton: “It depends on what the meaning of the word ‘is’ is.” That’s an example of a human making a comment that betrays an obvious agenda.
However, a calculator doesn’t want the answer to be 575; it just computes it. LLMs are the same way: there’s no internal purpose beyond kicking out the information the searcher seeks.
5. If there’s context, it’s statistical, not experiential. When an LLM “remembers” context across a series of requests, it’s tracking past conversation, not recalling lived experiences. The program uses probability, a fundamental branch of mathematics, to produce answers consistent with what it has seen before.
LLMs are not Skynet.
6. Probability over reasoning. If a calculator had a personality, it would probably be very cocky because it gets the answer right every single time. Since an LLM optimizes for plausible text and doesn’t understand what “truth” is, it can kick out fluent, ridiculous nonsense and still string the words together in a way that sounds both correct and confident. In reality, that confidence is a sign that the program’s use of probability is dominating.
Another example: Given the same prompt and settings, outputs follow predictable probability distributions, just like math functions. Ask the same question under the same settings and you get the same answer, as the sketch below shows.
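Here’s a minimal sketch of that determinism, using an invented three-word distribution instead of a real model: fix the settings (here, the random seed) and the sampled output repeats exactly.

```python
import random

# Hypothetical next-word probabilities for some prompt (toy values).
words = ["yes", "no", "maybe"]
probs = [0.6, 0.3, 0.1]

def answer(seed):
    # Same seed + same distribution = same output, like a math function.
    rng = random.Random(seed)
    return rng.choices(words, weights=probs, k=5)

print(answer(42))
print(answer(42))  # identical both times: deterministic given the settings
```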
7. It doesn’t learn like a human. Humans discover concepts by interacting with the world (as I did in listening to Ally and realizing LLMs are word calculators). AI can’t do that. It can only ingest correlations from the massive amounts of data it has been fed, then use probability to give the “correct” answer.
8. It’s not grounded in reality. Ask a human to think of an apple and the human will create an image in the mind. That same human will also probably make associations to that apple: how it smells, feels, looks, tastes, plus any memories associated with the fruit (for me, it’s Passover and fond memories of my mother’s charoset).
All an AI program can do is generate a picture or a description of an apple.
Ask AI “What is pain?” and be prepared for a definition and some examples of types of pain. Ask a human the same question and be ready to hear of specific times in that human’s life when he/she/they experienced pain.
To AI, words are just clusters of data unless connected to senses, bodies, or experiences, which AI doesn’t have. Humans, however, do.
I’m not about to say one should take AI and LLMs at their word. These programs still can’t reliably handle nuance, subtlety, emotion, or facts. Humans can, and it is up to humans to verify that what an LLM tells them is accurate.
Feel free to check out my other posts related to ghostwriting at https://leebarnathan.com/blog/.