Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

02 December 2025

Is an “AI” Really “Intelligent”?


Don’t get me wrong. After some hesitation, I’ve taken to AI as a duck takes to water. I seldom use “normal” online searches anymore. Most “normal” online search results now appear in an order determined by how much for-profit providers of goods or services have paid. I don’t want to know who paid the most to capture my eyeballs; I want to know which product or service best matches the Boolean logic of my search.

Online search engines no longer provide that information, at least not without the “pollution” of many—if not most—annoyingly unresponsive “hits” that someone paid for. They don’t even come close to realizing electronic computers’ promise of instantaneous, accurate information. Instead, they waste my time and make me angry.

Hence, on to “AI.” But what is AI really? Is it really an electronic simulacrum of human intelligence, as the name “artificial intelligence” suggests? I doubt it. Bear with me for a few minutes while I explain why.

So-called “AI” companies go to great lengths to enlist lawyers and “cybersecurity” experts to keep the details of their “AI” protocols and algorithms secret. But as far as I can tell, they all work pretty much the same way.

They all base their results on a gargantuan, online, digitized “concordance” of the English language (or whatever other languages they may digest). In essence, this concordance records how frequently each and every other word “normally” occurs next to, or near, a subject word. In some, if not most, cases, the concordance is expressed in verrrrrry long polynomials, in which the weight of each factor represents the probability of a particular word occurring within a certain number of words before or after the word that the polynomial represents.

These polynomials represent the “meanings” of the words associated with them only by association with other words. An AI has no idea what a chair actually is (although it may also have pictures in its memory); it only knows that its most closely associated terms are “table” and “sit” (and their cognates), and that “sitting” is something that people do often, especially to eat. In other words, so-called “AI” attempts to “understand” sentences in human languages simply by gauging, and recording in memory, how often and in what combinations words come together in normal writing.
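To make the word-association idea concrete, here is a minimal sketch in Python of the sort of co-occurrence counting I have in mind. It is my own toy illustration of the general principle, not any actual AI’s code; real systems replace these raw counts with billions of learned numerical weights.

    from collections import Counter, defaultdict

    # Toy "concordance": count how often each word appears within a
    # fixed window of words around every other word in a small corpus.
    def build_cooccurrence(corpus: str, window: int = 2) -> dict:
        words = corpus.lower().split()
        counts = defaultdict(Counter)
        for i, word in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[word][words[j]] += 1
        return counts

    corpus = ("people sit at the table to eat "
              "the chair is at the table "
              "people sit on the chair to eat at the table")
    counts = build_cooccurrence(corpus)

    # The "meaning" of chair, in this scheme, is just its neighbors:
    print(counts["chair"].most_common(3))
    # The highest counts are nearby words like "the" and "eat" --
    # association, not understanding.

The point of the toy is the same as the point of the real thing: the machine records which words keep company with which, and nothing more.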

People who work in science and engineering have a word for this sort of thing. It’s called a “kludge” (pronounced “klooj”). That means a solution to a practical problem that works but is inelegant, awkward (sometimes spectacularly so), and often grossly inefficient. Getting machines to “understand” human language in this way is like scratching your left ear with your right hand, but multiplied by many orders of magnitude (powers of ten). It’s not just a paradigmatic kludge; it’s probably the greatest, most expensive and most illustrious kludge in human history.

But it does work. Why? Because the transistors on AI chips (mostly made by high-flyer Nvidia) work anywhere from a thousand to a million times as fast as what computer nerds derogatorily call “wetware,” i.e., you, me and our brains. Wetware uses far, far less energy and thus produces infinitely less planetary heating per paragraph read and “understood.” But, hey, when has permanently changing our planetary climate (or anything else) held Silicon Valley back?

So what should we call this kludge? Is it really, as everyone (especially its purveyors!) says, artificial “intelligence”?

It’s not easy to answer this question, especially if you haven’t yet overcome your initial awe at having a machine answer your seemingly arbitrary questions in natural human language and give you online links to further relevant citations. But think about what “AI” actually does, and potential answers become clearer.

A recent search of mine illustrates the problem. I was trying to track down a medical professional who had run tests on me within the last year, in order to recommend her to someone else. Calling her telephone number of record produced a “she’s no longer here” reply but no useful contact information.

So I queried an AI and got a reference to that institution and a link to her page (still up on its website), which no one had bothered to take down. Later, I wrote a second query to the same AI, explaining that she had left that institution and asking the AI to look deeper. It did so, finding that she had joined another institution hundreds of miles away. By way of excusing its earlier inaccuracy, the AI pointed out that sometimes obsolete web pages stay up on corporate websites for six months or more.

All’s well that ends well, right? Maybe so, but can you really call this “intelligent”? I would consider a human assistant who had done exactly the same things stupid and/or lazy.

Why didn’t the AI heed my input that the person I sought had left her initial posting, and dig deeper in the first place? Could it be that there was no previous document, readily available with the AI’s word-associative search technology, that contained that information, so I had to make it even more explicit in my second search?

My legal work with AI provided additional insights. I have an unusual background: I spent some eleven years as a scientist/engineer (getting a Ph.D. in physics along the way) and then went to law school and worked as a lawyer and law professor for another 35 years. Recently, I’ve been helping a relative deal with legal issues in a big case. He’s part technologist, and his skillful and expert use of AI to analyze legal issues encouraged me to try my hand on a much smaller scale.

As a result, I think AI will have a far greater impact on the legal profession than on science and engineering. The reason is simple: while the law of course also involves abstract ideas, the law’s precise verbal expression in statutes, regulations and case decisions is paramount. The exact language matters. Therefore, what lawyers need most—what they have, up to now, spent much of their time doing, and what big firms mostly hire young lawyers for—is finding, reading and analyzing sources of law, especially verbatim reports by judges of their judicial decisions, i.e., “case law.”

Oddly, the legal profession had something similar to AI nearly half a century ago, when I was just graduating from law school. A computer service called “LEXIS,” and later a competitor called “WESTLAW,” offered online access to decided cases, statutes and regulations with rudimentary word-association software. You could search, for example, for words and cognates beginning with the letters “bank . . . ” and so pick up anything with “bank,” “banking,” “banked,” “bankrupt” and “bankruptcy.” And you could find any text with any of those words within so many words before or after other, totally different words. In this way, you could find cases and other sources of law dealing with the precise legal issue of interest. This was true as early as 1978.
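For flavor, here is a minimal sketch in Python of the kind of wildcard-plus-proximity search those services offered. The function, its name and the toy sentence are my own illustration of the idea, not LEXIS’s or WESTLAW’s actual syntax or code.

    import re

    def proximity_search(text: str, prefix: str, other: str,
                         within: int) -> bool:
        """Report whether any word beginning with `prefix` occurs
        within `within` words of the exact word `other`."""
        words = re.findall(r"[a-z]+", text.lower())
        prefix_hits = [i for i, w in enumerate(words) if w.startswith(prefix)]
        other_hits = [i for i, w in enumerate(words) if w == other]
        return any(abs(i - j) <= within
                   for i in prefix_hits for j in other_hits)

    opinion = "The debtor committed fraud against the bankruptcy estate."
    print(proximity_search(opinion, "bank", "fraud", 5))  # True

A search like this finds “bankruptcy” near “fraud” without anyone having to guess the exact inflection of either word, which is precisely what made the old services useful.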

I have no idea where LEXIS and WESTLAW are now. I suspect they’re both obsolete, all the more so because most modern AIs (all but Anthropic’s “Claude,” in my experience) allow seemingly limitless use of their most primitive AI software for free.

As this history suggests, “AI” is likely to be of greatest use in the legal profession, in government and in corporate administration, where finding and analyzing previous legal, policy and instructive documents is a large part of the job. No competent, let alone outstanding, lawyer or executive is going to let a machine do his or her thinking, especially when that thinking may win or lose a case or a large sum of money. But locating and producing the most pertinent documents for analysis is a task that almost every rational person would rather leave to machines. And in many white-collar fields, including especially the law, this new reality eventually, IMHO, will drastically thin the ranks.

Another area where AI would be useful is in helping consumers deal with complicated products, such as cars. I recently leased a Kia EV, a long-range, lightweight “Wind” model. It’s a great car with a few quirks, but its manuals are among the worst pieces of garbage writing I have ever seen. The main manual, some 500 pages, is a ghastly mix of tedious CYA, obvious precautionary instructions and infinite mindless repetition. I actually read most of this monstrosity before simply sitting in the car and pressing all the buttons seriatim to find out what they do.

Later I wanted to know how to maintain the low-voltage battery while letting the car rest for a while, as I had learned to do with other EVs and hybrids. Instead of reading the godawful manuals, I consulted AIs. In seconds, they told me what I needed to know. They also verified my suggestion that I should disconnect the high-voltage-to-low-voltage converter before connecting my low-voltage “battery maintainer”; and they told me how to do that.

So can AIs “think”? Would I call all this artificial “intelligence”? No and no. That doesn’t mean that so-called AIs aren’t useful, or even that they can’t justify their enormous expense and energy consumption.

I would liken them to the ancient Library of Alexandria. They are vast, if not entirely complete, collections of the thoughts and writings of human civilization from its beginnings to the present day. That ain’t nothin’!

And they are infinitely more useful than the Library of Alexandria in two respects. First, they continually accumulate new knowledge as it accrues. Second, while the Library was available only to those who could travel to Alexandria and read the ancient Greek in which most of its scrolls were written, the various “AIs” and their output—and thus indirectly their huge databases—are in theory available to anyone, anywhere, and can be translated into any language.

This is no mean feat. It’s a gigantic and unprecedented step toward making all of human knowledge available worldwide, and to everyone, to apply their own individual human intelligence to.

Can they really “think”? I doubt it. But they can point most everyone on this planet to what their peers and predecessors have thought on any given subject, and that’s a huge advance. Whether it’s worth the expense in energy and the risk of accelerating planetary heating, perhaps beyond control, is beyond my pay grade.

Endnote: Can an “AI” Really Think?

I don’t think so. It can simulate thinking by summarizing the discovered facts and reasoned conclusions of previous authors (including, today, the output of other AIs). But I don’t think it can really think in any serious way.

I offer the following example from the history of physics to illustrate my rationale. It’s one of the most illustrious examples of “thinking” in human history.

During the late nineteenth century, physicists were unsure whether the speed of light was a fundamental physical constant. If you stand by a railway and a baseball pitcher on a speeding flatcar throws his best pitch at you, it will hit your glove at an even higher velocity, i.e., that of the pitch plus the speed of the flatcar.

Is light like that baseball, whose speed depends on your frame of reference, or is it a fundamental constant—the ultimate natural speed limit that nothing in matter or radiation can exceed? The famous Michelson-Morley experiment of 1887 partially resolved this dilemma in favor of the ultimate speed limit.

But what did this mean for fundamental subatomic particles, which can approach, but never exceed, the speed of light? How did that apparent “absolute speed limit” affect their physical characteristics, which humans can detect only indirectly with specialized instruments?

Albert Einstein answered most of these questions in 1905, with his so-called “special theory of relativity.” (His special theory is different from, and much simpler than, his later general theory of relativity, which requires tensor calculus and an ability to conceive, if not visualize, the four dimensions of space-time. All the special theory requires is a bit of algebra.)

At first glance, it’s hard to conceive of a world in which the pitcher’s throw doesn’t hit your mitt with greater velocity and force when he’s standing on a flatcar moving by you at speed. But the Michelson-Morley experiment and some follow-ons proved that that’s exactly how the subatomic world works. Einstein’s great contribution was using math to calculate the practical consequences of that experimental fact.

Einstein didn’t actually develop the necessary math himself. A Dutch physicist named Hendrik Lorentz already had done so. His math used mere algebra with a square root and some quadratic factors to add two near-light-speed velocities together in such a way that the physical “sum” would never exceed the speed of light.
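For the algebraically curious, here is that velocity-addition rule, written in LaTeX. It is the standard textbook formula of special relativity; the symbols are my own choices: u for the flatcar’s speed, v for the pitch’s speed relative to the car, and c for the speed of light.

    % Relativistic velocity addition: the observed "sum" w of two
    % velocities u and v can never exceed the speed of light c.
    \[
      w = \frac{u + v}{1 + uv/c^{2}}
    \]
    % Example: if u = v = 0.9c, then w = 1.8c/1.81, or about 0.994c,
    % still below c. At everyday speeds uv/c^2 is nearly zero, so w
    % is nearly u + v, which is why baseballs seem to obey simple
    % addition.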

Using Lorentz’s calculations, which physicists call the “Lorentz Transformation,” Einstein was able to deduce several strange results, which subsequent experiments proved beyond question. First, when radioactive particles approach the speed of light, their decay “clocks” slow down. Second, when rods approach the speed of light moving along their length, they get shorter. Third, when particles approach the speed of light, their masses increase, eventually becoming infinite, so one can never force them to exceed the speed of light. All these results have been shown by experiment to be reality, and all were predicted—all at once!—by the mathematics of the Lorentz Transformation, as any good student of algebra and high-school physics can now verify.
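For readers who want to see the algebra, here are the standard formulas behind those three results, again in LaTeX. Everything hangs on one quantity, the “Lorentz factor” gamma, which grows without bound as the speed v approaches c; the notation follows any introductory relativity text.

    % The Lorentz factor:
    \[ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \]

    % 1. Time dilation: an interval of "proper" time \Delta\tau on a
    %    moving clock is observed, from rest, as the longer interval
    \[ \Delta t = \gamma \, \Delta\tau \]

    % 2. Length contraction: a rod of rest length L_0, moving along
    %    its length, is observed as the shorter length
    \[ L = L_{0} / \gamma \]

    % 3. Relativistic mass: a particle of rest mass m_0 is observed
    %    to have the larger mass
    \[ m = \gamma \, m_{0} \]

    % Since \gamma blows up as v approaches c, no finite force can
    % push a massive particle past the speed of light.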

From a strictly human standpoint, Einstein’s conclusions were among the greatest leaps of imagination in history. He took the observed fact that the speed of light never seemed to change, regardless of the frame of reference (moving or not) in which it was observed. Then he took the pure algebraic theory of the Lorentz Transformation, which was at the time a mere mathematical curiosity. He put the two together, in a leap of imagination: that pure, abstract math governed the physical world. And he hit the jackpot.

Not for nothing is Einstein considered one of the smartest people in human history. Long after he died, researchers who examined his preserved brain reported something unusual: the Sylvian fissure, a groove that in most of us separates major regions of each hemisphere, was partly missing, and his parietal lobes, regions associated with mathematical and spatial reasoning, were unusually large. So his superior intelligence, like his theories, had a rational basis in physical reality.

But here’s the question for our discussion today. Could an “AI” today make that leap from algebra to reality? Of course it could calculate the Lorentz Transformation for any hypothetical situation, and much quicker than any math whiz. But could it make the leap, as Einstein did, that the Transformation represented physical reality on a submicroscopic scale—a reality far different from the one we humans experience every day at our own physical scale?

My own view is “no.” Machines that find and correlate previous writings and recordings of ours, no matter how quickly and accurately, cannot decide what is real and what is not, or what is true or not. That’s why AIs are known to “hallucinate,” and why you can often stop their hallucinations cold with counter-instructions, or even just orders to be careful.

How do AIs distinguish fiction from fact? I suppose they do it much the way we do: by looking for “trigger words” like “story” or “fiction,” by analyzing whether the text is primarily about people’s words, feelings and emotions in real time, and by considering the many chains of word associations that differ between, say, an instruction manual or a scientific or political paper and a short story. But they can’t do that nearly as well as any bright high-schooler because they aren’t alive and aren’t human.
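Here, in Python, is a deliberately crude sketch of the “trigger word” heuristic I just described. The word lists and the function are my own invention for illustration; real models learn such cues statistically rather than from hand-made lists.

    # Toy genre guesser based on hand-picked "trigger words."
    FICTION_CUES = {"story", "fiction", "novel", "once", "tale"}
    FACT_CUES = {"study", "data", "statute", "experiment", "manual"}

    def guess_genre(text: str) -> str:
        words = set(text.lower().split())
        fiction_score = len(words & FICTION_CUES)
        fact_score = len(words & FACT_CUES)
        return "fiction" if fiction_score > fact_score else "nonfiction"

    print(guess_genre("once upon a time a tale of a brave knight"))   # fiction
    print(guess_genre("the experiment produced data for the study"))  # nonfiction

A bright high-schooler, of course, needs no such lists; that is rather the point.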

At the end of the day, there is one thing they are unlikely ever to do as we do. They can’t value life or human suffering the same way that we can, with personal experience and empathy. And that’s why, IMHO, so-called “artificial intelligences” should never be used to make final decisions in warfare, let alone whether to use nuclear weapons and so risk self-extinction of our species.

Permalink to this post
