Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

27 December 2025

“Bang for Your Buck” in Left-Leaning Political Contributions


As the new year impends, I’m looking for ways to make my political contributions matter. I want the money I can spare to make an impact, perhaps even a measurable one. I want it to “move the needle” in saving our Republic from despotism, restoring Enlightenment values, and resisting the abject idiocy, cruelty and dysfunction of the Trump regime and its many sycophants. And I want everyone who gives a dollar to a Democratic or left-leaning cause to have the same impact.

But how can I know? I have some reasonable hypotheses.

First, I think that most money spent on “traditional” for-profit “Democratic” media like MSNBC and CBS is wasted, for several reasons. To begin with, our almost entirely for-profit media charge outrageous sums for thirty-second video clips or radio sound bites that merely “preach to the choir,” because their audiences are already left-leaning. I have heard that $1,000 per minute is not an unusual fee for a single video or audio clip—even on a purely local medium, such as a rural radio station. That sum would support a door-to-door field volunteer for weeks.

These pricey video clips have minimal impact on the most important groups of potential voters: reluctant, occasional, confused and undecided ones. Those voters, by and large, don’t use traditional media. And no thirty-second ad, no matter how expensive or clever, is going to persuade a dyed-in-the-wool Fox viewer to vote for a candidate like Kamala Harris. For all the evil that it does, Fox is, IMHO, the most effective political propaganda organization in human history. It’s all the more so because its motives are “pure” and untainted by sycophancy: a mindlessly loyal and devoted audience, private profit, lower taxes and less government regulation.

Second, IMHO traditional “political operatives” work like most of the rest of our corrupt and money-obsessed society. They and the people who appoint them measure their success by the amount of money they control, i.e., the amount of your and my contributions that they spend in whatever misguided attempt to move the needle. To put it bluntly, in our current twisted political universe the people who waste the biggest share of your and my (and big donors’) money have the most clout. This is perverse.

Third, our media universe itself is undergoing radical transformation in two ways. The media giants with the most cash are eating smaller media combines and independent stations like spiders eating their children. At the same time, the same “tech bros” who applauded Elon Musk’s chain saw are now putting their billions into buying and/or controlling “traditional” media firms. Billionaire Larry Ellison’s bankrolling of his son David’s bid to buy Paramount, the parent of CBS, is just the most recent and glaring example. (I don’t believe that Larry Ellison or his son is a friend of real democracy.)

Finally, as a lifelong professional first trained as a scientist and later as a lawyer, I can’t see any means of assessing the impacts of these expensive political ads that strikes me as even remotely rational or scientific. The rationale for a lot of the left’s political spending, it seems to me, is as thoughtless and stupid as “the more you spend, the more you get.” (If occasional voters are anything like me, they watch the thirty-second clips only occasionally and only for amusement, if only because there are so many of them.)

So what to do? In the last election cycle, President Biden delayed far too long before bowing out, giving no chance for his chosen one, Sen. Harris, to move the needle on her own behalf on the then-most-important issue: immigration. I will go to my grave believing that if Biden had just publicly and forcefully turned the whole issue over to her, and she, not he, had turned off the immigration tap as he belatedly did, she would be president today, and our long national nightmare would be coming to a close. But second-guessing is never as much fun as winning, let alone as effective.

Anyway, the hypotheses listed above have led me to three tentative conclusions. First, the traditional Democratic-party apparatus is clueless about social and media changes and therefore useless in winning elections. With rare exceptions, I don’t contribute to the party, its internal organizations, or even individual candidates.

Second, the Democratic Party has largely “missed the boat” on the changes in how people get news and information today. The Dems have mostly ignored the vast and ongoing shift, particularly among the vital contingent of young and emerging voters, from traditional media to social media, podcasters, and self-appointed (and poorly paid) “influencers.” This blind spot hurts us in two ways: it bypasses the most effective channels, and it wastes money. The new channels, by and large, are more complex but far less expensive than the old.

Finally, as many polls and election analyses show, the principal “deciders” in our nearly-evenly-split electorate are new, reluctant, occasional and generally “undecided” voters. They are mostly but not exclusively young people.

New to the election game, young people are persuadable, and their persuasion (or our failure to persuade them) will determine our nation’s political future and maybe its fate. That’s why, in the last election cycle, I contributed enough to buy a small car, almost exclusively to GOTV field organizations. These organizations tried to persuade uncommitted voters the old-fashioned way: literally meeting them, in person, where they live.

My tiny drop in the bucket had no visible effect. Trump won. Furthermore, as a consistent self-evaluator, I recognize that I, at age 80, know virtually nothing about the most important new media, their real influence, and how they work. So how can I know where to put my money this cycle?

Enter AI. It may not be the most scientific or reliable means of ascertaining reality. But, without significant experience in elections or polling, it’s all I’ve got. (My ignorance might also be a strange advantage: ignoring the conventional wisdom that “more (money) is better” may give me new and more realistic insights.) Maybe by pulling together a lot of information from the Internet—much of which I may be unaware of, and the vast volume of which far exceeds my individual grasp—I can begin to evaluate my hypotheses above with something resembling real data, i.e., science.

What follows, in this and maybe future essays, is my attempt to answer these questions through our modern “oracles,” various AI outlets. In each case, I will publish my AI prompt in full, verbatim. If the AI under query allows, I will also publish a link to the full result. But because I expect the results on such a fraught topic to be prolix, I may attempt to summarize them in my own words, quoting verbatim only portions I find particularly interesting or cogent. Not knowing what to expect, I will treat this project and the AI medium with respect and care: step by step.

What follows is my first inquiry. In these early inquiries I will use only “free” (unpaid) AIs. If I think it worth the money, I will “graduate” to paid access to the AIs I think most likely to give useful answers.

First AI prompt and response:

Here’s the first prompt, submitted to Google free AI on December 27, 2025:

Conventional wisdom and the results of the last presidential election suggest that voters in our nation are nearly equally divided between “progressive” and “conservative” ideologies. These, respectively, are labelled “blue” and “red” views or ideologies, and states or political districts that are mixed or undecided are labelled “purple.”

Among many other differences, “blue” voters tend to vote more favorably for equal treatment of racial, ethnic, religious and sexual minorities, and “red” voters tend to vote less favorably for that equality. In general, “blue” voters tend to focus more on rights and paths to success for individuals, while “red” voters tend to be more concerned with rights and freedoms for businesses and business owners.

Polls suggest that a large fraction of the electorate is deeply committed to blue or red ideologies and therefore unlikely to change political party or position in any particular election. Therefore conventional wisdom sees the results of vital elections, at both the national level and in “purple” states, as depending on persuading new, young, undecided, uncommitted and occasional voters to support a particular ideology, candidate or party.

What is the most cost-effective way of persuading those voters to vote progressive or “blue”? Is it traditional media advertising, consisting of short video or audio clips? Is it using “new” media like podcasts, social media, TikTok and “influencers” on them? Is it in-person, door-to-door canvassing by so-called “get out the vote (GOTV)” organizations? Or is it something else?

In preparing your answer, please avoid speculation and conjecture. Focus on published, quantitative polls and studies by reputable organizations on which political parties and candidates rely. Try, as much as you can, to eliminate or downplay propaganda and publications intended to mislead voters or organizations on the other side. Focus on hard data on which either or both sides in this national political battle appear to rely.

GOOGLE AI’s RESPONSE (response time: less than 30 seconds):

Shared link to complete results: https://share.google/aimode/BLGPkDECPjnAFd8Xn (Use this link to see and access cited source material; I have removed the distracting colored formatting and black background of Google’s original response below for easier reading; the table format and occasional emphases are also mine.)

Research and quantitative studies from the 2024 and 2025 election cycles indicate that there is no single “most” cost-effective method; rather, effectiveness depends on whether the goal is voter mobilization (getting people to the polls) or voter persuasion (changing their minds).

1. In-Person Canvassing (Highest Certainty)

Traditional door-to-door canvassing remains the gold standard for mobilization, though its cost-effectiveness for persuasion varies.

Cost per Vote: Historically, studies show that in-person canvassing can yield one additional vote for every 14 people contacted, at an estimated cost of $19 to $33 per vote.

Turnout Impact: Recent data confirms that face-to-face contact can increase voter turnout by approximately 6% to 8.7 percentage points.

"Deep Canvassing": Newer "deep canvassing" techniques—longer, empathetic conversations—are more time-consuming but significantly more effective at long-term persuasion than standard scripts.

2. Digital Media and Influencers (Rising Efficiency)

Digital strategies are often more cost-effective for reaching large numbers of voters quickly, though their impact on individual voting decisions is harder to quantify.

Influencer Impact: In 2024, approximately 6% of voters reported that influencer-produced content helped determine their voting decisions. This method is particularly effective for Gen Z, where 70% are involved in social or political causes.

Visual Content: Short-form video (TikTok, Instagram) and infographics are significantly more effective at engaging young voters than text-based content.

Micro-targeting: Digital ads allow campaigns to reach specific demographics at a fraction of the cost of direct mail. One study found digital impressions could be delivered for less than $0.01 each, compared to $0.30 for mail.

3. Comparison of Methods

Reputable political science experiments (such as those by Gerber and Green) provide a comparative look at cost per additional vote:

Method                    Estimated Cost per Vote    Effectiveness
Door-to-Door Canvassing   $19 – $33                  High (most reliable for turnout)
Volunteer Phone Banking   ~$36                       Moderate (personalized calls only)
Direct Mail               $59 – $200                 Low
Digital Ads               Variable                   High reach; uncertain persuasion
Robocalls/Mass Email      Ineffective                Negligible effect


4. Key Strategic Insights for 2025

Persuasion vs. Turnout: Studies show that late-campaign contacts have minimal persuasive effect on undecided voters but are highly effective at ensuring supporters actually vote.

Authenticity: Young voters increasingly distrust traditional media (only 36% trust it) and prioritize “relatable” voices like doctors, teachers, or trusted influencers over partisan ads.

Relational Organizing: Persuasion is most effective when it comes from “trusted messengers”—friends or family—rather than “faceless” campaign entities.

Permalink to this post

02 December 2025

Is an “AI” Really “Intelligent”?


Don’t get me wrong. After some hesitation, I’ve taken to AI as a duck takes to water. I seldom use “normal” online searches anymore. Most “normal” online search results now appear in the order for which for-profit providers of goods or services have paid. I don’t want to know who paid the most to capture my eyeballs, but which product or service best matches the Boolean logic of my search.

Online search engines no longer provide that information, at least not without the “pollution” of many—if not most—annoyingly unresponsive “hits” that someone paid for. They don’t even come close to realizing electronic computers’ promise of instantaneous, accurate information. Instead, they waste my time and make me angry.

Hence, on to “AI.” But what is AI really? Is it really an electronic simulacrum of human intelligence, as the name “artificial intelligence” suggests? I doubt it. Bear with me for a few minutes while I explain why.

So-called “AI” companies go to great lengths to enlist lawyers and “cybersecurity” experts to keep the details of their “AI” protocols and algorithms secret. But as far as I can tell, they all work pretty much the same way.

They all base their results on gargantuan, online, digitized “concordances” of the English language (or whatever other languages they may digest). In essence, these concordances record how frequently each and every other word “normally” occurs next or near to a subject word. In some, if not most, cases, the concordance is expressed in verrrrrry long polynomials, in which the weight of each factor represents the probability of a particular word occurring within a certain number of words before or after the word that the polynomial represents.

These polynomials represent the “meanings” of words only by association with other words. An AI has no idea what a chair actually is (although it may also have pictures in its memory); it only knows that a chair’s most closely associated terms are “table” and “sit” (and its cognates), and that “sitting” is something that people do often, especially to eat. In other words, so-called “AI” attempts to “understand” sentences in human languages simply by gauging, and recording in memory, how often and in what combinations words come together in normal writing.
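The word-association idea can be shown in miniature. The toy sketch below (my own illustration, with an invented three-sentence “corpus”; real systems learn weighted numerical representations over trillions of words rather than raw counts) builds a crude concordance of which words follow which:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the web-scale text an AI ingests.
corpus = ("the cat sat on the chair . people sit at the table to eat . "
          "the chair is near the table").split()

# Count how often each word follows each other word: a bigram "concordance."
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# "Meaning" here is nothing but association: the words most likely
# to follow "the" in this corpus are "chair" and "table."
print(following["the"].most_common(3))  # [('chair', 2), ('table', 2), ('cat', 1)]
```

Even this crude counter “knows” that chairs go with tables, without knowing what either one is; scaled up by many orders of magnitude, that is the essence of the kludge described below.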

People who work in science and engineering have a word for this sort of thing. It’s called a “kludge” (pronounced “klooj”). That means a solution to a practical problem that works but is inelegant, awkward (sometimes spectacularly so), and often grossly inefficient. Getting machines to “understand” human language in this way is like scratching your left ear with your right hand, but multiplied by many orders of magnitude (powers of ten). It’s not just a paradigmatic kludge; it’s probably the greatest, most expensive and most illustrious kludge in human history.

But it does work. Why? Because the transistors on AI chips (mostly made by high-flyer Nvidia) work anywhere from a thousand to a million times as fast as what computer nerds derogatorily call “wetware,” i.e., you, me and our brains. Wetware uses far, far less energy and thus produces infinitely less planetary heating per paragraph read and “understood.” But, hey, when has permanently changing our planetary climate (or anything else) held Silicon Valley back?

So what should we call this kludge? Is it really, as everyone (especially its purveyors!) says, artificial “intelligence”?

It’s not easy to answer this question, especially if you haven’t yet overcome your initial awe at having a machine answer your seemingly arbitrary questions in natural human language and giving you online links to further relevant citations. But think about what “AI” actually does, and potential answers become clearer.

A recent search of mine illustrates the problem. I was trying to track down a medical professional who had run tests on me within the last year, in order to recommend her to someone else. Calling her telephone number of record produced a “she’s no longer here” reply but no useful contact information.

So I queried an AI and got a reference to that institution and a link to her page (still up on its website), which no one had bothered to take down. Later, I wrote a second query to the same AI, explaining that she had left that institution and asking the AI to look deeper. It did so, finding that she had joined another institution hundreds of miles away. By way of excusing its earlier inaccuracy, the AI pointed out that sometimes obsolete web pages stay up on corporate websites for six months or more.

All’s well that ends well, right? Maybe so, but can you really call this “intelligent”? I would consider a human assistant that had done the exact same things stupid and/or lazy.

Why didn’t the AI heed my input that the person sought had left her initial posting and dig deeper in the first place? Could it be that there was no previous document, readily available with the AI’s word-associative search technology, that contained that information, so I had to make it even more explicit in my second search?

My legal work with AI provided additional insights. I have an unusual background: I spent some eleven years as a scientist/engineer (getting a Ph.D. in physics along the way) and then went to law school and worked as a lawyer and law professor for another 35 years. Recently, I’ve been helping a relative deal with legal issues in a big case. He’s part technologist, and his skillful and expert use of AI to analyze legal issues encouraged me to try my hand on a much smaller scale.

As a result, I think AI will have a far greater impact on the legal profession than on science and engineering. The reason is simple: while the law of course also involves abstract ideas, the law’s precise verbal expression in statutes, regulations and decisions in cases is paramount. The exact language matters. Therefore, what lawyers need most—what they up to now have spent much of their time doing, and what big firms mostly hire young lawyers for—is finding, reading and analyzing sources of law, especially verbatim reports by judges of their judicial decisions, i.e., “case law.”

Oddly, the legal profession had something similar to AI nearly half a century ago, when I was just graduating from law school. A computer service called “LEXIS,” and later a competitor called “WESTLAW,” offered online access to decided cases, statutes and regulations with rudimentary word-association software. You could search, for example, for words and cognates beginning with the letters “bank . . . ” and so pick up anything with “bank,” “banking,” “banked,” “bankrupt,” and “bankruptcy.” And you could find any text with any of those words within so many words before or after other, totally different words. In this way, you could find cases and other sources of law dealing with the precise legal issue of interest. This was true as early as 1978.
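The prefix-plus-proximity query just described is simple enough to sketch. The toy example below (the “case” text and function names are invented for illustration; the real services ran far more sophisticated engines over millions of full case reports) finds any word beginning with “bank” within a set number of words of another term:

```python
def proximity_search(text, prefix, other, window):
    """Find words starting with `prefix` that occur within
    `window` words of `other`, in either direction."""
    words = [w.lower().strip(".,") for w in text.split()]
    hits = []
    for i, w in enumerate(words):
        if w.startswith(prefix):
            lo, hi = max(0, i - window), i + window + 1
            if other in words[lo:hi]:
                hits.append((i, w))
    return hits

# An invented snippet standing in for a judicial opinion.
case = ("The debtor filed for bankruptcy protection. The bank argued that "
        "banking regulations barred the discharge of the loan.")

# "bank!" within 5 words of "discharge" matches only "banking":
print(proximity_search(case, "bank", "discharge", 5))  # [(10, 'banking')]
```

No understanding is involved, only position and spelling; yet, as with LEXIS and WESTLAW, that alone is enough to pull the right documents out of a mountain of text.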

I have no idea where LEXIS and WESTLAW are now. I suspect they’re both obsolete, especially since most modern AIs (all but Anthropic’s “Claude,” in my experience) allow seemingly limitless use of their most primitive AI software for free.

As this history suggests, “AI” is likely to be of greatest use in the legal profession, in government and in corporate administration, where finding and analyzing previous legal, policy and instructive documents is a large part of the job. No competent, let alone outstanding, lawyer or executive is going to let a machine do his or her thinking, let alone when that thinking may win or lose a case or a large sum of money. But locating and producing the most pertinent documents for analysis is a task that almost every rational person would rather leave to machines. And in many white-collar fields, including especially the law, this new reality eventually, IMHO, will drastically thin the ranks.

Another area where AI would be useful is in helping consumers deal with complicated products, such as cars. I recently leased a Kia EV—a long-range lightweight “Wind” EV. It’s a great car with a few quirks, but its manuals are among the worst pieces of garbage writing I have ever seen. The main manual, some 500 pages, is a ghastly mix of tedious CYA, obvious precautionary instructions and infinite mindless repetition. I actually read most of this monstrosity before simply sitting in the car and pressing all the buttons seriatim to find out what they do.

Later I wanted to know how to maintain the low-voltage battery while letting the car rest for a while, as I had learned to do with other EVs and hybrids. Instead of reading the godawful manuals, I consulted AIs. In seconds, they told me what I needed to know. They also verified my suggestion that I should disconnect the high-voltage-to-low-voltage converter before connecting my low-voltage “battery maintainer”; and they told me how to do that.

So can AIs “think”? Would I call all this artificial “intelligence”? No and no. That doesn’t mean that so-called AIs aren’t useful, or even that they can’t justify their enormous expense and energy consumption.

I would liken them to ancient Egypt’s Library at Alexandria. They are vast, if not entirely complete, collections of the thoughts and writings of human civilization from its beginnings to the present day. That ain’t nothin’!

And they are infinitely more useful than the Library at Alexandria in two respects. First, they continually accumulate new knowledge as it accrues. Second, while the Library was available only to those who could travel to Alexandria and read the ancient Greek in which most of its scrolls were written, the various “AIs” and their output—and thus indirectly their huge databases—are in theory available to anyone, anywhere, and can be translated into any language.

This is no mean feat. It’s a gigantic and unprecedented step toward making all of human knowledge available worldwide, and to everyone, to apply their own individual human intelligence to.

Can they really “think”? I doubt it. But they can point most everyone on this planet to what their peers and predecessors have thought on any given subject, and that’s a huge advance. Whether it’s worth the expense in energy and the risk of accelerating planetary heating, perhaps beyond control, is beyond my pay grade.

Endnote: Can an “AI” Really Think?

I don’t think so. It can simulate thinking by summarizing the discovered facts and reasoned conclusions of previous authors (including, today, the output of other AIs). But I don’t think it can really think in any serious way.

I offer the following example from the history of physics to illustrate my rationale. It’s one of the most illustrious examples of “thinking” in human history.

During the late nineteenth century, physicists were unsure whether the speed of light was a fundamental physical constant, the same for every observer, or depended on the observer’s motion. If you stand by a railway and a baseball pitcher on a speeding flatcar throws his best pitch at you, it will hit your glove at an even higher velocity, i.e., that of the pitch plus the speed of the train.

Is light like that baseball, whose speed depends on your frame of reference, or is it a fundamental constant—the ultimate natural speed limit that nothing in matter or radiation can exceed? The famous Michelson-Morley experiment of 1887, which found that light’s measured speed did not vary with the Earth’s motion, partially resolved this dilemma in favor of the ultimate speed limit.

But what did this mean for fundamental subatomic particles, which can approach, but never exceed, the speed of light? How did that apparent “absolute speed limit” affect their physical characteristics, which humans can detect only indirectly with specialized instruments?

Albert Einstein answered most of these questions in 1905, with his so-called “special theory of relativity.” (His special theory is different from, and much simpler than, his later general theory of relativity, which requires tensor calculus and an ability to conceive, if not visualize, the four dimensions of space-time. All the special theory requires is a bit of algebra.)

At first glance, it’s hard to conceive of a world in which the pitcher’s throw doesn’t hit your mitt with greater velocity and force when he’s standing on a flatcar moving by you at speed. But the Michelson-Morley experiment and some follow-ons proved that that’s exactly how the subatomic world works. Einstein’s great contribution was using math to calculate the practical consequences of that experimental fact.

Einstein didn’t actually develop the necessary math himself. A Dutch physicist named Hendrik Lorentz already had done so. His math used mere algebra with a square root and some quadratic factors to add two near-light-speed velocities together in such a way that the physical “sum” would never exceed the speed of light.

Using Lorentz’s calculations, which physicists call the “Lorentz Transformation,” Einstein was able to deduce several strange results, which subsequent experiments proved beyond question. First, when radioactive particles approach the speed of light, their decay “clocks” slow down. Second, when rods approach the speed of light moving along their length, they get shorter. Third, when particles approach the speed of light, their masses increase without limit, so one can never force them past the speed of light. All these results have been shown by experiment to be reality, and all were predicted—all at once!—by the mathematics of the Lorentz Transformation, as any good student of algebra and high-school physics can now verify.
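Any good student of algebra can indeed verify this, and so can a few lines of code. The sketch below (my own illustration) applies the Lorentz velocity-addition rule, w = (u + v) / (1 + uv/c²), and the Lorentz factor, γ = 1/√(1 − v²/c²), which governs the slowed clocks, shortened rods and increased masses just described:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Lorentz velocity addition: the physical 'sum' never exceeds c."""
    return (u + v) / (1 + u * v / C**2)

def gamma(v):
    """Lorentz factor for time dilation, length contraction, mass increase."""
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

# Two velocities of 0.9c "add" to about 0.9945c, not 1.8c:
w = add_velocities(0.9 * C, 0.9 * C)
print(w / C)            # ~0.9945

# At 0.9c, moving clocks tick about 2.29 times slower:
print(gamma(0.9 * C))   # ~2.294
```

The striking thing is not the arithmetic, which a machine does trivially, but the leap Einstein made in taking it to describe physical reality.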

From a strictly human standpoint, Einstein’s conclusions were among the greatest leaps of imagination in history. He took the observed fact that the speed of light never seemed to change, regardless of the frame of reference (moving or not) in which it was observed. Then he took the pure algebraic theory of the Lorentz Transformation, which was at the time a mere mathematical curiosity. He put the two together, in a leap of imagination holding that pure, abstract math governed the physical world. And he hit the jackpot.

Not for nothing is Einstein considered one of the smartest people in human history. Long after he died, researchers who examined his preserved brain reported unusual features, including an atypical Sylvian fissure and unusually expanded parietal lobes, regions associated with mathematical and spatial reasoning. So his superior intelligence, like his theories, may have had a rational basis in physical reality.

But here’s the question for our discussion today. Could an “AI” today make that leap from algebra to reality? Of course it could calculate the Lorentz Transformation for any hypothetical situation, and much quicker than any math whiz. But could it make the leap, as Einstein did, that the Transformation represented physical reality on a submicroscopic scale—a reality far different from the one we humans experience every day at our own physical scale?

My own view is “no.” Machines that find and correlate previous writings and recordings of ours, no matter how quickly and accurately, cannot decide what is real and what is not, or what is true or not. That’s why AIs are known to “hallucinate,” and why you can often stop their hallucinations cold with counter-instructions, or even just orders to be careful.

How do AIs distinguish fiction from fact? I suppose they do it the same way we do, by looking for “trigger words” like “story” or “fiction,” by analyzing whether the text is primarily about people’s words, feelings and emotions in real time, and by considering the many chains of word associations that are different in an instruction manual or a scientific or political paper, for example, and a short story. But they can’t do that nearly as well as any bright high-schooler because they aren’t alive and aren’t human.

At the end of the day, there is one thing they are unlikely ever to do as we do. They can’t value life or human suffering the same way that we can, with personal experience and empathy. And that’s why, IMHO, so-called “artificial intelligences” should never be used to make final decisions in warfare, let alone whether to use nuclear weapons and so risk self-extinction of our species.

Permalink to this post