Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

26 November 2022

Why Political Polls are Useless


Now that the midterm elections are over, the Internet, pollsters and mainstream pundits are all agog. They are asking themselves endlessly, “How did we get it so wrong? How did the ‘red wave’ that we all knew was coming not materialize?”

I must confess to some schadenfreude in all this. After having read through far too many “horse-race” predictions, or at least having skimmed far too many useless headlines, I find it amusing to watch their authors pointing more fingers than they possess. But once my sardonic laughter fades, I have the sinking feeling that our media—virtually all of them—have done and are doing our nation, our voters and our democracy possibly irreparable harm.

For starters, let’s analyze how bad, really really bad, the recent polling was. It was abysmal in at least three big ways:

1. Skewed Samples. The old paradigm of bad political polling is now so famous it’s become a cliché: the skewed sample. In the era before telephones became universal, a much larger proportion of wealthy people, who voted Republican, had telephones than of working folk, who didn’t. So choosing the telephone (and similar markers of wealth) as the means of finding voters skewed polling samples toward Republican candidates. That skew famously sank the Literary Digest’s 1936 poll; in the Dewey-Truman presidential race of 1948, quota sampling and an early stop to polling produced a similar Republican-leaning error. The 1948 result was that wonderful front-page photo of Harry Truman, as president-elect, grinning like a Cheshire Cat while holding up the false headline “Dewey Defeats Truman.”

So how could this happen again this year, so many decades later? Well, back then a voter’s availability to pollsters was essentially binary. Either the household had a telephone (a now-mostly-obsolete land line!) or it didn’t. The only other way to reach voters would have been door-to-door canvassing, a method too expensive and complex for a national or statewide poll.

But today things are far more complex. We have at least five widely-used means of reaching individuals directly: land-line telephones, mobile phones, e-mail, text messages, and interactive social media like Facebook and Twitter. We also have a related phenomenon: incessant unwanted robocalls that plague both land-line and cell-phone users.

Did it ever occur to pollsters that many voters might screen out polling calls automatically or reflexively, seeing them as unwanted robocalls, and that their tendency to do so—not to mention their ability to do so automatically—might depend on their educational level, which recent polls have suggested is related to party affiliation?

2. Voter Reticence. It gets worse. Some voters just don’t want to talk about their electoral choices, let alone while they are still in the process of making them. And who might those voters be?

I can think of one whole category especially relevant to the election just past. How about young women of child-bearing age, especially those from evangelical or otherwise “conservative” homes? Might some or even most of them not want to reveal to a stranger their secret and personal desire to have abortion available to them, just as a backup? Might some not want to risk their parents, husbands or lovers overhearing them discussing the subject, let alone with a stranger?

Then there are the voters who make up their minds slowly and don’t want uninvited intrusion into their decision-making process. Is it impossible to conceive that those voters might be less willing to reveal their thoughts than the loudmouth who is already irretrievably convinced that the Demagogue is the answer to any problem and that “only he can fix it”?

The point here is not that this analysis is necessarily right. The point is that it, like all attempts to “correct” sample selection for systematic bias, is guesswork. There is no accepted general mathematical or scientific theory for correcting samples for systematic bias, let alone in such a multidimensional application as politics. It’s all “heuristics”—a fancy word for guesswork.
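To see why such “correction” is guesswork, consider a minimal sketch of post-stratification reweighting, the standard adjustment pollsters apply. Every number below (group sizes, response shares, assumed turnout shares) is invented purely for illustration:

```python
# Post-stratification: reweight poll responses so the sample's
# composition matches an *assumed* composition of the electorate.
# All figures here are hypothetical, not real polling data.

poll = {            # group -> (respondents, share answering "R")
    "college":    (700, 0.42),
    "no_college": (300, 0.58),
}

def weighted_estimate(poll, assumed_shares):
    """Estimated 'R' vote share after reweighting each group to its
    assumed share of the actual electorate."""
    return sum(assumed_shares[g] * r_share
               for g, (_, r_share) in poll.items())

# Raw (unweighted) estimate reflects whoever answered the phone:
n = sum(count for count, _ in poll.values())
raw = sum(count * r_share for count, r_share in poll.values()) / n

# Two equally defensible guesses about the electorate's makeup
# yield materially different "corrected" results:
guess_a = weighted_estimate(poll, {"college": 0.40, "no_college": 0.60})
guess_b = weighted_estimate(poll, {"college": 0.55, "no_college": 0.45})
```

The arithmetic is trivial; the hard part, choosing `assumed_shares`, has no mathematical answer. Whoever picks the assumed composition of the electorate has, in effect, already picked the poll’s answer.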

3. Underestimating Voters’ Intelligence. The late misanthropic pundit H.L. Mencken is credited with saying, “No one ever went broke underestimating the intelligence of the American public.” While that might be a good rule for predicting the audience for the 167th sequel of Spider-Man, is it a reliable rule for political polling? Is there really no floor under the public’s voting intelligence?

It was bad enough that political pundits assumed, from a mere half-century of data, that deep losses for the president’s party in midterm elections are a law of life. But what about assuming, as an equally universal truth, that voters blame the president for whatever bad thing is happening at the moment? Might not that be a step too far? Is there literally no limit to the average voter’s stupidity?

Although sometimes I couldn’t help myself, I tried not to read polls or prognostications less than a month before the elections. But here’s a doozy that I couldn’t help reading because all our media repeated it endlessly and made it impossible to avoid:
“Inflation is high. Voters are worried and angry. The Republicans are blaming it all on the Democrats, and President Biden has low approval ratings. Therefore voters will vote Republican.”
Any freshman student in logic could poke multiple holes in that miserable excuse for a syllogism. The most glaring omissions are two: (1) the absence of any reason to believe the Republicans, and (2) the absence of any credible plan to make things better.

In repeating this broken syllogism ad infinitum and ad nauseam, the media not only assumed that hypothetical voters have no floor under their intelligence. They also encouraged voters not to think and, in the process, used their undeniable power and influence to propagate Republican propaganda.

This is journalism?

I could go on and on, but I think I’ve made the basic point. The kinds of statistics that pollsters claim to use to predict elections were, by and large, developed to predict the behavior of atoms and molecules in numbers like Avogadro’s number, 6.02 × 10²³, which reflects the near-infinite quantity of atoms or molecules in human-sized volumes. Yet for purposes of thermodynamics and statistical mechanics, atoms and molecules have only a few relevant variables: their masses, their electrical charges (if any), and their velocities in three dimensions. Do you think that human beings, each of whom has three billion base pairs in his or her DNA, might be a bit more complicated and subtle?

Statistical mechanics works in physics because the number of variables (“degrees of freedom”) for each particle is usually less than ten, and the number of particles—6 with 23 zeroes after it—is incomparably larger, so individual differences don’t matter. The same techniques don’t work reliably with people because the number of degrees of freedom is many orders of magnitude (powers of ten) larger, and the number of objects (mere millions of voters) is incomparably smaller.

Pollsters try to account for these huge differences by varying the sample size and selection, but there is no reliable mathematical way to vary them, or even to figure out how to vary them. The answer invariably depends on the variability and plasticity of the human psyche, as in the case of abortion mentioned above. So the very application to people of the same mathematical methods used for molecular gases is based on a fundamental misconception.

Sure, the larger the sample size, the better the prognostication. That simple relationship seems like a truism. But what if you need to “sample” just about the whole voting population to get accurate results? Isn’t that what we call an “election”?
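That truism can be quantified. For a textbook simple random sample, the 95-percent margin of error on a proportion is roughly z·√(p(1−p)/n). Here is a short sketch using that classic formula, which itself presumes exactly what polling cannot guarantee: an unbiased sample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n estimating a proportion p. Valid ONLY if the sample is
    unbiased -- the very assumption real polling tends to violate."""
    return z * math.sqrt(p * (1 - p) / n)

# Halving the error requires quadrupling the sample:
for n in (1_000, 4_000, 16_000):
    print(f"n = {n:>6}: +/- {100 * margin_of_error(n):.2f} percentage points")
```

The error shrinks only with the square root of n, and no increase in n fixes systematic skew: a biased sample of a million is still biased.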

I’m not aware of any mathematical technique, whether statistical or otherwise, that can predict how an individual human being will react to ambiguous data, whether he or she will pick up a landline or cell phone and open up to a stranger, what things will rise to the top of his or her importance list in the voting booth (or at home in filling out an absentee ballot), or whether a change of mind might occur between the polling and the election.

So there is absolutely no reliable mathematical basis for adjusting sample size, or for predicting elections among millions of voters by polling mere thousands, or even tens of thousands—let alone in cases where the polling sample is likely to be systematically skewed, and/or the electorate is closely divided, emotional issues rule, and the “hidden persuaders” are working overtime to motivate decisions based on lies, fear, name-calling and trivia.

So henceforth, for myself and my personal sanity, I intend to apply four hard rules. First and foremost, I will regard electoral polls as the modern equivalents of reading the entrails of dead birds, or the patterns of tea leaves left in a cup of tea after drinking. Second, I will hang up on any unknown caller not just after I hear the unmistakable background sounds of a “boiler room,” but even if I don’t hear a single-voiced “hello” within four seconds. Third, I will studiously avoid answering polls that try to reach me by way of any of the other current means (with no doubt more to come) that communication technology now allows. (BTW, how accurate does anyone think Elon Musk’s recent Twitter poll was, which he cited for restoring the Demagogue’s Twitter access?) Finally, I will renew my registration on the FTC’s “Do Not Call” list, although I have little faith that our broken Congress and dispirited regulators will ever restore its long-degraded efficacy.

As for our media, they ought to leave lies, disinformation, misinformation and propaganda to the partisan political “operatives” who are already far too numerous and clever for our species’ good. They would improve their public impact immensely by reporting what (if any) practical solutions to real problems candidates for office propose. Rather than speculate endlessly on future results, reporters could analyze whether the solutions that each candidate proposes make sense, or at least telephone a few deep thinkers to do that analysis for them.

But they ought to leave the “horse race” to the handicappers of actual horse races. At least they have at their fingertips—in exhaustive statistics of horses’ past performance—something far more accurate, reliable, relevant and theoretically sound than political polls based on the unwarranted assumption that samples are properly “corrected” for bias in multiple unknown and usually unconsidered dimensions.


For brief descriptions of and links to recent posts, click here. For an inverse-chronological list with links to all posts after January 23, 2017, click here. For a subject-matter index to posts before that date, click here.

Permalink to this post

24 November 2022

Thanksgiving 2022


This year the list may be shorter than usual. But there are still important things to be thankful for:

Give thanks to Volodymyr Oleksandrovych Zelensky and the Ukrainian people, whose courage and perseverance in agony show the value—and the price—of truth, democracy and self-determination.

Give thanks for the narrow but real victories, here at home in our recent elections, of common sense, reason, democracy, and majority rule.

Give thanks for modern medicine and genomic science, which have fought the Covid-19 pandemic to something of a standstill and may yet overcome it.

Give thanks for the misnamed “Inflation Reduction Act”—our nation’s first substantial effort to fight accelerating global warming.

Give thanks for our growing national repudiation of misinformation, disinformation, and over-the-top advertising and self-promotion.

Give thanks for our recent electoral rejection of many—but not all—men and women of such low character that epithets fail.

Give sardonic thanks for recent public and spectacular failures of ego and hubris, in our politics, in Twitter, in Tesla, and in Silicon-Valley’s layoffs. These failures just might portend a return to real research and development and to patience, perseverance and wisdom in solving real problems.

Give thanks, as ever, for our national credo that “all . . . are created equal.” Despite our many flaws and failures, it still draws millions of good people to endure unspeakable hardship just getting here to live with us and, in so doing, enriching our nation.

Most of all, give thanks for a day of peace, tranquility and refuge with loved ones, and new respect for the simple and true.

Happy Thanksgiving!

Permalink to this post

23 November 2022

America’s Lost Greatness: R & D


I was born in 1945. That was when newly-invented nuclear weapons brought an end to human history’s most horrible war. Our own country, nearly untouched at home, became the undisputed world leader, while Soviet Russia continued to suffer from having fought and won the most pyrrhic ground victories in human history.

The year 1945 was also the year when the USA became the most formidable scientific R & D juggernaut in human history. This essay traces the rise and fall of that source of our nation’s greatness and suggests how we might revive it.

Most people think of the Manhattan Project as a secret military effort. That it was. With military discipline, it commandeered the nation’s best minds, hands, and materiel. It built a huge scientific-engineering enterprise from scratch, with an entire town (Los Alamos) to support it, in the mountains of Northern New Mexico. It also built secret facilities at Oak Ridge, Tennessee and at Hanford, Washington, in a desert far from the coast.

At one point, the Project’s facilities drew about one-tenth of all the electricity generated in the entire United States. Yet so secret was its work that no one outside it had a clue about its progress. The world’s first real notice came with the Trinity Explosion at Alamogordo in July 1945, which drew some curious news coverage in New Mexico. Then, a mere three weeks later, came the devastation of Hiroshima, Japan, that changed war and history forever.

But the Manhattan Project was far, far more than just a secret military project. It was also the greatest scientific research-and-development project in human history. I’d like to elaborate on this theme a bit, in part to adjust our understanding of the words “research,” “development” and “technology.” Today these words have been hijacked by investors, promoters and advertisers and distorted beyond all recognition.

The most important thing to understand about the Manhattan Project is its intellectual foundation: a then-obscure scientific theory of nuclear fission. It was an untested theory developed by recent foreign immigrants—Italians, Germans, Hungarians and Jews—many of whom had fled the rise of Nazism and fascism in Europe. We now know, partly courtesy of Ken Burns, how many others like them died in the Holocaust due to racial and ethnic prejudice against them, in the US and elsewhere, while their illustrious peers were developing the weapons that later would save democracy and change the world.

When Albert Einstein (a German Jew) and Leo Szilard (a Hungarian Jew) wrote their famous letter to President Franklin Delano Roosevelt in August 1939, the theory of nuclear fission had worked only at tabletop scale. Just eight months before, experiments in Berlin by the chemists Otto Hahn and Fritz Strassmann had shown confusing results: bombarding uranium with neutrons seemed to yield much lighter elements. The physicists Lise Meitner (an Austrian-Swedish woman) and Otto Frisch (her nephew, an Austrian-British man) interpreted those results as the neutrons splitting, or “fissioning,” the atomic nuclei that they bombarded.

Based on this then-recent theory, Einstein and Szilard believed that Nazi scientists would begin working on an atomic bomb and that we should, too. So they wrote their famous letter to FDR, who set in motion what became the Manhattan Project. The theory itself was never applied to anything remotely like a bomb until, over three years later, Enrico Fermi (an Italian physicist) got his “atomic pile” working under the football stands at the University of Chicago in 1942. His device demonstrated, for the first time ever, a controlled nuclear chain reaction.

It’s hard to know whether to classify Fermi’s device as research or development. No one had ever demonstrated a controlled nuclear chain reaction before. Control was an important part of both theory and practice, as with the Bomb itself. (A bomb going off accidentally in the laboratory would probably have set the Project back!) Anyway, an atomic pile (so called because it was literally a pile of graphite blocks and uranium) might be useful in generating energy, but it wasn’t a weapon of war.

From there to the first-ever “Trinity” atomic-bomb explosion, at Alamogordo in July 1945, took less than two years and eight months. In scale, difficulty, ingenuity, and speed, that was probably the greatest single scientific development project in human history. On the way to its successful conclusion, its scientists and engineers had to adapt and invent things like Teflon seals (to resist corrosion by uranium hexafluoride in the plants that enriched uranium) and sub-microsecond electronic timers (to trigger the conventional explosive lenses that produced an instantaneous implosion of the plutonium core).

Even with the general path (but not the details) well trodden—or with our stolen nuclear secrets—no one else has ever matched the speed and ingenuity of that development. But it took the united and concentrated focus of thousands of the world’s best scientists, engineers, machinists, and technicians. And it had a military commander trained in engineering, General Leslie Groves, authorized to commandeer all the physical and human resources of a great nation.

The interlocking of theory and practice—research and development—stayed with the Project from beginning to end. Well before the Trinity explosion, physicists Edward Teller (a Hungarian Jew) and Emil Konopinski (a Polish-American) had produced a quantitative theoretical study (classified, of course) reassuring everyone that an atom bomb would not ignite the Earth’s entire atmosphere in nuclear fire and end all life on Earth. Teller reportedly was still checking those calculations against actual data on the Trinity device the night before its explosion.

The point of this history is not just that the Manhattan Project was a unique triumph of scientific research and development. It was also a triumph of human organization and administration, i.e., of government.

The Project had smart, practical, dedicated and well-organized people from the bottom—those who built the barracks at Los Alamos and fed the technical staff—to the very top—the military Commander General Groves and the President, FDR, and his “Brain Trust.” Every one of these men and women was totally focused on reaching the desired result, regardless of faith or political ideology. Every one did his or her job in complete and utter (and often counterproductive) secrecy. Everyone was competent, educated and willing to listen. Those who had esoteric knowledge of science and technology enjoyed the trust of those who didn’t. (J. Robert Oppenheimer, an American of German-Jewish descent who led the scientists on the Project, was later blackballed for suspected disloyalty only after the Project’s success.)

In assembling this unprecedented collection of international talent, the Project benefited from a unique set of historical circumstances. It had drawn almost all its great minds from a Europe plagued by war and despotism.

Nuclear physics, if not all of physics, had really been a European invention, if you include the Brits. From the nineteenth-century pioneers (Faraday, Maxwell, Kelvin) to the professors at great universities in the early twentieth, virtually all the great names were European. The only name of a native-born American physicist, not an offspring of immigrants, that comes easily to my mind is Ernest O. Lawrence, inventor of the cyclotron and Nobel Prize winner. And even his grandparents were Norwegian immigrants.

After the turmoil in Europe surrounding World War I, and then the rise of Nazism and fascism, the flood of immigrant thinkers became a torrent. While we Americans kept out ordinary Jews, Germans, Italians and Hungarians as “undesirables,” our elite had the good sense to let the great thinkers make safe homes and careers in America. Had they not done so, our efforts to turn the bare theory of nuclear fission into practical devices would never have begun. Europe’s turmoil and dark turn toward despotism had driven a golden flood of great minds to our shores—the products of great European universities that had stood for centuries.

What most attracted them? It certainly wasn’t the allure of our educational institutions, which were then focused on basic education, business and “the classics.” Rather, it was the very basics of productive human life: stability, safety and liberty.

In a few short decades, these three things allowed us to collect the great minds of a generation and put them to work exploring the unknown, ultimately boosting our war effort. Our society, while excluding “their kind” generally, recognized their talent and found places for them in universities and research establishments, where they thrived. So we borrowed, or perhaps stole, the brains of an epoch and put them to work making us smarter, richer and more powerful.

Thus the Manhattan Project is hardly just a story of nuclear weapons. It’s not just about physics. It’s a story of social organization and societal values. It’s a story about how enlightened leadership attracted and held the great minds of a generation and organized them for a single purpose.

With their help, we emerged from a period of unprecedented war, turmoil and conflict as the sole superpower and perhaps the world’s best place to live and prosper. We didn’t pick ourselves up by our bootstraps: we had had a big head start and a lot of carefully recruited help.

The contrast with today is both obvious and appalling. Physics has migrated away, along with many of its great minds. We gave up the Superconducting Super Collider in 1993, so particle physics migrated to CERN (a French acronym, originally for the European Council for Nuclear Research) near Geneva. Some of our good minds went to China (or back to China, for some with Chinese roots), but some of them may now be returning as Xi Jinping’s new empire becomes more despotic and unstable, or as stable as death. Today’s relative stability and prosperity of Germany and Japan (even with its non-Roman writing system) are also starting to attract leading minds.

So we no longer enjoy the same “brain-drain” advantages that we had in the early twentieth century. Maybe no single nation ever will again, at least anytime soon.

We have also set our intellectual sights so much lower. Our aspirations, it seems, are to produce billionaires and great business empires, like Amazon, Facebook and Google. But what they do, relative to what the twentieth-century scientific establishment did, now seems trivial, at least from the perspective of scientific and technical achievement.

Virtually all the great wealth of today’s Silicon Valley is based on clever but intellectually pedestrian uses of computers and software to improve the “efficiency” of communication or of logistics in sales and transport. Outside of quantum computing, most of which is top secret and has yet to make any economic impact, there is nothing to rival the invention of transistors, lasers, integrated circuits, personal computers and cell phones.

Even if you ignore their deleterious effects on labor and economic equality, today’s leading monopolies can claim little, if any, fundamental advances in science or technology. Just compare social media (Facebook, Twitter), Internet search (Google) and product-shipment and transport logistics firms (Amazon, Uber, Lyft) with real postwar advances in science and technology. In the few decades after World War II, the great minds who had joined us, with their American colleagues, had produced: television, space travel, satellite communication, nuclear energy, transistors, integrated circuits, lasers, CAT scans and MRI machines, personal computers, and cell phones.

Relatively speaking, what we have “invented” and produced since are advances in business, spreadsheets and accounting. Our reduced vision, combined with lax antitrust laws, has led to great strides in profiting from advertising that “earn” billions, while real advances in science and technology are slowly drying up.

Yet it’s still possible to imagine new things worthy of a second Manhattan Project. One is saving our species from accelerating global warming and its consequences. So many real applied-science projects could join that effort: (1) safer and smaller-scale nuclear fission power plants; (2) nuclear-fusion power plants; (3) better and more durable batteries, using less exotic, rare and expensive elements; (4) adapting hydrogen internal-combustion engines (whose chief effluent is water) to air travel, long-haul trucks, and heavy equipment; (5) better and smarter electrical grids to support all the above; (6) adapting home and car batteries to grid-scale electricity storage; (7) carbon storage and sequestration—a longshot but still worth pursuing; and (8) if all else fails (and it might!) geoengineering to reduce solar irradiance hitting the Earth’s surface.

Our fight against the Covid-19 pandemic hints at what a modern Manhattan Project might accomplish. Consider the development of mRNA vaccines for Covid-19. They came at “warp speed” because they arose out of a near-decade-long government-funded push toward genomic vaccine development. The Pfizer vaccine came from the labs of Turkish-immigrant scientists in Germany—a small echo of the massive brain-drain from turmoil to stability that staffed our own Manhattan Project. The American Moderna vaccine came from not-so-massive but consistent BARDA investment by our own (gasp!) government.

But now the application of lessons learned to hoped-for nasal-spray vaccines is bogging down in proprietary claims of businesses seeking to make the most money from old technology, even at the cost of retarding the new. We seem to have become a nation of shopkeepers, more concerned with raising income and stock prices than improving our basic science and technology and making bold new technical ideas practical. A mini-Manhattan Project for new vaccines and therapies might spell the end of Covid-19 and, in the process, make our entire species far more secure against the next airborne pandemic.

Unfortunately, our memory of the Manhattan Project is tarnished with thoughts of war and destruction, even species’ self-extinction. Perhaps that’s inevitable. After all, the goal of the Project was nuclear weapons. Perhaps only the existential threat of Nazi conquest in war could have motivated such a concentrated, intelligent and practical project by a whole nation.

But suppose we could imagine a similar project today. What could it do against other, slower-moving existential threats, like self-accelerating global warming and the growing risks of ever-evolving pandemics?

To launch such a project, we would first have to overcome two big obstacles. The first is the private-profit mania that has overtaken our nation and much of the rest of the world. (Even China is fighting it, but not too successfully, and in the wrong way: with the dead hand of absolute state control, of despotism.) We Americans are living in a Second Gilded Age, in which those who ostentatiously sport historically unprecedented wealth—like Bezos and Musk and the Kochs—exercise power and influence way out of proportion to what made their wealth, let alone their personal contribution to humanity’s survival. Celebrity—often born of or influenced by wealth—has become the coin of the realm. Science and technology sit in the back of the bus.

The second obstacle is the degradation of major democracies, like ours and the Brits’, in offering stability, safety and liberty. Whether separately or together, Germany and Japan, as perhaps the world’s two strongest and most stable democracies today, are unlikely to make up for the downward global trend. So the chance of any one nation, let alone ones so much smaller than ours, becoming the mecca for a whole generation of great minds is considerably reduced.

Nevertheless, the need is great. Never before has our species itself created three ways in which we could suffer partial or full extinction: (1) nuclear war, (2) runaway global warming, and (3) a deadly pandemic spread globally in days or weeks by aircraft. If we Americans were prescient and effective enough to create the Manhattan Project to counter the Nazi threat, perhaps our species, or at least some of us, could do something equivalent to counter these very real and rapidly growing species-wide existential threats. If we can’t, perhaps we should change our self-awarded name of Homo sapiens, to reflect our growing inability to deal with real threats rationally and collectively.



Permalink to this post

13 November 2022

How Business Schools Destroyed Silicon Valley, are Destroying America, and May Help Destroy our Planet

    “Oh we, like sheep, have gone astray, -ay”
    “Every one to his own way.” — Handel’s Messiah
This holiday season, Americans can give thanks that our democracy seems to have survived, if only temporarily, and if only by the skin of its teeth. But where are we headed?

There is general agreement that our nation is in decline. China and most American pundits think so. So do the nearly 63 million citizens who voted for the Demagogue in 2016. Their slogan said, “Make America Great Again.” (emphasis added)

But why is this so? I’m a 77-year-old with a unique career. I got my doctorate in physics just before government support for basic science tanked under Richard Nixon in the early seventies. I went to law school and became a lawyer near (and later in) Silicon Valley, specializing in intellectual property (IP). While teaching in that field, I paid a lot of attention to science and technology in the US and where they were headed. And now, in retirement, I’ve spent years thinking about what I’ve seen and what it all means, and summarizing my conclusions in this blog. Here’s my special perspective.

The picture I’ve seen is nothing pretty, and there’s no way to sugar-coat it. When I was in physics grad school, from 1966 to 1971, the two greatest applied-physics laboratories were private: AT&T Bell Laboratories and IBM’s Thomas J. Watson Research Center in Yorktown Heights, New York. Together, they created the scientific and engineering infrastructure for all of Silicon Valley. Their alumni invented transistors, computer chips, computers and other products of the “digital revolution” and ran the companies that made them.

That revolution led the way to the biotech revolution, still ongoing. Somehow we mere mortals managed to decode, remember and manipulate a human genome containing three billion base pairs. How could we ever have done so without computers?

As individuals, most of us can’t even remember where we put our eyeglasses or car keys. But the scientific/engineering infrastructure that Bell Labs and IBM’s Watson lab created from nothing made genomic science possible. (As a byproduct of research on microwave communication, Bell Labs also discovered the cosmic microwave background radiation that led us to our theory of the Big Bang and our current understanding of cosmology.)

Today, both of these great labs are gone. They have sunk beneath the waves of profit mania that have overwhelmed American business.

You would think that we might have replaced them with government laboratories of equal power and excellence. But no. This graph tells the woeful tale. As a percentage of GDP, federal spending on basic research has held relatively constant since 1976—at about 0.40%, or 1/250th of GDP. But development—the bit of federal support that turns raw big ideas into real products and services—dropped by half.

That huge drop might not have mattered if private industry had picked up the slack. But it didn’t.

Here’s where the business schools came in. They taught, and still teach, that profit is what matters. Why spend high-risk money on the engineering applications of leading-edge science, when you can squeeze the last drop of quick profit out of simple ideas? Making leading-edge science work in the real world is risky!

You can’t put “great potential,” let alone “progress in technology” or “what might have been,” on a balance sheet. You certainly can’t tally the raw ideas of physicists that led to transistor radios, computers-on-a-chip, and cell phones, eventually realizing the impossible dream of my youth: cartoon character Dick Tracy’s “two-way wrist radio.”

Without intensive government and private support of real technology, we Americans would be nowhere near where we are today. Without intensive government support of basic genomic technology, we never would have had our mRNA Covid vaccines, which we can now modify for every new variant of the Covid genome much like programming a computer. Without intensive government and private support of electronics beyond vacuum tubes, today’s Silicon Valley might be a bucolic refuge for the Bay Area’s rich.

Government and private disinvestment in real technology has been going on for decades. But one result appeared on the business pages just this week: the practical collapse of Silicon Valley. Tens of thousands of so-called “tech” workers have been laid off from “me-too” would-be moneymakers. The fallen include crypto banks and exchanges, “me-too” social media companies, competing ride-share and food-delivery logistics companies, and probably more soon. Even the giants Google and Amazon are laying off workers.

All these firms have a dirty little secret. They call themselves “tech” firms. But they are nothing like the real tech companies of the sixties, seventies or even eighties. They are not built on engineering that makes practical use of bold, new discoveries in science. They are built on supposedly “better” ways of organizing businesses. They feed on simple—dare I write “simplistic”?—ideas to improve business efficiency and make money, like improving the logistics of package ordering and delivery (Amazon) or of drivers and passengers in for-hire transport (Uber and Lyft). They have much more in common with finance and financial speculation than with science or engineering of the fifties and sixties, or what used to be called “research and development.”

Most, if not all, of these firms are based on Internet software. But software is not “technology.” It’s a way of organizing business using long-existing products. It’s a business idea performed automatically by machines and often, as a consequence, divorced from all human feeling and consequences.

How well are most of these brilliant business ideas actually working today? To answer that question, just think of how many different ways there are to input a telephone number on today’s websites. The right way, of course, is what programmers call “format free.” As long as you put all the digits in, with or without whatever else, the software ought to pick them out, format them as the rest of the software demands, and display them that way. That’s what software is for; as IBM’s old slogan used to say, “Machines should work; people should think.”

There’s probably a free open-source mini-program for format-free input available online. But how many websites demand that you input all ten digits, with or without special punctuation, in a single, specific way (with or without telling you), and reject your input and chastise you if you don’t? How many insist you type in a phone number precisely in a format that you’ve never used before? And how annoying and inefficient is it as you go from website to website and have to do it differently, over and over again?
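For readers who have written software, format-free input really is that simple. Here is a minimal sketch (the function name and canonical format are my own invention, not any particular library’s):

```python
import re

def normalize_phone(raw: str) -> str:
    """Accept a US phone number typed in any format and return a canonical one.

    Keeps only the digits, tolerates a leading country code "1",
    and renders the result as (AAA) EEE-NNNN.
    """
    digits = re.sub(r"\D", "", raw)           # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                    # drop the US country code
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}: {raw!r}")
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

# However the user types it, the site gets the same canonical number:
for raw in ["212-555-0123", "(212) 555 0123", "+1 212.555.0123", "2125550123"]:
    assert normalize_phone(raw) == "(212) 555-0123"
```

A dozen lines, and the user never sees a “please re-enter your number in the format we prefer” scolding again.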

This might seem a trivial example. But it illustrates experiences that most of us have had. And similar experiences get multiplied a hundred or a thousand times, making online activity a consistent chore rather than a pleasure.

In the old days of pre-software electronics, there was a body called the IEEE, the “Institute of Electrical and Electronics Engineers.” It used to promulgate standards for electrical and electronic components. One famous example (originally promulgated by the Radio Manufacturers Association) is the “resistor color code”: colored stripes on resistors that reveal their resistance even if they are damaged or burned. The IEEE still exists. But why hasn’t it or another body promulgated standards or best practices for software and website navigation, so you don’t have to learn, unlearn or re-learn minute and annoying variations in use with every new website you visit?
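To see what a real standard buys you, consider how little code the resistor color code takes to decode, precisely because every manufacturer follows the same mapping. A sketch (three-band resistors only, for simplicity):

```python
# Digit values under the standard resistor color code (black=0 ... white=9).
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]
DIGIT = {color: value for value, color in enumerate(COLORS)}

def resistance_ohms(band1: str, band2: str, multiplier: str) -> int:
    """Decode a three-band resistor: two significant digits times a power of ten."""
    return (10 * DIGIT[band1] + DIGIT[band2]) * 10 ** DIGIT[multiplier]

# Yellow-violet-red reads as 47 x 10^2 = 4,700 ohms (4.7 kilohms).
assert resistance_ohms("yellow", "violet", "red") == 4700
```

One tiny table covers every standard resistor ever made. Imagine if website phone-number fields enjoyed the same uniformity.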

Psychologists tell us that learning lots of things that are nearly the same but differ in detail taxes the human brain. By that standard, the range of American websites is a psychological torture chamber. But it has worked well to enrich a lot of investors in Silicon Valley. Until now. (For detailed discussion of how Tesla’s in-car graphical user interface epitomizes this problem, despite the quality of the car’s mechanical engineering, click here.)

It gets worse. When today’s software does have method to its madness, that method usually involves squeezing someone else for cash. So-called “ride-share” firms squeeze their drivers by “managing” their schedules in real time, with little or no warning. Then they “compensate” squeezed workers by calling them “independent contractors,” not “employees,” and denying them medical and other insurance, paid vacations and paid family leave. Online food-delivery companies drain restaurants by taking up to 30% of their revenue, not profits, merely for arranging delivery of food. Search and social-media firms “monetize” users’ private data for advertising and customer research, sometimes without letting users know that it happens, much less how.

You can argue whether all this is fair, just, or good for society, and whether it ought to be prohibited or restricted by law or regulation. But there’s a much deeper question, which underlies Silicon Valley’s retrenchment, if not its downfall. Is this science? Is it technology or engineering? Or is it just the type of exploitation of workers and customers, coupled with financial/business manipulation and speculation, that periodically causes abrupt economic retrenchments called “crashes”? Is it just the Gilded Age written in software?

Then there are those monstrous telephone queues. You know, the ones that keep you waiting for minutes, sometimes even when they call you back? When you finally trick yourself past the useless AI and get to talk to a human being, it’s never the same person twice. So there’s no human relationship and no possibility of one. But even that doesn’t matter, because the person you finally get to talk to may be in a foreign country, may not know much (if anything) about the business, and has no authority but to read from a script and ask you if there’s anything else he or she can do after failing to solve your problem. And if this depresses you as an occasional user, think how badly it must affect the representatives who have to suffer this joyless, choiceless, friendless, powerless, anonymous job as a career.

I wrote a parody of automated telephone queues nearly twenty years ago. Since then, they’ve only gotten worse. But B-school grads seem to like them because they make executives even more powerful and even less accessible than before. Apparently it’s much easier and cheaper to give the illusion of customer service than to train employees and develop systems to give the real thing. Try to find someone on a phone queue who even has a passable idea of how the business works, and you’ll see what I mean.

No one, apparently, has even conceived the simple expedient of identifying you by your phone number and connecting you with the same person, or at least one of a small group, more than once. Then you might actually have a basis for some human connection and continuity in repeated calls. But no, today it’s all anonymous unknown customers speaking with anonymous unknown reps with virtually no responsibility or authority, ad infinitum, ad nauseam. What a way to foster “customer relationships”!
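The “simple expedient” really is simple. A deterministic hash of the caller’s number routes every repeat call to the same rep (or the same small team) with no database lookup at all. A sketch, with made-up names standing in for a real roster:

```python
import hashlib

REPS = ["Alice", "Bob", "Carmen", "Dmitri"]   # hypothetical small support team

def assigned_rep(caller_id: str) -> str:
    """Deterministically route a caller to the same rep on every call.

    Normalizes the caller ID to bare digits, hashes it, and maps the hash
    onto the roster, so routing survives formatting differences.
    """
    digits = "".join(ch for ch in caller_id if ch.isdigit())
    index = int(hashlib.sha256(digits.encode()).hexdigest(), 16) % len(REPS)
    return REPS[index]

# The same caller always reaches the same rep, however the number is formatted:
assert assigned_rep("(212) 555-0123") == assigned_rep("212 555 0123")
```

In a real system the roster changes and a lookup table would replace the bare hash, but the point stands: continuity costs a few lines of code, not a reorganization.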

If by now you think I see Silicon Valley as getting its just comeuppance recently, you’re right. But the B-school types who foisted this corporate dystopia on all of us won’t hurt the most.

Most hurt will be the workers, especially those misnamed “independent contractors,” who will lose their miserable jobs. Second will be us customers, as the waits for information and service get longer, the answers get yet more garbled, and corporate dystopia goes into overdrive. If you haven’t yet noticed how hard the B-school graduates’ dysfunctional systems make it to get real information about a product or service, correct a billing or other error, change an order, do anything or answer any question not on the website—or generally seek human solace from the infernal machine—you haven’t been paying attention. (And I’m a guy who once wrote software.)

And therein lies the tale. For two generations, we Americans have been taught that government is incompetent and even evil, and that the private sector can do it all. We’ve also been taught, as dogma, that profit is the only worthwhile goal, and that “free trade and free markets make everybody better off.”

B-schools spread these seductive falsehoods like Pharisees in the temple. And so for a generation we transferred much of our manufacturing, factories, technology and research to China, lifting almost a billion people out of extreme poverty, but leaving millions of American skilled workers without jobs, factories, and eventually communities, and seeking solace in opioids. Apparently the “everyone” in the mantra about free trade doesn’t include them.

Never mind that such broad propositions can never be called “science.” Each is far too broad and vague to qualify even as a hypothesis testable by experiment or observation. Yet both are taken for granted in our political and social indoctrination. The great economist Alan Greenspan, as Fed chief, honestly recanted a corollary, that free markets self-regulate, after his reliance on that nonsense helped cause the Crash of 2008. Yet still we believe, excusing economics, the “dismal science,” for promoting nonsense that we can watch failing every day.

So we have all the evidence and reasoning to understand why our decline is getting steeper. For forty years we have been going down the wrong path, vigorously pursuing private profit separately, each on his own, while our commonwealth floundered, and while our real science and technology began to lag in fierce global competition, especially with China.

Truly we are like the sheep in Handel’s Messiah, about which many of us will sing next month. And B-schools are the shepherds who, in their numbers and their glory, have let us go astray. More than that, in their quest for profit as the summum bonum, they have led us downhill.

Before closing, I’d like briefly to recall the Manhattan Project. We Americans worked on it and paid for it together, and it ran in great secrecy, in the midst of a terrible war, completely under government leadership and control. Less than two years and eight months elapsed from the first demonstration of a self-sustaining nuclear chain reaction, at the University of Chicago, on December 2, 1942, to the first successful nuclear test at Alamogordo on July 16, 1945. To complete the development task, the uranium-enrichment plants at Oak Ridge at one point consumed a substantial share of the whole nation’s electrical power output.

No nation has ever led such a big technology-development project to such complete success in such a short time. Every other nation that has tried to develop nuclear weapons, even with our help or with our stolen secrets, took longer. Even now that the path, although not the details, is mapped on the Internet, smaller nations are still struggling to reach that goal (sometimes with our or Israel’s spooks’ interference).

I do not mean to laud nuclear weapons, let alone their use. But the Manhattan Project shows what we as a nation can do, and how quickly, under rational leadership when faced with an existential threat, if we work together and abandon the gospel of private profit.

In those days, the existential threat was Nazis getting the Bomb first. Today, it’s rapidly-accelerating global warming. That very real phenomenon threatens to maim, if not destroy, us and all of human civilization. As still the most advanced nation, ours may be the first and worst hit. In that respect, global warming is like nothing that has happened on our Earth since the Chicxulub Meteor extinguished the dinosaurs 66 million years ago.

We’ve known about global warming for a very long time. We’ve understood its theoretical foundation, the greenhouse effect, since 1896. Scientists saw that it was actually happening in the fifties. And we’ve known that it could be catastrophic since scientist James Hansen’s testimony before Congress in 1988.

That was 34 years ago. Now the theory and measurements of positive feedback tell us that it could become self-sustaining, even if we all stopped burning fossil fuels tomorrow. Recent events—catastrophic floods and droughts in Pakistan and Southern China, with crop failures in the latter—tell us it probably will.

In 1942, the risk of the Nazis getting a Bomb first was speculative. Even the possibility of anyone developing a nuclear weapon was speculative. But with that merely possible existential risk in mind, we undertook the greatest crash project in technology development in human history, and it succeeded.

Now we know that global warming is happening, is accelerating and could get much worse. Shouldn’t we stop straying like sheep, work together, and take on this existential threat?

Elon Musk the Scatterbrain can move to Texas for lower taxes and take on Twitter while Tesla and SpaceX still need good stewardship. But the rest of us can’t afford to act like him. We must get our act together and work together. We must stop making money by exploiting each other, and wasting money by writing duplicative and annoying “me, too” software, and start figuring out how to stave off a catastrophe we humans have caused. That will take a return to real science and engineering, not software.

It isn’t high taxes that are killing Silicon Valley. It was and is neglecting real science and technology in a general rush to find an easy path to making a quick buck. It would be a shame if our leaders let us continue down that path while climate change made much of our world uninhabitable, thereby drowning human civilization in unimaginable migration, war and suffering. Maybe the demise of Silicon Valley, or what remains of it today, can put us on a better path.


For brief descriptions of and links to recent posts, click here. For an inverse-chronological list with links to all posts after January 23, 2017, click here. For a subject-matter index to posts before that date, click here.

Permalink to this post

11 November 2022

Why Section 230 Has To Go


Section 230 of the Telecommunications Act of 1996—in subsection (c)—immunizes Internet platforms from any liability for what others say or publish on them. The crucial part is subsection (c)(1), a single sentence, quoted below. There are four main reasons why it has to go:

First, it’s destroying democracy, social order, and social peace. It’s doing that not just in our own country, where somewhere between 50 million and 100 million people believe (incorrectly) that Joe Biden stole the last presidential election. It’s also played a huge role in ethnic pogroms against Rohingya Muslims in Myanmar and the election of near-tyrants in Brazil, Hungary and Venezuela. When great masses of people are made to believe things that just aren’t so, bad things happen.

Second, Section 230 was a so-called “midnight amendment” to a major statute, tossed in at the last minute at the behest of powerful corporations. It had few or no hearings and received virtually none of the careful consideration that major changes in law usually get from Congress. The whole subsection’s title doesn’t even match the contents of crucial subsection (c)(1).

Third, Section 230 marked a major departure from Anglo-American defamation law. For Internet platforms alone, it wipes out the structure of common law that courts in England and the US spent centuries developing for earlier major advances in human communication: the printing press, radio and TV.

Section 230 cancels defamation law for Internet platforms by immunizing them from liability for publishing lies. It does so even though it’s impractical—often impossible—to curb the anonymous, obscure and often unknown or immune people who originate or repeat the lies on the platforms, by the millions.

Finally, Section 230 was an experiment in legal immunity that has already served its purpose. It was designed to foster development of the Internet in its infancy.

Now that the Internet no longer needs fostering, a nasty side-effect of Section 230 remains: it gives everyone from politicians to social-media “influencers” an incentive to lie by removing all practical accountability for lying. To stop an exponentially expanding lie on the Internet, aggrieved parties must sue millions of unknown authors, most of whom have few or no assets to seize and cannot (due to screen names) be found or identified—and some of whom are out of reach of the law entirely.

The recent case of Alex Jones illustrates the problem. Courts just ordered him to pay $1.43 billion in damages for lying that the Sandy Hook gun massacre of mostly kids was a “hoax” and the aggrieved parents of dead kids were actors in it. [See this post on compensatory damages and this one on punitive damages.] Nearly a billion dollars of the damage award were compensatory. They are to be paid to particular suing parents, who had been pestered, hounded and even threatened by Jones-inspired crazies for publicly grieving their kids’ violent deaths. The rest of the damages were punitive, imposed by the court for a lie that was particularly egregious, socially and personally damaging, and apparently deliberate. (The court also ordered Jones not to transfer assets out of the country, making any attempt to do so “contempt of court,” punishable by additional damages and jail time.)

But what if Jones’ lie had not come from Jones—a notorious public figure who had waxed rich, in plain sight, from years of outrageous speech? What if it had come from an anonymous social “influencer” hiding behind a screen name and a dark network? What if, worse yet, it had been deliberately concocted by a Russian or Chinese spook, outside of the jurisdiction of US courts, for the purpose of destabilizing American society?

In any of these “what if” cases, our Section 230 would have kept courts from punishing anyone for the lie. In the first case, the liar could not have been found. In the second, the liar could not have been held to account by virtue of geography. The effects of the lie would have been much the same—millions of people believing that a terrible tragedy had never happened, but had been made up out of whole cloth to promote gun control and justify seizing guns. Yet nothing could have been done to correct the lie or punish the liars because Section 230 protects the Internet platforms that passed the lie on to millions, and the anonymous blogger or spook who started it could not legally be reached.

That, in a nutshell, is why Section 230 has to go. The reason has nothing to do with “bias” against conservatives (although if conservative lawmakers think so, their help in getting rid of Section 230 would be welcome). It has to go because it’s destroying the global fabric of politics and society by removing all practical accountability for lying over the Internet.

By vitiating accountability, Section 230 actually encourages lies. The Internet was supposed to expand human consciousness by making access to truthful information available around the globe. Instead, Section 230 has made it a cesspool of lies.

The tension between free speech and suppressing lies is nothing new. It didn’t start with the Internet, and it won’t end when (if ever) we all get our “news” electronically from direct cortical implants.

The law of defamation helps resolve that tension by letting people injured by lies sue liars. Its very existence is proof that there are limits to free speech, even in America. No one has a “right” to injure others by lying. It’s just that Internet platforms managed to trick Congress into immunizing them from the same rules that already apply to every other form of communication.

Our own Supreme Court, in its better days, thought it had resolved the tension in the seminal case of New York Times v. Sullivan. There it ruled that a newspaper’s owners could be held liable for defaming a public figure, but only if they published an untruth with more than negligence. They have to publish a falsehood with “actual malice,” defined as knowledge of its falsity or reckless disregard for whether the statement is true or false.

That’s a high standard for a plaintiff to meet. All it means is that a lie’s publisher must have made some honest effort to verify its truth or falsity. That effort need not even rise to the level of “reasonable care,” the opposite of negligence. It just has to clear the bar of not being reckless.

It’s by no means certain that Internet platforms deserve that same lowered standard of care. If they make money by publishing the speech of others, in the millions, and they are the only practical chokepoints, should they be held to a lower or a higher standard than traditional publishers, who only put forth their own carefully considered “speech”?

But we needn’t answer that question now because Section 230 removes even that low bar for Internet platforms. So it practically invites lies on the Internet, the most rapid, powerful, and widespread means of communication in human history.

Today, the only real disincentive for publishing lies on the Internet is political and social pressure. As Sarah Palin might ask, “How’s that workin’ out for ya, America?”

Mark Zuckerberg and his ilk appear repeatedly before Congress to express regret and confess that they need to do more to stamp out lying on their platforms. Then they go back to their offices and programming desks, maybe tweak their algorithms a bit, hire a few more misinformation monitors, and continue raking in millions by directing—dare I write “distracting”—their users’ attention with “clickbait.”

The goal of profiting dictates their prime directive: turn clickbait into dollars. How do they do it? They serve up stuff that is unexpected, boldly new, surprising or astonishing. And the more “information” meets that description, the more their algorithms serve it up. Is it any wonder that lies and misinformation become a large part of the mix, precisely because they are unexpected, boldly new, surprising or astonishing?

It gets worse. Modern neuroscience now lets us peer inside the human brain while it’s reacting to new information. And modern studies show that liberals and conservatives react differently to the same information, using different physical parts of their brains.

I suppose that divergence is inevitable when the information is real. But when it’s a lie, couldn’t it be used to great effect as selective propaganda to different audiences? And isn’t it likely that the difference, provoked and maybe entrenched by lies, is responsible for our current extreme polarization and division?

Section 230 was part of the Telecommunications Act of 1996, passed when the Internet was just getting started. It was an open experiment in encouraging what was then a nascent industry.

That experiment has already succeeded, maybe too well. Now the Internet is everywhere. It dominates our communication, news, politics, and commerce. Companies that publish on it or feed on it, such as Amazon, Apple, Facebook, Google, Microsoft, and Twitter, are among the most valuable and influential in human history. Collectively, they far surpass even the automobile industry in value.

So Section 230 already has served its purpose. The Internet that now dominates our society, politics, communication and culture doesn’t need further special incentives to grow, let alone by encouraging lying.

Now we are in an economic pause. Internet businesses, among others, have overreached and are retrenching. Their algorithms and stealthy invasions of privacy have reached their Peter Principle and are under sustained social and political siege. Internet Inc. is already considering how to change and downsize going forward.

So maybe now is a good time to get rid of Section 230—at very least by imposing the same no-recklessness standard that every newspaper and broadcaster already must meet. Maybe now is a good time to try to save humanity from the madness of universal lies. Depending on the as-yet-unclear outcome of the Midterms, Congress might devote its “lame duck” session to doing just that.

The culprit’s text. Here, verbatim, along with the whole section’s title, is the single sentence of Section 230(c)(1) that changed our world:
[Section 230](c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker[.] No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.


Permalink to this post

10 November 2022

Fixing Tesla’s Big-Screen Graphical User Interface


My wife has had her Tesla Model 3 Long Range for almost a year now. Mechanically, it’s a marvelous car, a dream to drive. But its big-screen graphical user interface (GUI) spoils the driving experience.

When first learning to drive the car, I would have given the GUI a C+ grade. Now, after nearly a year of use, I would give it a D. Every time I have to use it, I hate it more.

In keeping with Elon Musk’s world-class arrogance, Tesla provides no way that I could find to offer feedback on the GUI. So I’m writing this post in the hope that someone in Tesla’s management will discover it and fix the damn GUI. Here are my suggestions:

1. General design principles. First, the programmers seem to have organized the GUI by abstract categories, before even acquiring much experience driving the car. At very least, the person in overall charge of the GUI should have owned and regularly driven a Tesla for at least a year, and have continuously improved the GUI accordingly. The results today suggest that that was never done.

Second, the GUI design must take into account the mechanics of the driving experience. Because of their big batteries, Teslas are extraordinarily heavy, weighing well over 4,000 pounds in a package the size of a compact car. To add to that, their tires are hard (to reduce rolling resistance and extend range), and the steering and suspension are tight to simulate sports-car driving.

As a result, the Tesla driving experience is extraordinarily bouncy. This makes hitting tiny icons—let alone sliding-scale adjustments—while the car is moving a difficult exercise in hand-eye coordination.

This is so even for the passenger. For the driver, every use of the GUI while driving is a challenge and a safety hazard.

To reduce this annoyance and danger, no screen icon, slider or other control should be less than a centimeter in any dimension (about the width of a finger). All vernier adjustments not on the rotary steering-wheel dials should look the same, and all should appear in the same screen position, with a range of numerically labeled buttons, each no less than one centimeter square, running from 0 (off) to 10. (It might help to have a button that also puts HVAC adjustments on the steering-wheel rotary dials temporarily, say for ten seconds, so the driver could adjust the temperature and/or airflow without taking eyes off the road.)

Finally, programmers may think that tiny icons are cute, but they leave users guessing as to what they control, and how. Every icon should have an English label below it, translatable into whatever language is the driver’s native tongue. (Software makes this easy, and Apple seems to have no trouble doing it.) If programmers feel this clutters the GUI, they should make it a software-controlled user option, perhaps to be discontinued after a learning period. (I still find myself guessing at the icons for the HVAC almost a year later.)

2. Get rid of hand gestures entirely for GUI control. They may be practical on an iPhone, when you’re standing steady on the ground and your entire attention is focused on a few square inches in your hand. When you’re driving a car at 70+ MPH and the car is bouncing as a Tesla does, they’re frustrating and dangerous. So this putative aesthetic embellishment becomes an annoyance and a hazard. There’s nothing that you can do with hand gestures that can’t be done with buttons more easily, precisely, reliably and safely.

3. Put controls for all mechanical elements on the home screen. The purposes of Tesla’s big main screen were to save the cost of mechanical controls and allow simple and cheap modification. But the GUI often requires multiple screens, icons and gestures to do a single thing. The most egregious example is opening the glove box, which often takes hitting two buttons on opposite sides of two different screens!

The home screen should have language-labeled icons to activate every mechanical feature of the car (except the doors from inside, which have their own mechanical buttons). This includes: all interior and exterior lights, the trunk, the frunk, the door locks (all, driver’s, front and back separately), the glove box, the charge port, and the side-mirror folding and unfolding (preferably a toggle). The driver's personal adjustments—seat position, mirror positions, steering wheel, etc.—should have a separate icon on the main screen.

The HVAC controls, including heated seats and steering wheel, should have their own screen, reached with a prominent button on the home screen. Perhaps there should be separate virtual buttons for driver's side, passenger's side and back seats.

The airflow part of the HVAC should have a separate subsidiary screen with an entry button on the home screen. So should the radio, cell phone, and external audio interface (Bluetooth). And every other screen but the home screen should have a big icon for the home screen, all located in the same place, probably the upper left.

4. Reorganize the home-screen buttons in a way that makes sense for the driver and passenger, not the programmer. The most important features for day-to-day use are mechanical-element controls, navigation, HVAC, audio controls (including radio, Web services, cell phones and Bluetooth), charging controls, and charging-station location, in roughly that order. All these things should have prominent entry buttons on the home screen, placed in order of probable frequency of use. Better yet, the positions of icons on the main screen should be user-adjustable, like the icons on an iPhone.

5. Make navigation simpler to access and use. After nearly a year of use, it’s still a chore for both of us to find the address-input screen (which now requires two buttons and a gesture) for GPS guidance and to control the volume of voice guidance or turn it off. It shouldn’t be that hard.

The home screen should have a big, prominently labeled button for inputting a navigation address. Preferably, it should lead to an entire screen devoted to the address, showing charge stations along the way (including must-visit ones!). There should be button-triggered options for multiple stops, choosing which suggested charging stations to visit, showing a map side-by-side, and turning the audio guidance on and off. A full-screen map should be a separate option, automatically showing the specific address in a side box. And every map should have a compass direction/heads-up direction toggle button, clearly marked.

6. Don’t use the GUI as a marketing device. The early use of the audio-source selection screen to market various paid subscription services only enraged us. Isn’t that a bit tacky in a $60,000+ car? (Tesla since appears to have stuffed the marketing into subsidiary screens.) Any marketing on the GUI should be by request only, clearly marked with a dedicated button. And BTW, the radio station list should tune to each station when you touch it in the list.

7. Provide a feedback mechanism right in the car. Most big-business Websites now provide online feedback mechanisms, often with buttons on each Web page. Tesla would do well to emulate that customer-friendly practice.

A feedback mechanism right in the car would motivate users to provide feedback right at the point of frustration, while the source of trouble is still fresh in their minds. If coupled with rudimentary AI, it could let Tesla’s engineers tally customers’ “votes” for modifying specific systems without the need for special surveys.

The purpose of having software control most small mechanical functions is flexibility and continuous improvement, right? So why not actually implement continuous improvement, at customers’ behest, while Elon Musk is distracted with his latest toy, Twitter? Who knows, the management that actually makes the cars might achieve continuous improvement by inaugurating a (gasp!) user focus group. What a concept!

Endnote on the iPhone App. The foregoing comments apply only to the Tesla’s own internal big-screen GUI, not the iPhone App. Unfortunately, I hate the iPhone App even more.

The App’s organization is mind-bogglingly frustrating. Often features seem not to work for no apparent reason. No feature that I could find tells you whether or not the App is actually connected to the car. So I recently searched the App’s charging feature for the car’s current state of charge in vain (perhaps because the App was not connected). The App should show connection status on every page; and it should be organized by the things that remote users are most concerned with: (a) locking status and unlocking action; (b) charging status and progress; and (c) internal temperature control.

At least the App does let me into the car, when the phone is close enough to it. There I can access the big screen. But my general impression is that Tesla management inappropriately mixed programmers with computer/machine experience and programmers with cell-phone experience and somehow consistently got the worst of both worlds. (Hence the overuse of gestures and tiny sliders in a bouncy car.)

The best that Tesla could do now is overhaul the big-screen GUI and then get the App to reproduce the same user experience as closely as possible. A foolish consistency may be the hobgoblin of little minds, but here consistency would make both learning a new system and routine day-to-day use easier.


For brief descriptions of and links to recent posts, click here. For an inverse-chronological list with links to all posts after January 23, 2017, click here. For a subject-matter index to posts before that date, click here.

Permalink to this post

09 November 2022

The Morning After

    “Plus ça change, plus c'est la même chose.” — Jean-Baptiste Alphonse Karr (1849) (“The more things change, the more they stay the same.”)
Well, I won’t be taking my wife out for a celebratory dinner. I promised her an opulent night out if Stacey Abrams, Cheri Beasley, Val Demings and Reverend Warnock all won. It wasn’t a bet, only a promise to celebrate what I’d hoped would be sheer joy.

Of course that won’t happen now. Reverend Warnock is the only one who still has a chance to win, and we won’t know that answer until after the Georgia runoff December 6.

But if Warnock wins in Georgia, or if Catherine Cortez Masto, who is trailing now, wins in Nevada, the US Senate will be pretty much back where it started. It’ll be evenly divided, with VP Harris casting the deciding vote. Senators Manchin and Sinema will still clutch the filibuster like a sacred amulet, and nothing much will get done there. Even if both Warnock and Cortez Masto win, the resulting 51-vote majority will still require Manchin’s or Sinema’s vote to get anything filibusterable done.

As for the House, it’s likely to change hands. The spineless weathervane Kevin McCarthy will likely become Speaker. The radical rightists will launch a series of investigations. They will make Hunter Biden’s already miserable life even more miserable. They might even impeach his father, the President, without the faintest chance of convicting him in the Senate.

So the House will become a stage for political theater, nothing more. Democratic control of the Senate—not to mention the presidential veto—will prevent anything but truly bipartisan legislation from passing. The House may flirt with extortion again, as the Tea Party (now renamed “Freedom Caucus”) once did. But, one hopes, it will not go so far as to provoke a national-debt default and trigger a global financial meltdown.

Not much substantive will get done, except in the bowels of our federal agencies and our courts. Will our people begin to understand that those agencies—which the Demagogue derides as the “Deep State”—keep our air, airplanes, cars, drugs, food, highways, soil, water and workplaces safe, keep those Medicaid, Medicare, Social Security and VA checks coming, and collect the taxes to make it all possible?

Don’t count on it. Fox and friends will continue persuading vast swathes of Americans that the folks who protect us from unsafe surroundings, misfortune, unleashed oligarchs and unfettered corporate greed, not to mention Vladimir Putin, are incompetent bureaucrats, “Marxists” and their enemies.

Then what, if anything, will have changed, other than giving the right wing a bigger stage for performative mischief?

Deep down below, I think, something is moving. Abrams, Beasley, and Demings, like Mandela Barnes in Wisconsin, may not have won. But they came damn close—all within the low single digits. And Reverend Warnock appears to be on track to win his runoff in December.

What unites all five is that all are singularly attractive candidates, and all are Black. All except Barnes ran in the South, which is just beginning to take down the statues and amulets of slavery, 157 years after the Civil War ended.

Watching culture change is like watching grass grow, but in epochs, not days. Tribalism is deeply embedded in the human psyche, as shown by recent experiments with kids split into arbitrary “teams” with orange or green T-shirts. The particular kind of tribalism that we call “racism” is especially rooted, but even it is not immune to change.

Yes, all five candidates were and are attractive. Yes, all are smart, articulate, and relatively moderate. All were very careful not even to hint at any bias in themselves. But only one is on track to win: the Reverend Warnock.

For opponents of the other four, it was sufficient to mumble, in various coded ways, “But, she (or he) is Black.” The codes vary with the candidates, localities and seasons: “soft on crime,” “left-wing,” “extreme leftist,” “Marxist,” “socialist,” “enemy of mainstream values,” and “not one of us.”

For Warnock, that distinction was impossible to make, because his opponent, Herschel Walker, is also Black. So Georgia was faced with a choice between two Black men. One is a blithering idiot, accused by women claiming to be ex-girlfriends of having paid for their abortions. The other is an intelligent, caring, wholly decent man of the cloth, who’s already done the job for 18 months. The good people of Georgia appear to be making the obvious choice. That’s what progress looks like, at least in the South.

I write this with only a modicum of irony, because tribalism is a veritable force of nature. Just look at Russia and Ukraine to understand how nonsensical and durable it can be. So yes, Warnock’s future confirmation in his specially-elected Senate seat will be a baby step, but a firm step in the right direction.

As the Old Guard dies off, as youth arise to replace the gerontocracy, as our population changes through in-migration and differential birth rates, those steps will accelerate. And some day, maybe not in my lifetime, the best women and men will win despite any T-shirts they cannot remove.

To see what we’re all missing right now, you have only to watch Stacey Abrams’ gracious and eloquent concession speech. It’s one of the most moving I’ve ever seen. It covers all the points of Democratic policy in a few, well-chosen words. It rings the bells of decency and family. (Abrams’ own family is large.) It promises never to stop fighting. And it does it all with the grace, precision and beauty of language that one might expect from a graduate of Yale Law School, which Abrams is.

Watch that speech, and take encouragement. Abrams, who is still only 48, is not going anywhere but up. And so are the fortunes of all who can hear clearly, through the dog whistles of tribalism, the credo of unequivocal equality that is our nation’s gift to our species.



Permalink to this post

03 November 2022

The “Dope Club”


Since many “news” stories today seem to resemble short stories, I thought I would tell a short story of my own. It’s a true story and one of the formative experiences of my early life. I have only memories to go on, no records. But it happened when I was in early grammar school, about six years old.

I was always a “smart” kid, impressing my teachers with my intelligence, precociousness and knowledge. My parents—a novelist-and-Hollywood-writer Dad and an artist-homemaker Mom—valued “smarts” highly. They were unabashedly elitist and encouraged me to be so, as much as I could understand what that meant at such a tender age. My teachers, too, did much the same, insofar as I can recall.

In my class there was another similar kid, whose name I still remember: Kingsley Bellows. Not only did he have high intelligence. He also had unusual athletic prowess for a six-year-old. He was the fastest runner in the class. His nickname “King” led all us kids, just getting used to the meaning and mystery of language, to think of him as some kind of superior being.

He and I became friends, although my athletic prowess, while passable, was nowhere near his. He was also more mature emotionally than I: he once explained to me, quite calmly, that our Sun would burn out a few billion years hence, and that all humans would die. I burst into tears and felt terrible until my Dad (who had an unusual interest in science) explained to me that that scenario, while realistic, was, like my own natural death at age six, a long way away.

King’s and my friendship developed over what seemed a long time to a six-year-old but was probably just days or weeks. It soon evoked a hostile reaction from the rest of the class, which formed a “Dope Club” in response. The name had nothing to do with marijuana or drugs, or today’s “wokeness,” all of which were worlds away from our school. It had everything to do with a majority of the class feeling somehow inferior, like “dopes.” The Club’s explicit purpose was to resist our elitism by ostracizing us two and making us feel unwelcome.

The next thing I remember was how we “had it out” on the playground. King, who also knew how to box, stood in boxing pose in the middle of a circle of boys, who ventured at random to run in and try to hit him. I ran around the periphery of the circle, smiting anyone I could, and mostly getting away with it, because everyone’s attention was focused on King at the center.

By the time we all lay down on our mats for a nap after recess, the Dope Club’s hate and ostracism seemed to have dissipated. Its name was heard no more, and life in our class quickly returned to normal for six-year-olds. At least I remember nothing about the rest of that school year.

In the ensuing 71 years, I’ve often recalled that story as a metaphor for national politics, especially today. I wondered whether what I then thought of as standing up to bullies had saved me and King, or whether the camaraderie and mutual respect that often replaces enmity when young boys fight took hold. More likely, our teachers became aware of what had happened and made a special effort to nurture self-esteem among all the kids. (Our school was part of UCLA’s teacher-training program, and operated much like a Montessori school today. It even had its own full-time child psychologist, whose name was Mr. Franco. I suppose that I saw him not infrequently, which is why I still remember his name.)

It doesn’t take much imagination to see the analogy between the Dope Club of my tender years and today’s Republican Party. Its driving force and electoral success derive largely, if not primarily, from resentment, even hate, of coastal elites, minorities, the rich, desperate immigrants, and even just the well-educated. Denial of science, vaccine resistance, and even election denial follow from this prime directive of anti-elitism. And poor Hillary unknowingly played right into it with her unfortunate “deplorables” remark.

But here’s the thing. My class’ Dope Club arose in early grammar school, among six-year-olds. You could expect children, even today, to belittle, resent and even ostracize self-conscious intellectual elites, just as you could expect boys that age to tease and torment little girls for being mostly weaker and smaller and “different.” Six-year-olds have no way of knowing that resentment, even if justified, is not a plan, let alone a solid basis for forming a useful social organization. In that respect it’s like selfishness, the other mainstay of Republican ideology since Reagan.

Apparently these points have failed to reach our current crop of Republican leaders. Led (if reluctantly) by the Demagogue, they readily confess that they have no plan but resentment and blame, plus the occasional tax cut for the rich. Mitch McConnell, the Senate Minority Leader, was quite open about it. When asked what he and his crew would do if they took control of Congress next January, he replied, “I’ll let you know when we take it back.”

My own early-childhood Armageddon produced no lasting harm because six-year-old boys, unarmed, are too small and weak to do lasting damage to each other. Not so the heavily armed men who conspired to kidnap Governor Gretchen Whitmer of Michigan and are now going to jail. Not so the grown men who deride and ridicule Speaker Pelosi’s husband, hospitalized by a resentment- and hate-filled attacker with a hammer. Not so the grown-up pols who deride and ridicule Speaker Pelosi herself for being female, old and frail (while nevertheless being one of the most skillful and consequential House Speakers in our national history).

We all know what happens when a Dope Club takes power in an advanced industrial democracy. That happened in Germany, some three decades after that nation had been universally considered (along with Britain) one of humanity’s most advanced societies. It can happen here, too, and the odds of it happening have increased dramatically since 2016.

Grown men (and the occasional woman) who act like six-year-olds on a playground pose an entirely different level of risk, whether they carry automatic weapons themselves or just promote carrying them, let alone when they seek control of a world-destroying arsenal of nuclear weapons.

Six-year-olds can base their actions on raw feelings. It’s what they do. If grown men and women who assume leadership do the same, the chances of our species surviving the nuclear age, let alone self-sustaining global warming, are not high. The easiest way of stopping them, with the least amount of human suffering, is to repudiate them decisively next Tuesday, by voting all blue.



Permalink to this post