Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

31 July 2021

Newly Discovered Covid Risks and their Mask and Vaccine Logic


Among the newly discovered risks of Covid-19 is the risk of cognitive decline, especially among seniors. That seems plausible to me, for we appear to have a national epidemic of fuzzy thinking.

In the last two weeks or so, we’ve discovered three things about the delta variant (besides cognitive decline) that we didn’t know before:

A. People who are vaccinated can carry the disease, either asymptomatically or with relatively mild symptoms, and pass it on to others;

B. The delta variant causes infected people to replicate about a thousand times more virus than earlier variants; and

C. Despite A and B, the risk of vaccinated people being hospitalized or dying, even from the delta variant, is still minuscule compared to the corresponding risks for unvaccinated people.

Now let’s suppose—contrary to every indication over the last 18 months—that our species is rational, as befits our self-awarded title Homo sapiens. At a minimum, that would seem to imply that: (1) people don’t want to get sick or die; (2) they don’t want to increase the risk of others getting sick or dying (at least people they care about); and (3) all want their lives and the general economy to get back to normal ASAP.

If we take newly discovered risks (A), (B) and (C) together with idealized popular goals (1), (2) and (3), what conclusions can we draw?

First, everyone, whether vaccinated or unvaccinated, should wear masks when indoors with strangers. If you’re vaccinated and unmasked, you could still get infected. Even if you aren’t hospitalized and most likely won’t die, who wants to get sick? More important, if you catch Covid, you could become a “Covid Mary” and pass the disease on to others, who might get seriously sick and die (if unvaccinated), or might pass the disease on to yet more people (even if vaccinated). And if you’re prudent enough to get vaccinated, how about saving those scarce hospital and ICU resources for the people who really need them, namely, the unvaccinated? Finally, if any of this happens at scale, as appears likely, the dream of getting back to normal will recede into an idle fantasy.

If you’re unvaccinated, similar considerations apply just as much to you, plus two more. If you get the disease, you could be hospitalized or die. Or you could have all the debilitating effects of “long-haul” Covid, which would pretty much ruin your life.

The second major conclusion is that everyone who’s not vaccinated should get vaccinated ASAP. Now that we know that even vaccinated people can be carriers, the known risk to the unvaccinated just increased by half in red states and tripled in blue states. Why? The proportion of known potential carriers went up to the whole population (100%) from the roughly 66% unvaccinated in the worst red states and from the roughly 33% unvaccinated in most blue states. And that’s without even factoring in the thousand-fold increase in viral load among people infected with the delta variant. Think that might make the delta variant even more contagious?
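For readers who like to check the arithmetic, here is a minimal sketch in Python. It assumes the rough 66% and 33% unvaccinated shares cited above; the exact percentages of course vary by state.

```python
# Back-of-the-envelope sketch of the potential-carrier arithmetic.
# Before discovery (A), only the unvaccinated were treated as potential carriers;
# after it, essentially everyone (100%) is. The shares below are the rough figures
# cited above, not precise state-by-state data.

def carrier_increase(unvaccinated_share):
    """Factor by which the pool of potential carriers grows once
    vaccinated people are counted as possible carriers too."""
    return 1.0 / unvaccinated_share

print(carrier_increase(0.66))  # worst red states: ~1.5x, i.e., "up by half"
print(carrier_increase(0.33))  # most blue states: ~3.0x, i.e., "tripled"
```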

The part of the CDC’s recent guidance that recommends looking at the hazard map has little or nothing to do with science or logic. Instead, it’s a sop to the know-nothings and extreme libertarians, to whom their “freedom” to be reckless and stupid is apparently more important than health and life. It’s diluting science with politics.

Why is this so? For one thing, the hazards and maps change daily: the present nth wave of the pandemic is a fluid situation, changing rapidly as the delta variant rages. For another, how many people are going to get on the Web and consult the CDC’s website before going out to dinner or to a store? We’d be lucky if we could just get people to grab a mask on their way out the door, and not wear it under their noses.

These conclusions apply without even considering the next Covid variant(s). New ones might render the existing vaccines less effective, be even more contagious, or be even more deadly. The more we pussyfoot around, and the longer we delay in getting the virus under control worldwide, the greater the chance of having to deal with one or more even more dreadful variants.

Viral evolution is an automatic, continuous and unstoppable process. Its speed is roughly proportional to the number of people infected and the number of viruses replicating in each. With the delta variant, we just saw a thousand-fold jump in the typical infected person’s viral load. And we know that, with most people unvaccinated outside the “developed” world, the number of infected people is going to go steadily up for the foreseeable future. What further horrors might arise in the next variant as the delta rages rampant in all the nearly totally unvaccinated parts of the developing world?
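Here is a toy sketch of that proportionality. The case count is made up and used only to show the scale of the effect; the point is that a thousand-fold jump in viral load multiplies the opportunities for mutation a thousand-fold even if the number of infected people stays flat.

```python
# Toy proportionality sketch: mutation opportunities ~ (people infected) x
# (viral load per infection). The case count is illustrative, not an estimate.

def mutation_opportunities(people_infected, relative_viral_load):
    return people_infected * relative_viral_load

baseline = mutation_opportunities(people_infected=10_000_000, relative_viral_load=1)
delta    = mutation_opportunities(people_infected=10_000_000, relative_viral_load=1_000)

print(delta / baseline)  # 1000.0 -- the thousand-fold jump described above
```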

Except temporarily, in isolated places like Taiwan, New Zealand and South Korea, testing, contact tracing and quarantining have failed almost everywhere. The only things we’ve got that we know work now are masking, distancing and vaccines. Isn’t it about time we got serious about what we know works, before this virus and its mutants start decimating our nation and our species?

The longer we delay doing what we know works, the farther the dream of normalcy fades into a murky and uncertain future. I think we’ve proved by now that successive waves of infection are not the best way to restart the economy.

Permalink to this post

26 July 2021

How the New mRNA Vaccines Work


Our nation’s response to the Covid-19 pandemic has involved so many failures that it’s hard to count them. We failed to curtail travel from China when we could. We failed to set up robust procedures for testing, tracing and quarantining. We failed to foresee the need for tests and PPE and so failed to maintain adequate supplies. We failed in our messaging about masks, trying to restrict them at first to medical personnel and giving the false impression that they don’t work or are unnecessary. We failed to mandate masks and social distancing when we could have limited the disease to small, containable outbreaks. We failed to have any coherent national policy and messaging; in many cases individual states failed to have consistent policies and messaging.

In general, while fighting a horribly contagious pandemic, we Americans acted like a herd of 328 million cats. We strayed like the sheep in Handel’s Messiah: “every one to his own way.”

But in one respect only, our response to the pandemic has been spectacular: the new vaccines. Not only did we roll out safe and effective vaccines in record time. We and the Germans developed a whole new vaccine technology that offers exciting new vistas for preventative medicine. In the long run, the new vaccines promise to be as important in fighting contagious disease as Alexander Fleming’s discovery of the very first antibiotic, penicillin, in 1928.

I write, of course, about the two new so-called “mRNA” vaccines—the two-shot regimens offered by Moderna and Pfizer. This essay explains in some detail how they work and why they are so spectacular.

Vaccines themselves are nothing new. Although occasionally controversial, vaccines for smallpox have been around since the 1700s. Not surprisingly, smallpox was the first terrible contagious disease to have been wholly eradicated by vaccination.

To non-doctors and the uneducated, vaccines are counterintuitive. They protect you against a disease by giving you something like the agent that causes the disease. They’re a bit like taking “the hair of the dog that bit you” to cure a hangover, but in reverse: you take the hair of the dog before being exposed to the disease agent. It protects you by provoking your body to arouse and “train” its natural defenses—antibodies, T-cells and B-cells—to recognize and fight the invader if it ever comes for real.

This counterintuitive aspect of vaccines makes them ripe for misinformation and demagoguery. How can giving you the same agent that causes the disease protect you?

But it does. As usual, the devil is in the details, here in the words “something like.” Obviously you can’t just be inoculated with the live, whole, active disease agent itself, or else you would get the disease. So the virus that causes the disease must be modified in some way.

The smallpox vaccine developed by the British doctor Edward Jenner used the bug for the similar disease cowpox, which was much less dangerous. Later vaccines used viruses modified, inactivated or “killed” in the laboratory. These vaccines work because the body’s immune system can recognize and train itself to respond to certain proteins in the virus, even though the virus itself is “inactivated” from reproducing, or “killed.” (Viruses aren’t really alive like bacteria, so the word “killed” is technically inapt as applied to them.)

This was the state of the art of vaccine technology for about two centuries, from the late 1700s until the late 1900s. Back then it was all pretty much ad hoc experimental science, trial and error. Research doctors would find and isolate the disease agent, modify it in various ways, and see whether its use as a vaccine would provoke an effective immune response. There was no reliable general method or approach because no one understood the molecular basis of infection and immunity.

All that began to change with the discovery of DNA, or deoxyribonucleic acid. This long molecule, twisted in a double helix, is the repository of heredity and the basis of life. Three men and one woman discovered its double-helix physical structure, which makes it work. The men (Watson, Crick and Wilkins) got the Nobel Prize in 1962; the woman (Rosalind Franklin) did not because she had died by the time of the award, and Nobels must have live recipients.

It took decades for this monumental discovery of the structure of DNA to ripen into the whole field of study now known as “molecular biology.” It also took the contemporaneous development of whole fields of unrelated technology, such as computers, electron microscopes and X-ray crystallography. For DNA is no simple molecule like those you study in high-school or college inorganic chemistry. The human genome, for example, has 3.2 billion base pairs, each of which comprises two complementary nucleotide bases. There is no way that the human mind could conceive of such complex molecules, let alone handle them in detail, without the aid of digital computers that can store gigabytes of data in reliable memories.
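To put “gigabytes” in perspective, here is a minimal back-of-the-envelope sketch. Each position in the genome is one of four letters (A, C, G or T), so two bits suffice per base; storing one byte per base, as plain text does, takes about four times as much.

```python
# Storage arithmetic for the human genome (~3.2 billion base pairs).
# Two bits can encode one of the four DNA letters; plain text uses one byte per letter.

base_pairs = 3_200_000_000

packed_gigabytes = base_pairs * 2 / 8 / 1e9  # 2 bits per base -> ~0.8 GB
text_gigabytes = base_pairs / 1e9            # 1 byte per base -> ~3.2 GB

print(round(packed_gigabytes, 1), round(text_gigabytes, 1))
```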

Today, we have automated machines, called “sequencers,” that can “read” samples of DNA. We have enzymes that can cut specified sections out of the long molecules. We have a technique—polymerase chain reaction, or PCR (the subject of yet another Nobel Prize)—that can amplify small samples of sections, or of whole DNA, for testing and sequencing. We even have a technique, called CRISPR-Cas9, to edit DNA by inserting partial sequences in designated places.

This is just the briefest review of a whole series of tremendous advances in science and technology that made the Moderna and Pfizer mRNA vaccines possible. They could never have been developed without all the intermediate steps: the understanding of DNA, the use of electron microscopes, and the development of sequencers, PCR and CRISPR-Cas9.

Each of these steps, in itself, was a monumental discovery. Together, they represented half a century of the most sustained and miraculous advance of science and technology in human history. They now make it possible to “design” vaccines the way you would design a car or computer, rather than by finding what works by blind experimentation with patients, through repetitive trial and error.

To see how this is possible, we need to consider one more piece of the puzzle. DNA itself is more like a library, a catalog of information, than the working parts of a molecular machine like a human cell. Proteins are the basic building blocks of working cells, and hence of our bodies. But how does the information in the DNA “library” get translated into the myriad proteins that make our bodies work?

The answer is an intermediate “blueprint” in another molecule, ribonucleic acid, or RNA. RNA serves as a template for building proteins in a cellular organelle called a “ribosome.” A specialized “messenger” RNA, or mRNA, molecule carries a copy of the blueprint from the corresponding part of the huge DNA molecule: a smaller blueprint for one particular protein. Then the ribosome reads the mRNA blueprint and constructs the protein itself.
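Here is a toy illustration of that last step: the ribosome reads the mRNA three letters (one “codon”) at a time and chains together the amino acid that each codon specifies. The handful of codons and the short sequence below are a tiny illustrative fragment of the standard genetic code, not anything from a real gene.

```python
# Toy model of translation: read the mRNA one codon (three letters) at a time and
# append the amino acid each codon specifies, stopping at a stop codon.
# Only a few entries from the standard genetic code are included, for illustration.

CODON_TABLE = {
    "AUG": "Met",   # start codon, methionine
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop codon
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```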

With this complex but necessary background, we can now understand how the Moderna and Pfizer mRNA vaccines work. They are the first-ever “designer” vaccines, designed and built at the molecular level to provoke an immune response to a particular protein—and only that protein—in a virus.

The protein that the designers chose for their immune target is the so-called “spike” protein of the coronavirus that causes Covid-19, namely SARS-CoV-2. This is the protein that cartoons of the virus depict as a spike sticking out of a spherical body, looking like the protruding trigger of an old-fashioned explosive sea mine.

Why is that choice so important? The “spike” protein is the means by which all known coronaviruses first enter a human cell. It’s the key to the molecular lock that lets the virus inside the cell to do its damage. Without access to a human cell’s inner machinery, the virus can’t replicate. It can’t reproduce and multiply, so it can’t cause disease. Without the spike protein entry key, the virus is just a random piece of useless junk in the bloodstream and lymphatic fluids, susceptible to elimination in urine or feces.

The Moderna and Pfizer vaccines are just bits of mRNA that code for the spike protein. They are not the spike protein itself. Instead, they instruct the ribosomes in human cells to make the spike protein, which the human immune system then recognizes as “foreign.” Over a couple of weeks, the immune system exposed to the spike protein trains itself to produce antibodies, T-cells and B-cells to attach to, neutralize and eliminate the spike protein and anything associated with it, such as the SARS-CoV-2 virus.

An additional complexity was getting the mRNA that codes for the spike protein safely inside human cells. For this, the vaccine designers had to use lipid nanoparticles containing the mRNA—essentially nanoparticles of a special fat that shield the mRNA until it can get inside a human cell and do its work. These nanoparticles took decades to develop; they and their fragile mRNA cargo require the low-temperature storage for which the mRNA vaccines are notorious.

These technical features of the mRNA vaccines underlie the reasons they are so spectacular. They did not require the usual years or decades of trial and error to develop. Instead, the mRNA vaccines were designed from the ground up, using the modern tools of molecular biology that can sequence (“read”), clip and synthesize arbitrary stretches of genetic code. The designers simply took the gene for the spike protein, read its sequence, synthesized an mRNA sequence to code for the protein, synthesized that mRNA in quantity, inserted each molecule in a lipid nanoparticle, and kept the result cold enough to stay stable until the shot in the arm. The whole thing was purpose-built at the molecular level.
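Here is a highly simplified sketch of that “purpose-built” idea: given the amino-acid sequence of a target protein, pick a codon for each amino acid and string the codons together into a candidate mRNA. Real vaccine design also optimizes codon choice, nucleoside chemistry and stability; the tiny codon table and three-amino-acid “protein” below are illustrative only.

```python
# Highly simplified sketch of designing an mRNA to code for a chosen protein:
# pick one codon per amino acid and concatenate, then append a stop codon.
# Real design work also optimizes codon usage, chemistry and stability.

CODONS = {        # one standard-genetic-code codon per amino acid (illustrative subset)
    "Met": "AUG",
    "Phe": "UUU",
    "Gly": "GGC",
}

def design_mrna(protein):
    return "".join(CODONS[aa] for aa in protein) + "UAA"  # UAA is a stop codon

target_fragment = ["Met", "Phe", "Gly"]  # stand-in for a real spike-protein sequence
print(design_mrna(target_fragment))      # AUGUUUGGCUAA
```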

The mRNA vaccines work by getting human cells themselves to manufacture the spike protein. Then the immune system recognizes it as foreign and trains itself to fight the virus. So far, the spike protein has changed little from one SARS-CoV-2 variant to the next, so the vaccines remain effective against all known variants.

But here’s the most important point. The mRNA vaccines work by immunizing your body against the virus’ spike protein only. They do not contain (or code for) any other part of the virus. So there is no possible way—chemically, biologically or even theoretically—that the mRNA vaccines can make you sick with Covid-19. Unlike so-called “traditional” vaccines, which use a whole but inactivated virus, or parts of a virus, the mRNA vaccines do not contain or code for any part of the viral machinery for self-replication, which causes disease.

Thus, if any symptoms of Covid-19 appear after a person receives an mRNA vaccine, there are only two logical possibilities. First, the patient may have been infected before being vaccinated, or shortly afterward but before the vaccine’s immunity kicked in. Second, for some reason the vaccine was not effective in that particular patient: even an unprecedented 95% effectiveness means only that vaccinated people face about one-twentieth the risk of the unvaccinated, so a few will still get the disease after being vaccinated. There is absolutely no physical way that the vaccine itself could cause the disease or its respiratory symptoms.
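For the curious, that roughly 95% figure comes out of a clinical trial as one minus the ratio of the attack rate among the vaccinated to the attack rate among the placebo group. The case counts below are hypothetical, chosen only to show the arithmetic; the actual Pfizer and Moderna trial numbers differ.

```python
# Sketch of how trial effectiveness (efficacy) is computed, with hypothetical counts.
# Efficacy = 1 - (attack rate among vaccinated) / (attack rate among placebo).

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    attack_vaccinated = cases_vaccinated / n_vaccinated
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vaccinated / attack_placebo

# Hypothetical trial: 20,000 people per arm, 8 cases among the vaccinated,
# 160 cases among the placebo group.
print(vaccine_efficacy(8, 20_000, 160, 20_000))  # ~0.95, i.e., about 95%
```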

In theory, the presence of the mRNA and/or spike protein in the body might cause some sort of adverse reaction, particularly in the long run. But that hasn’t yet been observed, either in the tens of thousands of patients in clinical trials or in the tens of millions of vaccinated patients in the US and around the world.

Nor is this kind of theoretical long-term effect at all likely. Pfizer’s recent data on waning of the antibody response some six months after the second shot suggest that the internal production of the spike protein is waning also. Otherwise, the antibody level would remain high, or there might even be some immune-system overreaction due to the long-term presence of the self-generated spike protein in the body.

The fact that none of these things has been observed suggests what one would expect. In the normal course of waste removal, both the self-generated spike protein and the artificial mRNA that codes for it are eliminated from the body over time. To maintain the high level of immune response, they have to be replenished, when and if desired, by a new injection, i.e., a “booster shot.”

The mRNA vaccines’ operation is thus extremely simple. It is limited to producing one molecule only, the spike protein. So it should be easy to test for residuals of the mRNA and/or spike protein in tissue, blood and lymph fluids, and to correlate their probable decay with decay in immune function. Since the vaccine contains no other part of the virus, and nothing else but the lipid nanoparticles, the risk of unintended consequences is far less than in other vaccines, which often include unknown or unintended parts of the virus. This indeed may be the source of the rare occurrences of blood clots with the Johnson and Johnson vaccine.

At the end of the day, the Moderna and Pfizer mRNA vaccines are the result of half a century of spectacular progress in medicine, molecular biology and related technology. The way that they work—targeting the specific spike protein and nothing else—makes unintended consequences unlikely, even in theory. More important, it makes “designer” vaccines, developed at unprecedented speed in response to new and evolving pathogens, practically possible. As the New York Times just reported, Dr. Fauci is pushing for federal money—a mere few billion dollars—to create vaccines against known deadly pathogens that might evolve to become more contagious.

As for me, I don’t just talk the talk; I also walk the walk. I had my second dose of the Moderna vaccine in February. From the very beginning, I wanted an mRNA vaccine because I understand how they work. (I chose Moderna over Pfizer only because Moderna’s cold-storage requirements were less severe.) As soon as a booster of either is available I will take one.


As for the future, I will seek out mRNA vaccines for any new or evolving pathogen (including new Covid variants) that arises during my lifetime. Not only is mRNA vaccine technology the wave of the future. It promises astoundingly rapid development of effective vaccines, precisely targeted at the weak point of any new virus that nature or our fellow man may throw at us. What could be more intelligent medicine, and what could have fewer unintended consequences, especially as compared to vaccines designed by trial and error?

ERRATUM: An earlier version of this post reported that smallpox is making a small comeback in the US due to vaccine refuseniks. That was wrong: smallpox is still eradicated; only measles is making a comeback. I regret the error and thank JMcDonald, whose comment on the DailyKos version of this post corrected it.

Permalink to this post

16 July 2021

American Child Support


The science is clear and unambiguous. The years immediately after a child’s birth are the most important for mental development. Long before a child ever attends school or even kindergarten—long before the child learns to speak—myriad neural connections are forming in the child’s brain. “[D]uring the first few years of life, more than one million neural connections are formed each second.”

Those connections fix the plasticity of the child’s brain for a lifetime. They influence, if not determine, his or her skill in observing, general intelligence, language and math skills, adaptability and emotional maturity. The richer and more numerous those connections are, the greater the child’s brainpower and appreciation of life.

As the saying often attributed to Aristotle goes, “Give me a child until he is seven, and I will show you the man.” In the US, formal education begins with kindergarten, at age five or six. So in our country formal education misses five or six of those first seven years, roughly 71% to 86% of the critical early period.

Wealthy parents fill this gap in various ways. They read their kids educational books, buy them educational toys, and expose them to music and dance. Some pay for superior day-care centers, even child psychologists. Some play Bach to fetuses still in the womb. Poor parents, who struggle to put food on the table and a roof over their kids’ heads, can do nothing of the kind.

This huge dichotomy in early-childhood care and education is, from a purely scientific perspective, the greatest conflict between America’s goals and the reality of life in the US. How can kids ever achieve equality of opportunity when their periods of most critical development are so unequal? It’s as if poor kids, as compared to middle-class and rich ones, start out life with sandbags tied around both feet.

Those sandbags stay there for life. On the average today, that means 75.1 years for males and 80.5 years for females. That’s a long time to carry extra weight!

Not all the government money that started flowing to parents with kids this week goes to kids in their early childhoods. Children up to seventeen are eligible. But the part that does go for early-childhood education marks a huge step toward enhancing social equality.

That step is long overdue. It’s too little and too late, and it only lasts one year. But it’s a lot better than nothing. We should make sure it’s just a beginning.

As a practical people, we Americans understand that education can make better citizens, smarter and more adaptable workers and (in rare cases) progenitors of scientific and technological “miracles.” That’s why we introduced compulsory basic education (through high school) and free or low-cost land-grant colleges, beginning in the nineteenth century.

But current science reveals the futility of those advances without proper early-childhood care and education. If we don’t pay attention to the immediate post-birth period, high-school and college may not matter. A kid’s future as a high-school or college dropout may already be baked in. Modern science tells us that early childhood is the fulcrum on which all later and higher education rests.

So if we want to be competitive in the coming global race for brains, we had better work on the part of life that matters the most: early childhood. At a minimum, we had better be sure that kids who spend more than half a day apart from their working parents have the best day care, with the best early-childhood education possible. Our nation’s future depends on it. That’s why PBS is spending an entire week on the issue, with a daily report on the dismal state of day care in our nation and what could be done about it.

Now that the money is beginning to roll out to families with kids, there are important political implications, too. Millions of parents will suddenly find it possible to feed, care for and educate their kids better. Some will spend the money on day care, or on better-quality day care.

At the moment, Democrats are energized and enraged about voting rights and voter suppression. There is talk of abolishing or downsizing the filibuster to stop the spate of vote-suppression measures passed by Republican-controlled state legislatures.

But fixing the filibuster is a heavy lift, and there’s more than one way to skin a cat. Millions of recipients of the child-friendly money will suddenly understand how government can work for them. Those who never voted, or voted seldom, will begin to see how their votes might matter. They will see more clearly as their children—better fed, better cared for and better educated—begin to thrive.

To say this is an opportunity for voter education is an understatement. It may be too late to make sure President Biden’s name appears on all the checks, as the Demagogue did with his Covid relief checks. (To speed the money to parents, much of it has gone by direct deposit, not by check.) But it’s never too late to begin reminding recipients which party and which President are responsible for the wherewithal to take better care of their kids. The government must do what it can legally to make the point, and every voter education and empowerment group should take it from there. This is the policy-promotion opportunity of a generation.

But it’s far from the end. The current tranche of payments for children lasts only one year. The program needs to continue. Most important, it needs to create a nationwide system of universal, high-quality day care for children so that both parents can work and kids can receive the best early-childhood education our society can provide.

This is not a “women’s issue.” It’s a national-security, competitiveness and survival issue. Now that we know that the earliest years matter most, we have no excuse for not applying that science in action. Surely China, Germany and our other competitors will.

Our own forebears recognized the importance of high-school and college education. If we miss the boat on early-childhood education, our “exceptionalism” will dissolve, and in far less time than the average human life-span.

Permalink to this post

11 July 2021

Afghanistan’s Terrible Secret


Why does Afghanistan have a well-deserved reputation as the “graveyard of empires”? Alexander the Great, the Brits, the Soviets and now our own forces—all these foreign armies came and saw and conquered, just like Caesar. But they didn’t stay long.

Why is that? Was it the skill and tenacity of native Afghan fighters? Could they really match the Vietnamese, who not only kicked out the world’s greatest superpower, but actually defeated it, albeit at the cost of appalling losses of life, limb and property?

No, the secret of Afghanistan’s “success” is far more basic. It’s not a place that outsiders really want to stay and live. Its high, arid, mountainous terrain is not suitable for farming or for bringing crops to market (except for opium poppies, which are valuable only because their product is highly sought but illegal). It has little oil and few other valuable minerals, and those it may have are hard to find, extract and transport, let alone in a society perpetually mired in inter-ethnic turmoil. For a while, pols trying to justify our forever occupation touted Afghanistan as a source of rare-earth metals, but you don’t hear that much anymore.

The truth is that Afghanistan is one of the poorest countries on Earth. Even today’s much-troubled Haiti has better soil and climate for farming, a winter you can survive without stone homes burning scarce trees or fossil fuels, and a topography far more suitable to transporting goods by land or sea. “Hardscrabble” is a euphemism for Afghanistan, a kind word of encouragement.

So it wasn’t so much that indomitable Afghan fighters kicked the world’s conquerors out. It’s more that the conquerors all lost interest and went home.

That, of course, is precisely what we are now doing. Of all the many things I’ve read about Afghanistan in our twenty years of waging war there, a recent half-page op-ed piece best paints this sordid truth. Penned by one Farah Stockman, it describes in detail how our forces, contractors, camp followers and diplomats have distorted Afghanistan’s culture and tiny economy beyond all recognition. The report focuses sensitively on the abject plights of females and single men hoping to marry them. But it also tells the tale of an entire national economy bloated, twisted, corrupted and defiled by sustained contact with our incomparably larger one.

Two simple figures outline the story. Many Afghans who had a smattering of English volunteered to serve us as our translators because they could earn salaries of $1,000 per month or more—an impossible bonanza for ordinary Afghans. Yet since 2009, the Taliban have been able to entice fighters from the Afghan National Army by paying them salaries just a bit more than $70 per month. [Search linked source for “exploited.”]

How can the Taliban still afford to do that, after forty years of almost relentless war? Well, they own the countryside, where the food is grown. After we Yanks leave with our planes, Humvees, helicopter maintenance crews, financiers, consultants and Internet specialists, that’s what will provide the most basic source of wealth in Afghanistan: food. What we are witnessing as the artificial economy we have created and sustained for a generation implodes is a return to basics.

Without the generalizing that I have done here, that’s the precise picture that Stockman’s piece presents. We have inflated Afghanistan’s economy like a balloon, to an impossibly unnatural extent. As we leave, it’s already starting to collapse. Our pols and our people will never provide the whole nation with the standard of living that our conquerors and their enablers demanded while there, and that Afghans have never been able to provide for themselves on their own.

When you think about it, the prominence of food in the Afghan economy even explains some parts of its culture that we don’t like. In a country where food is hard to come by, and where women’s traditional role is confined to having and raising babies, things otherwise sick and strange to us start to make sense. It makes sense to cover women up, so that their sexual attraction doesn’t lead to more mouths to feed. It even makes more sense to treat women as commodities, to be sold into marriage for dowries, because it’s better for a richer man, who can feed the resulting extra mouths, to have them. The fact that all this leaves many young men without mates is mere social collateral damage. (Similar reasons motivated the Chinese custom of female infanticide while law regulated the number of children per family. One culture’s evil is another’s pragmatism or necessity.)

We in the “developed” world live in an incredibly complex and intricate society and culture, incomparably richer and more varied than Afghanistan’s. We have tens of thousands of foodstuffs in our supermarkets, and tens of thousands more items of hardware in our Lowe’s and Home Depot stores. So we tend to forget that ample (not just adequate) food is the fount of human civilization. Human civilization did not begin until agriculture and animal husbandry grew advanced enough to allow only a portion of our human population to feed the whole, leaving the rest to pursue other things.

Afghanistan is one of the few nations on Earth that teeters close to that pre-civilization precipice today. It has for most of its history. That’s why no conqueror has stayed there for long. And that is Afghanistan’s terrible secret.

But that secret is not unique to Afghanistan. Its truth will remain with our species always, however much we may claim to have mastered our earthly environment, and however far from reality and sense our hubris takes us. For famine and the primacy of food—at least at some times and some places—are never far away or beyond the risk of chance.

Climate change threatens to make other parts of the world much more like Afghanistan. Already we have serious scholars reporting that drought in the Middle East and the Horn of Africa has helped motivate migration, civil wars, and turmoil in places like Syria, Yemen, Eritrea and Somalia. When the land no longer supports the people who live on it, they seek other places and others’ lands. Mass migration and war, including civil war, often follows.

These disasters in other parts of the world are canaries in the coal mine for us and the rest of the so-called “developed” world. What happens when drought decimates the farms of California’s Central Valley, America’s predominant breadbasket for fruits, vegetables and nuts? What happens when freak hailstorms, cold snaps and heat waves cut crop yields nationwide? What happens when similar events occurring worldwide cut the reliability of the “global food supply chain” and set nation against nation?

There are already far too many of us humans on our small and ecologically fragile planet. As climate change presses our collective food supply, we will all begin to discover how much more a loaf of bread is worth than an iPhone or a popular digital song. Then we will begin to learn Afghanistan’s terrible secret for ourselves.

If we don’t start thinking seriously now about how to avoid that calamity, when it comes to us unbidden at home, anything we do will probably be too late. Afghanistan’s terrible secret will come to afflict us all.

Permalink to this post

04 July 2021

War with No Plan


    “When will they ever learn? When will they ev-er learn?” Pete Seeger, “Where have all the flowers gone?” (early 1960s)
The last major war the US actually won was Gulf I. Colin Powell, then the Chairman of our Joint Chiefs, was in charge. He won the war in 42 days. So Gulf I was not just the most stunningly successful of all our major wars. It was by far the shortest, including our War of Independence (six years), our Civil War (less than four) and the two World Wars.

How did Powell win so quickly? With a plan, a good plan. His plan became known as the “Powell Doctrine.” It had three simple parts: (1) a clear and achievable objective; (2) overwhelming force; and (3) a clear exit strategy. In other words, you have a simple goal; you go in with overwhelming force; and, as soon as you achieve your goal you get the hell out.

To provide the overwhelming force, President G.H.W. Bush took months to assemble a coalition of 35 allies. General Powell took five months to transport half a million troops and their armor into the theater. But they never invaded Baghdad. They never deposed Saddam Hussein. They didn’t mess with the extraordinary internal complexities of Iraqi society—a chimera that the Brits had sewn together from three warring ethnic and sectarian groups (Shiites, Sunnis and Kurds) for the precise purpose of making it ungovernable save by external force.

Instead, Powell stuck to his plan. He expelled Saddam’s invasion forces from the Kuwaiti oil fields that they had occupied, destroyed most of their invasion force and its armor, declared victory and went home.

The contrast with Vietnam and our two most recent major wars is beyond dismal. We’ve been in Afghanistan for nearly twenty years and in Iraq for over eighteen. The comparative tally of lives lost and money spent so far goes as follows:

War’s Unintended Consequences

War           US Deaths   Enemy/Other Deaths    US Cost                          Duration
Vietnam       58,200      3.35 million          $843.6 billion (2019 dollars)    21 years
Afghanistan   2,300       74,100                $978 billion                     20 years*
Iraq          4,586       208,547 civilians     $1.922 trillion                  18 years*
Gulf I        148         26,000                $102 billion (2019 dollars)      0.12 years

* So far

A glance at this table tells you all you need to know about making war without a plan. When the Taliban take over Afghanistan, as most observers now expect within two years, we will have chalked up our second unambiguous loss, after Vietnam. While Iraq is still a stalemate and may remain so for some time, it’s clear that our messing around inside Iraq has benefitted no one more than Iran and its proxies inside and outside Iraq.

So who’s at fault? Is it our Pentagon? our men and women warriors? I don’t think so. They have the best equipment, the highest technology, and the most meticulous education and training of any military in the world. And despite all the attempts to politicize them, they still salute and obey their civilian leaders, as our Constitution requires.

The problem lies with those leaders. In Iraq they made war based on inaccurate intelligence (and a fact-free belief) that Saddam had WMD. In Afghanistan they made war to keep that nation from becoming a launching pad for terrorism—a goal that President Obama and his team achieved, insofar as it involved Osama bin Laden, with two helicopters and a team of Navy Seals. After bin Laden’s execution, the goal of our long occupation of Afghanistan became as murky as that of our invasion of Iraq.

Sure, when you are attacked or invaded, you may not have a choice of when and how to go to war. But the last time we were attacked by a recognizable nation-state was at Pearl Harbor. No nation-state today, including North Korea, is going to repeat that atrocity against a force with a huge and accurate nuclear arsenal and a fleet of nuclear submarines ready to deliver incineration to any enemy, virtually anywhere in the world, in fifteen minutes or less.

So what we face for the indefinite future is optional wars, like our now-two-longest ones, in which we chose when, how, and whether to fight. Whenever we have a choice, we ought to have a plan before proceeding. And as I have argued previously, our debacles in Vietnam and Iraq argue for a plan that the professionals in the Pentagon, not amateur pols, at least fully buy into before we move. In Iraq and Afghanistan, as in Vietnam, that never happened.

The sad truth is that our pols and our system learned nothing from our catastrophic loss in Vietnam. Our Army and Marines learned how better to fight insurgencies. But our pols learned precisely the wrong lesson: they thought that an “all-volunteer” military, staffed by troops with little opportunity for other work and therefore little political power, would be easier to manipulate for political purposes without popular blowback.

They were right about the weak blowback, but they were dead wrong about the “wisdom” of using military force without a good plan. They seem to have missed the central lesson of the now 42-year-old Islamic Revolution in Iran: the force of religion, namely Islam, seems to be the only thing capable of removing a dictator in Islamic nations. As a result, our leaders have caused or accelerated the metastasis of extremist Islam in Saudi Arabia (Al Qaeda), Iraq and Syria (ISIS), and significant parts of Africa (Boko Haram, Al Shabaab, etc.).

We may never again have another SecDef as wrongheaded and bullheaded as Robert S. McNamara or the late Donald Rumsfeld. Certainly it’s hard to imagine any future SecDef, like Rumsfeld, insisting on occupying an entire country with less than half the force that the most experienced generals recommended. But I hope we won’t ever again take that risk.

It seems to me that the waiver Congress had to pass to let General Lloyd J. Austin III become SecDef was counterproductive. In an age when wars will almost always be optional, we want to have experienced military leaders bringing a touch of realism and experience into the political cauldrons of the Situation Room and the Oval Office, not to mention the personal experience of combat. We just need safeguards to ensure that no military leader is promised the SecDef position in advance for political reasons. And as for General Austin himself, I am overjoyed to have a distinguished Black general and combat veteran in ultimate charge of rooting out extremism and white supremacy from our military.

Our Pentagon has five sides. Three could state the three points of the Powell Doctrine, chiseled in stone, like the legend “Equal Justice under Law” over our Supreme Court. The fourth side could give credit to Colin Powell for pointing out that, when you have a choice to go to war, you ought to have a good plan. And the fifth side, facing the White House, should bear a stone inscription to the effect that “This Means You!!!”

We seem to have lost two wars unambiguously (Vietnam and Afghanistan) after going to war without a plan. A third (Iraq) is a stalemate with an uncertain and risky future. How many more such wars are we going to fight and lose or draw—with myriad unintended consequences—before we give the experts at planning, our own military, a bigger seat at the table?

The architects of our losing and least decisive wars were all, without exception, civilian pols with strong political visions but limited military experience and no effective plans for achieving their goals. Their wishful thinking about war on the cheap was never a sound basis for going to war; it never will be. The leaders who made those blunders, not the ones who ended the debacles, should bear the blame.

Permalink to this post