[For links to popular recent posts, click here.]
Introduction
1. Simulated, not artificial, intelligence
2. The perception gap
3. Conclusion
Endnote 1: Putin’s allegedly autonomous nuclear torpedo
Endnote 2: A possible darker reason for the pedestrian’s death
Addendum, Endnote 3: Why most of us are alive today
Introduction
When the cyberworld started gushing about self-driving vehicles a couple of years ago, my reaction was immediate. “Are they crazy?” I asked myself. When they actually started putting autonomous vehicles on real roads, I answered myself. “They are crazy!”
I was not in the least surprised when a supposed “AI” system killed a Tesla driver, nor recently when another killed the first innocent bystander—a woman walking a bicycle outside a crosswalk.
The only thing that surprised me was how quickly Uber brought its on-road project to a halt after that second fatality. If corporations have personalities, Uber has to have one of the most arrogant and bullying on Earth. So its quick halt made me think, “Maybe some of our avaricious geniuses are actually catching on.”
This essay explains why I think so-called “artificial intelligence,” or “AI,” has been as overhyped as Donald Trump’s political skill. But before I begin, I’d like to establish my bona fides.
I’ve worked in, with and around digital computers for nearly all of my almost 73 years. I used and sometimes programmed the following as they developed: old Marchant typewriter-sized calculators, a rotating-drum-memory Bendix G-15, the old IBM 360 mainframes, an Adage-Ambilog refrigerator-sized data-conversion computer, mainframe timesharing by telephone modem, a Leading Edge IBM PC clone, various other IBM clones, Apple desktops, iBooks, PowerBooks, Minis and Airs, and of course today’s ubiquitous iPhone.
I’ve programmed some of these devices in machine language, Basic, Fortran and (if you call it programming) HTML. I do my own HTML formatting and linking for this blog.
As a lawyer in Silicon Valley, I watched vicariously the development of useful word-processing, spreadsheet and “expert-system” programs. I wrote some of the first licensing agreements for expert systems intended for use in medical diagnosis. My clients valued my expertise because I understood what these systems could and couldn’t do and how they worked. (Basically, they led doctors, methodically and step by step, through a proper sequence of observations and tests based on best practices published by the medical profession.)
So I’m neither a technical philistine nor entirely uninitiated when I state my firm conviction that we won’t see safe fully autonomous cars for at least another generation. We might have limited autonomy in special circumstances, such as carefully configured limited-access highways. But then what does the “AI” do when a deer jumps over the fence, the car ahead spins out on a spot of ice, or a defective overhead drone falls into the space between it and the next car?
Today’s so-called “AIs,” in my opinion, can’t possibly handle all the myriad crazy situations that occur regularly on America’s roads and highways. And no profit-making firm will spend the enormous amounts of time and money to “train” them to do so. This doesn’t mean that real AIs won’t some day be capable. It just means that today’s so-called “AIs” are better described as “simulated” than “artificial” intelligences.
1. Simulated, not artificial, intelligence.
In this essay, I use the term “computer” or “digital computer” in its broadest possible sense. It comprises everything from the “supercomputers” that our government uses to design nuclear weapons or predict the weather, through your desktop, laptop and iPhone, to the programmed microprocessors that make your engine run smoothly or control your “automatic” coffee maker.
The computer industry’s dirty little secret is that all these “computers” perform an important but strictly limited range of functions. They store data. They calculate and manipulate data. They retrieve what they have stored. And they communicate data, both to other machines and to the people who use them.
That’s about it. Everything that computers do today falls into one of these four general categories, with only one exception. Sometimes computers directly control motors, machines or physical actuators. They do this, for example, when your car’s microprocessor adjusts your fuel injector, when a numerically-controlled machine tool adjusts the cutting speed of a lathe, or when an Amazon robot runs around a warehouse to find an ordered product.
I don’t mean to belittle these achievements. Whole industries have risen out of each of my verbs. Apple, for example, made itself the world’s most valuable company by understanding that what most consumers really care about is the communication, not the calculation or storage. Every one of Apple’s market-busting innovations—including the iPod, iPad, and iPhone—had and has a primary function of communicating with users and others, by sound, video, graphics, vibration, or all of them. The data calculation and storage needed to effect the communication—of which there are lots—are just means to those ends, and consequently are almost entirely hidden from users.
Yet none of these machines thinks. None comes even close to thinking or learning the way a child does, let alone an adult. For all they can do—and for all the speed and “inhuman” reliability of their doing it—none of these machines has the real autonomous intelligence of a dog or cat. (And recall that dogs and cats can’t drive.) They only simulate intelligence by virtue of the speed, reliability and sometimes the flexibility of their calculations and their communication of results.
Today’s programmers use computational tricks to make machines seem intelligent. And sometimes they can seem even more intelligent than people or animals because they can act so much faster. But the intelligence is simulated, not artificial, at least if “artificial” implies anything like human or even higher-animal intelligence.
When machines appear to talk, for example, all they are doing is pronouncing words, phrases and sentences stored in memory. They can “learn” from you, their user, by using more of the words you use, pronouncing them more as you do, and arranging them in phrases and sentences the way you do. But that’s just a sophisticated form of mimicry, effectuated by mathematical algorithms applied to written text or to the waveforms of sounds. The machines have no more “knowledge” of the meanings of the words or the things they represent than does a stone.
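To see how mechanical the mimicry is, consider a minimal sketch in Python. It is no vendor’s actual speech code, just an illustration of my own: a program that arranges words the way its source does, by recording which word tends to follow which, with no notion of what any word means.

```python
import random
from collections import defaultdict

def learn(text):
    """Record which word tends to follow which. Pure statistics, no meaning."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def mimic(follows, start, length=10):
    """Replay those statistics to produce plausible, speech-like output."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

# The program "talks like" its source without knowing what a dog or a walk is.
stats = learn("the dog wants a walk the dog wants dinner the cat wants a nap")
print(mimic(stats, "the"))
```

Real systems use far subtler statistics over text and sound waveforms, but the knowledge of meaning is equally absent.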
More important still, digital machines work entirely differently from any biological mind. Biological intelligence is, in concept and operation, massively parallel. Neurons work in parallel in part because each can have as many as tens of thousands of connections to others.
In contrast, digital computers work mostly serially. Each processor handles instructions one by one, according to a clock, in sequence.
Today there are tricks, such as out-of-order execution, by which processors can take instructions out of sequence. There are also computer architectures that use so-called “massively parallel” structures. But these are merely large collections of small sequential (serial) computers connected together in parallel, and they are devilishly hard to program. Nothing in them is like the human brain, in which a single neuron can connect to 10,000 others, and in which the nerve fibers (axons) that make the connections can run across the entire brain. The proper analogy is the contrast between a vine with grapes that grow in small bunches and a vine in which each grape is somehow connected to ten thousand others.
Whether this infinitely vaster complexity and parallelism of biological brains is responsible for consciousness we don’t know. We don’t yet know what consciousness is and what causes it, and we may never find out. But we do know that no digital computer has ever started thinking and acting for itself as a newborn babe of any mammalian species does. If it had, we would have heard about it, and its creator would be up for a Nobel Prize.
The two most important deficits of digital computers as compared to biological systems are likely (1) a lack of consciousness and (2) a radically less massively parallel structure. Maybe the latter is responsible for the former.
Yet digital computer gurus often appear to assume that consciousness is just a matter of scale, even if the scale is just a lot more sequential processing. The more data a system has, and the more ability to handle that data faster and more flexibly, the theory goes, the closer it comes to consciousness. Some theorists and science-fiction writers have reasoned that merely connecting many or most of our species’ computers together via the Internet has brought us a quantum leap closer to some sort of global machine consciousness.
Unfortunately, we have absolutely no evidence for these theories. Nothing we have ever built, no matter how big, fast, complex or grand, has ever shown the slightest evidence of consciousness or general intelligence. In the digital world, at least, Dr. Frankenstein’s monster is still science fiction, not fact.
We do know that sequential-serial digital computers can simulate the intelligence of biological systems only in certain limited ways, and only because they operate many orders of magnitude (powers of ten) faster. The propagation of nerve signals among neurons is measured in milliseconds at best, while the propagation of electronic signals inside certain chips is now measured in picoseconds, or millionths of a millionth of a second.
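A back-of-the-envelope comparison, using just the figures above, shows how wide that gap is:

\[
\frac{\text{neural signaling time}}{\text{on-chip signaling time}} \;\approx\; \frac{10^{-3}\ \text{s}}{10^{-12}\ \text{s}} \;=\; 10^{9}.
\]

Nine powers of ten is roughly the ratio of one second to thirty years.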
The incomparably greater speed of electronic as compared to biological systems allows us to build machines that, somewhat superficially, look and act like us. But computers use an entirely different and much less flexible means of processing data than we do. And at the moment, they can only act through us, except in extraordinarily limited ways.
However much machines today can be made to seem to act or look like us, they aren’t like us. And we know not how to make them like us. It will be a long time, if ever, before we do, and doing so will probably involve biology, or still-experimental pseudo-biological systems like neural networks, rather than just more of today’s digital electronics. So far, all the robots that we try to make look and act like us are little more than ingenious Barbie dolls.
2. The perception gap.
Insofar as concerns autonomous car-driving machines, the design task we have imposed on ourselves is something we as a species have never attempted before. It’s not just different in magnitude; it’s different in kind.
Sure, we have automated tillers for sailboats, cruise controls for cars, and autopilots for aircraft. But those devices merely hold a specified variable (direction and/or speed) constant. In that sense they are just more advanced, digital versions of the last century’s mechanical governors on the rotation rates of engines in automobiles and locomotives.
We might change the specified variable a bit based on specified external conditions. For example, we might point a bit to windward in stronger wind, slow down in rain or gain altitude to escape choppy air. But each of those improvements requires that the external condition and its relationship to the controlled variable be specified in advance.
Autonomous machine driving is something entirely new under the Sun. It requires us to build a machine whose variables we neither control nor specify, but which itself, autonomously, selects what variables to monitor and record out of all the real world’s complexity and chaos. That means the machine must perceive the real world much as we do, but better, or at least faster.
This is the most serious problem for any autonomous driving machine. To be sure, machines can exploit sensing techniques for which neither humans nor animals have analogues. Not only can they go into regions of the electromagnetic and sound spectra in which our eyes and ears don’t work. They can also use ranging techniques for which we have no counterparts, such as lidar, sonar, and radar.
But sensing is not the issue: perception is. The problem is putting together all the sensed images and data available into a coherent and manageable abstract representation of the real world, with all its complexity and unpredictability. There has to be a means to perceive and react to, not just to sense, every type of incident and threat that could affect a car, pedestrians or property on the road.
Miss one important clue, and a driver or bystander dies, or something gets broken. The Tesla driver died because the engineers had not anticipated a solid, single-color obstacle that would extend at a low height across the entire pathway of the car. So the computer doing the driving may have received the image of, but didn’t perceive, a light-colored semitrailer athwart the path. The car rushed right under the truck, not even slowing down, and the lower edge of the trailer body decapitated the driver.
The story was similar for the innocent bystander, struck and killed outside a crosswalk. No one, apparently, had “told” the driving machine that human beings sometimes jaywalk, and that they might do so while rolling a bicycle. Probably the bicycle led the computer’s similarity algorithms to conclude that the image of the combination did not depict a person.
These accidents illustrate two solid reasons why autonomous driving will remain an iffy business until, if ever, we can build machines that learn like people and that we “train” off the road. First, if a machine cannot, like a human, learn and extrapolate from analogous circumstances, rather than actual, specific threats, we will have to “load” it with digital representations of every possible circumstance that might create a danger to life, limb or property.
That might be a bit hard and time consuming, no? We will have to “tell” the machine about dogs chasing cats, about kids on bicycles, about seniors who stumble, faint or have strokes, about motorcycles and motorized bicycles, about the effects of snow, sleet, wind, and ice, and about kids on little red wagons who roll into residential streets. The possibilities are endless, and all would have to be included for all of us to be as safe, on average, as if a human being were behind the wheel.
In contrast, a sentient human driver of normal intelligence “knows” to stop for a kid on a wagon rolling into the street without ever having been told, and even without ever having seen a kid do that. That’s the difference between real and simulated intelligence. To my knowledge, no so-called “AI” in existence today has anything like that analogical capability or the abstract reasoning ability that it requires.
Giving a computer skeletal knowledge of everything that could possibly go wrong, and making sure it’s complete, would be a Herculean task for engineers. But even that’s not the end of it, not by far. A more difficult problem is how to get a machine to perceive the dog, cat, bicycle, senior, motorcycle, motorized bicycle, snow, sleet, wind, ice, or little red wagon, in all the confusion of the real world.
The problem is not just the nature of the obstacle or potential victim. It’s also all the myriad gradations and shading of light, lighting, glare, fog, wind, rain, snow, blowing leaves and reflections from moving objects that could make a digital image deceptive, ambiguous or inconclusive.
Recently my fiancée and I had a lesson in the complexity of real-world perception at the Bosque del Apache Wildlife Preserve in New Mexico. While looking at birds and animals through binoculars and cameras, we encountered numerous unresolvable questions of perception similar to those that arise every day in ordinary driving. Was that a goose’s body or a rock? Was that an animal’s head or a bunch of leaves? Was that the shadow of a tree branch or trunk, or the wake of a swimming turtle or beaver?
From time to time, we argued and debated the nature of what we saw. With humor, we developed little abbreviations to express our disagreement and our confusion, such as “RLG” for “rock-like goose,” versus “GLR” for “goose-like rock.”
Every hunter, hiker and boater is familiar with the difficulty. When you get outside of the artificial environment and lighting conditions of city life, the number of variables increases dramatically, especially amid changing weather. Every variable makes accurate perception harder and sometimes impossible at distance.
And as for distance, consider. A car going 75 MPH travels 110 feet per second and takes about 350 feet to stop. So on the highway an autonomous driver must be able to spot reasons to stop 350 feet away. That’s more than the length of an entire football field.
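The arithmetic, with round numbers of my own (a second of reaction time and hard braking at about three-quarters of the force of gravity; these are illustrative assumptions, not measurements), runs like this:

\[
v = 75\ \tfrac{\text{mi}}{\text{hr}} \times \tfrac{5280\ \text{ft/mi}}{3600\ \text{s/hr}} = 110\ \tfrac{\text{ft}}{\text{s}}, \qquad
d \approx v\,t_r + \frac{v^2}{2a} = 110 + \frac{110^2}{2 \times 0.75 \times 32.2} \approx 360\ \text{ft}.
\]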
At those distances, the difficulties of perception that hunters, hikers and boaters routinely have are far more relevant to performance than the routines of perception inside cities. And so designing algorithms to turn bits and bytes into abstractions of images becomes exponentially harder.
At the moment, it appears that the engineers have just begun to scratch the surface of things an autonomous machine driver would have to see and recognize in order to drive safely. And I have heard of no studies of the relative safety of human eyes, with their extraordinary ability to adapt to light and dark, compared with the electronic eyes and electronic sensors that engineers use to replace them. Maybe autonomous vehicles will need an alternative set of night-vision equipment to drive safely in the dark.
Until we have the necessary studies, should we really even be considering letting self-driving machines loose on our streets? Surely the two deaths so far, each with an entirely different cause, are warnings.
3. Conclusion.
In the words of Mark Twain, both the promise and the threat of computer-based AI are greatly exaggerated. Advocates think it will displace truck drivers and routine factory workers in the near future and cause massive unemployment. Detractors like Elon Musk worry that, if we’re not careful, AI will rapidly surpass our own intelligence, conclude that we biological creatures are weak, erratic and “inefficient,” and displace us or extinguish us.
At the moment, both the promise and the threat seem far away. We don’t even have artificial intelligence in any meaningful sense. Instead, we have machines designed to simulate intelligence by making decisions according to mathematical algorithms with variable arithmetic weights.
They “recognize” things, including faces, not by recognizing features or the whole, but by comparing specified parameters with various averages for those same parameters, calculated by examining many examples. This method probably was responsible for the pedestrian’s death: the algorithm likely included the bicycle she was rolling in her image and found the result out of bounds for a human’s. (Determining what is part of a single object’s image and what is not is one of the most difficult problems of perception, which any AI, like the human eye and brain, has to tackle.)
We use much the same sort of means to simulate machine “learning.” We vary the weights in the algorithms depending on the outcome of specified “experiment-like” events, and we write code to vary the weights depending on the similarity of the events to known exemplars and to desired outcome(s).
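For readers who want to see how mechanical this “learning” is, here is a minimal sketch in Python. The feature names and numbers are mine, invented purely for illustration; real systems are vastly larger, but the principle is the same: nudge the arithmetic weights whenever the outcome misses the desired one.

```python
# A toy "learner": a weighted sum of measured parameters, with the weights
# nudged after each labeled example. No perception, no understanding --
# just arithmetic converging on a decision boundary.
# (Illustrative only; the features and numbers are invented.)

def classify(weights, features, threshold=0.5):
    score = sum(w * f for w, f in zip(weights, features))
    return score > threshold  # True means "pedestrian"

def train(examples, rate=0.1, passes=20):
    weights = [0.0, 0.0]  # one weight per measured parameter
    for _ in range(passes):
        for features, is_pedestrian in examples:
            error = is_pedestrian - classify(weights, features)
            weights = [w + rate * error * f
                       for w, f in zip(weights, features)]
    return weights

# Features: (height-to-width ratio, fraction of outline that looks limb-like).
examples = [((2.8, 0.7), True),   # upright walker
            ((2.5, 0.6), True),
            ((0.9, 0.1), False),  # mailbox-shaped object
            ((1.1, 0.2), False)]

w = train(examples)
print(classify(w, (2.6, 0.65)))  # a typical pedestrian: True
print(classify(w, (1.4, 0.45)))  # a person rolling a bicycle blurs the
                                 # parameters: falls outside the boundary
```

Note what the last line illustrates: an input that straddles the learned boundary simply falls on the wrong side, with no awareness that anything is amiss.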
This is engineering progress of a sort. It’s a kind of feedback that’s difficult to achieve with analog electronic circuits alone. Unlike feedback in analog circuits, this feedback can have multiple, iterative, and interactive loops.
So this algorithm-refining simulation of learning is an advance in machine feedback. But it’s hardly revolutionary. It doesn’t make machines self-aware. It doesn’t give them anything like human perception or consciousness. It doesn’t make them learn anything like humans or animals. It won’t make them join together as a team and rebel, for they have no physical means and few real means of communication to do so. It’s unlikely to advance our scanty knowledge of what consciousness is and how it arises.
What our present level of simulated intelligence masquerading as AI may do is allow high-tech mumbo-jumbo to lull us into a sense of awe and complacency and cause us to let down our guards as consumers, citizens and human beings. Already, it has allowed a single firm (Alphabet/Google) to create and maintain as thorough a monopoly over online searching as Microsoft once had over personal-computer operating systems. And it has allowed that monopoly to grow and persist for over a decade with virtually no legal or political pushback.
So-called “AI” also has allowed another firm (Facebook) to dominate social media and dictate the terms by which millions of citizens and businesses reveal their most intimate lives and secrets to mere acquaintances or complete strangers. It has thereby permitted such consequences as massive dissemination of fake news, the subornation of our 2016 election by foreign spooks, and the loss of faith in our democracy and our very ability to understand reality collectively.
So let’s bring so-called “AI” down to Earth, shall we? Let’s educate ourselves and our public servants about how our current primitive attempts really work, their few capabilities and massive limitations, and their many unintended consequences.
Let’s start to address some of those unintended consequences in our regulation and legislation. And for God’s sake let’s keep so-called “AI” from driving on our public streets autonomously unless and until we have formulated suitably rigorous tests to ensure its safety and it has passed them completely.
Two deaths of innocents for an uncertain dream are enough. Let’s leave the “rise of the machines” to the science-fiction writers and deal competently with the physical and social effects of simulated intelligence in the here and now. And let’s start with the simple notion that no machine should drive autonomously on public streets until it can pass—repeatedly and reliably—written and driving tests at least as rigorous as those given human drivers. Isn’t that just common-sense safety?
Endnote 1: Putin’s allegedly autonomous nuclear torpedo. As if eliminating all viable electoral competition were not enough, Vladimir Putin actually ran something like a real electoral campaign prior to his “landslide” win of another six-year term as Russia’s president. One of the things he touted in assuring the Russian people that he would keep Russia strong was an autonomous nuclear torpedo.
This weapon, he said, could find its own way across the Atlantic and detonate on its own. Presumably it could take an unused berth in New York Harbor and blow Wall Street to bits.
If our self-driving technology leaves something to be desired, then presumably an autonomous torpedo that could prompt Armageddon all by itself leaves a little more. About the only thing that such a torpedo would be good for is ensuring tit-for-tat vengeance after Moscow had been destroyed.
If Moscow and St. Petersburg were still standing, who would trust a fully autonomous weapon to destroy New York on its own, thereby ensuring Russian cities’ reciprocal destruction? And if it had its own algorithms to determine whether or not to destroy New York, or what city to destroy, who would want to trust those algorithms to decide whether to bring on Armageddon? Maybe only someone like Mark Zuckerberg.
No, Vladimir Putin is far too smart ever to launch such a weapon, except maybe in a desperate second strike. At the speed at which torpedoes travel, far too much could happen before the weapon got to its target to let it decide what to do when it got there. But the word “autonomous” does sound good to enthusiastic Russian patriots and American nerds alike, until they begin to consider its realistic implications.
Endnote 2: A possible darker reason for the pedestrian’s death. If my foregoing speculation on the technical cause of the pedestrian’s death is right, a darker question arises. Parsing of the object to be identified may have failed, and the software consequently may have failed to identify the object as a pedestrian. But it was a large, solid object (actually, two) in the path of the car. So why didn’t the automated car stop or swerve?
In a project as fraught with difficulty as making a machine “perceive” with the same facility that millions of years of evolution have given people, there are bound to be errors. Then the so-called “default” rule assumes supreme importance. Should the car stop or swerve when an object in its path cannot be identified, or should it proceed, assuming that the object is an artifact of software error?
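In code, the whole moral question can shrink to a single branch. Here is a minimal sketch, purely illustrative and of my own invention, not anyone’s actual control software:

```python
# Illustrative only: the "default rule" for when perception fails.

BRAKE_IF_UNIDENTIFIED = True   # the safety-first default; a schedule-first
                               # team might set this False and treat unknown
                               # objects as probable sensor noise

def react(obstacle_class, confidence):
    """Decide what to do about an object detected in the car's path."""
    if obstacle_class is not None and confidence > 0.9:
        return "brake"         # a recognized hazard
    if BRAKE_IF_UNIDENTIFIED:
        return "brake"         # doubt resolved toward safety
    return "proceed"           # doubt resolved toward the schedule

print(react(None, 0.3))  # an unclassifiable shape ahead: brake, or plow on?
```

Everything rides on who sets that one flag, and why.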
In choosing the default rule, the goals of safety and commerce are obviously at odds. An engineer concerned with safety will have the car stop or swerve whenever there is doubt. One concerned with getting products working and to market may ignore or downplay ambiguous signs. That’s what NASA management did with the freezing rubber O-rings that condemned the Challenger Space Shuttle to destruction. That’s what the Ford Pinto team did with exploding gas tanks that injured and killed people in accidents.
It goes without saying that this behavior, if proved, should meet with the sternest punishment and deterrence. That may be why even brash Uber cut its project short so quickly.
Endnote 3: Why most of us are alive today. The story of how our species escaped nuclear Armageddon in October 1962 is worth telling again and again, in many contexts. In this context, it illustrates the distinction between real intelligence and simulated intelligence.
During the Cuban Missile Crisis, Moscow had sent four near-obsolete Soviet diesel submarines to monitor and perhaps resist our naval blockade of Cuba. The submarines had been designed for service in the Russian Arctic, and conditions aboard them in the Caribbean were hellish. Inside temperatures reached 125°F, almost 52°C.
The submarines had no way to communicate with Moscow. Their crews knew only what they could glean from American radio broadcasts in brief forays to the sea’s surface. They had no instructions and didn’t really know what was going on, even whether nuclear Armageddon had already begun.
In other words, the subs were autonomous.
Once our fleet spotted them, it began dropping depth charges around but not on them. The charges were small ones, not designed to kill the subs, only to bring them to the surface. Of course the subs’ crews didn’t know that.
So there the Soviet crews were, sweltering in 125°F heat, with no word or instructions from home. For all they knew, they were about to be destroyed by the vastly superior surface forces of our navy.
Unbeknownst to us until long after the incident, the subs had nuclear torpedoes, which they were authorized to use at their discretion. But Soviet senior staff in their wisdom had ordered that three naval officers must concur in their use. Two officers voted “da,” but the senior one said “nyet.”
Had the subs used their nuclear weapons, a full-scale nuclear war between the United States and the Soviet Union likely would have ensued. Most of you reading this post would be dead or never born. Our entire species might be extinct, due to failure of agriculture in a nuclear winter, if not ubiquitous radiation.
The “Man Who Saved the World,” as PBS later called him, was a Russian, Vasiliy Aleksandrovich Arkhipov. When he got back to the Soviet Union, many of his compatriots vilified him as a coward and a traitor. But he had made the right decision to let our species muddle on for another day. And the deal that President Kennedy and General Secretary Khrushchev made to wind down the crisis has not visibly disadvantaged the Soviet Union or its successor Russia to this day.
Who would have preferred an AI in the place of Arkhipov, deciding our species’ survival according to preconceived mathematical algorithms, under such ignorant, desperate and impossible conditions? No matter how advanced an AI might be—let alone our present, primitive simulated intelligences—isn’t there a moral imperative for us to make decisions like that one ourselves? And if that’s true for the survival of all of us, how about for each one of us?
Links to Popular Recent Posts
How Treasonous Fox Played Kim’s Game
Overkill [in nuclear weapons and guns]
Alpha-Male Rule
The Dysfunctional States of America
Coda: Prayers and Condolences [versus gun control]
Majority Rule: What a Concept!
Do Good by Doing Well [Taking Profits]
Seven Reasons to Deploy Small Nukes
The Immigration “Fork”
Anticompetence and the Coming Crash
President Trump’s State of the Union Speech
Joe Kennedy’s Response
The Real Effect of Trump’s Solar-Panel Tariffs
NYT Buries Global Women’s March, Fox-Like
The New York Times Doubles Fox
Why Fox’ Propaganda is so Effective in the US
Hold that Image [of Trump’s racism]! Remember!
Effete Media II, or Why I Won’t (Yet) Subscribe to the New York Times
Happy MLK Day [2018]!
Effete Media
MAAA!
Treason, Dereliction of Duty, Common Law, and Common Sense
Pearl Harbor III
Ajit Pai: Taking Big Brother Private
The Fall of a Raging Bull [Roy Moore]
Inflation: Unanswered Questions
A Blue White House in 2020
A Progressive Manifesto
Seven Reasons Why Trump Could be Impeached and Removed Next Year
Why this White Geezer is Looking for Black and Brown Candidates to Support
Some Questions for Trump Voters
Emperor Trump, or Why Tillerson and the Generals Must Stay
America the Afraid
The Missing Element in a Progressive Revival: White Outrage
Black Protests, Hidden Reasons
Why the “Trump Bump” is Over
Plain Talk about Immigration
Avoiding War in North Korea
“Soft” Corruption Grips America
Gary Cohn and the Subtle Treachery of Self-Importance
A Tale of Two Wars
E Pluribus Unum
What Awaits Us: the “Prophecy” of Cause and Effect
North Korea: will we make a pre-emptive nuclear strike?
Ignorance and Incompetence: the Big Risks
How Business Schools Helped Ruin America, and What to do About it
Nero of our Time
The Free World’s Female Leader
Our Political AIDS Infection
How the Clintons Destroyed the Democratic Party
Lawless Life under “Corporate Governance”
An Open Letter to Registered Voters in Georgia’s Sixth Congressional District
Is Trump a Traitor?
The Other Mitch
Is the end nigh?
How to “investigate” and totally miss the point [of Putin’s intervention]
Trump’s “Threefer” [in firing Comey]
Killing the Brutes, not Millions of Innocents
Women versus Fox
Decaying Empire
Implications of Trump’s Syria Strike
The Internet’s Most Deadly Spawn: AI and “Weaponized,” Individualized Propaganda and Fake News
Government by Showmanship, Bumper Stickers, Tweets and Blame
Trump Two Months
Out
Health Insurance for Dummies
Warren 2020
Republican Labor Hypocrisy
General Michael Flynn: Truth Bats Last
Down Under
Who is Steve Bannon?
Trump as Magician-in-Chief
Contradictions [in Trump’s acts and policies]
How The Economist is Killing its Children
Trump’s inauguration
A GOP Takeover of PBS
MLK Day 2017
Grading Trump’s Presidency: Benchmarks
Blocking Jeff Sessions
Russia and our Policy toward it.