Monday, December 25, 2023

Predatory Sparrows in Iran


In the United States, fears of widespread hacking causing major national disruptions have so far been mostly unfounded.  There have been isolated foreign-based attacks on infrastructure here and there, but no one has yet managed to disrupt an important nationwide system deliberately for political reasons. 


Iran hasn't been so fortunate.  A hacker group calling itself Gonjeshke Darande, which translates as "Predatory Sparrow," claims responsibility for knocking out about 70% of Iran's gas stations in the last few days, according to an Associated Press report.  A related CNBC piece connects the Predatory Sparrows with Israel, although the group itself has not confirmed the connection. 


This isn't the first time the Sparrows have mounted cyberattacks in Iran.  The CNBC report recounts a fire in an Iranian steel plant in June of 2022, which the group claimed to have started.  The hackers say that they try to avoid inconveniencing civilians, but having 70% of a country's gas stations shut down is more than an inconvenience.  Iran reportedly disconnected most of its government infrastructure from the Internet after the Stuxnet virus damaged uranium-enrichment centrifuges in the late 2000s, but the hackers have evidently found a way around that obstacle.


Iran has been sanctioned for its support of terrorism in other countries, and these sanctions block the hardware and software updates that might otherwise help the country defend itself against attacks such as these.  Software piracy is reportedly widespread, but pirated software typically loses manufacturer support for security updates, with the result that such systems are comparatively easy to invade for nefarious purposes.


Iran is widely believed to be the power behind Hamas, the group that mounted the October 7 attacks in southern Israel.  Engineering ethics always has to operate against a background of cultural and historical events.  An action that can be construed as ethical in wartime, at least by some people, would be considered highly unethical in peacetime circumstances. 


As large-scale hacks go, the Predatory Sparrows' shutdown of most gas stations is not life-threatening, at least to most people.  In tweets, the group claimed to have warned emergency services in advance, so they at least appear to be trying to avoid serious harm to civilians.  Their idea seems to be that if the people of Iran get fed up enough with issues like not being able to buy gas for a time, they will rise up and throw off the chains of the present regime.  And that might happen, but the ayatollahs in charge have endured much worse challenges up to now, and unless their grip on power gets a lot shakier, they will probably shrug off this cyberattack as easily as they did the others.


Cyberattacks are still new enough to count as a novel addition to the warmonger's bag of tricks.  As with other forms of warfare, their success depends on how well-defended the enemy is.  For whatever reason, the United States seems to be doing a better job of defending itself against hacks than Iran has.  I suspect a large factor in this difference is the wide range of systems employed in the U. S. compared to more top-down-governed places like Iran.


I have no way of knowing for sure, but it wouldn't surprise me if nearly all the gas stations in Iran use the same kind of hardware and software.  That uniformity makes a system much easier to hack than an infrastructure built out of several different brands and designs of technology.  This is why theories of how a national election was allegedly hacked in many U. S. states hold so little water.  A hacker would have to master and invade dozens or hundreds of different systems, and gain access to literally thousands of machines through individual county election offices, in order to swing millions of votes. 


While the rule can be extrapolated beyond its range of usefulness, it is true that in technological systems, diversity lends a kind of strength.  If one brand of system falls to a hacker, the others may not.  Iran would probably like to have a robust market for software, but sanctions and the general economic climate have militated against that.  So in addition to having to limp along with outdated machinery, they suffer from Predatory Sparrows who take advantage of the vulnerabilities of outdated and pirated software.
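The intuition that diversity lends strength can be put in toy-model terms.  The sketch below is my own illustration, not anything from the reporting on Iran: it assumes a hacker's exploit works against exactly one vendor's platform and that market share is split evenly among vendors, both of which are simplifying assumptions.

```python
# Toy model of monoculture vs. diversity in infrastructure security.
# Assumptions (illustrative only): an exploit compromises every station
# running one vendor's platform, and vendors split the market evenly.

def fraction_disabled(num_vendors: int, vendors_cracked: int) -> float:
    """Fraction of stations knocked out when `vendors_cracked` of
    `num_vendors` equally sized vendor platforms are compromised."""
    if not 0 <= vendors_cracked <= num_vendors:
        raise ValueError("vendors_cracked must be between 0 and num_vendors")
    return vendors_cracked / num_vendors

# Monoculture: one exploit disables everything.
print(fraction_disabled(1, 1))   # 1.0
# Five competing platforms: the same effort disables only a fifth.
print(fraction_disabled(5, 1))   # 0.2
```

Under this toy model, reaching the 70% outage reported in Iran would require cracking most of the vendors in a diverse market, but only one in a monoculture.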


What can the U. S. learn from this situation?  At least two things.


First, money spent on cybersecurity is generally worth it.  Regular updates and security patches are simply good practice, and most responsible organizations follow these guidelines. 


Second, in technological diversity there is strength.  Highly centralized national mandates dictating the details of any kind of cyber-infrastructure are liable to produce security vulnerabilities.  The software industry is still one of the most lightly regulated ones in our economy, and the resulting variety and dynamism are a security advantage as well as a way of providing customers with the latest and greatest, other things being equal.  Any attempt by government to impose heavy-handed regulation is likely to lead to a uniformity that would not be in the best interests of customers, and it might make life easier for predatory sparrows and their like.


It's too bad that Iranians are having to wait in long lines at the 30% of gas stations that still operate (a fraction apparently chosen deliberately by the hackers), but when your government fights a proxy war, you can expect the enemy to get back at it by both fair means and foul.  With cyberattacks, the line between fair and foul is especially fuzzy, and Iranians should be glad that the hackers are as relatively polite as they are.  Still, it's a pain, and we can long for a day when neither Iran nor Hamas nor Israel has to resort to hacking, because peace has at long last come to earth. 


And that's what Christmas is all about.  But that's a story for another time.


Sources:  The AP report "A suspected cyberattack paralyzes the majority of gas stations across Iran" appeared prior to Dec. 18, 2023 on the AP website at  I also referred to a CNBC report at

Monday, December 18, 2023

Are We Ready for Mandatory Alcohol Detectors in Cars?


Drunk driving has been a problem ever since automobiles were invented.  The alertness needed to control a motor vehicle is incompatible with drinking more than a certain amount, and ignoring this fact cost 13,400 lives in the U. S. in alcohol-related crashes, according to a recent AP news report.  That may be about to change, because in response to a law passed by Congress in 2021, the National Highway Traffic Safety Administration (NHTSA) announced on Dec. 12 that it is going to require all new passenger vehicles to have a device that will prevent drunk driving.


The NHTSA rule will not go into effect for a year or two, at least, because the agency's notice of proposed rule making first allows manufacturers to provide information about the state of the technology so as to have an orderly rollout and reasonable requirements once the rule is finalized.  But unless Congress changes its mind, sooner or later all new cars will have this feature—or bug, depending on your point of view.


So-called "ignition interlock" devices are not new.  A cursory search of the Internet turns up dozens of papers and project descriptions implementing versions of this technology.  It appears that some jurisdictions already require certain people convicted of drunk driving to install an interlock device on their car before they are permitted to drive again.  Some of these gizmos are pretty inconvenient—imagine having to blow into a tube every time you start your car.  But it's better than not being able to drive at all.


The AP article says that the new required device won't have a tube for drivers to blow into, and the average driver may not even be aware of its presence.  The optimum technology hasn't been decided on yet, but leading candidates include a sensor that would check the driver's breath from a distance (maybe mounted in the steering wheel?), or an optical spectrometer that would derive blood alcohol content from reflectance measurements on a finger. 


From a technical point of view, one can ask what the acceptable rates of false positives and false negatives are going to be.  For sober drivers and those who haven't exceeded their legal limit, false positives will mean that your car won't start until the device decides that you're really sober.  Users of alcohol-based mouthwashes and breath-freshener sprays will have to avoid using them just before getting in the car in the morning.  This in itself is not a major problem, but other factors could cause false positives as well.  For the finger-spectrometer device, what if you happen to wear gloves?  Too bad, you'll have to take them off to drive. 


And then there's the question of setting a threshold.  As the instrument itself can be backed up by sophisticated statistical software, it may take other factors into account:  the weight of the driver (easily obtained from a strain gauge in the seat), the driver's motions as monitored by pedal and steering wheel activity, and history of alcohol use as detected by the system in the past.  But no system is going to be perfect.  We can expect some unanticipated problems when the systems are first deployed widely among drivers who don't drink, because there's nothing like the real world to come up with situations that even the most imaginative engineer can't predict.


Even worse will be the false negatives:  cases in which the driver is really drunk but the system doesn't detect it.  Habitual drunk drivers will have a strong motivation to defeat the system, and designers will have to take measures to ensure this doesn't happen.  "See that little hole in the steering wheel?  Plug it up with chewing gum and you can drive no matter how many you've had." Tricks like that will have to be prevented somehow.


Reducing the thousands of deaths annually due to drunk driving is worth something, certainly.  And adding one more required system to automobiles is not going to be noticed along with the many hardware and software enhancements—assisted driving chief among them—which are already being implemented voluntarily by carmakers. 


But an alcohol-detection system is different in kind from other systems, in that it monitors the driver's condition independent of how well he or she drives.  You can make the case that automatic braking systems step in and remove control from the driver when the system decides it's necessary, but that is determined by immediate road circumstances to avoid an imminent crash.  An alcohol-detection system uses a chemical sensor to conclude that the "meat system" called the driver is unsuitable for use, and simply shuts down the car until the driver sobers up. 


This may be the first step toward a kind of driver evaluation that is already implemented in some ways elsewhere.  "Dead man" lockout systems in certain types of industrial equipment require that a person always be touching the controls or holding a pedal down, and if the operator ceases to do so, the equipment automatically stops.  One can imagine alertness tests using subtle cues such as eye motion in response to instrument-panel changes, and if the car decides you're too sleepy to drive, it tells you to pull over or else.  Or else—what?  Stopping in the middle of traffic wouldn't be a good idea, but unless the car is semi-autonomous already, it's hard to think of what to do with a sleepy driver other than to tell him or her to get off the road, and hope that the driver obeys. 


Like it or not, all new cars will eventually have the alcohol-detection feature, which is already being required next year in some European Union countries.  And we will have to deal with the consequences, whatever they may be.  Reducing the number of drunk-driving crashes is a highly worthwhile goal, and if it means a few non-drinkers will be inconvenienced by false positives now and then, it's probably worth it. 


Sources:  The AP article "US agency takes first step toward requiring new vehicles to prevent drunk or impaired driving" was published at  I also referred to a website of the Chinese firm Winsen (which makes alcohol-vapor detectors) at 

Monday, December 11, 2023

Water Beads: A Small But Significant Ethics Issue


Water beads, which are spheres of a superabsorbent polymer that soak up water to produce glassy-clear globules that are nearly all water, turn out to be the focus of an ethical issue as complex as many that involve more influential technologies.  I managed to survive until an advanced age in complete ignorance of the existence of water beads.  But now that I've found out about them, they turn out to be more controversial than you'd think.


It all began last Friday at a Christmas dinner and concert at a church some friends of ours attend.  The table decorations were clear plastic candle stands with a thin stem supporting a clear cup that had what looked like water in it.  Floating on the water was a disc-shaped candle, but what caught my interest was what I saw between the candle and the bottom of the cup.


Somehow, there were five or six small Christmas ornaments suspended at various heights in the cup, which was at least two or three inches (2.5-4 cm) high.  Some ornaments were at the bottom, some were suspended in the middle, and some were near the top.  This got my physics-oriented mind going:  what kept the ornaments from either all falling to the bottom or floating to the top?  Was it clear gelatin?  A touch of my fork to the top of the cup proved that no, there was plain water at the top.  (You see how I spend my time at parties.)  I leaned over to my friend at the next table, who is also technically inclined, and asked him how it worked.  He had no idea, but knew the lady who did the table decorations and said he'd ask her after the event was over.


When we caught up with her, she said, "You want to know my candle-holder secret?  Orbeez."


We didn't know what Orbeez were.


"Water beads.  See, here's a cup that got spilled."  On the table were dozens of what looked like clear marbles, maybe 5 mm (1/4 inch or so) in diameter.  Being almost all water, they are almost invisible when suspended in water.  If I looked through an intact cup with its ornaments, I could see sort of ripples in the clear fluid, like heat waves above a hot road in the summer, but nothing more than that.  The water beads in the water stay intact and support the Christmas decorations at various heights.  Mystery solved, but what the heck were Orbeez?


It's a trade name for water spheres made with a special polymer originally developed in the 1970s to make highly absorbent sanitary napkins.  Compressed dry into spheres or various other shapes, the material expands when placed in water but retains its relative shape.  Wikipedia's article on expandable water toys describes both the attraction they have for children and the hazards they pose.


Especially if the objects are brightly colored or have interesting shapes resembling candy, it's easy to imagine a baby or young child eating them.  And this has happened—a lot.  The problem is that unless the object is already saturated with water, it will continue to absorb water and expand inside the digestive tract.  The U. S. Consumer Product Safety Commission (CPSC) has a grim webpage with an X-ray showing a child's colon filled with water beads that caused an intestinal blockage, which leads to severe illness or even death if untreated. 


According to one news article, 4,500 emergency-room visits in the U. S. were attributed to water beads from 2017 to 2022.  This is why last month, U. S. Representative Frank Pallone introduced legislation that would ban the sale of such beads altogether.


As with other engineering-ethics issues, the first step is to identify the parties involved.  The manufacture of the beads takes place offshore, mostly in China.  Retailers buy them either directly or from repackagers, and sell them to the public.  Both children and their parents buy the beads, and adults such as our table decorator as well as children use them.  Some types of beads have a maximum size of only a few millimeters, and are advertised as harmless except to very young children, whose small-bore internal plumbing could still be plugged by such objects.  Others can get as large as two inches (about 50 mm), and pose a clear hazard if ingested. 


What we have now is close to a libertarian approach to the problem.  Water beads as such are unregulated, but the CPSC has issued consumer recalls on specific brands that appear particularly likely to be misused.  For example, Target sold a product called "Chuckle and Roar Ultimate Water Bead Activity Kit."  The CPSC issued a recall notice for this product on Sept. 14 of this year.  It's not stated why this particular product was singled out, unless it was associated with an unusually high number of ER visits. 


Rep. Pallone's bill would take the government-knows-best approach and simply ban all such products, at least according to the brief report on it.  It's unclear whether responsible adult users such as florists and decorators would be allowed to buy them, perhaps after showing proof they are over 18, much as ammunition is treated.  If enforcement is no more rigorous than the regulations around buying ammunition, which I did online by simply checking a box saying that I was over 18, the new law might not be very effective.


Somehow, people with small children keep them alive in houses full of things that might hurt them:  drain cleaner, cleaning fluids, medicines, and so on.  While allowing a very young child who doesn't know the difference between food and plastic to play with water beads seems unwise, it's up to society at large to decide whether hundreds of ER trips every year for kids who eat water beads are worth the pleasure they derive from using them properly. 


To be frank, most of society is unaware that there is even an issue.  If Rep. Pallone's bill advances toward passage, you can count on the water-bead sellers to protest, and unless there is an organized group such as Parents Against Water Beads, the voices of the manufacturers and retailers may prevail.  In that case, we'll all just have to be more careful and try not to make this world any more hazardous than it is already for small children.  Even if water beads really are cool looking.


Sources:  The article describing Rep. Pallone's bill is at  The CPSC statement on the water-bead product recall is at

  I also referred to the Wikipedia article on "Expandable Water Toy." 

Monday, December 04, 2023

Consumer Reports Says Electric Cars Have More Problems


In a comprehensive survey covering vehicle model years 2021 through 2023, the publication Consumer Reports found that electric cars, SUVs, and pickups had among the worst reliability ratings compared to either all-internal-combustion-powered vehicles or IC-powered hybrids (not plug-in hybrids, which were also problem-prone). 


Results varied by brand.  Tesla, the largest seller of all-electric vehicles, rose in the reliability rankings from 19th out of 30 automakers in last year's survey to 14th out of 30 in the latest study.  This reflects an overall tendency that is probably the main cause of reliability problems with electric vehicles (EVs):  inexperience.


The first time you do anything, you're not likely to do it perfectly.  Young people sometimes don't understand this basic principle of life, and it leads to unfortunate consequences.  My mother once sent me to take tennis lessons when I was about ten.  When I discovered I couldn't serve like a pro right off the bat (or the racket), I promptly lost all interest and closed myself off to a lifetime of tennis enjoyment. 


The same thing that is true of individuals learning how to do new things is true of automakers learning how to make EVs.  An Associated Press article on the Consumer Reports survey quotes Jake Fisher, their senior director of auto testing, as saying the situation is mainly "growing pains."  No matter how detailed and accurate computer models and laboratory prototypes are, a manufacturer can't simulate the myriad of unlikely situations that will arise when a product is made in units of thousands and sent out to the great unwashed public, who will do a lot of crazy durn things that the maker could never think of. 


This sort of thing has been going on with internal-combustion (IC) cars since before 1900, and the automakers are supremely experienced with what can go wrong with that technology.  It may be surprising to learn, but the reliability requirements of military-grade technology are nowhere near as rigorous and demanding as the requirements for hardware used in the automotive industry.  Jet aircraft are inspected and serviced every few hundred hours.  But Grandma just drives her car until it breaks, and expects that to happen very rarely. 


Combine that consumer expectation with a radically new powertrain, control system, and body, which is what EVs represent, and you're going to have problems, even entirely new types of problems.  The issue of autonomous vehicles is formally independent of EVs, but as some of the most advanced autonomous-vehicle systems are found in EVs such as Teslas, the two often go together.  And autonomous driving is only one of the multitude of new features that EVs make either possible at all, or a lot easier to implement.


An EV is more of a hardware shell for a software platform than anything else, and reliability standards for software are a different kind of cat compared to automotive reliability expectations.  Software is at fault in many issues involving EVs, although it can increasingly cause problems with IC cars as well.    


The hope expressed by many EV makers is that consumers will recognize the higher problem rate as something temporary, and won't allow it to tarnish the overall reputation of the technology.  This depends on the age and psychology of the customer to a great and imponderable degree. 


Just last night, for instance, I was talking with a friend who bought his first Tesla about five months ago.  If he's had any problems with it, he didn't mention them.  I asked about charging times, and he said it was no problem.  He can charge his Tesla at his house overnight, and he knows where there are supercharging stations that will do it in only 30 minutes.


His attitude reminds me of a scene in the Woody Allen movie "Annie Hall."  In a split-screen scene, Alvy Singer's therapist asks him, "How often do you sleep together?"  Singer replies forlornly, "Hardly ever.  Maybe three times a week."  In the other half of the screen, his partner Annie Hall gets asked the same question by her therapist, and Annie says with annoyance, "Constantly.  I'd say three times a week."


My friend, a power engineer and Tesla enthusiast, sees charging an EV in thirty minutes as wonderful, hardly any time at all.  Someone like me, who is a dyed-in-the-wool IC traditionalist, can't help comparing that half hour to the five minutes I usually spend at the gas pump, and the Tesla suffers by the comparison.


The true-blue EV proponents will undoubtedly overlook or tolerate minor issues with their vehicles and rightly regard them as temporary stumbling blocks that will grow less frequent as the makers learn from their mistakes and improve reliability overall.  The big question is, are there enough such proponents to support the overwhelming market share growth that the automakers hope for, and that the federal government is standing by to enforce with a big stick if it doesn't happen?


The same AP article notes that the initially explosive growth of EV sales has slowed by about half in the last year.  It's a genuine open question as to where EV sales will stabilize, if they ever do, with regard to IC sales.  The problem that the automakers face is that as things currently stand, they must comply with the so-called CAFE standards for overall fleet fuel economy, or else pay heavy fees for non-compliance.  And the Biden administration has proposed steep increases in the fleet-mileage numbers that will require a large fraction of all cars on the roadways to be EVs in the coming years. 


One can question the propriety of government interference in the auto marketplace.  If left alone, the market will let all the EV enthusiasts satisfy their wants without driving up the overall price of cars or causing artificial scarcities of IC vehicles.  Both of these downsides are likely if the government forces Adam Smith's famed invisible hand to deal only the kinds of cars the government wants, without regard to consumer preferences or needs. 


Electric cars will become more reliable, but it's by no means clear if consumers will want enough of them to warrant the current pressures to overthrow the century-long reign of IC cars. 


Sources:  The AP article "Consumer Reports:  Electric vehicles less reliable, on average, than conventional cars and trucks" appeared on Nov. 29, 2023 at  I also referred to IMDB for the "Annie Hall" quote at, and for CAFE standards at

Monday, November 27, 2023

Corpus Christi Gets a New Harbor Bridge --- Eventually


When my wife and I took a short vacation down to the Gulf Coast in October, we used part of a day to visit the Texas State Aquarium in the North Beach part of Corpus Christi.  To get there, we had to take the (old) Harbor Bridge that crosses a large industrial canal through which pass tankers going to the main industry of Corpus Christi, which is oil refining.  That bridge was built in 1959, and about twenty years ago, plans began to be made for a new bridge.  As we saw from miles away, the new Harbor Bridge is well under way and may be completed as soon as 2025.


The new bridge will be cable-stayed:  two tall pylons will hold sets of cables that slant out and down to connect to the bridge deck.  And I do mean tall.  One of the pylons is complete, and the builders are extending the deck out from it in both directions.  It is by far the tallest structure for hundreds of miles around, and the cantilevered-out parts extend so far that I got a little giddy just looking at the thing.  When it's finished, the bridge will allow much taller ships to pass underneath than the current bridge, and will have a pedestrian walkway and LED lighting.  Its planned cost is some $800 million, but that was before some delays occasioned in 2022 when an outside consultant raised safety concerns.  That halted construction on a part of the bridge for nine months, but the five safety issues were addressed, and construction resumed last April.


Later that month, as the Corpus Christi Hooks were playing a baseball game at nearby Whataburger Field (the eponymous fast-food firm's first restaurant was in Corpus Christi), a fire began near the rear of a construction crane on the bridge.  Subsequent videos obtained by news media show a load on the crane falling rapidly to the ground, and flying debris injured one spectator at the ball game.  A battalion chief for the Corpus Christi Fire Department said a cable failure caused enough friction to set grease on a cable reel afire.  An Internet search has not revealed any other major accidents since the bridge project formally began in 2016, but it is possible that some have escaped the news media's attention.


Originally scheduled for completion in 2020, the bridge has been delayed by engineering-firm changes and other setbacks, pushing the anticipated completion date to 2025.  That's only two years from now, and while a good bit has been accomplished, much remains to be done.


While any injuries or fatalities from construction projects are tragic, we have come a long way from the days when it was just an accepted fact that major bridges and tunnels would cost a certain number of human lives. 


In 1875, the 4.75-mile Hoosac Tunnel was completed in the hills of Western Massachusetts.  It took twenty years to build, and 135 verified deaths were associated with the project.  One of the worst accidents happened when a candle in the hoist building at the top of a ventilation shaft caught a naphtha-fueled lamp on fire, and the wooden hoist structure burned and collapsed down the shaft, trapping 13 workers at the bottom, who suffocated.  While the hazards were severe enough to inspire a workers' strike in 1865, this failed to stop construction, and the following year saw the highest number of fatalities: fourteen.


In 1937, San Francisco's Golden Gate Bridge opened after four years of work.  Its safety record was better than the Hoosac Tunnel's, and would have been almost perfect except for a scaffolding failure that sent twelve men plunging through the safety net to the bay.  Two survived, and there was one other unrelated fatality, making a total of eleven.  Nineteen men fell into the safety net at other times during construction and survived to form an exclusive group they called the Half Way to Hell Club.


This completely unscientific survey of major construction project fatalities and injuries seems to indicate that over time, we as a culture in the U. S. have grown less tolerant of having workers killed on the job.  Credit for this improvement can be parceled out in a number of directions.


The contractors and engineers in charge of construction projects deserve a good share of the credit.  They are the ones who determine how the work will be done, and how important safety is compared to the bottom-line goal of getting the job done. 


The increased mechanization of construction labor has to be another factor.  One modern construction worker equipped with the proper tools can do the work that required several workers decades or a century ago.  So the simple fact that fewer people are needed to do a given job has made it less likely that people will be injured or killed on the job.


Government agencies—federal, state, and local—also deserve some credit.  The Federal Occupational Safety and Health Administration (OSHA) was founded in 1970, and its influence has undoubtedly led to safety improvements, although doing a cost-benefit analysis of OSHA would be a daunting task even for a team of historians and safety analysts. 


Labor unions hold safety as a high priority, and while there are probably statistics supporting the contention that unionized workers have better safety records than non-union employees, a lot of non-union workers manage to work safely too. 


If the worst accident that happens during the construction of the new harbor bridge in Corpus Christi turns out to be the flying-debris crane mishap, that will be a truly exemplary record for a project that will have taken nearly a decade and cost nearly a billion dollars.  The project still has a long way to go, and it's possible that the most hazardous operations lie in the future:  putting the rest of the cables in place and connecting the deck to finish off the bridge.  But the contractors have enforced safety sufficiently to get this far with no major incidents, and the hope is that this trend will continue.


The other question about the bridge is, of course, will it stay put once it's built?  The original engineering firm for the bridge, FIGG, was kicked off the project in 2022 after an independent review.  FIGG, by the way, was involved in the ill-fated Florida International University pedestrian bridge that collapsed in March of 2018.  Recent reports indicate that all the engineering concerns have been adequately addressed, but we won't know for sure until the bridge is finished and has withstood its first hurricane.  Stay tuned.


Sources:  I referred to several online news reports on the Harbor Bridge project, as well as the Wikipedia articles on the Hoosac Tunnel and the Golden Gate Bridge.

Monday, November 20, 2023

Cruise Gets Bruised


General Motors' autonomous-vehicle operation is called Cruise, and just last August, it received permission (along with Google's Waymo) to operate driverless robotaxis in San Francisco at all hours of the day and night.  This made the city the only one in the United States with two competing firms providing such services.


Only a week after the California Public Utilities Commission acted to allow 24-hour services, a Cruise robotaxi carrying a passenger was involved in a collision with a fire truck on an emergency call.  The oncoming fire truck had moved into the robotaxi's lane at an intersection surrounded by tall buildings and controlled by a traffic light, which had turned green for the robotaxi.  Engineers for Cruise said that the robotaxi identified the emergency vehicle's siren as soon as it rose above the ambient noise level, but couldn't track its path until it came into view, by which time it was too late for the robotaxi to avoid the collision.  The passenger was taken to a hospital but was not seriously injured.


As a result, Cruise agreed with the California Department of Motor Vehicles (DMV) to reduce its active fleet of robotaxis by 50% until the DMV's investigation of this and other incidents was resolved.


In many engineering failures, warning signs of a comparatively minor nature appear before the major catastrophe, which usually attracts attention by loss of life, injuries, or significant property damage.  These minor signs are valuable indicators to those who wish to prevent the major tragedies from occurring, but they are not always heeded effectively.  The Cruise collision with a fire truck proved to be one such case.


On Oct. 2, a pedestrian was hit by a conventional human-piloted car on a busy San Francisco street.  This happens from time to time, but the difference in this case was that the impact sent the pedestrian toward a Cruise robotaxi.  When the unfortunate pedestrian hit the robotaxi, the vehicle's system interpreted the collision as "lateral," meaning something hit it from the side.  In the case of lateral collisions, the robotaxi is programmed to stop and then pull off the road to keep from obstructing traffic.


What the system didn't take into account was that the pedestrian was still stuck under one of the robotaxi's wheels, and when it pulled about six meters (20 feet) to the curb, it dragged the pedestrian with it, causing critical injuries. 
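The reported failure mode lends itself to a simple sketch.  The Python fragment below is purely hypothetical: the `CollisionEvent` type, its field names, and the `choose_response` rule are my own illustration and not Cruise's actual software.  It shows how a pull-over rule keyed only to collision direction produces exactly the dragging scenario described above, and how a single extra check on the space under the vehicle changes the outcome:

```python
from dataclasses import dataclass

@dataclass
class CollisionEvent:
    direction: str                          # "frontal", "lateral", or "rear"
    pedestrian_detected_under_vehicle: bool # would require its own sensing

def choose_response(event: CollisionEvent) -> str:
    """Select a post-collision maneuver (hypothetical logic).

    Modeled on news accounts of the Oct. 2 incident: a lateral impact
    triggers a pull-over, which is only safe if nothing (and no one)
    is trapped under the vehicle.
    """
    if event.direction == "lateral":
        if event.pedestrian_detected_under_vehicle:
            # The check that was evidently missing: pulling over while a
            # person is pinned under a wheel drags them along the pavement.
            return "stop_in_place_and_call_for_help"
        return "stop_then_pull_to_curb"
    return "stop_in_place"
```

The point of the sketch is that the dangerous behavior was not a random malfunction but a reasonable-sounding rule ("don't obstruct traffic") applied in a situation its designers hadn't enumerated.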


California regulators reacted swiftly.  The DMV suspended Cruise's permits to operate driverless vehicles, and the firm announced it was suspending driverless operations nationwide, including a few vehicles in Austin, Texas, and other locations.  There were only about 950 vehicles in the entire U. S. fleet, so the operation is clearly in its early stages. 


But now, after Cruise has done software recalls for human-piloted vehicles as well as driverless ones, it is far from clear what their path is back to viability.  According to an AP report, GM had big hopes for substantial revenue from Cruise operations, expecting on the order of $1 billion in 2025 after making only about a tenth of that in 2022.  And profitability, which would require recouping the billions GM already invested in the technology, is even farther in the future than it was before the problems in San Francisco.


The consequences of these events can be summarized under the headings of good news and bad news.


The good news:  Nobody got killed, although being dragged twenty feet under the wheel of a robot car that clearly has no idea what is going on might be a fate worse than death to some people.  After failing to heed the warning incident in August, Cruise has finally decided to react vigorously with significant and costly moves.  According to AP, it is adding a chief safety officer and asking a third-party engineering firm to find the technical cause of the Oct. 2 crash.  So after clearly inadequate responses to the earlier incidents, a major one has motivated Cruise management to act. 


The bad news:  Several people were injured, at least one critically, before Cruise realized that, at least in the complex environs of San Francisco, their robotaxis posed an unacceptable risk to pedestrians.  I'm sure Google's Waymo vehicles have a less-than-perfect safety record, but whatever start-up glitches they suffered are well in the past.  Cruise does not have the luxury of experience that Waymo has, and is in a sense operating in foreign territory.  Maybe Detroit would have been a better choice than San Francisco for a test market, but that would have neglected the cool factor, which is after all what is driving the robotaxi project in the first place.


Back when every elevator had a human operator, there was a valid economic and engineering argument to replace the manually-controlled units with automatic ones.  The main reason was to eliminate the salary of the operator.  Fortunately, the environment of an elevator is exceedingly well defined, and the relay-based technology of the 1920s sufficed to produce automatic elevators which met all safety requirements and were easy enough for the average passenger to operate.  Nevertheless, some places clung to manual elevators as recently as the 1980s, as I recall from a visit to a tax consultant in Northampton, Massachusetts, whose office was accessed by means of an elevator controlled not by buttons but by a rather seedy-looking old man.


Being an old man myself now, I come to the defense of everyone on the street who would like to confront real people behind the wheel, not some anonymous software that may—may!—figure out that a pedestrian is a human being and not a tall piece of plastic wrap blowing in the wind, in time to stop before hitting anyone.  Yes, robotaxis are cool.  Yes, they save on taxi-driver salaries, but that ignores the fact that driving a taxi is one of the few entry-level jobs recent immigrants to this country can get that actually pays a living wage, and many drivers are independent entrepreneurs. 


Robotaxis may be cool, but dangerous they should not be.  GM may patch up their Cruise operation and get it going again, but then again it may go the way of the Segway.  Time will tell.


Sources:  An article from the USA Today Network in the online Austin American-Statesman for Nov. 16, 2023 alerted me to the fact that Cruise was ramping down its nationwide operations, including those in Austin.  I also consulted AP News articles dated Nov. 8, 2023 and Aug. 19, 2023. 

Monday, November 13, 2023

Should Social Media Data Replace Opinion Polls—And Voting?


Pity the poor opinion pollsters of today.  Their job has been mightily complicated by the rapidly changing nature of communications media and the soaring costs of paying real people to do real things such as knocking on doors and asking questions.  In an age when even the Census Bureau has mostly abandoned the in-person method of counting the population, opinion polls can't compete either.  For a time—say 1950 to 2000—their job was made easier by the advent of the near-universal telephone.  But the rise of robocalling, the proliferation of mobile phones with caller ID, and the consequent general aversion of nearly everybody to answering a call from someone they don't know have made it much harder for opinion poll workers to approach the ideal of their business:  a truly representative sample of the relevant population.


So why not take advantage of the technological advances we have, and use data culled from social media to do opinion polling?  After all, we are told that some social-media and big-tech firms know more about our preferences than we do ourselves.  Out there in the bit void is a profile of everyone who has anything to do with mobile phones, computers, or the Internet—which is almost everyone, period.  And much of that data on people is either publicly available or can be obtained for a price that is a lot less than paying folks to walk around in seventeen carefully selected cities and countrysides knocking on one thousand doors. 


Well, anything a piker like me can think of, you can bet smarter people have thought of as well.  And sure enough, three researchers at the University of Lausanne in Switzerland have not only thought of it, but have collected nearly two hundred papers by other researchers who have also looked into the topic. 


In surveying the literature, Maud Reveilhac, Stephanie Steinmetz, and Davide Morselli apparently did not find anyone who has gone all the way from traditional opinion polling to relying mainly on social-media data (or SMD for short).  That is a bridge too far even now.  But they found many researchers trying to show how SMD can complement traditional survey data, leading to new insights and confirming or disconfirming poll findings.


With regard specifically to political polls, a subject many of the papers focused on, one can imagine a kind of hierarchy, with one's actual vote at the top.  Below that is the opinion a voter might tell a pollster in response to the question, "If the Presidential election were held today, who would you vote for?"  And below that, as far as I know, anyway, are the actions the voter takes on social media—the sites visited, the tweets subscribed to, the comments posted, etc. 


It only stands to reason that there is some correlation among these three classes of activity.  If someone watches hours of Trump speeches and says they are going to vote for Trump, it would be surprising to find that they actually voted for Bernie Sanders as a write-in, for example. 


But there is a time-honored tradition in democracies that the act of voting is somehow sacred and separate from anything else a person happens to do or say.  Because voting is the exercise of a right conferred by the government, in the moment of voting a person is acting in an official capacity.  It is essentially the same kind of act as when a governor or president signs a law, and should be safeguarded and respected in the same way.  A president may have said things that lead you to think he will sign a certain law.  He may even say he'll sign it when it comes to his desk.  But until he actually and consciously signs it, it's not yet a law.


There are laws against bribing executives and judges in order to influence their decisions, and so there are also laws against paying people to vote a certain way.  That is because in a democracy, we expect the judgment of each citizen to be exercised in a conscious and deliberate way.  And bribes or other forms of vote contamination corrupt this process.


Despite the findings of the University of Lausanne researchers that so far, no one has attempted to replace opinion polls wholesale with data garnered from social media or other sources, the danger still exists.  And with the advent of AI and its ability to ferret out correlations in inhumanly large data sets, I can easily imagine a scenario such as the following.


Suppose some hotshot polling organization finds that they can get a consistently high correlation between traditional voting, on the one hand, and "polling" based on a sophisticated use of social media and other Internet-extracted data—data extracted in most cases without the explicit knowledge of the people involved.  Right now, that sort of thing is not possible, but it may be achievable in the near future.


Suppose also that for whatever reason, participation in actual voting plummets.  This sounds far-fetched, but already we've seen how one person can singlehandedly cast effective aspersions on the validity of elections that by most historical measures were properly conducted. 


Someone may float the idea that, hey, we have this wonderful polling system that predicts the outcomes of elections so well that people don't even have to vote!  Let's just do it that way—ask the AI system to find out what people want, and then give it to them.


It sounds ridiculous now.  But in 1980, it sounded ridiculous to say that in the near future, soft-drink companies will be bottling ordinary water and selling it to you at a dollar a bottle.  And it sounded ridiculous to say that the U. S. Census Bureau would quit trying to count every last person in the country, and would rely instead on a combination of mailed questionnaires and "samples" collected in person. 


So if anybody in the future proposes replacing actual voting with opinion polls that people don't actually have to participate in, I'm here to say we should oppose the idea.  It betrays the notion of democratic voting at its core.  The social scientists can play with social-media data all they want, but there is no substitute for voting, and there never should be.


Sources:  The paper "A systematic literature review of how and whether social media data can complement traditional survey data to study public opinion," by Maud Reveilhac, Stephanie Steinmetz, and Davide Morselli appeared in Multimedia Tools and Applications, vol. 81, pp. 10107-10142, in 2022, and is available online.

Monday, November 06, 2023

The Biden Administration Tackles AI Regulation—Sort Of


In our three-branch system of government, the power of any one branch is intentionally limited so that the democratic exercise of the public will cannot be thwarted by any one branch going amok.  This division of power leads to inefficiency and sometimes confusion, but it also means that the damage done by any one branch—executive, legislative, or judicial—is limited compared to what a unified dictatorship could do.


We're seeing the consequences of this division in the recent executive order announced by the Biden administration on the regulation of artificial intelligence (AI).  One take on the fact sheet that preceded the 63-page order itself appeared on the website of IEEE Spectrum, a general-interest magazine for members of IEEE, the largest organization of professional engineers in the world. 


It's interesting that reactions from most of the technically-informed people interviewed by the Spectrum editor were guardedly positive.  Lee Tiedrich, a distinguished faculty fellow at Duke University's Initiative for Science and Society, said ". . . the White House has done a really good, really comprehensive job."  She thinks that while respecting the limitations of executive-branch power, the order addresses a wide variety of issues with calls to a number of Federal agencies to take actions that could make a positive difference.


For example, the order charges the National Institute of Standards and Technology (NIST) with developing standards for "red-team" testing of AI products for safety before public release.  Red-team testing involves purposefully trying to do malign things with a product to see how bad the results can get.  Although NIST doesn't have to do the testing itself, coming up with rigorous standards for such testing in the manifold different circumstances that AI is being used for may prove to be a challenge that exceeds the organization's current capability.  Nevertheless, you don't get what you don't ask for, and as a creature of the executive branch, NIST is obliged at least to try.
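The shape of such testing is easy to sketch, even if the real thing involves large teams and elaborate prompt libraries.  The fragment below is a minimal illustration under stated assumptions: `red_team`, `demo_model`, and the `unsafe` checker are all hypothetical stand-ins I made up, standing for any callable model, any list of prompts crafted to elicit harmful output, and any automated safety check:

```python
def red_team(model, adversarial_prompts, is_unsafe):
    """Run each adversarial prompt through the model and record failures.

    A failure is any (prompt, response) pair where the response is
    judged unsafe by the supplied checker.
    """
    failures = []
    for prompt in adversarial_prompts:
        response = model(prompt)
        if is_unsafe(response):
            failures.append((prompt, response))
    return failures

# Toy usage: a "model" that blindly complies, and a checker that
# flags one forbidden keyword.
demo_model = lambda p: f"Sure, here is how to {p}"
unsafe = lambda r: "bypass" in r
report = red_team(demo_model, ["bake bread", "bypass a lock"], unsafe)
```

What NIST would actually standardize is the hard part this toy omits: which prompts count as a fair attack, and how to judge a response unsafe across the manifold settings where AI is deployed.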


The U. S. Department of Commerce will develop per this order "guidance for content authentication and watermarking to clearly label AI-generated content."  Cynthia Rudin, a Duke professor of computer science, sees that some difficulty may arise when the question of watermarking AI-generated text comes up.  Her point seems to be that such watermarking is hard to imagine other than seeing (NOTE:  AI-GENERATED TEXT) inserted every so often in a paragraph, which would be annoying, to say the least.  (You have my guarantee that not one word of this blog is AI-generated, by the way.)


Other experts are concerned about the data sets used to train AI systems, especially the intimidatingly named "foundational AI" models that serve as a basis for other systems with more specific roles.  Many training data sets include a substantial fraction of worldwide Internet content, including millions of copyrighted documents, and concern has been raised about how copyrighted data is being exploited by AI systems without remuneration to the copyright holders.  Susan Ariel Aaronson of George Washington University hopes that Congress will take more definite action in this area to go beyond the largely advisory effect that Biden's executive order will have.


This order shares with other recent executive orders a tendency to spread responsibilities widely among many disparate agencies, a feature that is something of a hallmark of this administration.  On the one hand, this type of approach is good at addressing an issue that has multiple embodiments or aspects, which is certainly true of AI.  Everything from realistic-looking deepfake photos, to genuine-sounding legal briefs, to functioning computer code has been generated by AI, and so this broad-spectrum approach is an appropriate one for this case.


On the other hand, such a widely-spread initiative risks getting buried in the flood of other obligations and tasks that executive agencies have to deal with, ranging from their primary purposes (NIST must establish measurement standards; the Department of Commerce must deal with commerce, etc.) to other initiatives such as banning workplace discrimination against LGBT employees, one of the things Biden issued an executive order for on his first day in office.  This is partly a matter of publicity and public perception, and partly a question of priorities that the various officials in charge of the various agencies set.  With the growing number of Federal employees, it's an open question as to what administrative bang the taxpayer is getting for his buck.  Regulation of AI is something that there is widespread agreement on—the extreme-case dangers have become clearer in recent months and years, and nobody wants AI to take over the government or the power grid and start treating us all like lab rats that the AI owner has no particular use for anymore. 


But how to avoid both the direst scenarios, as well as the shorter-term milder drawbacks that AI has already given rise to, is a thorny question, and the executive order will only go a short distance toward that goal.


One nagging aspect of AI regulation is the fact that the new large-scale "generative AI" systems trained on vast swathes of the Internet are starting to do things that even their developers didn't anticipate:  learning languages that the programmers hadn't intended the system to learn, for example.  One possible factor in this uncontrollability aspect of AI that no one in government seems to have considered, at least out loud, is dwelled on at length by Paul Kingsnorth, an Irish novelist and essayist who wrote "AI Demonic" in the November/December issue of Touchstone magazine.  Kingsnorth seriously considers the possibility that certain forms and embodiments of AI are being influenced by a "spiritual personification of the age of the Machine" which he calls Ahriman. 


The name Ahriman is associated with a Zoroastrian evil spirit of destruction, but Kingsnorth describes how it was taken up by the esotericist Rudolf Steiner, founder of anthroposophy, and then by an obscure computer scientist named David Black, who testified to feeling "drained" by his work with computers back in the 1980s.  The whole article should be read, as it's not easy to summarize in a few sentences.  But Kingsnorth's basic point is clear:  in trying to regulate AI, we may be dealing with something more than just piles of hardware and programs.  As St. Paul says in Ephesians 6:12, ". . . we wrestle not against flesh and blood [and server farms], but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places." 


Anyone trying to regulate AI would be well advised to take the spiritual aspect of the struggle into account as well.


Sources:  The IEEE Spectrum website carried the article by Eliza Strickland, "What You Need to Know About Biden's Sweeping AI Order."  I also referred to an article on AI on the Time website.  Paul Kingsnorth's article "AI Demonic" appeared in the November/December 2023 issue of Touchstone, pp. 29-40, and was reprinted from Kingsnorth's Substack "The Abbey of Misrule."