Monday, December 04, 2023

Consumer Reports Says Electric Cars Have More Problems

 

In a comprehensive survey covering vehicle model years 2021 through 2023, the publication Consumer Reports found that electric cars, SUVs, and pickups had among the worst reliability ratings compared to either all-internal-combustion-powered vehicles or IC-powered hybrids (not plug-in hybrids, which were also problem-prone). 

 

Results varied by brand.  Tesla, the largest seller of all-electric vehicles, rose in the reliability rankings from 19th out of 30 automakers in last year's survey to 14th out of 30 in the latest study.  This reflects an overall tendency that is probably the main cause of reliability problems with electric vehicles (EVs):  inexperience.

 

The first time you do anything, you're not likely to do it perfectly.  Young people sometimes don't understand this basic principle of life, and it leads to unfortunate consequences.  My mother once sent me to take tennis lessons when I was about ten.  When I discovered I couldn't serve like a pro right off the bat (or the racket), I promptly lost all interest and closed myself off to a lifetime of tennis enjoyment. 

 

The same thing that is true of individuals learning how to do new things is true of automakers learning how to make EVs.  An Associated Press article on the Consumer Reports survey quotes Jake Fisher, their senior director of auto testing, as saying the situation is mainly "growing pains."  No matter how detailed and accurate computer models and laboratory prototypes are, a manufacturer can't simulate the myriad of unlikely situations that will arise when a product is made in units of thousands and sent out to the great unwashed public, who will do a lot of crazy durn things that the maker could never think of. 

 

This sort of thing has been going on with internal-combustion (IC) cars since before 1900, and the automakers are supremely experienced with what can go wrong with that technology.  It may be surprising to learn, but the reliability requirements of military-grade technology are nowhere nearly as rigorous and demanding as the requirements for hardware used in the automotive industry.  Jet aircraft are inspected and serviced every few hundred hours.  But Grandma just drives her car until it breaks, and expects that to happen very rarely. 

 

Combine that consumer expectation with a radically new powertrain, control system, and body, which is what EVs represent, and you're going to have problems, even entirely new types of problems.  The issue of autonomous vehicles is formally independent of EVs, but as some of the most advanced autonomous-vehicle systems are found in EVs such as Teslas, the two often go together.  And autonomous driving is only one of the multitude of new features that EVs make either possible at all, or a lot easier to implement.

 

An EV is more of a hardware shell for a software platform than anything else, and reliability standards for software are a different kind of cat compared to automotive reliability expectations.  Software is at fault in many issues involving EVs, although it can increasingly cause problems with IC cars as well.    

 

The hope expressed by many EV makers is that consumers will recognize the higher problem rate as something temporary, and won't allow it to tarnish the overall reputation of the technology.  This depends on the age and psychology of the customer to a great and imponderable degree. 

 

Just last night, for instance, I was talking with a friend who bought his first Tesla about five months ago.  If he's had any problems with it, he didn't mention them.  I asked about charging times, and he said it was no problem.  He can charge his Tesla at his house overnight, and he knows where there are supercharging stations that will do it in only 30 minutes.

 

His attitude reminds me of a scene in the Woody Allen movie "Annie Hall."  In a split-screen scene, Alvy Singer's therapist asks him, "How often do you sleep together?"  Singer replies forlornly, "Hardly ever.  Maybe three times a week."  In the other half of the screen, his partner Annie Hall gets asked the same question by her therapist, and Annie says with annoyance, "Constantly.  I'd say three times a week."

 

My friend, a power engineer and Tesla enthusiast, sees charging an EV in thirty minutes as wonderful, hardly any time at all.  Someone like me, who is a dyed-in-the-wool IC traditionalist, can't help compare that half hour to the five minutes I usually spend at the gas pump, and the Tesla suffers by comparison.

 

The true-blue EV proponents will undoubtedly overlook or tolerate minor issues with their vehicles and rightly regard them as temporary stumbling blocks that will grow less frequent as the makers learn from their mistakes and improve reliability overall.  The big question is, are there enough such proponents to support the overwhelming market share growth that the automakers hope for, and that the federal government is standing by to enforce with a big stick if it doesn't happen?

 

The same AP article notes that the initially explosive growth of EV sales has slowed by about half in the last year.  It's a genuine open question as to where EV sales will stabilize, if they ever do, with regard to IC sales.  The problem that the automakers face is that as things currently stand, they must comply with the so-called CAFE standards for overall fleet fuel economy, or else pay heavy fees for non-compliance.  And the Biden administration has proposed steep increases in the fleet-mileage numbers that will require a large fraction of all cars on the roadways to be EVs in the coming years. 

 

One can question the propriety of government interference in the auto marketplace.  If left alone, the market will let all the EV enthusiasts satisfy their wants without driving up the overall price of cars or causing artificial scarcities of IC vehicles.  Both of these downsides are likely if the government forces Adam Smith's famed invisible hand to deal only the kinds of cars the government wants, without regard to consumer preferences or needs. 

 

Electric cars will become more reliable, but it's by no means clear if consumers will want enough of them to warrant the current pressures to overthrow the century-long reign of IC cars. 

 

Sources:  The AP article "Consumer Reports:  Electric vehicles less reliable, on average, than conventional cars and trucks" appeared on Nov. 29, 2023 at https://apnews.com/article/electric-vehicles-consumer-reports-gasoline-vehicles-charging-eed9c3b8d86c1f7708b7c6e2d4dbf55e.  I also referred to IMDB for the "Annie Hall" quote at https://www.imdb.com/title/tt0075686/characters/nm0000095, and for CAFE standards at https://www.npr.org/2023/07/28/1190799503/new-fuel-economy-standards-cars-trucks.

Monday, November 27, 2023

Corpus Christi Gets a New Harbor Bridge --- Eventually

 

When my wife and I took a short vacation down to the Gulf Coast in October, we used part of a day to visit the Texas State Aquarium in the North Beach part of Corpus Christi.  To get there, we had to take the (old) Harbor Bay Bridge that crosses a large industrial canal through which pass tankers going to the main industry of Corpus Christi, which is oil refining.  That bridge was built in 1959, and about twenty years ago, plans began to be made for a new bridge.  As we saw from miles away, the new Harbor Bridge is well under way and may be completed as soon as 2025.

 

The new bridge will be cable-stayed:  two tall pylons will hold sets of cables that slant out and down to connect to the bridge deck.  And I do mean tall.  One of the pylons is complete, and the builders are extending the deck out from it in both directions.  It is by far the tallest structure for hundreds of miles around, and the cantilevered-out parts extend so far that I got a little giddy just looking at the thing.  When it's finished, the bridge will allow much taller ships to pass underneath than the current bridge, and will have a pedestrian walkway and LED lighting.  Its planned cost is some $800 million, but that was before some delays occasioned in 2022 when an outside consultant raised safety concerns.  That halted construction on a part of the bridge for nine months, but the five safety issues were addressed, and construction resumed last April.

 

Later that month, as the Corpus Christi Hooks were playing a baseball game at nearby Whataburger Field (the eponymous fast-food firm's first restaurant was in Corpus Christi), a fire began near the rear of a construction crane on the bridge.  Subsequent videos obtained by news media show a load on the crane falling rapidly to the ground, and flying debris injured one spectator at the ball game.  A battalion chief for the Corpus Christi Fire Department said a cable failure caused enough friction to set grease on a cable reel afire.  An Internet search has not revealed any other major accidents since the bridge project formally began in 2016, but it is possible that some have escaped the news media's attention.

 

Originally scheduled to be completed in 2020, delays and engineering-firm changes have pushed back the anticipated completion date to 2025.  That's only two years from now, and while a good bit has been accomplished, there is still much remaining.

 

While any injuries or fatalities from construction projects are tragic, we have come a long way from the days when it was just an accepted fact that major bridges and tunnels would cost a certain number of human lives. 

 

In 1875, the 4.75-mile Hoosac Tunnel was completed in the hills of Western Massachusetts.  It took twenty years to build, and 135 verified deaths were associated with the project.  One of the worst accidents happened when a candle in the hoist building at the top of a ventilation shaft caught a naphtha-fueled lamp on fire, and the wooden hoist structure burned and collapsed down the shaft, trapping 13 workers at the bottom, who suffocated.  While the hazards were severe enough to inspire a workers' strike in 1865, this failed to stop construction, and the following year saw the highest number of fatalities: fourteen.

 

In 1937, San Francisco's Golden Gate Bridge opened after four years of work.  Its safety record was better than the Hoosac Tunnel, and would have been almost perfect except for a scaffolding failure that sent twelve men plunging through the safety net to the bay.  Two survived, and there was one other unrelated fatality, making a total of eleven.  Nineteen men fell into the safety net and survived to form an exclusive group they called the Half Way to Hell Club.

 

This completely unscientific survey of major construction project fatalities and injuries seems to indicate that over time, we as a culture in the U. S. have grown less tolerant of having workers killed on the job.  Credit for this improvement can be parceled out in a number of directions.

 

The contractors and engineers in charge of construction projects deserve a good share of the credit.  They are the ones who determine how the work will be done, and how important safety is compared to the bottom-line goal of getting the job done. 

 

The increased mechanization of construction labor has to be another factor.  One modern construction worker equipped with the proper tools can do the work that required several workers decades or a century ago.  So the simple fact that fewer people are needed to do a given job has made it less likely that people will be injured or killed on the job.

 

Government agencies—federal, state, and local—also deserve some credit.  The Federal Occupational Safety and Health Administration (OSHA) was founded in 1970, and its influence has undoubtedly led to safety improvements, although doing a cost-benefit analysis of OSHA would be a daunting task even for a team of historians and safety analysts. 

 

Labor unions hold safety as a high priority, and while there are probably statistics supporting the contention that unionized workers have better safety records than non-union employees, a lot of non-union workers manage to work safely too. 

 

If the worst accident that happens during the construction of the new harbor bridge in Corpus Christi turns out to be the flying-debris crane mishap, that will be a truly exemplary record for a project that will have taken nearly a decade and cost nearly a billion dollars.  The project still has a long way to go, and it's possible that the most hazardous operations lie in the future:  putting the rest of the cables in place and connecting the deck to finish off the bridge.  But the contractors have enforced safety sufficiently to get this far with no major incidents, and the hope is that this trend will continue.

 

The other question about the bridge is, of course, will it stay put once it's built?  The original engineering firm for the bridge, FIGG, was kicked off the project in 2022 after an independent review.  FIGG, by the way, was involved in the ill-fated Florida International University pedestrian bridge that collapsed in March of 2018.  Recent reports indicate that all the engineering concerns have been adequately addressed, but we won't know for sure until the bridge is finished and has withstood its first hurricane.  Stay tuned.

 

Sources:  I referred to the following sources:  https://www.kiiitv.com/article/news/local/new-video-port-of-cc-cameras-show-initial-fire/503-52b32589-f72c-4653-926b-2160c83fa3fd, https://www.kristv.com/news/local-news/a-crane-fire-seen-from-up-above-and-felt-from-down-below, https://www.tpr.org/news/2022-09-16/construction-on-new-corpus-christi-bridge-halted-as-engineers-say-design-flaws-could-lead-to-collapse, https://www.kiiitv.com/article/news/local/whataburger-field-spectator-hospitalized/, and https://practical.engineering/blog/2022/9/15/what-really-happened-at-the-new-harbor-bridge-project, as well as the Wikipedia articles on the Hoosac Tunnel and the Golden Gate Bridge.

Monday, November 20, 2023

Cruise Gets Bruised

 

General Motors' autonomous-vehicle operation is called Cruise, and just last August, it received permission (along with Google's Waymo) to operate driverless robotaxis in San Francisco at all hours of the day and night.  This made the city the only one in the United States with two competing firms providing such services.

 

Only a week after the California Public Utilities Commission acted to allow 24-hour services, a Cruise robotaxi carrying a passenger was involved in a collision with a fire truck on an emergency call.  The oncoming fire truck had moved into the robotaxi's lane at an intersection surrounded by tall buildings and controlled by a traffic light, which had turned green for the robotaxi.  Engineers for Cruise said that the robotaxi identified the emergency vehicle's siren as soon as it rose above the ambient noise level, but couldn't track its path until it came into view, by which time it was too late for the robotaxi to avoid hitting it.  The passenger was taken to a hospital but was not seriously injured.

 

As a result, Cruise agreed with the Department of Motor Vehicles to reduce their active fleet of robotaxis by 50% until the investigation by DMV of this and other incidents was resolved.

 

In many engineering failures, warning signs of a comparatively minor nature appear before the major catastrophe, which usually attracts attention by loss of life, injuries, or significant property damage.  These minor signs are valuable indicators to those who wish to prevent the major tragedies from occurring, but they are not always heeded effectively.  The Cruise collision with a fire truck proved to be one such case.

 

On Oct. 2, a pedestrian was hit by a conventional human-piloted car on a busy San Francisco street.  This happens from time to time, but the difference in this case was that the impact sent the pedestrian toward a Cruise robotaxi.  When the unfortunate pedestrian hit the robotaxi, the vehicle's system interpreted the collision as "lateral," meaning something hit it from the side.  In the case of lateral collisions, the robotaxi is programmed to stop and then pull off the road to keep from obstructing traffic.

 

What the system didn't take into account was that the pedestrian was still stuck under one of the robotaxi's wheels, and when it pulled about six meters (20 feet) to the curb, it dragged the pedestrian with it, causing critical injuries. 

 

California regulators reacted swiftly.  The DMV revoked Cruise's license, and the firm announced it was going to suspend driverless operations nationwide, including a few vehicles in Austin, Texas, and other locations.  There were only 950 vehicles in the entire U. S. fleet, so the operation is clearly in its early stages. 

 

But now, after Cruise has done software recalls for human-piloted vehicles as well as driverless ones, it is far from clear what their path is back to viability.  According to an AP report, GM had big hopes for substantial revenue from Cruise operations, expecting on the order of $1 billion in 2025 after making only about a tenth of that in 2022.  And profitability, which would require recouping the billions GM already invested in the technology, is even farther in the future than it was before the problems in San Francisco.

 

The consequences of these events can be summarized under the headings of good news and bad news.

 

The good news:  Nobody got killed, although being dragged twenty feet under the wheel of a robot car that clearly has no idea what is going on might be a fate worse than death to some people.  After failing to heed the warning incident in August, Cruise has finally decided to react vigorously with significant and costly moves.  According to AP, it is adding a chief safety officer and asking a third-party engineering firm to find the technical cause of the Oct. 2 crash.  So after clearly inadequate responses to the earlier incidents, a major one has motivated Cruise management to act. 

 

The bad news:  Several people were injured, at least one critically, before Cruise realized that, at least in the complex environs of San Francisco, their robotaxis posed an unacceptable risk to pedestrians.  I'm sure Google's Waymo vehicles have a less-than-perfect safety record, but whatever start-up glitches they suffered are well in the past.  Cruise does not have the luxury of experience that Waymo has, and is in a sense operating in foreign territory.  Maybe Detroit would have been a better choice than San Francisco for a test market, but that would have neglected the cool factor, which is after all what is driving the robotaxi project in the first place.

 

Back when every elevator had a human operator, there was a valid economic and engineering argument to replace the manually-controlled units with automatic ones.  The main reason was to eliminate the salary of the operator.  Fortunately, the environment of an elevator is exceedingly well defined, and the relay-based technology of the 1920s sufficed to produce automatic elevators which met all safety requirements and were easy enough for the average passenger to operate.  Nevertheless, some places clung to manual elevators as recently as the 1980s, as I recall from a visit to a tax consultant in Northampton, Massachusetts, whose office was accessed by means of an elevator controlled not by buttons but by a rather seedy-looking old man.

 

Being an old man myself now, I come to the defense of everyone on the street who would like to confront real people behind the wheel, not some anonymous software that may—may!—figure out I'm a human being and not a tall piece of plastic wrap blowing in the wind, in time to stop before it hits me.  Yes, robotaxis are cool.  Yes, they save on taxi-driver salaries, but this ignores the fact that one of the few entry-level jobs that recent immigrants to this country can get which actually pays a living wage is that of taxi driver, many of whom are independent entrepreneurs. 

 

Robotaxis may be cool, but dangerous they should not be.  GM may patch up their Cruise operation and get it going again, but then again it may go the way of the Segway.  Time will tell.

 

Sources:  An article from the USA Today Network in the online Austin American-Statesman for Nov. 16, 2023 alerted me to the fact that Cruise was ramping down its nationwide operations, including those in Austin.  I consulted AP News articles at https://apnews.com/article/cruise-general-motors-pedestrian-recall-software-crash-bf08c0c6e7914649750b4dde598af5fc

(Nov. 8, 2023) and at https://apnews.com/article/san-francisco-cruise-robotaxi-crash-e721a81c1366c71a03c0aa50aa2e98f3 (Aug. 19, 2023). 

Monday, November 13, 2023

Should Social Media Data Replace Opinion Polls—And Voting?

 

Pity the poor opinion pollsters of today.  Their job has been mightily complicated by the rapidly changing nature of communications media and the soaring costs of paying real people to do real things such as knocking on doors and asking questions.  In an age when even the Census Bureau has mostly abandoned the in-person method of counting the population, opinion polls can't compete either.  For a time—say 1950 to 2000—their job was made easier by the advent of the near-universal telephone.  But the rise of robocalling, mobile phone proliferation with the caller ID feature, and the consequent general aversion of nearly everybody to answering a call from someone you don't know, has made it much harder for opinion poll workers to approach the ideal of their business:  a truly representative sample of the relevant population.

 

So why not take advantage of the technological advances we have, and use data culled from social media to do opinion polling?  After all, we are told that some social-media and big-tech firms know more about our preferences than we do ourselves.  Out there in the bit void is a profile of everyone who has anything to do with mobile phones, computers, or the Internet—which is almost everyone, period.  And much of that data on people is either publicly available or can be obtained for a price that is a lot less than paying folks to walk around in seventeen carefully selected cities and countrysides knocking on one thousand doors. 

 

Well, anything a piker like me can think of, you can bet smarter people have thought of as well.  And sure enough, three researchers at the University of Lausanne in Switzerland have not only thought of it, but have collected nearly two hundred papers by other researchers who have also looked into the topic. 

 

In surveying the literature, Maud Reveilhac, Stephanie Steinmetz, and Davide Morselli apparently did not find anyone who has gone all the way from traditional opinion polling to relying mainly on social-media data (or SMD for short).  That is a bridge too far even now.  But they found many researchers trying to show how SMD can complement traditional survey data, leading to new insights and confirming or disconfirming poll findings.

 

With regard specifically to political polls, a subject many of the papers focused on, one can imagine a kind of hierarchy, with one's actual vote at the top.  Below that is the opinion a voter might tell a pollster in response to the question, "If the Presidential election were held today, who would you vote for?"  And below that, as far as I know, anyway, are the actions the voter takes on social media—the sites visited, the tweets subscribed to, the comments posted, etc. 

 

It only stands to reason that there is some correlation among these three classes of activity.  If someone watches hours of Trump speeches and says they are going to vote for Trump, it would be surprising to find that they actually voted for Bernie Sanders as a write-in, for example. 

 

But there is a time-honored tradition in democracies that the act of voting is somehow sacred and separate from anything else a person happens to do or say.  Because voting is the exercise of a right conferred by the government, in the moment of voting a person is acting in an official capacity.  It is essentially the same kind of act as when a governor or president signs a law, and should be safeguarded and respected in the same way.  A president may have said things that lead you to think he will sign a certain law.  He may even say he'll sign it when it comes to his desk.  But until he actually and consciously signs it, it's not yet a law.

 

There are laws against bribing executives and judges in order to influence their decisions, and so there are also laws against paying people to vote a certain way.  That is because in a democracy, we expect the judgment of each citizen to be exercised in a conscious and deliberate way.  And bribes or other forms of vote contamination corrupt this process.

 

Despite the findings of the University of Lausanne researchers that so far, no one has attempted to replace opinion polls wholesale with data garnered from social media or other sources, the danger still exists.  And with the advent of AI and its ability to ferret out correlations in inhumanly large data sets, I can easily imagine a scenario such as the following.

 

Suppose some hotshot polling organization finds that they can get a consistently high correlation between traditional voting, on the one hand, and "polling" based on a sophisticated use of social media and other Internet-extracted data—data extracted in most cases without the explicit knowledge of the people involved.  Right now, that sort of thing is not possible, but it may be achievable in the near future.

 

Suppose also that for whatever reason, participation in actual voting plummets.  This sounds far-fetched, but already we've seen how one person can singlehandedly cast effective aspersions on the validity of elections that by most historical measures were properly conducted. 

 

Someone may float the idea that, hey, we have this wonderful polling system that predicts the outcomes of elections so well that people don't even have to vote!  Let's just do it that way—ask the AI system to find out what people want, and then give it to them.

 

It sounds ridiculous now.  But in 1980, it sounded ridiculous to say that in the near future, soft-drink companies will be bottling ordinary water and selling it to you at a dollar a bottle.  And it sounded ridiculous to say that the U. S. Census Bureau would quit trying to count every last person in the country, and would rely instead on a combination of mailed questionnaires and "samples" collected in person. 

 

So if anybody in the future proposes replacing actual voting with opinion polls that people don't actually have to participate in, I'm here to say we should oppose the idea.  It betrays the notion of democratic voting at its core.  The social scientists can play with social-media data all they want, but there is no substitute for voting, and there never should be.

 

Sources:  The paper "A systematic literature review of how and whether social media data can complement traditional survey data to study public opinion," by Maud Reveilhac, Stephanie Steinmetz, and Davide Morselli appeared in Multimedia Tools and Applications, vol. 81, pp. 10107-10142, in 2022, and is available online at https://link.springer.com/article/10.1007/s11042-022-12101-0.

Monday, November 06, 2023

The Biden Administration Tackles AI Regulation—Sort Of

 

In our three-branch system of government, the power of any one branch is intentionally limited so that the democratic exercise of the public will cannot be thwarted by any one branch going amok.  This division of power leads to inefficiency and sometimes confusion, but it also means that the damage done by any one branch—executive, legislative, or judicial—is limited compared to what a unified dictatorship could do.

 

We're seeing the consequences of this division in the recent executive order announced by the Biden administration on the regulation of artificial intelligence (AI).  One take on the fact sheet that preceded the 63-page order itself appeared on the website of IEEE Spectrum, a general-interest magazine for members of IEEE, the largest organization of professional engineers in the world. 

 

It's interesting that reactions from most of the technically-informed people interviewed by the Spectrum editor were guardedly positive.  Lee Tiedrich, a distinguished faculty fellow at Duke University's Initiative for Science and Society, said ". . . the White House has done a really good, really comprehensive job."  She thinks that while respecting the limitations of executive-branch power, the order addresses a wide variety of issues with calls to a number of Federal agencies to take actions that could make a positive difference.

 

For example, the order charges the National Institute of Standards and Technology (NIST) with developing standards for "red-team" testing of AI products for safety before public release.  Red-team testing involves purposefully trying to do malign things with a product to see how bad the results can get.  Although NIST doesn't have to do the testing itself, coming up with rigorous standards for such testing in the manifold different circumstances that AI is being used for may prove to be a challenge that exceeds the organization's current capability.  Nevertheless, you don't get what you don't ask for, and as a creature of the executive branch, NIST is obliged at least to try.

 

The U. S. Department of Commerce will develop per this order "guidance for content authentication and watermarking to clearly label AI-generated content."  Cynthia Rudin, a Duke professor of computer science, sees that some difficulty may arise when the question of watermarking AI-generated text comes up.  Her point seems to be that such watermarking is hard to imagine other than seeing (NOTE:  AI-GENERATED TEXT) inserted every so often in a paragraph, which would be annoying, to say the least.  (You have my guarantee that not one word of this blog is AI-generated, by the way.)

 

Other experts are concerned about the use of data sets for training AI-systems, especially the intimidatingly-named "foundational AI" ones that are used as a basis for other systems with more specific roles.  Many training data sets include a substantial fraction of worldwide Internet content, including millions of copyrighted documents, and concern has been raised about how copyrighted data is being exploited by AI systems without remuneration to the copyright holders.  Susan Ariel Aaronson of George Washington University hopes that Congress will take more definite action in this area to go beyond the largely advisory effect that Biden's executive order will have.

 

This order shares in common with other recent executive orders a tendency to spread responsibilities widely among many disparate agencies, a feature that is something of a hallmark of this administration.  On the one hand, this type of approach is good at addressing an issue that has multiple embodiments or aspects, which is certainly true of AI.  Everything from realistic-looking deepfake photos, to genuine-sounding legal briefs, to functioning computer code has been generated by AI, and so this broad-spectrum approach is an appropriate one for this case.

 

On the other hand, such a widely-spread initiative risks getting buried in the flood of other obligations and tasks that executive agencies have to deal with, ranging from their primary purposes (NIST must establish measurement standards; the Department of Commerce must deal with commerce, etc.) and other initiatives such as banning workplace discrimination against LGBT employees, one of the things that Biden issued an executive order for in his first day of office.  This is partly a matter of publicity and public perception, and partly a question of priorities that the various officials in charge of the various agencies set.  With the growing number of Federal employees, it's an open question as to what administrative bang the taxpayer is getting for his buck.  Regulation of AI is something that there is widespread agreement on—the extreme-case dangers have become clearer in recent months and years, and nobody wants AI to take over the government or the power grid and start treating us all like lab rats that the AI owner has no particular use for anymore. 

 

But how to avoid both the direst scenarios, as well as the shorter-term milder drawbacks that AI has already given rise to, is a thorny question, and the executive order will only go a short distance toward that goal.

 

One nagging aspect of AI regulation is the fact that the new large-scale "generative AI" systems trained on vast swathes of the Internet are starting to do things that even their developers didn't anticipate:  learning languages that the programmers hadn't intended the system to learn, for example.  One possible factor in this uncontrollability aspect of AI that no one in government seems to have considered, at least out loud, is dwelled on at length by Paul Kingsnorth, an Irish novelist and essayist who wrote "AI Demonic" in the November/December issue of Touchstone magazine.  Kingsnorth seriously considers the possibility that certain forms and embodiments of AI are being influenced by a "spiritual personification of the age of the Machine" which he calls Ahriman. 

 

The name Ahriman is associated with a Zoroastrian evil spirit of destruction, but Kingsnorth describes how it was taken up by the theosophist Rudolf Steiner, and then an obscure computer scientist named David Black who testified to feeling "drained" by his work with computers back in the 1980s.  The whole article should be read, as it's not easy to summarize in a few sentences.  But Kingsnorth's basic point is clear:  in trying to regulate AI, we may be dealing with something more than just piles of hardware and programs.  As St. Paul says in Ephesians 6:12, ". . . we wrestle not against flesh and blood [and server farms], but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places." 

 

Anyone trying to regulate AI would be well advised to take the spiritual aspect of the struggle into account as well.

 

Sources:  The IEEE Spectrum website carried the article by Eliza Strickland, "What You Need to Know About Biden's Sweeping AI Order" at https://spectrum.ieee.org/biden-ai-executive-order.  I also referred to an article on AI on the Time website at https://time.com/6330652/biden-ai-order/.  Paul Kingsnorth's article "AI Demonic" appeared in the November/December 2023 issue of Touchstone, pp. 29-40, and was reprinted from Kingsnorth's substack "The Abbey of Misrule."

Monday, October 30, 2023

The Advent of Digital Twins: Should They Replace Caregivers?

 

It's 2027, and your father is in a rest home suffering from Alzheimer's disease.  You are considering a new service that takes samples of your voice and videoclips of you, and creates a highly realistic 3-D "digital twin" that your father can talk with on a screen any time he wants to.  The digital twin has your voice and mannerisms, and shows up on your father's phone to remind him to take his medicine and furnish what the company offering the service calls "companionship."  In the meantime, you yourself can simply go about your own life without having to do the largely tedious work of getting your father to take care of himself. 

 

Should you go ahead and pay for this service?  Or should you just continue with your daily visits to him, visits that are becoming increasingly inconvenient?

 

I put this scenario a few years in the future, but already academics are considering the ethical implications of using digital twins in healthcare.  Matthias Braun, an ethicist at the Friedrich-Alexander University in Germany, thinks that the answer to this question depends on the issue of how much control the original of the twin exerts over it.  Applying that notion to the situation I just outlined, who is involved, and what benefits and harms could result?

 

The people involved are you, your father, and the organization providing the digital twin.  Digital twins are not people—they are software, so while the digital twin is at the core of the issue, it has no ethical rights or responsibilities of its own. 

 

Consider your father first.  It may be that his mind is so fogged by Alzheimer's disease that he may be completely fooled into thinking he is talking on the phone with and watching you, when in fact he's speaking with a sophisticated piece of software.  So by means of the digital twin, your father may well be persuaded to believe something that is not objectively true. 

 

But people who deal with Alzheimer's patients know that sometimes the truth has to be at least elided, if not downright falsified.  When my wife's father with dementia lived with us, he would often ask, "Where's your mother?"  His wife had died some years previously.  An answer like, "She's not here right now," doesn't strictly violate the truth, but leaves an impression that is false.  Nevertheless, it's likely to be a less disruptive reply than something like, "You dummy!  Don't you remember she died in 2007?"

 

Then consider you.  One alternative to providing the digital twin is to hire a full-time personal caregiver, as some people can afford to do.  Besides the expense, there is the question of whether your father will get along with such a person.  While my father-in-law was with us, we tried hiring a caregiver for limited times so that my wife and I could get a few hours' break from continuous 24-hour caregiving.  Unfortunately, the caregiver—an older man—didn't appeal to his patient, and after one such visit we got an earful of complaints about "that guy," and it didn't work out.  So in addition to being expensive, personal caregivers don't always do the job the way you hoped they would.

 

From your perspective, the digital-twin caregiver has the advantage that if successful, your father will think he is really talking with a very familiar person, and is more likely to follow instructions than if a stranger is dealing with him.

 

So where's the harm?  What could possibly go wrong?

 

Consider hacking.  No computer system is 100% secure, and the opportunities for mischief ranging from random meddling to theft and murder are obviously present if someone managed to gain control of the digital twin's software.  It wouldn't be easy, but a lot of very difficult hacks have been carried out by criminals in the past, and if the motivation is there, they will find a way sooner or later. 

 

Even if criminals aren't interested in messing with digital-twin rest-home caregivers, what if your father starts to like the digital twin more than he likes your real physical presence?  After all, a digital twin could be programmed to have nearly infinite patience in dealing with the repeated questions that dementia patients often ask—"Where's your mother?" being a prime example.  How would you feel if you visit your father some day and he says, "I like you a lot better on the screen than I like you now."? 

 

And even if the digital twin doesn't manage to alienate you, the original of its copy, I can't rid myself of a feeling of distaste that if the twin succeeds in fooling your father into thinking it's really you, a species of fraud has been committed.

 

At a minimum, even a successful digital-twin substitution would mean that once again in our digital world, an "I-thou" relationship, in Martin Buber's terms, has been replaced by an "I-it" relationship.  Instead of continuing one of the most meaningful relationships anyone can have in this life—the relationship with one's father—that relationship would be replaced by one that connects your father to a machine.  Yes, a sophisticated machine, a machine that tricks him into thinking he's talking with you, but a machine nonetheless.  In the greater scheme of things, and even leaving religious considerations aside, it's hard to believe that both you and your father would be ultimately better off if your father spent his days talking with a computer and you went about whatever other business you have instead of spending time with him. 

 

Digital twins are not yet so thick on the ground that we have to deal with them as a routine thing—not yet.  But if the momentum of generative AI keeps up its current pace, it is only a matter of time before they will be a genuine option, and we'll have to decide whether to use them, not only in a medical context but in many others as well.  We should sort out what is right and wrong about their use now, before it's too late.

 

Sources:  Matthias Braun's article "Represent me: please! Towards an ethics of digital twins in medicine"  appeared in 2021 in the Journal of Medical Ethics, vol. 47, pp. 394-400. 

Monday, October 23, 2023

Carbon Indulgences: The South Pole Scandal

 

Repentance is hard.  Obtaining true forgiveness is even harder.  So it is no surprise that over the ages, people have tried to find shortcuts around the difficult chores of changing one's ways and being forgiven for going astray.  This week's New Yorker carries the story of one such effort:  the activities of the world's largest carbon-offsetting firm, South Pole, turn out to have been something of a shell game.

 

Carbon offsetting is based on the notion, accepted as gospel in many circles, that by using fossil fuels, humans are committing slow mass suicide that can be averted only by striving toward "net zero"—that is, not adding any more CO2 to the atmosphere than we take out.  A logical consequence of this notion is that doing anything that produces CO2 is the secular equivalent of a sin in Christian theology.  Unfortunately, this type of sinning is extremely hard to avoid, because ordinary things like turning on the lights, driving a car, flying, or running a business—especially a manufacturing business—necessitate committing manifold sins of this kind. 

 

As in the days of old when people sought out Catholic priests to get their sins absolved, people and corporations today want to get the same feeling of being washed clean of their carbon misdeeds, but without facing the hard tasks of doing without fossil fuels altogether.

 

Enter South Pole, and smaller outfits like them.  South Pole sells "carbon offsets."  The basic idea is simple:  if you want to make up for your carbon sins, conveniently measured in millions of tons of CO2, you simply pay South Pole the going rate, which has varied widely over the years like any other commodity.  And voila!—South Pole promises to preserve a Zimbabwean forest that would otherwise be cut down, and those precious trees will absorb not only your carbon sins, but those of all the other companies paying millions for carbon offsets.

 

Deforestation is another secular sin we've heard a lot about, so it makes a certain amount of sense to pay villagers not to cut down trees.  That's fine as long as the villagers really do refrain from destroying the forest, and also if it was certain that they would have cut it down otherwise.  You can already see a potential problem here, involving long-term hypotheticals.  How sure are we that the forest in question would have been destroyed if the offset money wasn't paid?  And how sure can we be that most of the money is really getting to the villagers whose behavior has to change?

 

Not so sure, it turns out.  Heidi Blake's article describes in great detail the dubious accounting of one Steve Wentzel, who was South Pole's man in Zimbabwe charged with actually implementing the forest preservation.  According to Blake, Wentzel promised a great deal more than he delivered.  While he still claims to have prevented the requisite amount of deforestation, he has no paper trail to prove it, and says that the erratic Zimbabwean economy and currency forced him to do what amounted to money-laundering in order to get U. S. currency with which to pay the villagers. 

 

Blake describes how one employee after another of South Pole left the organization once they realized that the firm was taking money mainly to make its customers feel better, not because they were doing anything objectively to improve the world's climate crisis.    In addition, the price of carbon offsets has gyrated wildly in recent years, affected by such things as the failure of the Kyoto Protocol agreement to commit most major carbon-emitting countries to substantial reductions. 

 

This situation reminds me of an episode in the Protestant Reformation that involved what are called indulgences.  In order to be fair to the Catholic side, I'm going to quote directly from the Catholic Encyclopedia (published around 1914) as to what an indulgence is:  "An indulgence is the extra-sacramental remission of the temporal punishment due, in God's justice, to sin that has been forgiven, which remission is granted by the Church in the exercise of the power of the keys, through the application of the superabundant merits of Christ and of the saints, and for some just and reasonable motive."  Preceding that is a definition of what an indulgence is not, which includes the following:  "It is not a permission to commit sin, nor a pardon of future sin; . . . [i]t is not the forgiveness of the guilt of sin . . . . "

 

The basic idea, as this Protestant understands it, is this.  Only God through Jesus Christ's atonement really forgives sins.  But even if a sin is forgiven, there remains "temporal punishment," meaning that souls who have died without being fully cleansed of their venial sins have to undergo some suffering in Purgatory before going to Heaven.  And, according to Catholic doctrine, prayers and good works by the living on behalf of those suffering in Purgatory can help them get out sooner than otherwise.

 

In the 16th century, someone (it isn't clear who) came up with the following jingle in Germany:  "Sobald der Pfenning im Kasten klingt, die Selle aus dem Fegfeuer springt."  In English:  "As soon as the coin in the coffer rings, the soul out of purgatory springs."  In other words, if you give me money, I can guarantee your Aunt Bertha will get out of Purgatory.  It was associated with one Johann Tetzel, who apparently preached in the spirit of such a saying without using the actual words. 

 

The misuse of indulgences was one of the inspirations for the Protestant Reformation, and although the Church still offers indulgences, it no longer puts a price on them. 

 

South Pole appears to be a modern-day secular equivalent of Johann Tetzel, promising more than it can possibly deliver.  But in a free market, the price of forgiveness can soar, and those who trade in it can profit mightily.  What they do with the money is another question.  The carbon-offset concept seems so inherently open to abuse that I, for one, think it should be abandoned for more practical short-term efforts to deal with the consequences of climate change.  But there will always be those seeking secular forgiveness, and those willing to sell it for a good price.

 

Sources:  Heidi Blake's article "Hot Air" appears on pp. 42-55 of the Oct. 23, 2023 issue of The New Yorker.  I found the German version of Tetzel's non-quote at James Swan's blog https://beggarsallreformation.blogspot.com/2012/01/did-tetzel-really-say-as-soon-as-coin.html, and referred to the online Catholic Encyclopedia definition of indulgence at https://www.newadvent.org/cathen/07783a.htm.