Monday, July 31, 2023

Uber Backup Driver in Self-Driving Car Fatality Pleads Guilty

 

When Elaine Herzberg started to cross a dark street in Tempe, Arizona, with her bicycle on Mar. 18, 2018, she didn't see the Volvo approaching.  Unfortunately for her, it wasn't an ordinary Volvo.  It was an experimental self-driving vehicle operated by Uber, and behind the wheel was Rafaela Vasquez, whose job was to monitor the car's performance and intervene before anything serious went wrong.  The Volvo was equipped with factory-installed safety features such as automatic braking, but Uber had disabled those features because they interfered with Uber's own self-driving system.  It was Vasquez's job to stop the car if it was about to run over something—or somebody.

 

Video footage recorded at the time showed that Vasquez was looking at something in her lap instead of watching where the car was going.  Prosecuting attorneys said it was a TV show she was streaming on her phone.  Her defense attorney claimed it was a messaging program on a work cellphone.  Whichever it was, it distracted Vasquez enough so that by the time she noticed Herzberg only a few feet ahead of the car, it was too late to do anything.  Herzberg died in the accident and Vasquez was charged with negligent homicide.  Last Friday, she pled guilty to a lesser charge and was sentenced to three years of supervised probation.

 

Most sources agree that this was the first pedestrian fatality involving a self-driving car.  As such, it presented something of a puzzle to the prosecutors.  Clearly, Vasquez was not just an ordinary driver—her role was to monitor the car's performance and intervene only when it looked like it was getting into trouble.  Numerous other parties were involved as well:  Volvo itself, whose automatic braking features were disabled; Uber engineers who developed the self-driving features being tested; and Uber supervisors who issued orders and instructions to Vasquez.

 

More than five years have passed since the accident, and the charge Vasquez pled guilty to is not the one she was originally charged with.  In a plea deal, she agreed to plead guilty to an "undesignated felony," which apparently can be converted into a misdemeanor if she successfully completes three years of supervised probation.  Vasquez is a convicted felon, having served prison time for attempted armed robbery, so she is no newcomer to the criminal justice system.  While Uber deserves credit for employing ex-convicts, the company's judgment appears to have erred in this case.

 

When I blogged on this accident back in 2018, the autonomous-vehicle landscape was very different.  Self-driving cars were a novelty and found mostly in experimental trials in a few cities.  Since that time, things have progressed in that field, but probably not as fast as its promoters wished.  Tesla, which has probably fielded the largest number of partially self-driving cars of any U. S. manufacturer, is currently under scrutiny by the National Highway Traffic Safety Administration for accidents involving its so-called Autopilot feature, which drives the car without direct human-driver intervention but should not be used without constant monitoring for bad behavior.  I can testify that these days, I see at least one Tesla, and maybe several, almost any time I'm out on the road for any length of time.  As the more capable versions of Autopilot are an expensive option for an already costly car, I don't know how many drivers have them or use them, but so far I haven't seen a Tesla tooling down the expressway with nobody at the wheel.

 

Because the current driving environment is so complex, the ultimate vision of so-called Level 5 autonomous vehicles, which could drive anywhere a human could without any human intervention at all, may never come to pass.  Fully autonomous vehicles can work in highly restricted and controlled environments such as open-pit mines, but the average city street is full of so many surprises and hard-to-identify obstacles that current technology cannot be trusted to navigate it without human help.

 

If we are to realize the dream of totally autonomous cars, we might have to accept some geographic restrictions that will not be popular.  For example, if certain streets or blocks were designated for autonomous vehicles only, and no-jaywalking laws were strictly enforced, the environment could be modified enough so that fully autonomous Level 5 cars would work with reasonable safety.  But that would require coordination among local, state, and national governments, as well as car manufacturers, that is so far lacking and may never be achieved.

 

A few people have always enjoyed the equivalent of fully-autonomous cars:  those who can afford to hire a chauffeur.  The fact that some rich folks are driven around by hired drivers has had negligible impact on the transportation system so far, and if such an experience never makes it to prime time via the development of Level 5 autonomous vehicles, it will not signal the failure of transportation technology in general.  The only people who would really benefit in a major way from Level 5 vehicles are those who cannot drive:  the handicapped and disabled, the elderly, and children.  I am told that children used to ride the New York subway system unsupervised all the time, and some may still do so.  But we have a long way to go before anyone would trust their five-year-old to get in an autonomous vehicle for a ride to day care.

 

That dark night in Tempe, Rafaela Vasquez unwittingly made history through her negligence in trusting too much to the self-driving capabilities of the experimental Uber-modified vehicle she was hired to supervise.  The same mistake is being made by people who don't follow instructions to keep their hands on the wheel of an Autopilot-equipped Tesla, and the unfortunate thing is that they often get away with it.  But sometimes they don't, and that's what the NHTSA is looking into.  Uber was not charged in the Tempe accident, probably because there was evidence that the company had told Vasquez to be vigilant and she clearly failed to be.  But it's human nature to assume that if you can get away with something for a long time, you'll be able to get away with it indefinitely.  And the temptation to do that with advanced self-driving features is too great for some people, who should be held responsible, along with their car's manufacturer, when something goes wrong.

 

Sources:  The AP report on the Vasquez trial can be found at https://apnews.com/article/autonomous-vehicle-death-uber-charge-backup-driver-1c711426a9cf020d3662c47c0dd64e35.  I also referred to a CNBC report at https://www.cnbc.com/2023/07/06/nhtsa-presses-tesla-for-more-records-in-autopilot-safety-probe.html.  I blogged on the Tempe accident here and here.

Monday, July 24, 2023

Is AI Better Regulated After the White House Meeting?

 

The easy answer is, it's too soon to tell.  But for a number of reasons, the July 21 meeting between President Biden and leaders of seven Big Tech firms, including Google, Microsoft, and OpenAI, may prove to be more show than substance.

 

Admittedly, there is widespread agreement that some sort of regulation of artificial intelligence (AI) should be considered.  Even industry leaders such as Elon Musk have been warning that things are moving too fast, and there are small but real risks of huge catastrophes lurking out there that could be averted by agreed-upon restrictions or regulations of the burgeoning AI industry.

 

Last Friday's White House meeting of representatives from seven leading AI firms—Amazon, Anthropic, Google, Inflection, Meta (formerly Facebook), Microsoft, and OpenAI—produced a "fact sheet" that listed eight bullet-point commitments made by the participants.  The actual meeting was not open to the public, but one presumes the White House would not publish such things without at least the passive approval of the participants. 

 

Browsing through the items, I don't see many things that a prudent giant AI corporation wouldn't be doing already.  For example, take "The companies commit to internal and external security testing of their AI systems before their release."  Not to do any security testing would be foolish.  External testing, meaning testing by third-party security firms, is probably pretty common in the industry already, although not universal. 

 

The same thing goes for the commitment to "facilitating third-party discovery and reporting of vulnerabilities in their AI systems."  No tech firm worth its salt is going to ignore an outsider's legitimate report of finding a weak spot in their products, and so this is again something that the firms are probably doing already. 

 

The most technical commitment, but again one the companies are probably fulfilling already, is to "protect proprietary and unreleased model weights."  Unversed as I am in AI technicalities, I'm not sure exactly what this means, but the model weights appear to be something like the keys to how a given AI system runs once it's been trained, and so it only stands to reason that the companies would protect assets that cost them a lot of computing time to obtain, even before the White House told them to do so.

 

Four bullet points address "Earning the Public's Trust," which, incidentally, implies that the firms have a considerable way to go to earn it.  But we'll let that pass. 

 

The firms commit to developing some way of watermarking or otherwise indicating when "content is AI generated."  That's all very well, but the answer to such a question is rarely just yes or no.  What if some private citizen takes a watermarked AI product and incorporates it manually into something else that is no longer watermarked?  The intention is good, but the path to execution is foggy, to say the least.
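
 

To make the fragility concrete, here is a toy sketch in Python of a metadata-style "watermark," one of the simplest ways such labeling could be done.  The field names and the scenario are entirely my own illustration, not anything the firms have actually committed to; the point is simply that the label does not travel with the text once a person copies it into a document of their own.

    # Toy illustration only: a metadata-style "watermark" attached to AI output.
    # The field name "ai_generated" is hypothetical, not any firm's actual scheme.
    ai_output = {"text": "A sunset over the bay, painted in watercolor.",
                 "ai_generated": True}

    # A user manually incorporates the generated text into something else...
    my_document = "Notes for my art class: " + ai_output["text"]

    # ...and the resulting artifact carries no machine-readable flag at all.
    print(type(my_document).__name__)   # 'str' -- the metadata did not come along
    print(my_document)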

 

Perhaps the commitment with the most bite is this one:  "The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use."  The wording is broad enough to drive a truck through, although again, the intention is good.  How often, how detailed, and how extensive such reports may be is left up to the company.

 

The last two public-trust items commit the firms to "prioritizing" research into the societal risks of AI, and to use AI to address "society's greatest challenges."  If I decide not to wash my car today, I have prioritized washing my car—negatively, it is true, but hey, I said I'd prioritize it!

 

So what is different about the way these firms will carry out their AI activities after the White House meeting?  A lot of good intentions were aired, and if the firms had enjoyed a lot of public trust in the first place, these good intentions might have found an audience that believes they will be carried out.

 

But the atmosphere of cynicism that has gradually encroached on almost all areas of public life makes such an eventuality unlikely, to say the least.  And this cynicism has arisen due in no small part to the previous doings of the aforementioned Big Tech firms—specifically, their activities in social media.

 

When you compare the health of what you might call the body politic of the United States today with what it was, say, fifty years ago, the comparison is breathtaking.  In 1973, 42% of U. S. residents surveyed said they had either a "great deal" or "quite a lot" of confidence in Congress.  Only 16% said they had "very little" confidence.  In 2023, the share with either a great deal or quite a lot of confidence is only 8%, and fully 48% say they have "very little" confidence in Congress.  While this trend has been developing for years, much of it has occurred only since 2018, after the social-media phenomenon overtook legacy media as the main conduit of political information exchange—if one can call it that.

 

Never mind what AI may do in the future.  We are standing in the wreckage of something it has done already:  it has caused great and perhaps permanent damage to the primary means a nation has of governing itself.  Not AI alone, to be sure, but AI has played an essential role in the way companies have profited from encouraging the worst in people.

 

It would be nice if last Friday's White House meeting triggered a revolution in the way Big Tech uses AI and its other boxes of tricks to encourage genuine human flourishing without the horrific side effects in both personal lives and in public institutions that we have seen already.  But getting some CEOs in a private room with the President and issuing a nice-sounding press release afterwards isn't likely to do that.  It's a step in the right direction, but a step so tiny that it's almost not worth talking about. 

 

Historically, needed technical regulations have come about only when vivid, graphic, and (usually) sudden harm has been caused.  The kinds of damage AI can do are rarely that striking, so we may have to wait quite a while before meaningful AI regulations are even considered.  But in my view, it was already high time years ago.

 

Sources:  The White House press release on the July 21 meeting of AI firms with President Biden can be found at https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.  I also referred to a PBS report on the meeting at https://www.pbs.org/newshour/politics/watch-live-biden-announces-ai-safeguards-after-meeting-with-tech-leaders.  The Gallup poll historical data on confidence in Congress can be found at https://news.gallup.com/poll/1597/confidence-institutions.aspx.

Monday, July 17, 2023

Air-Condition Texas Prisons!

 

An Open Letter To:

 

The Hon. Greg Abbott, Governor of the State of Texas

The Hon. Dan Patrick, Lieutenant Governor and President of the Senate

The Hon. Members of the Texas Senate

 

A July 9 editorial in the Austin American-Statesman alerted its readers to the fact that of the 128,000 or so prisoners incarcerated in Texas prisons, only about 42,000 sleep in air-conditioned cells.  Only 31% of the state's prisons provided air conditioning for prisoners as of May of this year, one of the hottest in recent memory.

 

Just a few weeks ago, facing a $32 billion state surplus, the Texas House voted to spend less than 2% of that sum on the project of air-conditioning most Texas prisons.  At long last, the state that witnessed the first air-conditioned church building (First Presbyterian in Orange, Texas, in 1914) and the first custom air-conditioned automobile was going to extend the benefits of that characteristically Texan technology to its prisoners.

 

But to your shame, Texas Senators and Lt. Gov. Patrick, you declined to act on that measure, and to the extent Gov. Abbott failed to make it a priority during the special sessions, he shares some of the blame. 

 

Here are some objections I can think of to air-conditioning Texas prisons, with the rebuttals to each:

 

* It costs too much to retrofit old structures to be air-conditioned.  It didn't cost too much to retrofit the nineteenth-century pile called the Texas Capitol back in 1955 so that legislators could work in air-conditioned comfort.  And 2% of the surplus doesn't strike me as "too much."

 

*  Prisons haven't had air conditioning before now, so why change? By that argument, we should never have installed indoor plumbing or electricity in them either.  If primitive conditions are what we want prisoners to have, why not make them go to the bathroom in open latrines and rely on kerosene lanterns?  Then we can add plagues of cholera and fires to heat prostration and the other hazards prisoners experience already.

 

* Many prisoners work outside in hot conditions anyway, so they're used to it.  Some people got used to living in Nazi or Soviet prison camps, too, but that doesn't mean it was a good thing.  Most farmers and other outdoor workers manage to work in air-conditioned equipment and environments whenever possible.  Withholding air conditioning all of the time because it can't be experienced some of the time is illogical.

 

*  Prison staff have to endure the heat too.  I am willing to wager that the chief official of each prison has an air-conditioned office.  The Statesman cites a 40% turnover rate for lower-level prison employees, which the lack of air conditioning surely contributes to.  If we simply passed a law that the average temperature in the warden's office could not be more than two degrees lower than that in the hottest cell, we would see a revolution in prison budgets overnight, even at facilities operated by contractors, with funds directed toward air-conditioning cells.

 

Many of you legislators make much of your Christianity.  Do you think God wasn't looking when you neglected to take up the prison air-conditioning bill?  Didn't Jesus say, "Depart from me, you cursed, into the eternal fire prepared for the devil and his angels; for I was hungry and you gave me no food, I was thirsty and you gave me no drink, I was . . . in prison and you did not visit me"?  Air conditioning isn't mentioned by name in that list, but I don't think any of you are literalist enough to miss the implications.

 

I lived in Massachusetts for many years before returning to my native Texas in large part because of the Christian-influenced culture that in many ways reflects my deepest beliefs about life and the universe.  And I still think Texas is the better place to live, at least if you're not in prison.  But if anyone from out of state asks me why Texas, of all places, the leader in the development of air conditioning, has failed to apply this blessing of humanity to some of its most neglected residents—its prisoners—I can only hang my head in shame. 

 

Besides the moral aspect, there are practical ones too.  Sooner or later, confinement to an un-air-conditioned cell in South Texas will lead a lawyer to convince a judge and jury that such is "cruel and unusual punishment," and the Feds will take over the prisons.  Nobody wants that, not even the prisoners, probably.  So before it's too late—and as your average age is 60, too late may be sooner than you think—remedy this sin of omission, and air-condition Texas prisons.

 

Karl D. Stephan, Professor

Ingram School of Engineering

Texas State University

Monday, July 10, 2023

Is the Bloom Off the Self-Driving Rose?

 

Pardon the mixed metaphor—roses don't drive—but I couldn't think of another way to summarize the current prospects for truly autonomous vehicles.  For several years now, we have been promised that self-driving cars are just around the corner.  In particular, Tesla has marketed an expensive option for their electric vehicles called "Full Self-Driving Capability."  But as the NHTSA is asking Tesla for ever more detailed information about how it has deployed and modified its autonomous-driving software since 2014, it's beginning to look like the promised future of leaving all the driving to robot chauffeurs while we nap or play cards in the back seat is nothing more than hype.

 

What is reality right now regarding self-driving cars?  The Society of Automotive Engineers has established six levels of autonomy, ranging from 0 (what a Model T had, requiring you to do everything yourself) up to Level 5.  A Level 5 car could drive you anywhere that is physically accessible by a car without your having to lift a finger. 
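
 

For reference, here is a minimal sketch in Python of the six SAE levels as a simple lookup table.  The one-line descriptions are my own paraphrases of SAE J3016, not the standard's official wording.

    # The six SAE J3016 levels of driving automation, paraphrased.
    SAE_LEVELS = {
        0: "No automation: the human driver does everything",
        1: "Driver assistance: steering or speed support, but not both at once",
        2: "Partial automation: steering and speed support, human must supervise",
        3: "Conditional automation: the car drives itself but may hand control back",
        4: "High automation: no human needed, but only within a limited domain",
        5: "Full automation: drives anywhere a human could, with no intervention",
    }

    for level, description in SAE_LEVELS.items():
        print(f"Level {level}: {description}")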

 

The most advanced self-driving systems currently on the market are Level 3.  A Level 3 system such as that in the Honda Legend, which was introduced only in Japan in limited quantities, really does drive the car without human intervention, but reserves the right to ask the human to take over if things get too hairy.  "Hairy" in this case can mean something as mild as a rainy day, which sets up weird reflections on streets and can confuse even deep-learning AI systems. 

 

Tesla's much-publicized "Full Self-Driving Mode" is only a Level 2 system, because the driver is supposed to be prepared to take over the steering at any time.  People routinely violate this rule, however, which is how several accidents involving supposedly self-driving Teslas have happened.

 

Computer scientist and engineer Anthony Levandowski ought to know the self-driving score if anybody does, as he got into the business way back in 2003 when he teamed with some fellow U. C. Berkeley engineers to enter a self-driving motorcycle in DARPA's 2004 Grand Challenge.  He went on to work for Google, left Google for Uber amid lawsuits charging theft of intellectual property, was convicted of same, and was pardoned by President Trump on Trump's last day in office.

 

If anybody has an insider's perspective on self-driving vehicles, Levandowski does.  What is he up to now?  He runs a company that converts giant open-pit quarry trucks into autonomous vehicles.  And in an interview with Bloomberg News, he says that the kind of highly restricted environment in which such trucks operate may be the best that truly autonomous driving can do for the foreseeable future.

 

While there have been numerous technical advances in sensors, computing power, and AI in the two decades or so that Levandowski has been in the business, self-driving cars are up against a truly astounding opponent:  the average driver.  As the Bloomberg article points out, suppose you see a couple of pigeons on the road ahead.  As an experienced human, you know that pigeons almost always fly away before your car lands on top of them.  And even if you are contending with a particularly dopey or hung-over pigeon whose situational awareness isn't up to snuff, running over a pigeon is not going to ruin your car.  So it's no big deal to see some pigeons in your path if you're driving.

 

But to an autonomous vehicle system that may never have encountered these particular pigeons on this particular stretch of road under these particular lighting and weather circumstances, it's a totally novel experience.  And most prudent programmers will insert a default "brake when in doubt" operation when the system encounters something that might be dangerous.  So what may well happen is that the car suddenly brakes, and the driver following you may not notice in time, leading to a rear-end collision or even a pileup on a busy freeway—all because of a pigeon.
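
 

As a rough illustration of that default, here is a hypothetical sketch in Python of the kind of rule a cautious programmer might write.  The object classes, confidence threshold, and function names are my own inventions for the example, not any manufacturer's actual code.

    # Hypothetical "brake when in doubt" rule: anything in the planned path that
    # is not confidently recognized as harmless triggers braking.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "pigeon", "pedestrian", "unknown"
        confidence: float  # classifier confidence, 0.0 to 1.0
        in_path: bool      # is the object in the planned travel corridor?

    HARMLESS = {"pigeon", "plastic bag", "leaf"}   # illustrative list only
    CONFIDENCE_THRESHOLD = 0.9                     # assumed tuning parameter

    def should_brake(detections):
        """Brake if any in-path object is dangerous or merely uncertain."""
        for d in detections:
            if not d.in_path:
                continue
            # Low confidence or an unfamiliar label defaults to braking,
            # exactly the behavior that can surprise the driver behind you.
            if d.confidence < CONFIDENCE_THRESHOLD or d.label not in HARMLESS:
                return True
        return False

    # A confidently identified pigeon is ignored; the same pigeon seen through
    # rainy-day reflections is not.
    print(should_brake([Detection("pigeon", 0.95, True)]))   # False
    print(should_brake([Detection("pigeon", 0.55, True)]))   # True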

 

Multiply this scenario by the thousands of other ones that come up all the time, and you begin to understand why self-driving taxis are found only in highly restricted areas of certain cities, and why so many of the demonstrations of self-driving cars take place in California, Arizona, and other places where clear skies can be counted on.  Levandowski takes the position that every so-called self-driving car out there is really just a pilot project, and Tesla even makes this explicit, calling its system a "beta version," meaning it's still under development.

 

So when, if ever, are we going to get to have our little cocktail among friends in the back seat, heedless of the weather or the traffic?  I can picture only two situations in which even some people can get to experience this in what remains of my lifetime.

 

One is if cities establish self-driving-only zones in which only self-driving cars with similar operating systems are allowed.  This type of operation will never be a mass-market phenomenon, but it essentially transfers Levandowski's self-driving quarry trucks to dense downtown areas, where the environment can be controlled, perhaps with strictly enforced rules on pedestrians.

 

The other would extend this principle to an entire region or country.  Maybe we should start with Liechtenstein.  Only self-driving cars would be allowed on the roads.  This will probably never fly in the U. S., but it might take place in some dictatorial environment such as China or North Korea—assuming North Koreans ever get rich enough to buy their own cars.  And even then, the rest of the environment is still going to cause lots of problems—pigeons can't read highway laws, and they're going to land on the road anyway.

 

It's kind of a shame, really.  Part of me was looking forward to handing the whole responsibility of driving over to some system that had proven itself at least as trustworthy as your average taxi driver.  (Not the taxi driver I wound up with in Amherst, Massachusetts once, who had been out "fishing" with a six-pack and hit a curb, blew out a tire, stopped at a gas station, got the tire fixed, and still got us to our flight to China somehow.)  But now it looks like that's going to go the way of other Jetson-inspired dreams, like personal flying saucers.  Of course, if they do get batteries good enough to operate self-flying one-person drones, we could rewrite the FAA rules a lot more easily than redoing all the traffic laws.  But you'd still have to deal with those pesky pigeons.

 

Sources:  The Bloomberg News article "Even After $100 Billion, Self-Driving Cars Are Going Nowhere" appeared on Oct. 5, 2022 at https://www.bloomberg.com/news/features/2022-10-06/even-after-100-billion-self-driving-cars-are-going-nowhere.  I also referred to the J. D. Power website for the SAE's six levels of autonomous-vehicle operation at https://www.jdpower.com/cars/shopping-guides/levels-of-autonomous-driving-explained, a CBS News report on the NHTSA's inquiry into Tesla's Autopilot at https://www.cbsnews.com/sanfrancisco/news/tesla-autopilot-driver-assist-system-nhtsa-seeks-details-recent-changes/, and the Wikipedia article on Anthony Levandowski.

Monday, July 03, 2023

Is Science Basic Enough to Support Engineering?

 

The word "basic" is often misused, but its most straightforward meaning is to denote something that lies at the bottom of other things, causally speaking.  Without basic math, you can't do calculus.  Without basic English, you can't make head or tail of Milton's Paradise Lost (and maybe not even with it).  Modern science-based (there's that word again) engineering depends on science in manifold ways:  not just to provide the physical understanding of the materials and processes that engineers work with, but sometimes to give inspiration to novel ideas as well.

 

When the word "basic" is applied to science, we get one of a pair of phrases that are often used to emphasize a contrast, namely basic versus applied science.  J. Britt Holbrook is a philosopher with an appointment in the Department of Humanities at the New Jersey Institute of Technology.  In a new collection of papers on science, technology, and society, he muses on the contrast between basic and applied science, and asks what role the state should play in supporting either one, both, or perhaps neither.

 

In the beginning, there was no state support of science, or as it was called then, natural philosophy.  Philosophers had to make a living in some practical arena (Socrates was reportedly a stone-carver), and do philosophy in their spare time.  A few managed to become teachers of philosophy, but still, one could not expect to earn a government salary by just thinking about the stars.  Your lectures had to attract enough students to pay the rent.

 

It was in nineteenth-century Germany that the state began to realize that supporting science scholarship might actually benefit not only the scientists, but society at large.  Holbrook cites the great Alexander von Humboldt (1769-1859) as one of the main early proponents of such support.  What Humboldt wanted the state to support was not what we would call science, narrowly defined.  Rather, it was the entire range of disciplines worthy of graduate study by both professors and their students, who would not only acquire what the professors had to pass on to them, but would discover new knowledge as well.  This included what we would call social and even moral sciences as well as the natural sciences.  And by 1870 or so, several German universities managed to put Humboldt's vision into action, and put Germany in the lead of what the Germans called Wissenschaft.

 

When we in the U. S. tried to import this pattern, we put our own twist on it, a notably practical one.  The birth of "big science," funded at huge levels by focused government programs, took place during World War II, largely overseen by Vannevar Bush, who directed the Office of Scientific Research and Development (OSRD) during the war.  While the OSRD's most historic product was the nuclear bomb, it also established precedents of funding levels that Bush attempted to maintain after the war.  Although the ensuing National Science Foundation, founded in 1950, was not exactly what Bush had in mind, it embodied elements of both Bush's ideas and Humboldt's notion that supporting scientists to investigate knowledge for knowledge's sake alone would benefit society both directly, in terms of new applications of science via engineering, and indirectly, by raising the tone of the educated populace in general.

 

Holbrook notes that in the following decades, the NSF has steered ever closer to the applied, technological, or engineering side of things.  In 2022, the agency formed its first new directorate in thirty years.  The Technology, Innovation, and Partnerships Directorate will further emphasize research focused on practical applications and technology transfer to industry.  This is only the most recent move in a long-established trend which some observers see as betraying the NSF's original purpose of funding research without regard to its possible future applications.

 

Holbrook sees a real danger in allowing public needs, as perceived by the government, to dictate the direction of science research.  Toward the end of his article, he says " . . . we run the risk that whatever intellectual activity we argue the state should support will be reduced to a mere means to satisfy the desires of the state.  There is no surer way to guarantee an absolutist state than to give in to the idea that all 'science' should support the needs of the state as defined by the state."

 

From my own worm's-eye view of university-based research, I have seen the NSF move from a position in which the technical science content of a proposal was almost all that mattered, to a position in which one's political credentials and even sex and race have a significant effect on the chances of getting funded.  While Holbrook did not directly address these issues, they are a part of the shift he is criticizing.

 

Humboldt's original vision was broader than just funding a technological arms race between countries (e.g., the U. S. versus China) or even providing the direct benefits that new scientific knowledge conveys by means of development in technology and engineering.  He had what may now sound like an old-fashioned and even naive trust that more knowledge would lead to an enlightened public and an atmosphere more favorable to human flourishing.

 

Although Holbrook didn't point this out, Humboldt's idea approaches Plato's notion that evil is simply the result of ignorance.  That is why Plato's ideal state would be headed by a philosopher-king:  the philosophers know more than anyone else and therefore have the best chance of being good.

 

I think the public's recent experience with scientific expertise during COVID-19 showed that scientists, far from being the most moral people of all, have flaws just like the rest of us do.  I agree with Holbrook that putting blinders on scientists and making them study only things of direct interest to the state is liable to cripple both science and the engineering that depends on it.  I believe we have strayed much too far from the seldom-realized ideal of paying scientists to study whatever interests them, whether we can imagine a payday resulting from their efforts or not.  But on the other hand, we shouldn't expect their discoveries to make us better people.  That's not their job.  That's up to us, to do with whatever natural and supernatural help we can find.

 

Sources:  J. Britt Holbrook's essay "An Effective History of the Basic-Applied Distinction in 'Science' Policy" appears on pp. 483-497 of G. Miller, H. M. Jerónimo, and Qin Zhu, Thinking Through Science and Technology (Lanham, MD:  Rowman & Littlefield, 2023).