Monday, May 29, 2023

Will AI End Civilization?

 

Notice I didn't say, "Will AI (artificial intelligence) End Civilization As We Know It?"  Because it will.  It already has.  Civilization as we knew it as recently as five years ago is considerably different from what we have now—better in some ways, worse in others—and a good part of those changes has been due to widespread adoption of AI.  But the speakers in a Mar. 9, 2023 talk put on YouTube by the Center for Humane Technology raise a more fundamental question:  what are the chances that humans as a species will "go extinct" because we lose control of AI?

 

Based on internal evidence, these speakers—Tristan Harris, a co-founder of the Center, and Aza Raskin—are worth listening to.  The Center is in the heart of Silicon Valley and seems to be very well connected with Big Tech insiders, as attested by the fact that they were introduced before their talk by Steve Wozniak, co-founder of Apple.  And in numerous references during the talk, they emphasized that things are happening very fast, so fast that they have to revise the content of their talk almost weekly to keep up with the explosion of AI progress.

 

I'd like to concentrate here on a list that Harris and Raskin showed when they examined the potential downside of the way AI is currently being deployed as a competitive edge by corporations such as Microsoft and Google.  This list appeared after they cited a chilling statistic from a survey of leading AI experts.  A total of 738 experts were asked, "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?"  About half of the researchers think there is a greater than 10% chance of this disaster happening.  To put this result in perspective, Harris and Raskin then showed the burned-out hulk of a crashed airliner and asked in effect, "If 50% of the engineers who designed an airplane thought there was a 10% chance of it crashing, would you get on it?"  Yet we are all being taken down the AI road at breakneck speed by the corporations that see it as a business necessity.

 

OpenAI, a formerly non-profit AI behemoth that was recently turned into a profit-making enterprise, has famously offered its ChatGPT software to the public.  Simply turning powerful AI systems loose on the populace can lead to a number of dire consequences, many of which Raskin and Harris listed and showed examples of.  I'll focus on just a few of these that I think are most likely in the near term.

 

"Trust collapse"—One of the leading features of modern economies is the mutual extension of trust.  If you fundamentally do not trust the person you're dealing with, you will spend most of your time and effort trying to avoid getting cheated, and won't have much energy left to do actual productive business.  In some countries, people with even moderate wealth by U. S. standards feel compelled to erect high masonry walls topped with broken glass around their dwellings, simply because if they don't, they will be robbed as a matter of course.  If messages or communications get so easy to fake that bad actors mimic your most close and trusted colleagues, it's hard to see how we could trust anybody anymore unless they are in the room with us. 

 

"Exponential scams [and] blackmail"—The AI experts seem to be most concerned that eventually, AI will develop a kind of super-con-artist ability that will fool even the cleverest and most sophisticated human being into doing stupid and harmful things.  In an interview on Fox News recently, Elon Musk worried that super-intelligent AI would be so persuasive that it could get us to do the civilizational equivalent of walking off a cliff.  It's hard to imagine a scenario that would make that credible, but I will have more to say about that below.

 

"Automated exploitation of code"—Computerized hacking, in other words.  Harris and Raskin showed an example of just such an activity they had carried out with ChatGPT after they told it in essence, "Hack this code." 

 

"Automated fake religions" and "Synthetic relationships"—I was a little surprised to see religion mentioned, but I put these two consequences together because religion involves the worship of something, or someone, and a synthetic relationship means a human would begin to treat a synthesized AI "person" as real.  Already there have been experiments in which disabled individuals (dementia patients, etc.) have gotten to know AI robots as "caregivers," and it is far from clear whether the patients understood that their new companion was only a pile of wires.  From a utilitarian point of view, there seems to be nothing wrong with this—after all, if we don't have enough real caregivers, why not make robots do the job?  But this approach puts superficial happiness above truth and reality, which is always a mistake.

 

For most of these dire things to happen, some human beings with either evil intent or with a short-sighted eagerness to profit ahead of the competition have to implement AI in a way that corrodes the social contract and pits ordinary human beings against a giant automated system that makes them putty in the hands of the robot.  As C. S. Lewis said long ago in The Abolition of Man, humanity's power over Nature, which has enabled it to produce the amazing advances in AI we see today, is really just the power of some small group of people (call them the controllers) over the rest of humanity—the controlled.  The tendency of all too many AI forecasters—including Harris and Raskin—is to treat AI as a wholly autonomous entity beyond the ability of anyone to control. 

 

While that is not a logical impossibility, the far more likely case is one in which bad actors take control of super-AI and use it for malevolent purposes—purposes which may not seem malevolent at the time, but which turn out to be that way long after we have become dependent on the systems that embody them.  This is a real and present danger, and I hope the scary scenarios portrayed by Harris and Raskin in their talk motivate the major players to stop driving us toward the AI cliff before it's too late—whenever that might be.

 

Sources:  A blog on the Salvo website by Robin Phillips on May 17, 2023 at https://salvomag.com/post/sam-altmans-greatest-fear had a link to the Center for Humane Technology talk by Tristan Harris and Aza Raskin at https://www.youtube.com/watch?v=xoVJKj8lcNQ, and that is how I found out about the talk.  It's over an hour long, but anyone concerned about the current dangers of AI should watch it. 

Monday, May 22, 2023

American Geophysical Union, Heal Thyself

 

The American Geophysical Union (AGU), founded in 1919, is possibly the world's premier association of earth scientists, and numbers among its members many leading climate experts.  I had the privilege of attending its annual Fall Meeting, held last December in Chicago, and I have never seen such a large concentration of scientific expertise in one place before.

 

The AGU publishes a science-newsmagazine called EOS, which summarizes technical and political developments of interest to the 65,000 or so members of the organization.  I mention "political" because, of the many scholarly publications I receive, EOS seems to be one of the most "woke."

 

A good case in point is the article in the May 2023 issue of EOS with the title "The Mental Toll of Climate Change."  A notice at the head of the article reads, "Content Warning:  This article discusses suicide and potential risk factors of suicide."  The author, science writer Katherine Kornei, interviewed mental-health providers and an "environmental psychologist" to explore the stresses brought on by both acute weather events (such as floods, tornadoes, and wildfires) and chronic issues (such as droughts and heat waves).  And all these things are directly linked by the author to climate change.  The few hard-science citations in the article referred to reports and papers that reinforce the notion that basically, anything that happens weather-wise that we don't like is due to climate change.

 

Lest you think that an exaggeration, consider the first such citation.  "In July 2018, an unprecedented heat wave in Japan killed more than a thousand people; researchers later showed that the event could not have happened without climate change."  This is a bold assertion, so I looked up the paper in question.  It was authored by several meteorological researchers in Japan, who used statistical distributions based on a climate model which they admit (in another paper, which I had to track down) ignores atmosphere-ocean interactions and is useful only for modeling periods of up to a few years. 

 

But to a science writer, their paper title ("The July 2018 High Temperature Event in Japan Could Not Have Happened without Human-Induced Global Warming") was too tempting to resist.  Here are a bunch of credentialed scientists saying that this deadly heat wave was the direct result of human activity.  Only when one digs down into the details, as I did, does one find that the model they use leaves out essential features.  Pretending the atmosphere doesn't interact with the ocean may simplify a model, but it ignores well-known phenomena that can completely transform a model's behavior.  And as Steven Koonin pointed out in a book I mentioned recently (Unsettled:  What Climate Science Tells Us, What It Doesn't, and Why It Matters), to say anything meaningful about climate means that you need to take at least 30-year averages of data.  A program that can only look at five years' worth of data is useless for predicting climate events, although I'm sure it has enough free parameters to allow the researchers to obtain the results they wanted, namely, that the heat wave couldn't have happened without climate change.
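
To see why the averaging window matters so much, here is a toy simulation of my own in Python (the trend and noise magnitudes are invented for illustration and come from none of the papers in question): bury a slow warming trend in ordinary year-to-year weather noise and watch how far 5-year and 30-year averages wander for reasons that have nothing to do with the trend.

import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

# Synthetic annual temperature anomalies: a slow 0.02 deg/yr warming
# trend buried in year-to-year noise of about 0.5 deg.  Both numbers
# are made up solely for illustration.
TREND = 0.02
YEARS = 300
anoms = [TREND * t + random.gauss(0, 0.5) for t in range(YEARS)]

def window_wander(window):
    """Std. dev. of detrended window averages: how far an average of
    that length strays from the trend line through noise alone."""
    means = []
    for start in range(YEARS - window):
        w = anoms[start:start + window]
        detrended = [x - TREND * (start + i) for i, x in enumerate(w)]
        means.append(mean(detrended))
    mu = mean(means)
    return (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5

for w in (5, 30):
    print(f"{w:2d}-year averages stray by roughly {window_wander(w):.2f} deg")

The 5-year averages stray more than twice as far as the 30-year ones, which is just the familiar square-root-of-N shrinkage of noise; a model that is only valid for a few years at a stretch is operating entirely inside the noisy regime.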

 

The rest of Kornei's mental-health piece describes how "angry, baffled, and horrified" many people are when they hear that (a) climate change is soon going to bring civilization to a horrible end as we bake, freeze, drown, and/or blow away, and (b) there's nothing we can do about it, or if we do we'll have to go back to subsistence farming with mules and give up electricity and driving. 

 

Well, if I really believed both of those statements, I'd be angry, baffled, and horrified too.  Unfortunately, as Koonin points out in his book, climate scientists have joined forces with government leaders, commercial interests, and science journalists to paint this dismal picture, which Koonin, as an insider, says is highly distorted, to say the least.

 

Tackling the worst problem first, there is no logical way that any statistical model, even a good one (which the Japanese model is not) can "prove" a given weather event would not have happened without global warming.  The only way you can do that is to have two identical Earths going exactly the same way till about 1800 A. D. and then let one exploit fossil fuels and keep the other one from doing so, and see what differences arise in the weather patterns.  This experiment is impossible to do, and while essentially perfect climate and weather models could simulate such a thing, we are probably decades away from having such models, if indeed they can ever be made. 
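
What attribution studies do instead is purely statistical: run a model ensemble with human influences included and another with them removed, estimate the probability of exceeding the event threshold in each, and report the "fraction of attributable risk," FAR = 1 - p_natural/p_forced, a standard quantity in the attribution literature.  Here is a minimal sketch of that calculation; every number in it is invented, and the Gaussian distributions are just stand-ins for real model ensembles.

import random

random.seed(2)

def exceed_prob(mean_c, sd_c, threshold_c, n=100_000):
    """Monte Carlo estimate of P(summer temperature > threshold)."""
    hits = sum(random.gauss(mean_c, sd_c) > threshold_c for _ in range(n))
    return hits / n

THRESHOLD = 35.0  # hypothetical "event" threshold, degrees C

# Two hypothetical model ensembles: without and with human forcing.
p_natural = exceed_prob(31.0, 1.5, THRESHOLD)  # counterfactual world
p_forced = exceed_prob(32.0, 1.5, THRESHOLD)   # world as modeled

far = 1 - p_natural / p_forced  # fraction of attributable risk
print(f"P(event), natural world: {p_natural:.4f}")
print(f"P(event), forced world:  {p_forced:.4f}")
print(f"FAR = {far:.2f}")

A headline claim that an event "could not have happened" without warming is the statement that the estimated p_natural is essentially zero, i.e. that FAR is essentially 1.  That estimate lives entirely in the far tail of the model's counterfactual distribution, which is exactly where a model that leaves out major physics is least trustworthy.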

 

This leads to the second and more serious problem, which is that experts have irresponsibly given in to the temptation to go with the politically-favorable climate-catastrophe narrative in flagrant violation of the principle of not venturing beyond your data.  The Japanese report is a case in point, but there are hundreds of similar publications from all over the world that join the doom-crying chorus.

 

The members of the AGU who have encouraged this sort of thing bear the most responsibility for average citizens who are depressed because of climate change.  Causing the problem, and then hiring a science writer to write about the problem, is the height of something—hypocrisy, irony, stupidity, take your choice. 

 

The AGU should first clean up its own act by not exaggerating and fabricating claims of certain disaster that awaits us unless we voluntarily throw ourselves back to the Stone Age by giving up industrialized energy use.  If as much effort were expended on adapting to and mitigating whatever climate-change effects come our way as is now spent on showing how bad it's going to be and on developing punitive policies that thwart human flourishing, we'd be a lot better off.

 

And the AGU wouldn't have to run articles on how depressed people are about the climate-change crisis that the AGU has played a large role in creating.

 

Sources:  The article by Katherine Kornei "The Mental Toll of Climate Change" appears on pp. 28-33 of the May 2023 issue of EOS.  The two papers I referred to as asserting the connection between climate change and the Japanese heat wave are Imada et al., SOLA, 15A, 8-12 (2019), and Shiogama et al., SOLA, 12, 225-231 (2016).

Monday, May 15, 2023

Would You Buy a Used Tesla from Elon Musk?

 

My father was a loan officer who specialized in auto loans.  In that position, he had to be a good judge of character.  I seem to remember one day he was talking about a fellow he knew, and said something like the following:  "He's stayed out of jail, but I wouldn't buy a used car from him."

 

More and more used-car buyers are going to face something like the headline's question as used electric vehicles (EVs), predominantly but not exclusively Teslas, hit the used-car market.  A recent article by Jamie L. LaReau of the Detroit Free Press, republished by papers in the USA Today network, describes the challenges consumers face in buying a used EV.

 

As you probably know, the single most expensive component in an EV is the battery.  Replacing the entire battery can cost about half the price of the car (e. g. $15,000 for a $30,000 used car).  The difficulty in buying a used EV is figuring out the condition of the battery—what its current range is and how long it will be before it has to be replaced.  Currently, there is no good way to do this.

 

LaReau recommends taking the prospective purchase for a long test drive, preferably a couple of days, and running it on the kind of commuting route you expect to use it for.  If the battery runs precariously low in such a situation, the car may not be for you.  Some types of EVs allow the owner to replace individual faulty cells in the battery, thus avoiding an expensive replacement of the entire battery.  I would imagine that the diagnostics for such a replacement might not be straightforward, and only dealers for that particular model could do such a check.  Other types of EVs make their batteries as a unitary packaged structure that has to be replaced all at once.  So when the battery's performance falls below what is required, there's really no other option but to replace the whole thing.

 

Dave Sargent, whose title is Vice President of Connected Vehicles at the consumer-analytics organization J. D. Power, is quoted as saying that mileage as reported by the odometer is not a good guide to battery condition.  More important is the way the car was driven—highway versus city streets—what the average temperature of its surroundings was (Phoenix or Bangor is bad, Atlanta is good), and how it was charged.  Fast charging, for example, is harder on batteries than the slower overnight charging that most consumers are able to do in their garages.  Also, if the battery was frequently allowed to discharge below 20% capacity, that tends to age it faster than otherwise.

 

In principle, all this data could be (and maybe is) stored somewhere, either on the car's computer or in the manufacturer's remotely gathered database on the vehicle.  If somebody hasn't done this already, it shouldn't be hard to write software that can take such data and make an educated guess as to the overall condition of the battery at the time of sale.  At this time, however, such software doesn't seem to be generally available.
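
As a sketch of what such software might look like, consider the following.  The field names, weights, and penalties are all invented; I am not describing any manufacturer's actual telemetry or algorithm, just the kind of educated guess the stored data would support.

def estimate_battery_health(record):
    """Return a rough 0-100 battery health score from usage history."""
    score = 100.0

    # Calendar aging: assume roughly 2 points per year (made-up rate).
    score -= 2.0 * record["age_years"]

    # Fast charging stresses cells more than slow overnight charging.
    score -= 10.0 * record["fast_charge_fraction"]

    # Frequent discharges below 20% capacity age the pack faster.
    score -= 8.0 * record["deep_discharge_fraction"]

    # Hot or very cold climates are harder on batteries than mild ones.
    climate_penalty = {"mild": 0.0, "hot": 6.0, "cold": 4.0}
    score -= climate_penalty[record["climate"]]

    return max(0.0, min(100.0, score))

# Hypothetical five-year-old car, mostly slow-charged, mild climate:
car = {
    "age_years": 5,
    "fast_charge_fraction": 0.15,     # 15% of charges were fast charges
    "deep_discharge_fraction": 0.05,  # 5% of cycles went below 20%
    "climate": "mild",
}
print(f"Estimated battery health: {estimate_battery_health(car):.0f}/100")

A real estimator would be calibrated against fleet data and would report an expected range rather than a single score, but even a crude version along these lines would beat guessing from the odometer.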

 

Some dealers will test the battery for a fee of about $150, but that only tells you what condition it is in now, not what it's going to do in the future.  A Federal government mandate to guarantee the battery in a new EV for eight years or 100,000 miles is worth something, but it is not clear if that warranty is always transferable to a used-car buyer.  On the lender CapitalOne's website, an article warns that some manufacturers won't replace a battery under the federal warranty until it is totally non-functional.  So even if the car could barely get out of your driveway before dying, you'd be stuck with it until it wouldn't even do that.  And sometimes the warranty won't transfer to subsequent owners.

 

All in all, anyone buying a used EV is taking a chance that the battery will stop doing what they want sooner than they'd like.  Of course, used cars in general are a somewhat risky purchase, but as a purchaser of used cars for most of my life (I'm driving the first new car I ever bought, and that was only three years ago), I can say there are ways to tell whether you're getting a lemon, and state-mandated "lemon laws" allow consumers to return vehicles that were sold under clearly fraudulent conditions.

 

But the lack of experts on the ground who can make a reliable prediction as to when an EV's battery will degrade below an acceptable level of performance is a novelty that most buyers would rather not deal with.

 

On the other hand, the reasons why people buy electric cars are not your usual reasons.  Currently, none of the EVs available, used or new, sell for prices that would attract what you might call the typical buyer.  LaReau cites statistics that say the current average price of a new EV is about $58,000 and for a used EV, you'll pay an average of $41,000.  So we are talking high-end if not luxury vehicles, and buyers for whom price is not the main consideration. 

 

I think one of the main motivations for people who buy EVs is a politico-esthetic one:  they think they are helping to avert global warming.  Whether buying and using an EV really does this, considering all the manufacturing steps, the mining of lithium and other metals under less-than-ideal circumstances, and the source of electric power used to charge the thing, is a question for another time.  Whether or not one really does affect global warming with an EV purchase, lots of people feel like they do, and that's what counts in marketing.

 

As with any used-car purchase, the old Latin motto caveat emptor ("let the buyer beware") applies in spades to buying a used EV.  If the car's battery performance turns out to be a disappointment, maybe the purchaser can just look upon it as one more sacrifice made in the cause of fighting global warming.  But your typical car buyer is likely to be unmoved by such sentiments, and so things will have to become a lot more transparent before used EVs become just as easy to sell as conventional gas guzzlers.

 

Sources:  The article "The future of car buying" by Jamie L. LaReau appeared in the business section of the online Austin American-Statesman for May 14, 2023.  I also referred to the article on the CapitalOne website https://www.capitalone.com/cars/learn/getting-a-good-deal/how-do-ev-battery-warranties-work/1960 regarding battery warranties for used EVs.

 

Monday, May 08, 2023

To Pay or Not To Pay—Ransomware Attacks on Public Institutions

 

In what is just the latest of a lengthening series of ransomware attacks, the sheriff's office of San Bernardino County, California, reportedly paid over $1 million in ransom to an Eastern-Europe-based hacking group.  About half the money was paid by insurance and the county paid the rest from its risk-management fund.  Reporters for the Los Angeles Times were unable to determine exactly who authorized the payments, which enabled the county to restore its email servers, in-car computers, and law enforcement databases.

 

According to the report, the FBI discourages payments to ransomware hackers, but almost half of the state and local governments attacked worldwide pay anyway.  The report cited a survey conducted by the British security firm Sophos, which found that the only organizations paying at a higher rate than local governments are K-12 schools, at a rate of 53%.

 

One discerns a trend here:  the less likely an organization is to have a well-funded and robust IT security operation, the more likely it is to pay ransom.  We haven't heard of successful ransomware attacks on, for example, Bank of America, because bank IT operations have historically been acutely aware of all kinds of hacking hazards and have devised means of preventing such large-scale attacks on their systems.  This doesn't make them immune from the occasional data breach, but hackers have limited resources too, and they aren't going to take on the equivalent of a 900-pound gorilla when they can pick on a chihuahua, as long as the chihuahua will pay up.

 

From the hacker's point of view, it is a kind of optimization problem.  You want to go after a target that is large enough to pay a ransom that will remunerate you for expenses and leave a substantial profit, but not so large that their IT department will defeat your efforts.  Unfortunately, a great many institutions fit that description:  hospitals, city and county governments and law-enforcement agencies, state agencies, and innumerable private firms as well. 
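
You can see the shape of that tradeoff in a toy expected-value model.  Every number below is invented except the payment rate, which echoes the "almost half" figure cited above; the point is only the hump in the middle of the curve.

def expected_profit(ransom_usd):
    """Toy model: bigger targets pay more but defend better."""
    # Made-up assumption: odds of a successful breach fall off sharply
    # as the target (and hence the plausible ransom) gets bigger.
    p_breach = 1.0 / (1.0 + (ransom_usd / 2_000_000) ** 2)
    p_pays = 0.5          # roughly the payment rate cited above
    attack_cost = 50_000  # hypothetical fixed cost of mounting the attack
    return p_breach * p_pays * ransom_usd - attack_cost

for ransom in (100_000, 500_000, 2_000_000, 5_000_000, 20_000_000):
    print(f"ransom ${ransom:>10,}: "
          f"expected profit ${expected_profit(ransom):>10,.0f}")

With these made-up numbers the sweet spot sits in the low millions, which is uncomfortably close to the size of ransom a county government or a school district ends up paying.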

 

It would be great if everyone could resist the temptation to give in to the hackers' demands, and defeat their malware attacks with backups and better IT security in the first place.  Unfortunately, the hackers are always devising new approaches, which means that successful defense requires an IT staff that is constantly updating their own knowledge and resources.  The analogy of preparing for war is, unfortunately, relevant here.  In war as well as IT security, the only way you know for sure you didn't spend enough preparing for a crisis is if you lose. 

And judging by the statistic that in 2021, U. S. banks processed an estimated $1.2 billion in ransomware payments, there are more and more entities taking the supposedly easy way out and simply paying the ransom.

 

This is a worrisome trend for several reasons.

 

One is that the ransom money has to come from somewhere: either taxpayer dollars that don't get spent on something useful, or customer revenue that has to be made up in the form of higher prices or reduced profits.  And it's not like the money gets spent in the U. S., either.  Studies indicate that many ransomware attacks originate either in Russia or Eastern Europe, where there is likely implicit or explicit cooperation between the criminals and their governments.

 

Another is that tolerance of corrupt practices lowers the moral tone of an entire environment.  What I mean by that can be explained with an analogy.  In the past, and to some extent in some countries even today, criminal organizations muscle their way into the commerce of a neighborhood by visiting the store owner and saying, "Nice little shop you have here.  A shame if anything should happen to it."  Whereupon the owner has to fork over cash simply to stay in business and not worry about having his store wrecked or firebombed some evening.  This type of thing is sometimes ironically referred to as "protection," but in some locations where local law enforcement was useless, a powerful crime syndicate would actually ensure safety for pay, because minor criminals knew better than to fool with a store under the protection of the Mob. 

 

Nothing good like that happens with ransomware.  A successful ransomware attack is just a loss to the organization attacked, which faces two alternatives.  One is to rely on their own IT support, outside security assistance, and backups to restore operations independently of the attack.  The other is to give in and pay the ransom, hoping that the attackers will be true to their word and restore operations to their pre-attack status.

 

Trusting criminals is rather stupid on the face of it, although paying ransom does work now and then.  But it sets a bad precedent that encourages further attacks and drains both public and private institutions of badly needed resources, while also raising insurance rates. 

 

I recently experienced something along the lines of a ransomware attack on my own PC.  I was visiting a site operated by a European lightning-detection organization run mainly by hobbyists (and therefore probably not supplied with abundant IT security help).  A button on the right of the screen read something ambiguous like "Click here," and when I clicked it, the screen lit up with bells and buzzers and a mechanical woman's voice told me I had to do something or other to regain control of my computer.  Fortunately, when I closed the browser it all went away, but there for a few seconds I thought the PC was a goner.

 

Sufficiently advanced and well-resourced IT security can in principle defeat any ransomware attempt.  Unfortunately, that standard is seldom met in practice, so we can expect ransomware attacks to continue, especially if the hackers find that their chances are about even of getting money out of their victims.  But taking the easy way out and paying, while it is often the path of least resistance for individual organizations, is muddying the IT waters for all of us.  The better way is to improve IT security, including fundamental changes to the way the Internet works, so that ransomware attacks could land in the dustbin of history along with stagecoach holdups.  But that may take quite a while to do.

 

Sources:  The article "San Bernardino County pays over $1.1 million ransom over Sheriff's Department hack" appeared on the Los Angeles Times website on May 6, 2023 at https://www.latimes.com/california/story/2023-05-06/hackers-targeted-a-california-sheriffs-department-should-they-have-paid-the-ransom.  I also referred to the CNBC article https://www.cnbc.com/2022/11/01/us-banks-process-roughly-1point2-billion-in-ransomware-payments-in-2021.html. 

Monday, May 01, 2023

Is MacGyver an Engineering Ethics Exemplar?

 

It's not often that a TV show makes a permanent contribution to language, but many people will know what I mean when I say, "I had to MacGyver something together because I didn't have time to order the right parts."  To MacGyver means to use whatever is at hand to solve a technical problem that would normally take a lot more resources to solve.

 

I first heard the word used that way when I was working on a research project with no funding.  The right way to do it would have been to spend $15,000 on custom microwave components, but I didn't have that choice.  So instead, I bought a used DirecTV receiver on eBay and took it to a friend's lab and did some minor surgery on it to achieve my own ends.  A Korean graduate student of my friend's was watching, and said with a grin to my friend, "MacGyver!"  So by the early 2000s, the word had entered the vocabulary even in Korea.

 

As I quit watching network TV sometime in the 1970s, I had never actually seen an episode of MacGyver, which aired in its original form from 1985 to 1992.  So the other day we checked Season 1 out of the local library's DVD collection and watched the initial episode.

 

MacGyver evidently lives in the Griffith Observatory in Los Angeles.  In case there was any doubt about this, the producers made no effort to hide the lettering on the side of the domed structure, which clearly gives away its real identity.  This was also true of the supposedly secret location of some high-tech underground lab that suffered a massive explosion.  Shots showing the above-ground entrance to the lab revealed that it was right next to a California TV station, where the call letters were clearly displayed on the roof. 

 

Maybe these are signals to the viewer that a major suspension of disbelief is required to enjoy the show.  With that in mind, let's get to the plot.  Ignoring a spectacular rescue scene in Mongolia which had absolutely nothing to do with the main plot but got the pilot episode underway and established MacGyver's frankly superhuman powers, we saw two elderly scientists, one a guest of the underground lab, playing chess, followed by a shot of a bomb underneath the table just before it went off and wrecked the lab, cracking a sulfuric-acid tank, which began to leak, and cutting off the lower levels of the lab from the surface.

 

MacGyver responds to the call to fly out to the TV station—excuse me, the lab—and figure out what happened and rescue the people still trapped underground.  And by the way, somehow the folks on the surface not only knew about the acid leak, they called some government agency, which dispatched several tank trucks full of sodium hydroxide (lye, in other words) to neutralize the acid before it gets into the Colorado River.  At a fixed time, these trucks are going to start flooding the whole lab with lye solution, as inexorably as a law of the Medes and the Persians, which cannot be revoked.  This gives the producers an excuse to superimpose a red countdown clock on the screen showing us exactly how many seconds MacGyver has to get down the elevator shaft while avoiding the CO2-laser security beams that will fry him to bits, rescue the survivors, and fix the acid leak.

 

Well, he does, of course, with the help of a nice-looking lady scientist (I think she was a scientist—she knew her way around the lab, anyway, but spent most of her time being awed by MacGyver and kissed him at the end).  And on the way to all this, he uses such humble implements as a paperclip, a book of matches, some cigarettes (he needed smoke to see the invisible laser beams), and some chocolate bars he gets the lady to stuff in the acid-tank crack to plug it, avoiding the deluge of lye that would otherwise come from a beneficent government tank truck and kill them all.  Hey, they were just following orders.

 

Now see, it sounds like all I can do is criticize the technical shortcomings of the show.  But clearly I have the wrong attitude.  What you need to do to enjoy it is to have the attitude of, say, a smart eight-year-old boy, who used to have fantasies like the following.

 

In Fort Worth when I was growing up, a constant reminder of the presence of Carswell Air Force Base was the sound of jet engines as the B-52 nuke-laden bombers took off on their regular patrols during the Cold War of the 1960s.  I used to imagine that one day, they would get into some kind of bind at the base involving a nuclear weapon, and they needed somebody who knew electronics, but was also only three and a half feet tall and weighed no more than 80 pounds.  As the Air Force doesn't recruit midgets, somehow they would find out about me, and a colonel in full uniform would show up at our door.

 

"Mrs. Stephan?"

 

"Yes?"

           

"You have a son named Karl David?  Knows electronics?"

 

"Yes?  Has he done anything wrong?"

           

"No, ma'am.  But he has an opportunity to serve his country, if I could speak with him for a few minutes."

 

The rest of the fantasy would be essentially a MacGyver episode—I'd go down there, crawl into whatever confined space they couldn't get into, and use a paperclip to fix whatever was wrong. 

 

That's the attitude you need to bring to a MacGyver episode.  And judging by the popularity of the show both domestically and in dubbed versions worldwide, a lot of people managed to have that attitude.

 

In engineering ethics, sometimes we talk about moral exemplars—engineers who do the right thing in ethically fraught circumstances, sometimes at considerable cost to themselves or their careers.  It's good to talk about such exemplars, because following in the footsteps of good people is one way we learn to be good ourselves.

 

I haven't seen much in the ethics literature about fictional moral exemplars, but I would have to say that MacGyver fits the bill.  He clearly knows as much engineering as any of the regular engineers he encounters, he makes quick judgments based on incomplete information that always turn out to be the right ones (at least by the end of the show), and—hey—he gets to kiss the girl in the end, too.  What more could you want?

 

Sources:  I referred to the Wikipedia article MacGyver (1985 TV series).  The complete first iteration of the show (there was a 2016 reboot as well) is available on DVD.