Monday, July 28, 2025

The Tea App Data Breach: Not Everybody's Cup Of . . .

 

A phone app called Tea became the No. 1 app downloaded from the U.S. Apple App Store last week.  Then on Friday, news came that the app had been hacked, exposing thousands of images and other identifying information online.  The users of Tea are women who want to exchange information about men they are considering as dates, or whom they have dated and want to evaluate for other women.  For an app built on that kind of personal information, any data breach is disturbing, although this one could have been worse.

 

Tea, an app introduced in 2023, is exclusively for women, and requires some form of identification to use.  Once a user is logged in, she can either post information about a certain man, or research his background as reported by other users, similar to the popular Yelp app that uses crowdsourcing to rate businesses. 

 

Understandably, some men take a dim view of Tea, and claim it violates their privacy or could even provide grounds for defamation lawsuits.  An attorney named Aaron Minc has been getting "hundreds" of calls from men whose descriptions on Tea are less than complimentary.  In an interview, Minc said Tea users could be sued for spreading false information.  But as the Wikipedia article describing Tea points out, truth is an absolute defense against such a claim.  Nevertheless, being sued for any reason is not a picnic, and so with data breaches and lawsuits in the air, women may now think twice before signing up for Tea and posting the story of their latest disastrous date, which may have been arranged via social media anyway.

 

You might think most couples meet electronically these days, but a recent survey shows that even among people aged 18-29 who are either married or "in a relationship," only 23% met their partner online.  So meeting the old-fashioned, eyeball-to-eyeball way is still how most couples get together.  A woman who meets a new guy in person could still use Tea to check out his background, but that raises the larger issue of how reliable another woman's report would be, especially if it is anonymous.

 

Lawsuits over relationships are nothing new, of course.  One of the running plot threads that Charles Dickens milked for a lot of laughs in his first published novel, The Pickwick Papers, was how Mr. Pickwick's landlady Mrs. Bardell misunderstood a stray comment he made as a proposal of marriage, and filed a suit against him for breach of promise.  Though the legal details differ, this kind of action pitted the woman against the man, who then had to prove that his intentions were honorable or lose the suit.

 

I will admit that the idea of anonymous women posting ratings on me is somewhat disquieting, but as a teacher at a university, I'm subject to much the same treatment by "Rate the Prof" websites, which collect anonymous reports by students about various professors and post them online.  I never look at such sites, and if anything scurrilous has been posted about me, I've remained blissfully unaware of it. 

 

The way Tea works raises the question of whether anonymity online should be as widespread as it is.  That issue has been in the news lately as several states have passed laws requiring robust age-verification systems for users of pornographic websites, for example.  That is another case in which anonymity leads to problems, and positive identification with regard to age can at least mitigate harm to children who are too young to be exposed to porn.

 

Unfortunately, anonymity is almost a default setting online, while tying one's identity to every online communication would be not only technically burdensome but downright dangerous.  How would we run anonymous hotlines and tip lines?  There would have to be exceptions for such cases.  And VPNs and other technologies already frustrate even the most vigorous attempts to identify people online, such as in criminal investigations. 

 

The Tea app is facing not only its data-breach problem, which is always disturbing to users, but also the moral question of whether anonymous comments by women about men they have dated are fair to the men.  Such a question can be answered only case by case, but in general, if women had to sign their real names to every comment they posted on Tea instead of remaining anonymous, the comments might not be as frank as they are now. 

 

The same principle applies to student evaluations conducted not by a commercial app, but by my university.  Students are guaranteed anonymity for the obvious reason that if they make a negative comment, and the professor finds out who said it, the student might have to take another one of the professor's classes, in which the professor might be tempted to wreak vengeance upon the unhappy but honest student.  So I accept the idea of anonymity in that situation, because I might otherwise be tempted to abuse my authority over students that way.

 

If Tea were really confined only to women users, there wouldn't seem to be any danger of that sort of thing happening.  But a woman could turn traitor, so to speak:  seeing a bad review of her current boyfriend on Tea, she could show it to him, and if the review were tied to the identity of the woman who posted it, he might consider some kind of revenge.  That would be bad news as well, so anonymity makes sense for Tea too. 

 

Still, when people know their names are associated with things they say, they tend to be more moderate in their expressions than if they hide behind a pseudonym and can flame to their hearts' content without any fear of retribution.  Some systems allow the option of either signing your name or remaining anonymous, and possibly that is the best approach.

 

Tea presents itself as a way to find "green flags," that is, posts from women saying what a good guy someone is and that you ought to date him.  If he's so good, why not keep him for yourself?  Realistically, I expect most of the comments are negative, which is a big part of why the app has drawn so much criticism.  Assuming the operators of Tea address their data breach, they can take comfort in the old saying attributed (perhaps apocryphally) to P. T. Barnum:  "There's no such thing as bad publicity."  More women know about Tea now, and so more men may get reviewed.  I only hope they get the reviews they deserve.

 

Sources:  The Associated Press article "The Tea app was intended to help women date safely.  Then it got hacked," appeared on July 26, 2025 at https://apnews.com/article/tea-app-data-breach-leak-4chan-c95d5bb2cabe9d1b8ec0ca8903503b29.  I also referred to an article at https://www.hims.com/news/dating-in-person-vs-online for the statistic about percentage of couples meeting online, and to the Wikipedia article on Tea (app). 

Monday, July 21, 2025

Can Teslas Drive Themselves? Judge and Jury To Decide

 

On a night in April of 2019, Naibel Benavides and her boyfriend Dillon Angulo had parked the Tahoe SUV they were in near the T-intersection of Card Sound Road and County Road 905 in the upper Florida Keys.  George McGee was heading toward the intersection "driving" a Model S Tesla.  I put "driving" in quotes because he had engaged the car's misleadingly named "Autopilot" driver-assistance mode.  The car was going about 70 MPH when McGee dropped his cellphone and bent down to look for it.

 

According to dashcam evidence, the Tesla ran a stop sign, ignoring the blinking red light above the intersection, crashed through reflective warning signs, and slammed into the SUV, spinning it so violently that it struck Naibel and threw her 75 feet into the bushes, where first responders found her lifeless body shortly afterward.  Dillon survived, but suffered a brain injury from which he is still recovering.

 

Understandably, the families of the victims have sued Tesla, and in an unusual move, Tesla is refusing to settle and is letting the case go to trial. 

 

The firm's position is that McGee was solely at fault for not following instructions on how to operate his car safely.  The driver should be prepared to take over manual control at all times, according to Tesla, and McGee clearly did not do that. 

 

The judge in the federal case, Beth Bloom, has dismissed the claims of defective manufacturing and negligent misrepresentation, but has allowed the plaintiffs to argue that Tesla "acted in reckless disregard of human life for the sake of developing their product and maximizing profit."

 

Regardless of the legal details, the outlines of what happened are fairly clear.  Tesla claims that McGee was pressing the accelerator, "speeding and overriding the car's system at the time of the crash."  While I am not familiar with exactly how one overrides Autopilot in a Tesla, if it is like the cruise control on many cars, the driver's manual interventions take priority over whatever the automation is doing.  If you press the brake, the car's going to stop, and if you press the accelerator, it's going to speed up, regardless of what the computer thinks should be happening. 
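To make that priority rule concrete, here is a minimal sketch of the kind of driver-override arbitration described above, assuming behavior like a conventional cruise control.  The names, structure, and scenario are illustrative assumptions on my part, not Tesla's actual control code.

```python
# Minimal sketch of a driver-override priority rule, assuming behavior like a
# conventional cruise control.  Names and structure are illustrative only;
# this is not Tesla's actual control logic.

from dataclasses import dataclass

@dataclass
class Inputs:
    brake_pressed: bool        # driver's brake pedal
    accelerator_pressed: bool  # driver's accelerator pedal
    autopilot_engaged: bool    # automation active and holding a target speed

def command_source(inp: Inputs) -> str:
    """Decide which input actually controls the car at this instant."""
    if inp.brake_pressed:
        return "driver brake"          # manual braking always wins
    if inp.accelerator_pressed:
        return "driver accelerator"    # manual throttle overrides the automation
    if inp.autopilot_engaged:
        return "automation"            # otherwise the automation holds its target
    return "coasting"

# The scenario Tesla describes: automation engaged, but the driver is pressing
# the accelerator, so the car speeds up regardless of the automation's target.
print(command_source(Inputs(brake_pressed=False,
                            accelerator_pressed=True,
                            autopilot_engaged=True)))   # -> "driver accelerator"
```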

 

The Society of Automotive Engineers (SAE) has promulgated its six levels of vehicle automation, from Level 0 (plain old human-driven cars without even cruise control) up to Level 5 (complete automation in which the driver can be asleep or absent and the car will still operate safely).  The 2019 Tesla involved in the Florida crash was a Level 2 vehicle, one that can steer and control its speed in some conditions and locations, but requires the driver to supervise continuously and be prepared to take over at any time. 

 

McGee appears to have done at least two things wrong.  First, he was using Autopilot on a rural road at night, although it is better suited to limited-access freeways with clear lane markings.  Second, for whatever reason, when he dropped his phone he hit the accelerator at the wrong time.  This could conceivably have happened even if he had been driving a Level 0 car.  But I think it is much less likely, and here's why.

 

Tesla drivers obviously accumulate experience with their "self-driving" vehicles, and just as drivers of non-self-driving cars learn how hard you have to brake and how far you have to turn the steering wheel to go where you want, Tesla drivers learn what they can get by with when the car is in self-driving mode.  It appears that McGee had set the car in that mode, and while I don't know what was going through his mind, it is likely that he'd been able to do things such as look at his cellphone in the past while the car was driving itself, and nothing bad had happened.  That may be what he was doing just before he dropped the phone.

 

At 70 MPH, a car is traveling over 100 feet per second.  In a five-second pause to look for a phone, the car would have traveled as much as a tenth of a mile.  If McGee had been consciously driving a non-autonomous car the whole time, he probably would have seen the blinking red light ahead and mentally prepared to slow down.  But the way things happened, his eyes might have been on the phone the whole time, even after it dropped, and when he (perhaps accidentally) pressed the accelerator, the rest of the situation played out naturally, and tragically for Naibel and Dillon.
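For readers who want to check the arithmetic, here is a quick back-of-the-envelope calculation; the five-second glance is my illustrative figure for the pause described above.

```python
# Back-of-the-envelope check of the distances involved.
speed_mph = 70
speed_ft_per_s = speed_mph * 5280 / 3600    # ~102.7 feet per second
glance_seconds = 5                          # illustrative pause to hunt for a phone
distance_ft = speed_ft_per_s * glance_seconds
print(round(speed_ft_per_s, 1), round(distance_ft), round(distance_ft / 5280, 2))
# -> 102.7 513 0.1  (about a tenth of a mile covered while looking away)
```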

 

So while Tesla may be technically correct that McGee's actions were the direct cause of the crash, there is plenty of room to argue that the way Tesla presents its autonomous-driving system encourages drivers to over-rely on it.  Tesla says it has upgraded the system since 2019, and while that may be true, the issue at trial is whether the company cut corners and encouraged ways of driving in 2019 that could reasonably have led to this type of accident.

 

In an article unrelated to automotive issues but focused on the question of AI in general, I recently read that the self-driving idea has "plateaued."  A decade or so ago, the news was full of forecasts that we would all be able to read our phones, play checkers with our commuting partners, or catch an extra forty winks on the way to work as the robot drove us through all kinds of traffic and roads.  That vision obviously has not come to pass, and while there are a few fully autonomous driverless vehicles plying the streets of Austin right now—I've seen them—they are "geofenced" to traverse only certain areas, and equipped with many more sensors and other devices than a consumer could afford to purchase for a private vehicle. 

 

So we may find that unless you live in certain densely populated regions of large cities, the dream of riding in a robot-driven car will remain just that:  a dream.  But when Tesla drivers presume that the dream has become reality and withdraw their attention from their surroundings, the dream can quickly become a nightmare.

 

Sources:  I referred to an Associated Press article on the trial beginning in Miami at https://apnews.com/article/musk-tesla-evidence-florida-benavides-autopilot-3ffab7fb53e93feb4ecfd3023f2ea21f.  I also referred to news reports on the accident and trial at https://www.nbcmiami.com/news/local/trial-against-tesla-begins-in-deadly-2019-crash-in-key-largo-involving-autopilot-feature/3657076/ and https://www.nbcmiami.com/news/local/man-wants-answers-after-deadly-crash/124944/. 

 

Monday, July 14, 2025

Are AI Chatbots Really Just Pattern Engines?

 

That's all they are, according to Nathan Beacom, who recently wrote an article in the online journal The Dispatch titled "There Is No Such Thing as Artificial Intelligence." 

 

His point is an important one, as the phrase "artificial intelligence" and its abbreviation "AI" have enjoyed a boom in usage since 2000, according to Google's Ngram analysis of words appearing in published books.  The system plots frequency of occurrence, as a percentage, versus time.  The term "AI" peaked around 1965 and again around 1987, dates which correspond to the first and second comings of AI, both of which fizzled out because the digital technology of the time and the algorithms used were inadequate to realize most of the hopes of developers.

 

But starting around 2014, usage of "AI" soared, and by 2022 (the last year surveyed by the Ngram machine) it stood at a level higher than the highest peak ever enjoyed by a common but very different phrase, "IRS."  So perhaps people now think the only inevitable things are death and AI, not death and taxes.

 

Kidding aside, Beacom says that the language we use about the technology generally referred to as AI has a profound influence on how we view it.  And his point is that a basic philosophical fallacy is embedded in the term "artificial intelligence."  Namely, what AI does is not intelligent in any meaningful sense of the term.  And fooling ourselves into thinking it is, as millions are doing every day, can lead to big problems.

 

He begins his article with a few extreme cases.  A woman left her husband because he developed weird conspiracy theories after he became obsessed with a chatbot, and another woman beat up on her husband after he told her the chatbot she was involved with was not "real."

 

The problem arises when we fall into the easy trap of behaving as though Siri, Alexa, Claude, and their AI friends are real people, as real as the checkout person at the grocery store or the voice who answers on the other end of the line when we call Mom.  Admittedly, quite a few of the chatbots out there would pass the Turing test, which asks whether a human judge can tell a computer's typed responses to questions apart from a human being's. 

 

But to believe that an AI chatbot can think and possesses human intelligence is a mistake.  According to Beacom, it's as much a mistake to believe that as it was for the probably apocryphal New Guinea tribesman who, when first hearing a voice come out of a radio, opened it up to see the little man inside.  The tribesman was disappointed, and it didn't do the radio any good either.

 

We can't open up AI systems to see the works, as they consist of giant server farms in remote areas that are intentionally hidden from view, like the Wizard of Oz hiding behind a curtain as he tried to astound his guests with fire and smoke.  Meanwhile, companies promote chatbots as companions for elderly people or entertainment for the lonely young.  And if you decide to establish a personal relationship with a chatbot, you are always in control.  The algorithms are designed to please the human, not the other way around, and such a relationship will have little if any of the unpredictability and friction that always arise when one human being interacts with another. 

 

That is because human intelligence is an entirely different kind of thing from what AI does.  Beacom makes the point that all the machines can do is a fancy kind of autocomplete.  Large-language models use their huge databases of what has been said on the Internet to predict what is most likely to come after whatever the human interlocutor says or asks.  And so the only way a system like that can sound human is by basing its responses on the replies of millions of other humans. 
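To make the "fancy autocomplete" point concrete, here is a toy sketch of next-word prediction from bigram counts.  The tiny corpus and function names are invented for illustration, and real large-language models use neural networks trained on vastly more text, but the basic move of continuing a prompt with a statistically likely next word is the same.

```python
# A toy "pattern engine": predict the next word from bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after this word."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # 'cat' -- the most frequent follower of 'the'
print(predict_next("cat"))   # 'sat' (ties broken by first occurrence)
```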

 

But an AI system no more understands what it is saying, or thinks about your question, than a handheld calculator thinks about what the value of pi is.  Both a simple electronic calculator and the largest system of server farms and Internet connections are fundamentally the same kind of thing.  A newborn baby, with its elemental responses to its environment, as simple as they are, represents a difference in kind, not degree, from any type of machine.  A baby has intelligence, however rudimentary.  But a machine, no matter how complex and no matter what algorithms are running on it, is just a machine, as predictable (in principle, if no longer in practice in many cases) as a mechanical adding machine's motions.

 

Nobody in his right mind is going to treat a calculator as if it were his best friend.  But the temptation is there to attribute minds and understanding to AI systems, and the systems' developers often encourage that attitude, because it leads to further engagement and, ultimately, profits.

 

There is nothing inherently wrong with profits, but Beacom says we need to begin to disengage ourselves from the delusion that AI systems have personalities or minds or understanding.  And the way he wants us to start is to quit referring to them as AI. 

 

His preferred terminology is "pattern engine," as nearly everything AI does can be subsumed under the general category of pattern repetition and modification. 

 

Beacom probably realizes his proposal is unlikely to catch on.  Terms are important, but even more important are the attitudes we bring toward things we deal with.  Beacom touches on the real spiritual problem involved in all this when he says that those who recognize the true nature of what we now call AI will "be able to see the clay feet of the new idol." 

 

Whenever a friend of mine holds his phone flat, brings it to his mouth, and asks something like "What is the capital of North Dakota?" I call it "consulting the oracle."  I mean it jokingly, and I don't think my friend is seriously worshipping his phone, but some people treat AI, or whatever we want to call it, as something worthy of the kind of devotion that only humans deserve.  That is truly idolatry.  And as the Bible and history prove, idolatry almost always ends badly.

 

Sources:  Nathan Beacom's article "There Is No Such Thing as Artificial Intelligence" appeared in The Dispatch at https://thedispatch.com/article/artificial-intelligence-morality-honesty-pattern-engines/.

 

Monday, July 07, 2025

Has AI Made College Essays Pointless?

 

That's the question that Bard College professor of literature Hua Hsu asks in the current issue of The New Yorker.  Anyone who went to college remembers having to write essay papers on humanities subjects such as art history, literature, or philosophy.  Even before computers, the value of these essays was questionable.  Ideally, writing an essay to be graded by an expert in the field gave students practice in analyzing a body of knowledge, taking a point of view, and expressing it with clarity and even style.  The fact that few students achieved these ideals was beside the point, because, as Hsu says in his essay, "I have always had a vague sense that my students are learning something, even when it is hard to quantify."

 

The whole process of assigning and grading essay papers has recently been short-circuited by the widespread availability of large-language-model artificial intelligence (AI) systems such as ChatGPT.  Curious to see whether students at large schools used AI any differently than the ones at his exclusive small liberal-arts college, Hsu spent some time with a couple of undergraduates at New York University, which has a total graduate-plus-undergraduate enrollment of over 60,000.  They said things such as "Any type of writing in life, I use A.I."  At the end of the semester, one of them spent less than an hour using AI to write two final papers for humanities classes, and estimated doing it the hard way might have taken eight hours or more.  The grades he received on the papers were A-minus and B-plus.

 

If these students are representative of most undergraduates, who are under time pressure to get the most done with the least damage to their GPA while saving effort for the subjects they are actually interested in, one can understand why they turn to resources such as ChatGPT to deal with courses that require a lot of writing.  Professors have taken various tacks to deal with the issue, which has mostly flummoxed university administrations.  Following an initial panic after ChatGPT was made publicly available in 2022, many universities have changed course and now run faculty-education courses that teach professors how to use ChatGPT more effectively in their research and teaching.  A philosophy professor, Barry Lam, who teaches at the University of California, Riverside, deals with it by telling his class on the first day, "If you're gonna just turn in a paper that's ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach."  Presumably his class isn't spending all its time at the beach yet, but Lam is pretty sure that a lot of his students use AI in writing their papers anyway.

 

What are the ethical challenges in this situation?  It's not plagiarism pure and simple.  As one professor pointed out, there are no original texts that are being plagiarized, or if there are, the plagiarizing is being done by the AI system that scraped the whole Internet for the information it comes up with.  The closest non-technological analogy is the "paper mills" that students can pay to write a custom paper for them.  This is universally regarded as cheating, because students are passing off another person's work (the paper mill's employee) as their own. 

 

When Hsu asked his interviewees about the ethics of using AI heavily in writing graded essays, they characterized it as a victimless crime and an expedient to give them more time for other important tasks.  If I had been there, I might have pointed out something that I tell my own students at the beginning of class when I tell them not to cheat on homework. 

 

A STEM (science, technology, engineering, math) class is different from a humanities class, but just as vulnerable to the inroads of AI, as a huge amount of routine coding is now reportedly done with clever prompts to AI tools rather than by writing the code directly.  What I tell them is that if they evade doing the homework, either by having AI do it all or by paying a homework service, they are cheating themselves.  The point of doing the homework isn't to get a good grade; it is to give your mind practice in solving problems that (a) you will face without the help of anything or anybody when I give a paper exam in class, and (b) you may face in real life.  Yes, in real life you will be able to use AI assistance.  But how do you know it's not hallucinating?  Hsu cites experts who say that the hallucination problem—AI saying things that aren't true—has not gone away, and may actually be getting worse, for reasons that are poorly understood.  At some level, important work, whether it's humanities research to be published or bridges to be built, must pass through the evaluative minds of human beings before it reaches the public. 

 

That raises the question of what work qualifies as important.  It's obvious from reading the newspapers I read (a local paper printed on actual dead trees, and an electronic version of the Austin American-Statesman) that "important" doesn't seem to cover a lot of what passes for news in them.  Just this morning, I read an article about parking fees, and the headline and the "see page X" note at the end both referred to an article on women's health, not the parking-fees article.  And whenever I read the Austin paper on my tablet, whatever system they use to convert the actual typeset words into more readable type when you select an article oftenmakesthewordscomeoutlikethis.  I have written the managing editor about this egregious problem, but to no avail.

 

The most serious problem I see in the way AI has taken over college essay writing is not the ethics of the situation per se, although that is bad enough.  It is the general lowering of standards on the part of both originators and consumers of information.  In Culture and Anarchy (1869) (which I have not read, by the way), British essayist Matthew Arnold argued that a vital aspect of education was to read "the best which has been thought and said" as a way of combating the degradation of culture that industrialization was bringing on.  But if nobody except machines thinks or says those gems of culture after a certain point, we all might as well go to the beach.  I actually went to the beach a few weeks ago, and it was fun, but I wouldn't want to live there. 

 

Sources:  Hua Hsu's "The End of the Essay" appeared on pp. 21-27 of the July 7 & 14, 2025 issue of The New Yorker.  I referred to the website https://newlearningonline.com/new-learning/chapter-7/committed-knowledge-the-modern-past/matthew-arnold-on-learning-the-best-which-has-been-thought-and-said for the Arnold quote.