Monday, March 09, 2026

Meta Considers X-Ray Glasses That Work

  

Superman supposedly had X-ray vision, which, in the movies and comic books about him, he used only for noble and righteous purposes such as catching crooks.  But teenage boys could easily think of other things they might do with such an ability, and I'm sure there are jokes out there involving Superman, Lois Lane, and—well, on to more serious matters.


In the back pages of Superman comic books in the 1950s, you might find an ad for X-ray glasses for the amazingly low price of $1.25.  Like most things that are too good to be true, these turned out to be nothing but cardboard specs with two small holes where the lenses would go.  The holes were covered with a textured plastic that created a diffraction effect whenever something with a sharp outline in front of you was strongly backlit.  Instead of just dark and light, the glasses produced a kind of broad gray area extending a fixed distance inside the outline.  When you viewed a hand this way, the effect was reminiscent of an X-ray, but only if you used your imagination.  And as for looking at backlit women, it still took a lot of imagination to see anything other than a slightly smaller silhouette of the actual person.

 

The thing Meta is considering is no joke, however.  According to a report in The Independent, a UK media outlet, the New York Times last month revealed an internal Meta memo that considers adding AI facial-recognition technology to the company's smart glasses.

 

No such features are yet commercially available from Meta, but the idea drew strong criticism from women's-rights groups such as Refuge.  The charity, which tracks technology-related abuse, says that in 2025, referrals to its "technology-facilitated abuse and economic empowerment team" rose by 62% over 2024, to 829.

 

Clearly, stalkers and other malefactors who are invading women's privacy are exploiting whatever technology they can get their hands on, from general Internet searches to facial-recognition technology applied to online images.

 

Suppose a man could simply wander around in a crowded place such as a shopping mall or bus terminal and find out all kinds of details—name, address, email, phone—for any woman he looks at.  It doesn't take much imagination to see how this situation could go bad very fast.  And even if Meta takes steps to prevent such potentially harmful activity, once the hardware is in place, determined individuals will figure out ways to bypass safeguards. 

 

It remains to be seen whether Meta can overcome the apparently steep barrier that has fended off efforts by Google, Apple, and others to turn smart glasses into a popular thing.  I don't know whether the hardware is still inadequate (battery life, resolution, weight, etc.) or whether people simply don't like the idea of weird internet stuff cropping up all the time in their field of vision, but the track record of smart glasses (as opposed to virtual-reality (VR) glasses that take over your visual field completely) is not good. 

 

But it's foolish to think that just because they haven't caught on yet, they never will.  And if they do, what is to prevent a bad actor from picking out a likely-looking woman in a crowd and digging up whatever he can find on her?

 

Today, someone trying to do that has to at least carry a smartphone around and point it at the intended victim.  But smart glasses, like a spy camera, make the act of photography invisible, so no one is aware that they're being imaged.  Of course, the ubiquity of security cameras in both commercial and residential spaces means that much of the time we're being photographed without our knowledge anyway.  Stores and banks have a motivation not to misuse the data thus captured, however.  Some guy walking around with smart glasses doesn't.

 

Whatever Meta decides to do about facial recognition with smart glasses, the prospect will be only one more brick pulled down from the wall of privacy that used to surround us, but is now not much more than a pile of rubble.  And as the Independent article points out, even if a company promises to keep data private subject to state and federal law, governments can and do require companies to divulge such data on demand.  So privacy is only privacy if the government lets you have it, rather like right-of-way on a freeway:  you don't have it unless someone gives it to you.

 

Of course, there are positive and defensive ways of using the same technology that can be used for stalking.  As a teacher, my poor ability to connect names to faces means that I rarely learn all my students' names before the end of the semester.  It would be great if I could just look at each one and see their names pop up underneath their faces—with their permission, of course.  And a few months ago, I was on a jury trying a case of indecent exposure in a public place.  During the trial, a critical piece of evidence turned out to be clips from the victim's dashcam that caught clear images of the man who a few minutes later committed the crime for which he was convicted.

 

So I have no doubt that good things could result from smart glasses becoming cheaper, better-performing, and more widespread.  But at the same time, it should be possible for a well-resourced outfit like Meta to come up with technological means to prevent unauthorized identification of people via facial-recognition technology.  It might involve some larger-scale privacy initiative that would offer potential victims the option not to be recognized by smart glasses.  Wouldn't that be nice?  The company would moan and groan about how expensive it would be, but they should compare whatever the safeguards would cost with the prospect of lawsuits by victims and family members who get stalked, attacked, or even killed by men who use Meta's technology to find their victims.

 

Mark Zuckerberg, the choice is yours.

 

Sources:  An article on the National Review website at https://www.nationalreview.com/2026/03/you-cant-escape-the-ai-grid/ informed me of the Meta memo, which was also covered by The Independent at https://www.independent.co.uk/news/uk/home-news/meta-glasses-facial-recognition-domestic-abuse-b2923551.html.  A well-researched YouTube video tracing "X-ray" glasses back to the early 1900s and showing what you see through them is available at the Laura Legends channel at https://www.youtube.com/watch?v=rdVrTqaJrS4. 

Monday, March 02, 2026

Up Close and Personal with AI

  

After writing about AI for years, I still hadn't had what you might call a serious encounter with it in its personalized form.  Anyone who uses Google has probably been offered their "AI summary" before the conventional search results.  I've found these summaries helpful sometimes and not so helpful other times, but I haven't sought them out. 

 

What made me turn the corner was something a friend sent me by a software engineer named Matt Shumer, whose essay "Something Big is Happening" appeared on the Fortune website on Feb. 11.  Shumer's point was that the latest iterations of AI systems are so much more capable than what has gone before that whole swathes of what George Gilder calls "symbolic manipulators"—lawyers, engineers, judges, doctors, architects, you name it—now face a radical choice.  Either embrace AI and by doing so outperform your peers by orders of magnitude, or turn away from it and watch your career flame out.  That's a little exaggerated, but not much.

 

This reinforced something another friend has told me about his own personal use of AI:  that it has benefited his writing and research greatly, acting as a mostly trustworthy assistant to summarize large bodies of literature and help him clarify his thoughts.  The biggest problem this friend has had with it is that it tends to be sycophantic and flatter him excessively.  But he sat down with it one day and told it to refer to him as "the researcher" and itself as the "AI system," instead of "you" and "me," and things got better. 

 

So I decided to opt for a paid version of Anthropic's AI product, which Shumer said was significantly better than the free version, and gave it a major task that I would ordinarily give to a grad student, if I had one (funding is very hard to find in my research area).

 

The job involved reading about a thousand rows of data in a big spreadsheet to fill out some yes/no questions about each row.  The data was in the form of comments submitted by various individuals in response to questions.  I gave the AI system examples to follow and what I thought were pretty detailed instructions.

 

All this was in the form of the usual chat format, with me and the AI system taking turns typing into chat boxes.

 

After a misunderstanding in which I thought the system was working on the problem and it thought I hadn't told it to start yet, it got to work and spat out various things like "Ran 6 commands . . . Examine the spreadsheet structure" and so on.

 

It was done in about ten minutes.  Then I spent an hour or so going over its work.

 

I wish I could say I couldn't have done better myself.  But I could have, by a long shot. 

I didn't exhaustively examine all 700 rows of entries that the system produced—that would have taken many hours, about as long as it would take me to just do the job myself.  So I sampled every tenth row for a hundred rows to see how the thing did.

 

In looking at ten rows, I found nine mistakes.  This is not a good average.
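The spot-check itself is easy to script.  Here is a minimal sketch in Python of the idea: pull every tenth row of the AI's output, compare the sampled answers against hand-checked ones, and compute an error rate.  The answer lists below are made-up placeholders, not my actual data.

```python
# Spot-check an AI-filled spreadsheet by sampling every tenth row.
# All data below is hypothetical; real use would load rows from the file.

def sample_rows(rows, step=10, limit=10):
    """Return every `step`-th row, up to `limit` rows."""
    return rows[::step][:limit]

def error_rate(sampled, checked):
    """Fraction of sampled answers that disagree with hand-checked ones."""
    mistakes = sum(1 for ai, hand in zip(sampled, checked) if ai != hand)
    return mistakes / len(sampled)

# Made-up yes/no answers: the AI's answers vs. my own hand checks.
ai_answers = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "no", "yes"]
my_answers = ["no", "no", "yes", "no", "yes", "no", "no", "yes", "yes", "no"]
print(error_rate(ai_answers, my_answers))  # 9 of 10 differ -> 0.9
```

A systematic sample like this (every tenth row) is quick, though a random sample would guard against any pattern in how the errors cluster.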

 

In the system's defense, this is absolutely the first time I've ever tried anything like this.  I could go back and get a lot more explicit about the rules for answering the yes/no questions about each row, and let it try again.  But in comments online about this particular version of AI (Sonnet 4.6, I think), some people said that you get results faster, but you have to fix problems more often.  That is consistent with my experience.

 

Good things about this exercise include how the system basically grasped what I wanted and how fast it ran, producing something in only ten minutes.  But speed isn't everything.

 

Some not-so-good aspects include the errors and a kind of weird fawning or flattery I also noticed.  I'd call it "gushing" when it spontaneously responded "This is a genuinely exciting dataset — ball lightning is one of the most mysterious atmospheric phenomena ever reported!" 

 

I suppose that sort of thing has been cultivated by the AI's keepers, probably to keep the user engaged, or encouraged, or something.  I found myself wishing that instead, they had adopted the mien of Joe Friday in the old Dragnet true-crime series.  Friday was famed for his flat "Just the facts, ma'am" manner, and that seems more in keeping with a system that supposedly can tackle highly sophisticated and challenging jobs of major import.

 

But like everybody else who doesn't work for Anthropic or one of the other four or five leading AI companies, I will simply have to take what I can get and deal with the negative aspects as well as I can.

 

Will I try again?  Probably, but maybe with a different task.  As part of my signup process, Anthropic has been emailing me little suggestions of other things to try:  writing recipes, managing emails, creating content, solving problems, visualizing data, or helping me decide whether to go to Portugal or Spain on vacation (no-brainer for me:  Spain, but I don't have time right now). 

 

I am not especially tempted to try any of these suggestions just yet.  But I do admit that if I can get the thing to turn out useful work, it could be worth what I spent on it.  I paid for a year's subscription in advance, perhaps not the wisest thing to do, but I'm the type of person who is motivated to get his money's worth, and spending the money in advance may get me engaged when nothing else would.

 

I see that Anthropic just had a dustup with the Pentagon, which banned its use within the armed forces as punishment for being uncooperative, or something.  Now that we are apparently in a war with Iran, the leaders of Anthropic may feel glad that their product isn't part of the war.  But not all battles are fought with bombs and bullets, and I have a feeling that the greatest battles involving AI are yet to come.

 

Sources:  The essay by Matt Shumer, who runs an AI applications company, appeared on Feb. 11, 2026 at https://fortune.com/2026/02/11/something-big-is-happening-ai-february-2020-moment-matt-shumer/.