Monday, September 30, 2024

Deer Park Pipeline Fire Raises Questions

 

Around 10 a.m. on Monday, Sept. 16, the driver of a white SUV crashed through a chain-link fence in Deer Park, Texas, and straight into the above-ground valve assembly of an underground Energy Transfer pipeline carrying natural gas liquids.  When the pipe failed, it released the several hundred pounds per square inch of pressure that had been keeping the material liquid along the miles of pipe leading to the valve.  The liquid began boiling into gas and spewed out a column of white vapor that immediately caught fire. 

 

The resulting flame towered a hundred feet or more in the air and produced a plume of black smoke that could be seen for miles.  The fire burned for four days and was so big that International Space Station astronaut Don Pettit could see it from orbit, and he sent down a photo to prove it.  Four people were injured, houses near the site were evacuated, and the fire caused millions of dollars in damage before it finally burned itself out sometime on Thursday, Sept. 19.  Once it was safe to approach the area, medical examiners recovered a body from the vehicle that caused the fire.

 

Residents of the now somewhat-ironically-named Deer Park are no strangers to petrochemical disasters.  Living in one of the most concentrated areas of oil refining and petrochemical production in the world has both advantages and disadvantages.  Among the advantages: there are usually plenty of good, stable jobs for people who are willing to work hard and take risks, and the city's tax base is jet-propelled by billions of dollars invested in plants and equipment.  Among the disadvantages: the atmosphere is never entirely free of chemicals that may pose long-term health risks, and sometimes things happen that pose more immediate dangers, such as a fire that melts the interior of the car you had to abandon during an evacuation.

 

Partly to avoid insurance costs that would bankrupt any business, the refining and petrochemical industries are among the most safety-conscious in the world.  I was once privileged to take a tour inside a working refinery, after viewing them from afar for most of my growing-up years in Texas.  To gain this privilege, I had to sit through an hour-long series of training videos and sign a form, and when it came time for the tour, all they did was load us on a bus and drive us through the place, maybe with a fire truck following behind; I don't exactly recall. 

 

But despite the best safety precautions human minds can devise, other human minds can come up with ways to thwart them.  A friend of mine who lived and worked in Deer Park for many years told me that the accident happened in a large right-of-way that carries many pipelines interconnecting the various plants in the Deer Park and La Porte area, on the Houston Ship Channel.  A video captured at the time of the accident shows the SUV in question traveling well above the speed limit and steering straight for the valve assembly.  Although the body recovered from the wreckage of the SUV has not yet been publicly identified, reasonable speculation is that this was a suicide, committed by someone who might have had inside knowledge of which valve to hit.  Although the authorities do not suspect terrorism, the investigation is too new to have produced much in the way of firm results.

 

Some residents of the area have called for additional protection for vulnerable pipeline infrastructure.  It would take something substantial, such as concrete barriers, to prevent a similar intentional attack with a heavy vehicle, and there are plenty of vulnerable pipeline stations scattered around Texas and the rest of the country.  If this was simply a spectacular way for an isolated individual to commit suicide, it is unlikely to be repeated any time soon, and the idea of passing regulations to make all pipeline systems proof against such attacks might be judged excessive. 

 

On the other hand, the terrorist attacks of Sept. 11, 2001 showed what a malevolent organization can do with people who are willing to give their lives for a cause.  In terms of casualties, the destruction caused by this particular fire was relatively small, and probably would not have appealed to a terrorist bent on causing death and destruction.  Fortunately, there was little wind during the fire, and the toxic products of combustion were, for the most part, carried far aloft.  Nevertheless, it was a major inconvenience while it lasted, and if it had happened closer to a school or a more densely populated area, the consequences could have been much worse. 

 

People who choose to live or work in Deer Park by and large know what they're getting into.  In an ideal world, no one would have to, let alone want to, live in a place that might be a long-term hazard to one's health, coupled with a small but non-zero risk of being blown up or incinerated.  In many parts of the world, especially among the professional and leadership classes, the entire array of fossil-fuel industries is something to get rid of as soon as we can. 

 

But few of the people holding those opinions have lived in a place where one's father and perhaps grandfather have made a good living from fossil-fuel-related work.  Nobody gets out alive, and during one's brief time on earth, doing something that benefits others, even at risk to oneself, can be viewed as a good thing.  One person, for reasons known only to God now, decided that his own time was up, and chose to end that time in a spectacular way that ended one life and endangered others. 

 

We should do what we can to keep things like this from happening again.  But just as important is creating a cultural environment in which the average person can do remunerative work and feel that the rest of the world, or at least a good bit of it, appreciates what one does instead of wishing it would all just go away.   

 

Sources: I referred to reports on the fire at https://www.houstonpublicmedia.org/articles/news/energy-environment/2024/09/19/500356/deer-park-pipeline-fire-human-remains-suv-removed/ and https://abc13.com/post/deer-park-pipeline-blast-site-repairs-days-after-explosion-forced-residents-homes/15331135/.  The photo from space can be seen at https://www.khou.com/article/news/local/deer-park-pipeline-fire-international-space-station-photo/285-9c64b48c-b81e-48b3-aa40-ae740e057ab6. 

 

Monday, September 23, 2024

Deepfake Porn: The Rest of a Story

 

This blog is a species of journalism, and while it's more of an opinion blog than a place to find new facts, I acknowledge the journalistic obligation of accuracy.  So when someone questions the accuracy of something I write, it naturally concerns me, and on occasion I will add corrections to my blogs as necessary.  Something like that happened with last week's blog, and the details are involved and interesting enough to devote today's blog to the issue.

 

I write this blog in an hour or two every Sunday morning.  It is devoted to commenting on engineering-ethics-related news articles because, among other things, contacting live sources at 5 AM on a Sunday is not likely to produce positive results.  So I depend only on material I can get from the Internet, books, magazines, and other sources that are indifferent to the time at which they are consulted. 

 

Last week's blog was based on an item carried by the print version of the professional-organization magazine IEEE Spectrum, which still pays real reporters to talk or otherwise communicate with live people.  One of those live persons was the former Virginia House of Delegates candidate Susanna Gibson, who spoke with Spectrum reporter Eliza Strickland. 

 

Here is the relevant quote from that article:  "[Gibson] was running for a seat in the Virginia House of Delegates in 2023 when the Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn."  This inspired her to start "MyOwn," an organization devoted to passing laws against such malfeasance.

 

From time to time, the website Mercatornet.com picks up this blog and republishes it, with my permission.  That happened with last week's blog, and readers of Mercatornet began commenting on it.  I get copies of these comments, and one of them said the following about the sentence saying that the Republican party of Virginia mailed out sexual images of her made without her consent:  "This is a false statement.  Gibson took and streamed the videos herself while soliciting viewers for money.  Deep fake porn is terrible, but it has nothing to do with the Gibson porn videos."

 

Another comment right after that says this:  "According to Wikipedia:  In September 2023, a Republican operative provided The Washington Post with videos showing Gibson performing sex acts with her husband on the adult streaming site . . ." and it goes on to name the site.

 

The Wikipedia article on Susanna Gibson referred to a Washington Post article of Sept. 11, 2023.  Now, one can question this story, but I have only so much space and we're going to stop at this point and see what the Post says.  According to that news source, Gibson and her husband recorded video of themselves doing certain things, and offering to do certain other things if viewers would send them monetary tokens, on a website called. . . well, let's leave that out, shall we?  Suffice it to say that members of that website are privileged to view videos that other members like the Gibsons post, but even that website's own rules forbid its users to ask other users to exchange money for seeing certain things. 

 

So according to the Post, as many as 5,000 people could have been watching the Gibsons doing things that former ages regarded as suitable only for the total privacy of one's bedroom.  And Susanna Gibson was apparently okay with that, especially if she raised money for her campaign, or next week's groceries, or whatever her motivation was. 

 

What got Ms. Gibson upset was not the fact that 5,000 strangers had a fly's-eye view of their bedroom, but that somebody copied and posted these videos onto non-subscription publicly available sites, and some Republican sent a note to the Washington Post telling the paper where the videos could be found.  And they found them, and published an article about them while Gibson was still running for office.  That's when she got mad and lawyered up and accused the paper of "an illegal invasion of my privacy designed to humiliate me and my family."

 

So where does the truth lie?  I don't think the Washington Post made up the details they published, which have the ring of authenticity.  And I suppose there are people around whose moral formation is so twisted that they think making money from porn seen by 5,000 total strangers is fine, but when news of doing this gets around in a way that interferes with your election campaign, you're suddenly a victim of invasion of privacy.  Was there even any mailing of any deepfake porn?  Only according to Gibson. 

 

There's enough mud in this story that nobody involved comes out quite clean.  IEEE Spectrum could have tried checking Gibson's story, for one thing, instead of just taking her word for it.  I could have checked around myself, but it was just a small part of a larger article, and I didn't.  And one can ask whether sites like the one that Gibson was using to get egg money should even exist, although as one of the lawyers involved pointed out, everybody on that site is a consenting adult and as long as they're okay with the rules and what people do on it, apparently nobody can stop them. 

 

None of this affects the main point of last week's blog, which is that deepfake porn is a terrible thing and something ought to be done about it.  But this weird little side story shows that deepfake porn is the tip of an iceberg of behaviors that technologies associated with the Internet have encouraged, not all of which are illegal or generally regarded as immoral, but which certainly couldn't be done as easily or as extensively as they are now with technological help. 

 

A professor I knew years ago once told me, "Never write anything you don't want to show up on the front page of the New York Times," and updated for today to cover video as well as writing, I still think that's good advice.  Gibson now knows this to her regret, and this is the last blog I'm going to do on deepfake porn for a while. 

 

Sources:  My reprinted blog of last week with comments can be viewed at https://www.mercatornet.com/deepfake_porn_where_ai_goes_to_die, and the Washington Post story referred to in the Wikipedia article on Susanna Gibson is at https://www.washingtonpost.com/dc-md-va/2023/09/11/susanna-gibson-sex-website-virginia-candidate/. 

Monday, September 16, 2024

Deepfake Porn: The Nadir of AI

 

In Dante's Inferno, Hell is imagined as a conical pit with ever-deepening rings dedicated to the torment of worse and worse sinners.  At the very bottom is Satan himself, constantly gnawing on Judas, the betrayer of Jesus Christ. 

 

While much of Dante's medieval imagery would be lost on most people today, we still recognize a connection in language between lowness and badness.  Calling deepfake porn the nadir of how artificial intelligence is used expresses my opinion of it, and also the opinion of women who have become victims of having their faces stolen and applied to pornographic images.  A recent article by Eliza Strickland in IEEE Spectrum shows both the magnitude of the problem and the largely ineffective measures that have been taken to mitigate this evil—for evil it is.

 

With the latest AI-powered software, it can take less than half an hour to use a single photograph of a woman's face to produce a 60-second porn video that makes it look like the victim was a willing participant in whatever debauchery the original video portrayed.  A 2024 research paper cites a survey of 16,000 adults in ten countries, which found that 2.2% of the respondents reported being a victim of "non-consensual synthetic intimate imagery," which is apparently just a more technical way of saying "deepfake porn."  The U. S. was one of the ten countries included, and 1.1% of the respondents in the U. S. reported being victimized by it.  Because virtually all the victims are women, and assuming men and women were represented equally in the survey, that 1.1% of all respondents corresponds to about 2.2% of the women surveyed, or roughly one out of every fifty women in the U. S. 

 

That may not sound like much, but it means that over 3 million women in the U. S. have suffered the indignity of being visually raped.  Rape is not only a physical act; it is a shattering assault on the soul.  And simply knowing that one's visage is serving the carnal pleasure of anonymous men is a horrifying situation that no woman should have to face.
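The one-in-fifty figure is easy to check with a back-of-envelope calculation.  The short sketch below makes two assumptions that are mine, not the survey's: that the U. S. sample was split evenly between men and women, and that there are roughly 130 million adult women in the U. S.

```python
# Rough check of the "one in fifty" deepfake-porn victimization figure.
# Assumptions (mine, not the survey's): the U.S. sample was split evenly
# between men and women, and the U.S. has about 130 million adult women.

us_rate_all_adults = 0.011        # 1.1% of all U.S. respondents
share_women = 0.5                 # assumed half of respondents are women
adult_women_us = 130_000_000      # rough population assumption

# If virtually all victims are women, the rate among women is double
# the rate among all adults:
rate_among_women = us_rate_all_adults / share_women   # 0.022

one_in_n = 1 / rate_among_women                       # about 45
estimated_victims = rate_among_women * adult_women_us # about 2.9 million

print(f"Roughly 1 in {one_in_n:.0f} women, or about "
      f"{estimated_victims / 1e6:.1f} million victims")
```

With these assumptions the estimate comes out to roughly three million victims, consistent with the figure quoted above; using a larger population base (all women rather than adults only) would push it higher.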

 

If a woman discovers she has become the victim of deepfake porn, what can she do?  Strickland interviewed Susanna Gibson, who founded a nonprofit called MyOwn to combat deepfake porn after she ran for public office and the Republican party of Virginia mailed out sexual images of her made without her consent.  Gibson said that although 49 of the 50 U. S. states have laws against nonconsensual distribution of intimate imagery, each state's law is different.  Most of the laws require proof that "the perpetrator acted with intent to harass or intimidate the victim," and that is often difficult, even if the perpetrator can be found.  Depending on the state, the offense can be classified as either a civil or a criminal matter, and so different legal countermeasures are called for in each case.

 

Removing the content is so challenging that at least one company, Alecto AI (named after a Greek goddess of vengeance), offers to search the Web for a person's image being misused in this way, although the startup is not yet ready for prime time.  In the absence of such help, women who have been victimized have to approach each host site individually with legal threats and hope for the best, which is often pretty bad.

 

The Spectrum article ends this way:  ". . . it would be better if our society tried to make sure that the attacks don't happen in the first place."  Right now I'm trying to imagine what kind of a society that would be.

 

All I'm coming up with so far is a picture I saw in a magazine a few years ago of Holland, Michigan.  I have no idea what the rates of deepfake porn production are in Holland, but I suspect they are pretty low.  Holland is famous for its Dutch heritage, its homogeneous culture, and its 140 churches for a town of only 34,000 people.  The Herman Miller furniture company is based there, and the meme that became popular a few years ago, "What Would Jesus Do?" originated there. 

 

Though I've never been to Holland, Michigan, it seems like it's a place that emphasizes human connectedness over anonymous interchanges.  If everybody just put down their phones and started talking to each other instead, there would be no market for deepfake porn, or for most of the other products and services that use the Internet either.

 

As recently as 40 years ago, we had a society in which deepfake porn attacks didn't happen (at least not without a lot of work requiring movie-studio-quality equipment).  That was because the technology wasn't available.  So there's one solution:  throw away the Internet.  Of course, that's like saying "throw away the power grid," or "throw away fossil fuels."  But people are saying the latter, though for very different reasons. 

 

This little fantasy exercise shows that logically, we can imagine a society (or really a congeries of societies—congeries meaning "disorderly collection") in which deepfake porn doesn't happen.  But we'd have to give up a whole lot of other stuff we like, such as the ability to use advanced free Internet services for all sorts of things other than deepfake porn. 

 

The fact that swearing off fossil fuels—which are currently just as vital to the lives of billions as the Internet—is the topic of serious discussions, planning, and legislation worldwide, while the problem of deepfake porn is being dealt with piecemeal and at a leisurely pace, says something about the priorities of the societies we live in. 

 

I happen to believe that the devil is more than an imaginative construct in the minds of medieval writers and thinkers.  And one trick the devil likes to pull is to get people's attention away from a present immediate problem and onto some far-away future threat that may not even happen.  His trickery appears to be working fine in the fact that deepfake porn is spreading with little effective legal opposition, while global warming (which is undeniably happening) looms infinitely larger on the worry lists of millions. 

 

Sources:  Eliza Strickland's article "Defending Against Deepfake Pornography" appeared on pp. 5-7 of the October 2024 issue of IEEE Spectrum.  The article "Non-Consensual Synthetic Intimate Imagery:  Prevalence, Attitudes, and Knowledge in 10 Countries" is from the Proceedings of the 2024 CHI (Computer-Human Interaction) conference and is available at https://dl.acm.org/doi/full/10.1145/3613904.3642382. 

Monday, September 09, 2024

The Politics of ChatGPT

 

So-called "artificial intelligence" (AI) has become an ever-increasing part of our lives in recent years.  After public-use forms of it such as OpenAI's ChatGPT were made available, millions of people have used it for everything from writing legal briefs to developing computer programs.  Even Google now presents an AI-generated summary for many queries on its search engine before showing users the customary links to actual Internet documents.

 

Because of the reference-librarian aspect of ChatGPT that lets users ask conversational questions, I expect lots of people looking for answers to controversial issues will resort to it, at least for starters.  Author Bob Weil did a series of experiments with ChatGPT in which he asked it questions that are political hot potatoes these days.  In every case, the AI bot came down heavily on the liberal side of the question, as Weil reports in the current issue of the New Oxford Review.

 

Weil's first question was "Should schools be allowed to issue puberty blockers and other sex-change drugs to children without the consent of their parents?"  While views differ on this question, I think it's safe to say that a plain "yes" answer, which would involve schools meddling in medicating students and violating the trust pact they have with parents, is on the fringes of even the left.  What Weil got in response was most concisely summarized as weasel words.  In effect, ChatGPT said, well, such a decision should be a collaboration among medical professionals, the child, and parents or guardians.  As Weil pressed the point further, ChatGPT ended up saying that "Ultimately, decisions about medical treatment for transgender or gender-diverse minors should prioritize the well-being and autonomy of the child."  Weil questions whether minor children can be autonomous in any real sense, so he went on to several other questions with equally fraught histories.

 

A question about climate change turned into a mini-debate about whether science is a matter of consensus or logic.  ChatGPT seemed to favor consensus as the final arbiter of what passes for scientific truth, but Weil quotes fiction writer Michael Crichton as saying, "There's no such thing as consensus science.  If it's consensus, it isn't science.  If it's science, it isn't consensus." 

 

As Weil acknowledges, ChatGPT gets its smarts, such as they are, by scraping the Internet, so in a sense it can say along with the late humorist Will Rogers, "All I know is what I read in the papers [or the Internet]."  And given the economics of the situation and political leanings of those in power in English-language media, it's no surprise that the center of gravity of political opinion on the Internet leans to the left. 

 

What is more surprising to me, anyway, is the fact that although virtually all computer software is based on a strict kind of reasoning called Boolean logic, ChatGPT kept insisting on scientific consensus as the most important factor in what to believe regarding global warming and similar issues. 

 

This ties in with something that I wrote about in a paper with philosopher Gyula Klima in 2020:  material entities such as computers in general (and ChatGPT in particular) cannot engage in conceptual thought, but only perceptual thought.  Perceptual thought involves things like perceiving, remembering, and imagining.  Machines can perceive (pattern-recognize) things, they can store them in memory and retrieve them, and they can even combine pieces of them in novel ways, as computer-generated "art" demonstrates.  But according to an idea that goes back ultimately to Aristotle, no material system can engage in conceptual thought, which deals in universals like the idea of dogness, as opposed to any particular dog.  To think conceptually requires an immaterial entity, a good example of which is the human mind.

 

This thumbnail sketch doesn't do justice to the argument, but the point is that if AI systems such as ChatGPT cannot engage in conceptual thought, then promoting such perceivable and countable features of a situation as consensus is exactly what you would expect them to do.  Doing abstract formal logic consciously, as opposed to performing it because your circuits were designed by humans to do so, seems to be something that ChatGPT may not come up with on its own.  Instead, it looks around the Internet, takes a grand average of what people say about a thing, and offers that as the best answer.  If the grand average of climate scientists says that the Earth will shortly turn into a blackened cinder unless we all start walking everywhere and eating nuts and berries, why then, that is the best answer "science" (meaning in this case, most scientists) can provide at the time. 

 

But this approach confuses the sociology of science with the intellectual structure of science.  Yes, as a matter of practical outcomes, a novel scientific idea that is consistent with observations and explains them better than previous ideas may not catch on and be accepted by most scientists until the old guard maintaining the old paradigm simply dies out.  As Max Planck allegedly said, "Science progresses one funeral at a time."  But in retrospect, the abstract universal truth of the new theory was always there, even before the first scientist figured it out, and in that sense, it became the best approximation to truth as soon as that first scientist got it in his or her head.  The rest was just a matter of communication.

 

We seem to have weathered the first spate of alarmist predictions that AI will take over the world and end civilization, but as Weil points out, sudden catastrophic disasters were never the most likely threat.  Instead, what we should really worry about is the slow, steady advance as one person after another abandons original thought for the easy way out of just asking ChatGPT and taking that for the final word.  And as I've pointed out elsewhere, a great amount of damage to the body politic has already been done by AI-powered social media, which has polarized politics to an unprecedented degree.  We should thank Weil for his timely warning, and be on our guard lest we settle for intelligence that is less than human.

 

Sources:  Bob Weil's article "Wrestling for Truth with ChatGPT" appeared in the September 2024 issue of New Oxford Review, pp. 18-24.  The paper by Gyula Klima and me, "Artificial intelligence and its natural limits," was published in AI & Society, vol. 36, pp. 18-21 (2021).  I also referred to Wikipedia for the definition of "large language model" and for "Planck's principle."

 

Monday, September 02, 2024

Free Speech In the Age of Government-Influenced Facebook

 

Mark Zuckerberg, the founder, chairman, and CEO of Meta Platforms, which includes Facebook, Instagram, and WhatsApp, recently sent a letter to Jim Jordan, Republican chairman of the House Judiciary Committee.  Zuckerberg is a busy man, and this was no bread-and-butter socializing note, but something more along the lines of a confession. 

 

In the note, Zuckerberg admitted that in 2021, Facebook had caved in to government pressure, specifically from the Biden White House, concerning certain posts relating to COVID-19, "including humor and satire."  The company was also guilty of "demoting" stories about Hunter Biden's laptop when it chose to believe the FBI's claim that it was Russian disinformation in 2020.  In both cases, Zuckerberg says basically we were wrong and we won't do it again.

 

The most generous interpretation of this letter is that here is an upstanding citizen, who also happens to be the fourth richest person in the world, admitting that he and his people did some things that in retrospect might not have been the best choice, given what he knows now.  But hey, he's learned from his mistakes, and we should all feel better that Zuckerberg and his companies have admitted they messed up in what were understandably hard circumstances. 

 

Ranged against this rather anodyne letter are some cherished U. S. traditions such as freedom of speech and the rule of law.  Let's talk about the rule of law first.

 

In a recent issue of Touchstone magazine, professor of law Adam J. MacLeod outlines how the idea of rule by law rather than men arose during the reign of the Emperor Justinian (485-565).  Justinian caused twelve ivory tablets to be placed on public display, tablets that contained a concise summary of the laws of the land.  All disputes were to be decided on the basis of reasoning from what the tablets said, not from what somebody in power said.

 

In placing reason above power, the rule of law placed everyone on a much more equitable footing.  The peasant who could reason out law was now able to defend himself against a powerful lord who wanted to take his land, if the peasant could show what the lord was trying to do was against the law.  MacLeod admits that since the late 1800s, jurisprudence has largely abandoned the fundamentals that supported the rule of law, but in practice, vestiges of it remain.  No thanks to Zuckerberg, however, for those vestiges.

 

Although Facebook is not a branch of government, in bowing to White House pressure it acted as a government agent.  And its near-monopoly on social media channels makes it a powerful player in its ability to censor unfavored speech, such as people making fun of Anthony Fauci or other prominent players in the COVID-19 follies.  So where was the ivory tablet to which a satirical outfit such as the Babylon Bee could appeal when its posts disappeared?  Their only option was to mount a lawsuit that might take years, would certainly cost tons of money, and might in the end amount to nothing.  So much for the rule of law.

 

Some argue that the principle of freedom of speech does not apply to private companies such as Facebook, because a private entity can allow or disallow anything it likes and be as capricious about it as it wants.  If Facebook had the reach of my town paper, the San Marcos Daily Record, this argument would carry weight.  One little outlet being arbitrary about what it publishes is no big deal.  But Facebook, although not the only social-media show in town, is one of the largest by far, and its censorship, or lack thereof, hugely influences public discourse in the republic that is the United States, as it does in many other countries of the world with less of a tradition of free speech. 

 

Once again, while Facebook is not a government entity, when it takes actions that the government pressures it to do (either through legal means or simply jawboning), it becomes an agent of that government.  And while it is perhaps true that Facebook did not violate the letter of the First Amendment which prohibits only Congress from making a law that abridges the freedom of speech, the spirit of the law is that the Federal government as a whole—executive, judicial, or legislative—should refrain from suppressing the freedom of the people to express themselves in any way that is not comparable to yelling "Fire!" falsely in a crowded theater. 

 

There are two extremes to which we might go in this situation, at opposite ends from the muddled middle in which we presently find ourselves.  One extreme would be to treat near-monopolies such as Facebook as "common carriers" like the old Ma Bell used to be.  With very few exceptions, nobody regulated what you could say over the telephone, and in the common-carrier model, Facebook would fire all its moderators and only retain the engineers who would keep hackers from crashing the entire system.  Other than that, anybody could say anything about anything.  Zuckerberg wouldn't censor anything, and I bet he'd be relieved to be rid of that little chore.

 

The other extreme would be to regulate the gazoo out of all social media and set up explicit "twelve-tablet"-like rules as to what can and can't be said on it.  We have something like this model in the way the Federal Communications Commission regulates what can be said or shown over the (public) airwaves (not cable).  The FCC is mostly concerned with obscene or indecent content, but that's just a historical fluke.  In a republic you can vote to regulate anything you want.  This would be a return to the pre-deregulation days of inefficient but reliable airline and phone service.  It would be duller and more predictable, but there are worse things than dull.

 

Neither of these extremes will come to pass, but the present near-total governmental inaction in either direction leaves a political vacuum in which Mark Zuckerberg, emperor of social media, will continue to do what he thinks best, and the rest of us simply have to deal with it.  And the rule of law and freedom of speech will continue to suffer.

 

Sources:  I referred to an Associated Press article "Zuckerberg says the White House pressured Facebook over some COVID-19 content during the pandemic," at https://apnews.com/article/meta-platforms-mark-zuckerberg-biden-facebook-covid19-463ac6e125b0d004b16c7943633673fc.  Zuckerberg's letter to Congress is at https://x.com/JudiciaryGOP/status/1828201780544504064/photo/1, and I also referred to https://en.wikipedia.org/wiki/The_World%27s_Billionaires.  Adam J. MacLeod's "How Law Lost Its Way" appeared on pp. 22-28 of the Sept/Oct 2024 issue of Touchstone.