Monday, July 26, 2021

Facebook Is Watching Your Friends

 

Suppose you went to a party with a group of friends, one of whom is a rather outspoken person we'll call Ms. A.  After an hour or so, someone you never met before comes up to you and says, "I couldn't help noticing that you came here with Ms. A.  Aren't you concerned that her views are a little extreme?  I can give you the number of somebody who can help her." 

 

How would you react?

 

I don't know about you, but my first thought would be, "Who the — are you to be judging my friend?"  The whole thing smacks of authoritarian control and monitoring on the part of the snoop who expressed concern about Ms. A.  Yet in early July, Facebook announced that it was going to do a trial of a system that essentially does that very thing.  And I know someone it's already happened to.

 

Here is the way Facebook explains what it's doing, as reported by Reuters on July 1:  "This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk."  That sounds reasonable—after all, groups such as Al Qaeda used platforms like Facebook to recruit U. S. citizens to their cause, and if there's something Facebook can do to keep that from happening again, it sounds like it's worth doing.

 

I'm involved in a group that meets monthly to discuss an article from First Things, a journal of religion and public life.  Most of us are over 50, and a more harmless group of non-radicals is hard to imagine.  For a time, a woman attended who later joined the Roman Catholic Church.  From my limited interactions with her, I would say she was conservative, but not radically so, and unusually articulate about various social problems, including abortion.  She no longer attends our discussion group, but some of us still follow her on Facebook.

 

Her Facebook followers were surprised the other day when Facebook asked them if they thought the Catholic woman was becoming an extremist.  I don't use Facebook and so I can't say what material she might have posted which inspired Facebook to ask this question.  But based on what I know about the woman, at the very least Facebook is wasting its time.  And more seriously, this anonymous action on the part of a powerful corporation exerts a chilling effect on the tattered, bedraggled thing we once called free speech.

 

The fly in this otherwise admirable-sounding ointment of extremism prevention is the question of just what counts as "extremist."  One person's extremist is another's enthusiast.  Also on July 1, Fox News reported the comments of several people who had received such warnings, which typically read  "Are you concerned that someone you know is becoming an extremist?" followed by an option to "Get Help" which leads to an organization called "Life After Hate."  One user who received this type of notice neatly summed up the dilemma that Facebook faces: "'Confidential help is available?' Who do they think they are? Either they’re a publisher and a political platform legally liable for every bit of content they host, or they need to STAY OUT OF THE WAY."

 

The reason Facebook isn't liable for every bit of content they host, as a conventional newspaper publisher would be, is Section 230 of the Communications Decency Act, which exempts platform hosts from being liable for what third parties place on their platforms.  Perhaps in the early days of the Internet, this protection was needed in order to encourage investment in young, struggling Internet firms (the 1996 law in fact predates both Google and Facebook).  But now that social media constitute a major, if not the primary, source of political and cultural news in the U. S., the pretense that they are insignificant people-connectors who just barely make enough money from ads to stay in business and need special protection from the government is looking more ridiculous every day. 

 

Not only is Facebook deciding who is an extremist, it's getting help deciding what truth is from the White House.  Biden Administration press secretary Jen Psaki said on July 15 that the White House is "identifying 'problematic' posts for Facebook to censor because they contain 'misinformation' about COVID-19."

 

Again, this sounds reasonable at first glance.  Some things that people are saying on Facebook about COVID-19 and vaccinations for it are ludicrous and harmful.  But what happened to the old saying "I disapprove of what you say, but I will defend to the death your right to say it"?  According to Wikipedia, this quotation comes from a biography of Voltaire by one Evelyn Beatrice Hall, writing as S. G. Tallentyre. 

 

Hall was trying to illustrate one of Voltaire's principles:  a radical (there's that word again) belief in free speech, one of the pillars of what should now be called classical liberalism, along with democratic governance and freedom of religion.  The American Civil Liberties Union adhered to radical free-speech principles until a few decades ago, even defending such scurrilous extremists as a neo-Nazi group that wanted to stage a march in a Chicago suburb where many Holocaust survivors lived.  This was in 1978, and although the ACLU itself has a page on its website describing this episode, I think it's fair to say that the current ACLU is finding other things to do with its time.

 

Facebook wants to have things both ways.  They want to receive plaudits as the platform for the little guy where a thousand free-speech flowers bloom, and they also want to avoid opprobrium (and lawsuits, and fines) for hosting material that is illegal, libelous, or harmful in someone's eyes.  But as the gentleman quoted above implied, you can't have freedom without responsibility.  Editing or censoring one thing on Facebook means the whole thing is now an edited entity.  You can't be just a little bit pregnant, and you can't pretend a platform is free if parts of it aren't—especially when the parts that aren't change from day to day, or from White House instruction to embarrassing news report. 

 

Sources:  The Reuters report describing Facebook's test program advising about extremism was published on July 1, 2021 at https://www.reuters.com/technology/facebook-asks-are-your-friends-becoming-extremists-2021-07-01/.  I also consulted a Fox News report at https://www.foxnews.com/media/facebook-warns-users-have-been-exposed-harmful-extremists.  The New York Post reported on Jen Psaki's comment about the White House advising Facebook on COVID-19 "misinformation" at https://nypost.com/2021/07/15/white-house-flagging-posts-for-facebook-to-censor-due-to-covid-19-misinformation/.  I also referred to Wikipedia articles on Evelyn Beatrice Hall and Voltaire.

Monday, July 19, 2021

Helping the Mute to Speak

 

Losing the ability to speak is tragic, especially if one's mind is otherwise intact.  Various diseases from encephalitis to stroke to ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig's disease) can destroy the human speech system.  As long as some motor ability is left, victims can communicate by pointing to a sequence of letters on a board or by similar tedious means, but sometimes even that is no longer possible if the disease progresses.  Brain researchers have long sought a way to use the neural impulses in the brain's speech area to actuate an external "speech neuroprosthetic"—a machine that interprets the brain's impulses as speech.  And now, Dr. Edward Chang of the University of California San Francisco and his colleagues have done it.

 

Fifteen years ago, the patient they worked with (now in his 30s) suffered a severe brain-stem stroke that left him mostly paralyzed and unable to speak.  He has communicated since then by moving his head so that a pointer attached to a cap indicates letters on a board.  After extensive experimentation with epilepsy surgery patients to determine which regions of the brain carried the most significant signals pertaining to speech, Chang's team implanted electrodes in the mute patient's brain and connected them to some sophisticated signal-processing systems, which probably involved trainable artificial-intelligence programs.  Then they asked the man to try saying specific sentences and noted the resulting signal patterns.  Eventually, the system was able to recognize these patterns when the man merely thought them with the intention of speaking.  A phrase or sentence takes a few seconds to appear after the patient forms it, but that is already faster than pointing to letters on a board.  Chang says there are many improvements to be made, but the demonstration shows that at least in one case, a speech neuroprosthesis can work.
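Stripped of the immense medical and engineering difficulty, the training loop described above (ask for a known sentence, record the resulting signal pattern, then match new activity against the learned patterns) is a form of supervised pattern recognition.  Here is a toy sketch of that idea only; the four-number "neural" patterns, the tiny vocabulary, and the nearest-template classifier are all invented for illustration and bear no resemblance to the actual UCSF system.

```python
# Toy illustration of decoding attempted speech as supervised pattern
# matching.  All data here is invented; a real system extracts features
# from implanted-electrode recordings.
import random

random.seed(42)

# Pretend each word evokes a characteristic 4-number pattern of activity.
BASE_PATTERN = {
    "water": [0.9, 0.1, 0.3, 0.7],
    "hello": [0.2, 0.8, 0.6, 0.1],
    "yes":   [0.5, 0.5, 0.9, 0.2],
    "no":    [0.1, 0.3, 0.2, 0.9],
}

def record_attempt(word, noise=0.2):
    """Simulate one trial: the word's base pattern plus per-trial noise."""
    return [x + random.gauss(0, noise) for x in BASE_PATTERN[word]]

# "Training": average many attempted productions of each word into a template.
templates = {
    word: [sum(col) / 50.0
           for col in zip(*(record_attempt(word) for _ in range(50)))]
    for word in BASE_PATTERN
}

def decode(features):
    """Classify a new trial as the nearest learned template (squared distance)."""
    def sqdist(word):
        return sum((a - b) ** 2 for a, b in zip(features, templates[word]))
    return min(templates, key=sqdist)

# Decode a fresh attempt at a word the system was trained on.
print(decode(record_attempt("hello")))
```

The real achievement, of course, lies in everything this sketch omits: finding usable signals at all, extracting features from noisy electrode data, and scaling from four words to full sentences.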

 

This is a truly remarkable achievement, and in a blog usually devoted to bad news of one kind or another I thought it would be nice to look at something positive for a change.  At the same time, this feat raises all kinds of questions that medical advances raise.  How much would it cost if such a system were commercialized?  How safe is it to go around with wires implanted in your brain?  (Probably not very.)  Is there a less invasive means of detecting the brain impulses than wires directly on the brain?  Who gets to decide which of the thousands of mute people whose disability came about after they learned to talk will get a chance to use it? 

 

Possibly some lessons can be learned from the analogous, but not quite so invasive, practice of cochlear implantation to remedy profound deafness.  The Wikipedia article on cochlear implants says that as of 2016, about 600,000 people worldwide had received them.  The average cost for the surgery in the U. S. is about $100,000, but if it works (and most of the time it does), society saves a substantial fraction of that cost because expensive special education is no longer needed for the patient.  The first cochlear implants were performed in the 1970s, so the procedure can be said to be fairly routine by now.

 

Given the more invasive nature of the speech neuroprosthesis developed in San Francisco, we can suppose the procedure will cost more than a cochlear implant.  But years of development work lie ahead, and there may be issues or complications that arise along the way.  Let's suppose that the R&D goes smoothly and in another decade we have commercial speech neuroprostheses available.  Will that be a net benefit to the patients and to society?

 

This is just a specific example of the judgment called for when any society chooses to allocate scarce resources such as medical care.  One factor the U. S. apparently has going for it in comparison with many other countries is that there are strong financial incentives for companies to spend what it takes to develop advanced new medical products and procedures.  The free market has its downsides, certainly, but the semi-private nature of the way healthcare is paid for in the U. S., although deeply flawed, does have this redeeming feature. 

 

On the other hand, it's likely that not everyone who could benefit from a speech neuroprosthetic will get one.  Some people simply lived and died too early to benefit, but that doesn't mean they inevitably passed their lives in frustration and meaningless existence. 

 

When I taught in Massachusetts I would often have lunch at the student center in the central hotel complex at the University of Massachusetts Amherst, and quite a few times there I saw a married couple, Ruth Sienkiewicz-Mercer and Norman Mercer.  They were easy to spot because they were both shorter than four feet, wheelchair-bound, and Ruth was completely unable to talk. 

 

When she was less than a year old, she contracted encephalitis, which left her with cerebral palsy that severely impaired her control of her body except for her face and digestive system.

Her family raised her until she was eleven, when financial difficulties led them to send her to the Belchertown State School, a warehousing facility for such hopeless cases.  For years she was treated as an idiot, but finally a sympathetic staff member developed a word board for her.  She became one of the high-functioning residents and was eventually able to move to her own apartment and marry another ex-patient of the school, Norman Mercer.  She then became a disability-rights activist, traveling across America and influencing governments to close warehousing institutions such as the State School, which eventually closed in 1992.  With the help of a co-author, she wrote a book about her experiences before she passed away in 1998.

 

Ruth Sienkiewicz-Mercer didn't need a speech neuroprosthesis to do what she did.  Her indomitable spirit and the help of sympathetic bystanders enabled a supposedly disabled person to achieve things that most normally-abled people don't do.  If you believe that Y'shua the Nazarene was able to make the dumb speak by simply telling them to, he made it clear that the healings were not the main point of what he came to do either.  They helped the sufferers who came to be healed, certainly.  But they were only means to an end.  The end itself, the point of it all, was the relationship created between the Healer and the healed.  And that is a lesson we shouldn't forget.

 

Sources:  I thank my wife for pointing out to me the article in The Guardian entitled "Paralyzed man’s brain waves turned into sentences on computer in medical first" at https://www.theguardian.com/science/2021/jul/15/paralyzed-man-brain-waves-sentences-computer-research.  I also referred to the Wikipedia articles on cochlear implants and Ruth Sienkiewicz-Mercer.  Her book, I Raise My Eyes to Say Yes, was co-authored with Steven Kaplan and published in 1996.

Monday, July 12, 2021

Social Media and the Fourth Deadly Sin

 

In case you're wondering, the fourth deadly sin in the classical lists of seven sins is envy.  Sociologist Anne Hendershott has written a whole book about it—The Politics of Envy—and the editors of the Spring 2021 issue of The Human Life Review excerpted part of a chapter that shows how social media sites such as Facebook and Instagram leverage this particular deadly sin to their advantage.  But not always to the advantage of the users, it turns out.

 

First, let's distinguish between envy and jealousy.  Although the two words are now used almost interchangeably, envy originally meant the feeling of resentment or anguish one person has when faced with another person's superior possessions or characteristics.  Envy requires a specific person to be envied, while a jealous husband, for instance, may not be worried about any particular other man interested in his wife—he's just suspicious of all of them. 

 

Hendershott points out that while envy has always been a part of the human condition, in times past it was limited to people you knew, or knew about.  But in the digital age, there are as many targets of potential envy as there are people on Facebook, and the opportunities for envy are multiplied indefinitely. 

 

Covetousness is related to envy, but in addition has the honor to be prohibited explicitly by God in Commandment No. 10 (or 9, if you're Catholic).  Advertisers have been exploiting covetousness for centuries, but until recently they had to do the tedious work of creating an artificially attractive and enviable portrayal for each ad:  "Here's this good-looking guy who just got the gal, and if you used his kind of toothpaste you could be where he is now." 

 

But with social media, all the Facebook techies have to do now is provide the proper tools, and people will naturally put their best faces forward in what Hendershott calls the "highlight-reel" version of themselves:  the best-looking picture taken at the party, the most expensive vacation setting, etc.  And envy is a strong motivator for certain other kinds of people to go and look at the enviable types, eat their hearts out, and in so doing add advertising revenue to the coffers of big tech.

 

What does envy do to the envious?  One would not expect a lot of positive effects, and several studies bear this out.  Hendershott says that a 2015 study carried out by a Danish organization called the Happiness Research Institute found that people who take a break from social media report being happier.  Besides some studies that show a general inverse correlation between social-media use and happiness, another study of Danish teenagers found no strong correlation between the hours of social-media use and happiness.  Digging deeper, the researchers did find that the way social media was used did influence happiness. 

 

When they divided the users into active ones who posted a lot of material themselves, and passive ones who just poked around viewing the postings of others, then a big difference showed up.  The active users tended to be happier than the passive ones who just looked at friends' pages without posting much of their own lives.

 

Certain aspects of modern life seem to be inextricably tied to certain classic sins in a way that defeats their separation.  Where would modern capitalism be without greed, for instance?  Or advertising without covetousness?  Does this mean we simply have to shrug our shoulders and accept the harm caused by media-induced sin?  Or could something be done about it?

 

Just in the last week or so, I have read in different places some proposals to regulate social media, or at least the artificial-intelligence algorithms that are used to train users to be more reliable and complacent consumers of targeted advertising.  And I have also read strong arguments against such regulation, based mainly on the idea that any regulation of social-media content beyond what the private platform operators do themselves violates the First Amendment's protection of free speech. 

 

But when someone makes a one-to-one correspondence between, say, Patrick Henry making a speech before a crowd of fellow Virginians in 1774, and Cristiano Ronaldo, a Portuguese professional athlete who plays "football" (soccer in the U. S.) and is currently, according to Wikipedia, the most-followed person on Facebook with 148 million followers, I'd say we're definitely in apples-and-oranges territory, or maybe even apples and geodes.  What the comparison leaves out is the incredibly sophisticated AI-based machinery that Facebook and company use to train and otherwise manipulate both followers and leaders in modern social media—machinery that was entirely absent in the days before the Internet and electronic media in general.

 

The comparison that makes more sense to me is one between what the U. S. Food and Drug Administration (FDA) does today and what might be done by a similar governmental entity in the future.  The FDA came about largely because the sophisticated chemical adulterants used by manufacturers in the early 1900s were not something that your average consumer was even aware of, let alone could defend himself against.  So the FDA was charged with using the same advanced chemical and biological science available to the manufacturers to make sure that the foods and drugs sold to the public were not harmful, using a negotiated and agreed-on definition of harm.

 

It seems to me that given enough political good will (always a scarce commodity, but especially so today), we could define objective levels of psychological harm:  depression, anxiety, even rates of suicide.  And we might be able to determine to what extent these conditions were attributable, not simply to the user posts on social media, but the sophisticated methods used to heighten their impact on certain people who are thereby harmed.  And then we could go in and say to the tech companies, "This set of algorithms is okay, but that set is off-limits, because it leads to X suicides, Y instances of depression, Z cases of porn-induced erectile dysfunction, etc."  In order to do any good, the regulatory agency would have to have a cadre of sophisticated techie types just as smart as the ones the private companies have, and that might be hard.  But only such types can see through the opaque fog of algorithms to tell what's going on, and which ones are harmful.

 

Sources:  A portion of Chapter 9 of Anne Hendershott's The Politics of Envy (Crisis Publications, 2020) was reprinted as Appendix A (pp. 87-92) of the Spring 2021 edition of The Human Life Review.  I also referred to the Wikipedia pages on envy, jealousy, and Cristiano Ronaldo, who I had never heard of before today.

Monday, July 05, 2021

High-Tech Surveillance and Control: Citizens or Sheep?

 

In recent testimony before the U. S. Senate's Subcommittee on Antitrust, Competition Policy and Consumer Rights, philosopher and author Matthew B. Crawford reminded his audience that the Revolutionary War was fought for principles that are now once again in jeopardy.  In a summary of his remarks posted on the website of the technology and society journal The New Atlantis, Crawford pointed out that the coming era of smart-home technology carries a price that people may not fully understand.  He used the example of the Sleep Number bed, which includes a phone app that must be installed to gain the full benefits of the bed.  To use the app, the user must agree to a sixteen-page "privacy policy" which is in reality anything but. 

 

The policy, which a vanishingly small number of users probably read in its entirety, talks about all the things that you allow the Sleep Number people to do with information about what is probably the most intimate and private sector of your life—the time you spend in bed, alone or with someone else.  Although this part of the policy has been deleted, one section used to state that the app was also allowed to transmit "audio in your room."  And the tracking can go on even after you cancel your Sleep Number account.

 

Crawford pointed out that this is only one prominent example of the multifarious ways that big tech is invading all aspects of our lives to mine data that is then used to "determine the world that is presented to us" and to "determine our standing in the reputational economy" with regard to credit ratings, for example.  One need only look to China to see how a malevolent government can employ high-tech means to rank its citizens by standards that it alone determines.  But the difference between what the Chinese Communist Party does and what Experian or Sleep Number does is only a matter of degree, not of kind.

 

The ability to take private information from us and to use it in ways that ultimately benefit corporations rather than individuals, without the individuals involved having any effective say over the process, is a good definition of tyranny.  Substitute "the state" for "corporations" and you have a good description of the way life was in the old Soviet Union.  And at least the USSR was honest enough to admit that the state took priority over the individual. 

 

But in the picture painted by our supposedly free-market big-tech firms, it's all positives and no negatives:  everything is "free" and all they're trying to do is give you a personalized world that will make you happier—or at least more profitable to them.

 

Politics is simply the conduct of public affairs, as a seventh-grade civics teacher once told me.  Facebook, Google, Apple, and their ilk are private only in the sense that they are not recognized as sovereign states (yet).  But if you measure the degree to which an organization is public by number of people affected, money that flows through it, or its power to influence lives, Google is a lot more of a public entity than the United States or even the United Nations. 

 

As Crawford reminded the senators, the Revolutionary War was fought, among other things, over the way the government of Britain allowed corporations to control trade with the colonies.  Simply to go about one's ordinary life, one was forced to deal with certain British companies and had no choice in the matter, even if their prices or the quality of their goods and services were way out of line with reasonable expectations.

 

Surely there is a parallel between that situation and the fact that getting up in the morning, driving to work, and doing one's job, to say nothing of communicating with one's family and friends, is virtually impossible these days without submitting oneself to the powers of big-tech firms that are doing opaque things to one's data and one's online world.

 

Crawford drew an apt parallel between the way big-tech firms operate and the governmental trend of power flowing away from Congress—the only branch of government designed to represent the people in a direct way—and the administrative state of experts who are not beholden to anyone but their own kind.  Both are signs that whatever control the populace used to exert over either government or private business is waning, while power is concentrating in the hands of an elite group of overlords and their technological minions. 

 

He left the senators with a probing question:  "Who rules?"  Is it the people?  Congress?  The executive branch?  Or is it a group of mostly faceless high-tech employees and their billionaire controllers who, while sometimes intending to do the right thing, will not let anything get in the way of continuing to make money in the most successful way possible?  And one can't really blame them for that.  But one can take steps to safeguard what used to be the most cherished aspect of living in the United States:  freedom from tyranny. 

 

Crawford made no suggestions as to how this might be accomplished.  In the space I have remaining, I will throw something out that I'm sure is full of holes, but might get the ball rolling.

 

It used to be the case that banks could not operate outside of the state in which they were chartered.  In 1927, the federal McFadden Act restricted banks from opening branches across state lines.  That law did nothing to stop the Great Depression, but it did preserve the regional nature of the banking industry until the Riegle-Neal Act of 1994 rescinded most of those restrictions. 

 

One thing we could try would be to pass a kind of McFadden Act for online companies.  As legal corporations, they could not operate as a single entity, but only as state-chartered organizations.  You wouldn't have one Google:  you'd have fifty Googles, each with a physical presence in its own state.  Obviously, for Google to be Google, you'd have to allow them to exchange data somehow, but not money.  If something like this were proposed, I'm sure Google's corporate heads would scream bloody murder and say it would destroy the company, but that's just what we heard when the Bell System breakup came along.  And look what happened:  not only did phone service not vanish, it got a great deal better.  And something similar might happen with Google.  We'll never know until we try.

 

Sources:  Matthew Crawford, author of Shop Class as Soulcraft:  An Inquiry Into the Value of Work, wrote "Defying the Data Priests," posted at https://www.thenewatlantis.com/publications/defying-the-data-priests.  I also referred to an article on interstate banking at https://www.federalreservehistory.org/essays/riegle-neal-act-of-1994.