
Monday, January 02, 2023

What? Twitter Neutral?

 

Back when I started this blog in 2006, the phrase "social media" was hardly used by anybody, according to Google Trends.  It began to climb above 1% of its current frequency of use around 2008, possibly in connection with the elections of that year, and has been climbing ever since. 

 

Twitter, the social-media platform that has become the default medium of choice for announcements by everyone from Presidents on down, was also founded in 2006.  From an obscure bit of techie-speak, it has grown into a routine, near-universal medium of expression, one that its leadership has claimed is as neutral as they can make it.  But a recent article by political scientist Wilfred Reilly details how that claim of neutrality is false. 

 

Specifically, in 2018, Twitter's CEO Jack Dorsey said, in response to accusations that the firm was silently suppressing or banning certain conservatives, that "We don’t shadow-ban conservatives — period."  Similar assertions were made by company officials testifying before Congress and in other public venues.

 

Then along comes reporter Bari Weiss, who last month used Elon Musk's recently released Twitter files to document dozens of cases in which Twitter silenced or suppressed certain accounts. 

 

Weiss found a variety of ways Twitter can cripple the reach of a given account.  One is making the person unsearchable, a kind of digital untouchability arguably more effective than the caste-based kind maintained in some cultures.  Encumbering tweets with warnings, suppressing the sharing of certain tweets—the list of technical means goes on and on.
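To make the mechanics concrete, here is a purely hypothetical sketch in Python of how flags like the "Search Blacklist" and "Do Not Amplify" labels Weiss reported might gate an account's reach.  None of the names or numbers below come from Twitter's actual code; the point is only that reach can be throttled invisibly, with no outright ban for anyone to point to.

    from dataclasses import dataclass

    @dataclass
    class AccountFlags:
        # Hypothetical internal flags, loosely modeled on the labels
        # ("Search Blacklist," "Trends Blacklist," "Do Not Amplify")
        # that appeared in the released Twitter files.
        search_blacklist: bool = False   # account never surfaces in search
        trends_blacklist: bool = False   # tweets excluded from trending topics
        do_not_amplify: bool = False     # tweets never algorithmically boosted

    def visible_in_search(flags: AccountFlags) -> bool:
        # The account still exists and can tweet, so nothing looks like
        # a ban, but searchers will simply never find it.
        return not flags.search_blacklist

    def distribution_weight(flags: AccountFlags) -> float:
        # A multiplier on how widely a tweet circulates; the user
        # sees no notice that anything has changed.
        weight = 1.0
        if flags.do_not_amplify:
            weight = 0.0
        elif flags.trends_blacklist:
            weight = 0.5
        return weight

The upshot is that two accounts tweeting identical content can have wildly different reach, and only one side of the transaction knows why.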

 

As wonky as I am about engineering details, I'd like to pull back to examine a broader question:  has Twitter behaved unethically in (a) saying they don't "shadow-ban" while clearly doing so, and (b) favoring some tweets and suppressing others?

 

We can dispense with (a) pretty quickly.  Unless Dorsey wants to play a Clintonesque definition game with the phrase "shadow-ban" ("It depends on what you mean by 'shadow-ban.'"), it's obvious that he and his corporate minions have lied repeatedly about how they treat certain accounts.  Companies lie about what they do for a variety of reasons.  Sometimes it's simple ignorance—nobody told the boss what was going on.  That seems hardly likely in this case.  Sometimes it's a deliberate strategy to avoid public embarrassment and financial loss.  That would certainly explain Dorsey's behavior.  Imagining what would have happened if he'd said, "Well, yes, we think we have a duty to the public to protect it from some opinions, and so we do shadow-ban," I can see why a lie would be appealing. 

 

Reilly makes the point that we shouldn't be surprised to find that Twitter or any other social-media outlet shapes its content to suit its own purposes, whether those be profit, a desire to shape the political landscape, or other things perceived as more valuable than telling the truth about what one is up to.  What is disappointing, if not surprising, is the ease and frequency with which Twitter lied about it, and the gullibility with which much of the dominant media believed the company and criticized so-called conspiracy theorists for claiming that certain stories and outlets—the Hunter Biden laptop episode comes to mind—were intentionally suppressed.  Musk's release of internal Twitter documents basically confirms many of the claims that were so scornfully dismissed before.

 

What about (b)?  Regardless of whether it is honest about the practice, should Twitter mold and shape its content by hyping some tweets and squashing others?  And we shouldn't limit the scope of the question to Twitter.  Facebook, search engines such as Google, and the whole megillah of social media and the way we look for information these days belong in the question too.

 

Most people would agree on certain outer limits to stuff that people post or tweet.  Blackmail, bullying, the lowest dregs of the human imagination—these things should not be allowed into the public arena.  The problem comes when you ask about the rest of what comes into a place like Twitter for potential publication. 

 

Strictly speaking, Twitter and virtually all other social media are private companies which are, and probably should remain, in control of what they publish.  Twitter is not like a public park, paid for with taxes and therefore available to any taxpayer who follows some basic rules.  It's more like a private estate in that sense, where once you are allowed in on the owner's terms, almost anything goes that doesn't break the law.  There is no intrinsic right to express yourself on Twitter or any other private platform.

 

The practical problem is that in replacing the old-fashioned print and one-way electronic media, social media have become the default public square.  Stuff that used to be announced in press conferences before cameras and reporters now gets tweeted routinely first, and press conferences come later, if at all. 

 

The legacy media repressed things silently too.  I can't recall the details, but I remember reading about some reporters who showed up at the house of a prominent public official to ask him something.  His wife came to the door drunk as a skunk, and the code of behavior back then (this was in the early 1960s, I think) made them ignore her state and behavior, and they went away without any story at all.  These days, of course, a live video of her would go viral from the reporter's phone, likely as not.

 

So the news that Twitter shapes tweets to suit itself isn't really news in the sense of a radical new thing happening.  What needs to happen is that people who use social media—and for most of us, that means readers rather than the relatively few producers of viral tweets—need to be aware that everything is biased:  Twitter, Facebook, Google, the newspapers, and even emails from your friends. 

 

With your friends, you probably know them well enough to allow for whatever biases they bring to the table.  And with Musk's revelations about Twitter, we are effectively learning more about Twitter's personality—what things it likes and what things you aren't likely to hear from it.  The bad part of this is that if you want to say something that Twitter doesn't like, you are going to have to find another way to say it.  And that's a problem, but as Reilly pointed out at the end of his article, there are always dictionaries and encyclopedias, and I'd add snail-mail to that, too.

 

Sources:  Wilfred Reilly's article "The Conspiracy Theories Were Real, and Other Revelations" appeared on the National Review website on Dec. 30, 2022, at https://www.nationalreview.com/2022/12/the-conspiracy-theories-were-real-and-other-revelations/. 

Monday, November 29, 2021

Judging the Judgments of AI

 

If New York City Mayor Bill de Blasio allows a new bill passed by the city council to go into effect, employers who use artificial-intelligence (AI) systems to evaluate potential hires will be obliged to conduct a yearly audit of their systems to show they are not discriminating with regard to race or gender.  Human-resources departments have turned to AI as a quick and apparently effective way to sift through the mountains of applications that Internet-based job searches often generate.  AI isn't limited to hiring, though, as increasing numbers of organizations are using it for job evaluations and other personnel-related functions. 

 

The bill would also give candidates the option of choosing an alternative process by which to be evaluated.  So if you don't want a computer evaluating you, you can ask for another opinion, although it isn't clear what form this alternative might take.  Nor is it clear what would keep every job applicant from asking for the alternative at the outset, though perhaps you would have to be rejected by the AI system first in order to request it.

 

In any case, New York City's proposed bill is one of the first pieces of legislation designed to address an increasingly prominent issue:  the question of unfair discrimination by AI systems. 

 

Anyone who has been paying attention to the progress of AI technology has heard horror stories about things as seemingly basic as facial recognition.  An article in the December issue of Scientific American mentions that MIT's Media Lab found facial-recognition technologies to be noticeably less accurate on non-white faces than on white ones. 

 

Those who defend AI can cite the old saying among software engineers:  "garbage in, garbage out."  The performance of an AI system is only as good as the set of training data that it uses to "learn" how to do its job.  If the software's designers select a training database that is short on non-white faces, for example, or women, or other groups that have historically been discriminated against unfairly, then its performance will probably be inferior when it deals with people from those groups in reality.  So one answer to discriminatory outcomes from AI is to improve the training data pools with special attention being paid to minority groups.
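One way to check for this kind of problem is simply to measure a system's accuracy separately for each group it will encounter.  Here is a minimal sketch in Python of such a per-group audit; the names (model, X_test, y_test, groups) are hypothetical stand-ins for a trained classifier and a labeled test set, not any particular vendor's system.

    from collections import defaultdict

    def accuracy_by_group(model, X_test, y_test, groups):
        # Tally correct predictions separately for each demographic group.
        hits = defaultdict(int)
        totals = defaultdict(int)
        for x, y, g in zip(X_test, y_test, groups):
            totals[g] += 1
            if model.predict(x) == y:
                hits[g] += 1
        return {g: hits[g] / totals[g] for g in totals}

A large gap between groups, say 99 percent accuracy on one and 85 percent on another, is the quantitative signature of the training-data problem just described.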

 

In implementing the proposed New York City legislation, someone is going to have to set standards for the non-discrimination audits.  Out of a pool of 100 women and 100 men who are otherwise equally qualified, on average, what will the AI system have to do in order to be judged non-discriminatory?  Picking 10 men and no women would be ruled out of bounds, I'm pretty sure.  But what about four women and six men?  Or six women and four men?  At what point will it be viewed as discriminating against men?  Or do the people enforcing the law have ideological biases that make them consider discriminating against men to be impossible?  So far, none of these questions have been answered.
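The bill itself doesn't say, but one standard regulators could borrow is the "four-fifths rule" long used in U.S. employment-discrimination law, under which a selection rate for any group below 80 percent of the highest group's rate is treated as evidence of adverse impact.  Here is a minimal sketch in Python applying it to the hypothetical pools above; using this rule for AI audits is my assumption, not anything in the New York bill.

    def adverse_impact_ratio(selected_a, pool_a, selected_b, pool_b):
        # Ratio of the lower selection rate to the higher one; under the
        # four-fifths rule, a ratio below 0.8 draws scrutiny.
        rate_a = selected_a / pool_a
        rate_b = selected_b / pool_b
        low, high = sorted([rate_a, rate_b])
        return low / high

    # 100 equally qualified women and 100 equally qualified men:
    print(adverse_impact_ratio(0, 100, 10, 100))  # 10 men, no women: 0.0, fails
    print(adverse_impact_ratio(4, 100, 6, 100))   # 4 women, 6 men: 0.67, fails
    print(adverse_impact_ratio(5, 100, 6, 100))   # 5 women, 6 men: 0.83, passes

Note that the rule is symmetric: picking six women and four men fails by the same arithmetic, so at least this standard treats discrimination against men as possible.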

 

Perhaps the best feature of the proposed law is not the annual-audit provision, but the conferral of the right to request an alternative evaluation process.  There is a trend in business these days to weed out any function or operation that up to now has been done by people, and replace the people with software.  There are huge sectors of business operations where this transition is well-nigh complete. 

 

Credit ratings, for example, are accepted by nearly everyone, lenders and borrowers alike, and are generated almost entirely by algorithms.  The difference between this process and AI systems is that, in principle at least, one can ask to see the equations that make up one's credit rating, although I suspect hardly anyone does.  The point is that if you ask how your credit rating was arrived at, someone should be able to tell you how it was done.
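What makes such a rating explainable is its structure.  A minimal sketch in Python of an invented linear score shows the idea; the factors and weights are made up for illustration and bear no relation to any real credit model.

    WEIGHTS = {
        "on_time_payment_rate": 300.0,   # fraction of payments made on time
        "credit_utilization":  -150.0,   # fraction of available credit in use
        "years_of_history":       5.0,   # length of credit history in years
    }

    def score(applicant):
        # Each factor's contribution is a simple product, so the whole
        # score can be itemized for anyone who asks how it was computed.
        contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
        return sum(contributions.values()), contributions

    total, breakdown = score({"on_time_payment_rate": 0.98,
                              "credit_utilization": 0.30,
                              "years_of_history": 12})
    print(total, breakdown)

A deep neural network offers no such itemization, which is exactly the contrast drawn next.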

 

But AI is a different breed of cat.  For the newest and most effective kinds (so-called "deep neural networks"), even the software developers can't tell you how the system arrives at a given decision.  If it's opaque to its developers, the rest of us can give up any hope of understanding how it works. 

 

Being considered for a job isn't the same as being tried for a crime, but there are useful parallels.  In both cases, one's past is being judged in a way that will affect one's future.  One of the most beneficial traditions of English common law is trial by a jury of one's peers.  Although trial by jury has fallen on hard times because the legal system has gone in for the same efficiency measures the business world favors (some judges are even using AI to help them decide sentence terms), the principle that a human being should ultimately be judged not by a machine but by other human beings is one that we abandon at our peril.

 

Theologians recognize that many heresies consist not so much in stating something false as in overemphasizing one true idea at the expense of other true ideas.  If we make efficiency a goal rather than simply a means to more important goals, we are going to run roughshod over other, more important principles and practices that have given rise to modern Western civilization—the right to be judged by one's peers, for example, instead of by an opaque and all-powerful algorithm. 

 

New York's city council is right to recognize that AI personnel evaluation can be unfair.  Whether they have found the best way to deal with the problem is an open question.  But at least they acknowledge that all is not well with an AI-dominated future, and that something must be done before we get so used to it that it's too late to recover what we've lost.

 

Sources:  The AP news story by Matt O'Brien entitled "NYC aims to be first to rein in AI hiring tools" appeared on Nov. 19 at https://apnews.com/article/technology-business-race-and-ethnicity-racial-injustice-artificial-intelligence-2fe8d3ef7008d299d9d810f0c0f7905d.  The Scientific American article "Spying On Your Emotions" by John McQuaid (pp. 40-47) was in the December 2021 issue.