Monday, May 08, 2017

The False Promise of Digital Storage for Posterity


Now that almost every book, photograph, artwork, article, news item, story, drama, or film is published digitally, we are supposed to rejoice that the old-fashioned, imperfect, and corruptible analog forms of these media—paper that ages, film that deteriorates—have been superseded by the ubiquitous bit, which preserves data flawlessly—that is, until it doesn't.  A recent article in the engineering magazine IEEE Spectrum highlights the problems that Hollywood is having in simply keeping around usable digital copies of its old films.  And "old" in this sense can mean only three or four years ago.

It's not like there isn't a standard way of preserving digital copies of motion pictures.  About twenty years ago, a consortium of companies got together and agreed on an open standard for magnetic-tape versions of movies and other large-volume digital material called "linear tape-open" or LTO.  If you've never heard of it, welcome to the club.  An LTO-7 cartridge is a plastic box about four inches (10 cm) on a side and a little less than an inch thick.  Inside is a reel of half-inch-wide (12 mm) tape about three thousand feet (960 m) long, and it can hold up to 6 terabytes (6 × 10^12 bytes) of uncompressed data.  Costing a little more than a hundred bucks, each cartridge is guaranteed to last at least 30 years—physically.

The trouble is, the same companies that came up with the LTO standard are part of the universal high-tech digital conspiracy to reinvent the world every two years.  Keeping something the same out of respect for the simple idea that permanence is a virtue is an entirely foreign concept to them.  Accordingly, over the last twenty years there have been seven generations of LTO tapes, and each generation has been backward-compatible with only the one or two generations before it.

What this means for movie production companies that simply want to preserve their works digitally is this:  every three or four years at the outside, they have to copy everything they've got onto the new generation of LTO tapes.  And these tapes don't run very fast—it's not like burning a new flash drive.  Transferring an entire archive can take months and cost millions of dollars, and the customers are at the mercy of an LTO standard that keeps changing.
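
To get a feel for why a migration takes months, here is a rough back-of-the-envelope sketch in Python.  The archive size, per-drive transfer rate, and number of drives are illustrative assumptions of mine, not figures from the Spectrum article:

    # Rough estimate of how long a full tape-to-tape archive migration takes.
    # All numbers are assumptions for illustration, not from the article.
    ARCHIVE_BYTES = 10e15       # a 10-petabyte studio archive (assumed)
    DRIVE_RATE_BPS = 300e6      # ~300 MB/s sustained per tape drive (assumed)
    DRIVES = 4                  # tape drives running in parallel (assumed)

    seconds = ARCHIVE_BYTES / (DRIVE_RATE_BPS * DRIVES)
    days = seconds / 86400
    print(f"Estimated copy time: {days:.0f} days ({days / 30:.1f} months)")

With those numbers the raw copy alone takes roughly three months, before any verification, cataloging, or re-copying of tapes that fail—which is why studios talk about migrations in terms of months and millions of dollars.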

According to the Spectrum article, Warner Brothers Studios has turned over the job of preserving its films to specialist film archivists at the University of Southern California, which already had a well-funded operation to preserve video interviews with Holocaust survivors.  But USC faces the same digital-obsolescence issues that the studios are dealing with, and one USC archivist calls LTO tapes "archive heroin"—it's a thrill compared to the old analog archive methods, but it gets to be an expensive habit after a while.

And that gets us to a more fundamental question:  given limited resources, what should each generation preserve, in terms of intellectual output, for the next one?  And how should preservation happen?

For most of recorded history, preservation of old documents was left mostly to chance.  Now and then a forward-looking monarch would establish a library, such as the famous one in Alexandria that was established by Ptolemy I Soter, the successor of Alexander the Great, about 300 B. C.   It held anywhere from 40,000 to 400,000 scrolls, and lasted until the Romans conquered Egypt around 30 B. C., when it suffered the first of a series of fires that destroyed most of its contents. 

One can argue that the entire course of Western history would be different if all the works of the Greek philosopher Aristotle (384 B. C. - 322 B. C.) had been lost.  The way we came to possess what works of his we have is hair-raising.  When Aristotle died, Theophrastus, his successor at the Lyceum, the school where Aristotle had taught, inherited a large set of what we would today call lecture notes.  After Theophrastus died, he left them to Neleus of Scepsis, who took them from Athens, where the Lyceum was, back home to Scepsis and stuck them in his cellar.  Then he died.  Evidently Greek families held on to real estate back then, and it's a good thing too, because it wasn't until about 100 B. C., more than two centuries after Aristotle's passing, that Neleus's descendants held the equivalent of a garage sale and a fellow named Apellicon of Teos found the manuscripts and bought them.  He took them back to Athens, where his library was confiscated by the conquering Romans in 86 B. C.  Finally, some Roman philosophers realized what they had in Aristotle's works and started making copies of them around 60 B. C.

I won't even go into how most of Aristotle's works were lost again to everyone except Arabic scholars up to about 1200 A. D., but we've had enough ancient history for one blog.  The point is that historic preservation was left largely to chance until people began to realize the value of the past to the present in an organized way. 

While the movie industry deserves credit for laying out lots of money to preserve chunks of our visual cultural history, one must admit that its interests are mostly financial.  Once the people who saw a movie in their twenties die out, the only folks interested in it are the occasional oddball historian or fans of specialty outlets such as the Turner Classic Movies channel.

The real problem with digital archives is not so much that the technology advances so fast, although that could be alleviated.  It's the question that often has no answer until it's too late:  what is worth preserving?

If you're a well-heeled library like the one at Harvard University, the answer is simple:  everything you get your hands on.  But most places are not that well off, so it's a judgment call as to what to toss and what to keep using the always-limited resources at hand.

Despite the best intentions of well-funded film archivists, my suspicion is that a few centuries hence, we will find that many of the works of most importance to the future, whatever they are, were preserved not on purpose, but by hair-raising combinations of fortunate accidents like the ones that brought us the works of Aristotle.  And if I'm wrong, well, chances are this blog won't be one of those things that are preserved.  So nobody will know.

Sources:  The article "The Lost Picture Show:  Hollywood Archivists Can't Outpace Obsolescence" by Marty Perlmutter appeared in the May 2017 issue of IEEE Spectrum and online at http://spectrum.ieee.org/computing/it/the-lost-picture-show-hollywood-archivists-cant-outpace-obsolescence?.  The story of how Aristotle's works came down to us is reported independently by at least two ancient sources, and so is probably pretty close to the truth, according to the Wikipedia article on Aristotle.  I also referred to Wikipedia articles on the Library of Alexandria and the Ptolemaic dynasty. 

Monday, February 23, 2015

Temperance, Net Neutrality, and the FCC


Later this week, on Feb. 26, the U. S. Federal Communications Commission (FCC) is going to vote on a proposal to enforce net neutrality.  Net neutrality, according to some, is the idea that all bits are created equal, and that communications firms using or operating parts of the Internet should not discriminate against or for certain types of services, providers, or customers.  If I could do one thing to help the FCC decide wisely on this proposal, I'd bring back Aristotle and ask him to explain to the commissioners what he means by egkrateia, which is usually translated as "temperance" or "moderation."  The Internet has to be one of the most influential and beneficial engineering developments of all time, and it would be a shame for the FCC to cripple it.  But if they don't exercise temperance, that's just what they might do.

Writing in the electrical engineering professional journal IEEE Spectrum, Jeff Hecht points out that wireless technologies, where a lot of the most exciting new Internet developments are happening, need careful technical management to work.  It has to do with the fact that all data on the Internet travels in little chunks called packets.  When the Internet was founded, most data was not that time-sensitive.  If data for email or a webpage shows up in pieces spaced even several seconds apart, it's no big deal.  But as highly time-sensitive services such as telecommunications (phones) and video began to switch to the Internet, and as new time-sensitive services such as multiplayer games developed, timing became a big deal.  Hecht points out that a delay of only twenty milliseconds can disrupt a phone conversation, and if a sound that short goes missing it can turn "can't" into "can" and lead to all kinds of problems.  The same goes for video, which gets jerky with such delays, or game apps, which slow down and aren't that fun anymore.

Delays like this and speed-slowing bottlenecks are especially hard to avoid in two places: (1) where Internet service providers (ISPs) connect to the Internet's "backbone," and (2) where wireless is used, such as when you access the Internet from your phone or mobile device.  In the latest generation of mobile phone service, called 4G LTE, providers have developed a way to label packets with what amounts to a digital ship-by date.  Packets that spoil fast—phone conversations, video, game-player data, and time-sensitive system control data—get shipped the fastest, while packets that represent email or webpages have to wait longer in line.

This technical packet-labeling is called "priority coding" and it's a critical ingredient in the new high-fidelity phone service called VoLTE (LTE, by the way, stands for "long-term evolution"). 
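
The idea is easier to see in a toy example than in the LTE specifications themselves.  The short Python sketch below illustrates priority queueing in general; the traffic classes and priority numbers are my own illustrative choices, not the actual LTE quality-of-service categories:

    import heapq

    # Toy priority scheduler: lower number = more delay-sensitive traffic.
    # These classes and numbers are illustrative, not the real LTE QoS classes.
    PRIORITY = {"voice": 0, "video": 1, "game": 2, "web": 3, "email": 4}

    queue = []      # heap ordered by (priority, arrival order)
    arrivals = 0

    def enqueue(kind):
        global arrivals
        heapq.heappush(queue, (PRIORITY[kind], arrivals, kind))
        arrivals += 1

    for kind in ["email", "voice", "web", "video", "voice"]:
        enqueue(kind)

    while queue:
        _, order, kind = heapq.heappop(queue)
        print(f"sending {kind} packet (arrived #{order})")

Voice and video packets go out ahead of the email packet even though the email arrived first—exactly the kind of reordering that a strict "all bits are created equal" rule would forbid.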

Here's where the moderation comes in.  Reportedly, the FCC is planning to reclassify the Internet as a "common carrier."  Currently the FCC views it through a different legal lens, as an "information provider," which allows the government fewer regulatory options.   But the common-carrier class includes the highly regulated telecommunications industry, and so the FCC's proposed rule changes could allow it to regulate the Internet much more closely than it does now.  Depending on what the FCC means by net neutrality, the commission (or a sneaky lawyer wielding the Commission's new rules) could use its new legal chops to break the new 4G LTE by making priority coding illegal.  After all, if every bit is created equal, shoving some to the front of the line in front of others could be viewed as discrimination.

Any time a government agency decides to extend its regulatory authority, you have to hope that it won't go overboard and stifle the industry it's allegedly trying to help.  This is where Aristotle's virtue of temperance can help.  As has happened in many other fields, the Internet's technology has in many ways outstripped the legal frameworks that were set up to regulate communications systems in the past.  I think it's good for the FCC to acknowledge that the communications world has changed, and that pretending the Internet is just an information provider is outdated.  But an attempt at heavy-handed populist-style regulation in the name of absolute net neutrality could do more harm than good.

Moderation on all sides is called for.  Free-market enthusiasts may worry that the FCC is going to tax or regulate the Internet to death with its new proposed powers.  This is unlikely.  But at the same time, a more subtle danger to watch out for is the co-opting of government authority by big corporate players in a way that favors their interests over those of small firms who want to innovate, but whose innovations pose a threat to the big guys.  That kind of capture can't happen in a lightly regulated industry, which, for the most part, the Internet has so far been.  I think the FCC is smart enough not to issue rules that would flat-out break the 4G LTE technology.  But any extension of regulatory authority can lead to manipulation of that authority by vested interests.  And I think that is what Aristotle would caution us about the most.  But first, we'd have to explain to him what the Internet is.

Sources:  Jeff Hecht's article "Net Neutrality's Technical Troubles" was posted on the IEEE Spectrum website on Feb. 12, 2015 at http://spectrum.ieee.org/telecom/internet/net-neutralitys-technical-troubles/.  On Feb. 4, FCC Chairman Tom Wheeler declared his intentions with regard to net neutrality in the online edition of Wired at
http://www.wired.com/2015/02/fcc-chairman-wheeler-net-neutrality/.  I also referred to an article on The Daily Dot about the FCC's Title II authority (which allows it to regulate common carriers such as telecomm companies) at http://www.dailydot.com/politics/what-is-title-ii-net-neutrality-fcc/.  I most recently blogged on net neutrality on Nov. 24, 2014 in "How Neutral Is the Net?"

Monday, September 29, 2014

The Limits of Diversity


This month's Scientific American devotes twenty pages to articles on diversity in science—the shameful lack thereof, and what can be done about it. One piece is a kind of confessional by a Lockheed Martin engineer who quickly moved into management and found that as she chose staff from widely varying backgrounds, the quality of her group's work increased.  Other articles cite social-science studies that show diverse organizations are not only more socially just; they do better science and engineering too.  The reader of these paeans to diversity could not be blamed for taking away the impression that diversity is like goodness:  you simply can't have too much of it.  Is that true?  Or are there limits to diversity?

What does diversity mean?  It has both an objective aspect, and a subjective or political aspect.  In the strict sense of diversity meaning merely "difference," one can objectively measure diversity in genetic makeup, diversity in hair color, or diversity in virtually any other measurable characteristic that a group of things or people has.  This scientific aspect allows statisticians to crank out reams of charts showing the degree of gender diversity in the number of Ph. D.'s granted, ethnic diversity in hiring practices, and so on.  So in this scientific sense, diversity is a quantifiable, measurable thing.

But when we ask what kinds of diversity are significant in the sense addressed by the Scientific American authors, the list narrows to political and cultural hot-button matters:  gender, race, socioeconomic status, ethnicity, sexual orientation, and so on.  Scientists and engineers must deal with these matters not as mythical objective professionals, but as human beings.  And in so doing, the issue becomes an ethical, political, and even philosophical one.

The idea of virtue is not a scientific concept, but, as Aristotle says, it is one of the best ways to describe a certain class of characteristics involving choice.  I think Aristotle would class diversity as a type of virtue, because a diverse organization is better with regard to social justice than a non-diverse one, and (as the social scientists have shown) diverse scientific organizations do science and engineering better than non-diverse ones.  According to Aristotle, a virtue both makes its possessor intrinsically good and makes it better at what it does, and diversity qualifies on both counts.

The next question is this:  can you have too much diversity?  Most virtues represent a mean or rough average between the two extremes of excess and deficiency.  Assembling a competent technical organization with a mind to diversity represents a compromise between extremes.  In the decades before diversity in its modern sense was recognized as an organizational goal, those in charge (usually white males) picked the best people they could while following the cultural norms of their time.  These norms generally (but not always) excluded women and minorities, and tended to perpetuate the demographic makeup of the organization, while making it extremely hard or impossible for non-whites and non-males to enter.  This was bad. 

However, you can imagine an opposite extreme.  The perfectly diverse organization would have diversity statistics identical to those of the largest applicable sample group:  the state, the nation, or even the world.  William F. Buckley is supposed to have said he'd rather be governed by the first hundred names in the Boston phone book than by the entire faculty of Harvard University, and in that proposal, at least, he was favoring something like the extreme of maximal diversity I am describing.  But if diversity is the only criterion of selection, the specialized competencies that a research or engineering organization needs will be absent except by chance, and it will fail to achieve its objective—unless its only objective is to show that it is acceptably diverse.

The U. S. National Science Foundation has in recent years spent a substantial portion of its resources encouraging diversity in various ways.  To the extent that these efforts have righted previous injustices committed either consciously or through unconscious bias against certain groups, they are to be applauded.  But there is nothing scientific about the choices of which measures of diversity to work on. 

In a secular democracy, these choices are made politically.  And making politics your ultimate authority can land you in unpleasant places, as scientists in Russia and Germany have found.  A crackpot biologist named Lysenko got his hands on the political levers of control in the old USSR in the 1920s. Lysenko thought acquired characteristics could be inherited, and for the next forty years, any Soviet biologist who disagreed with Lysenko about evolution was liable to disappear into the Siberian work camps.  And the Nazi party in Germany took delight in calling Einstein's theory of relativity "Jewish physics."  Such blatant overruling of science by politics can always happen if those in charge value political goals more than the integrity of science.

I am personally about as un-diverse as you can get: an old white male conservative Christian Texan.  An organization composed of people like me would score close to zero on any diversity index you care to name.  I view the diversity project as an attempt, however flawed, to show the type of love that wills the good of the beloved to people who would otherwise be kept from flourishing to the best of their abilities.  There is nothing wrong with this type of love.  It is the type of love Jesus Christ exhorted his followers to show to each other.  But implementing diversity in a way that helps those who need that type of help without inflicting harm or the loss of opportunity on others is an inherently complex task.  The only way to do it perfectly would be to have perfect insight into the problem of social justice, and only God has that.  Any human attempt at diversity represents a compromise between using resources to increase diversity, versus using resources to address the task at hand.  And those promoting diversity should remember that it is possible to have too much of a good thing.

Sources: The October 2014 issue of Scientific American includes Stephanie C. Hill's article "In pursuit of the best ideas:  How I learned the value of diversity," on pp. 48-49.  I also referred to Wikipedia articles on Lysenkoism and "Deutsche Physik" ("German physics").

Monday, July 21, 2014

Books and E-books


Last Christmas, someone gave me a Kindle, and I have made intermittent attempts to get engaged in reading e-books on it.  These attempts have met with only mixed success.  A book that was highly recommended by my pastor, who makes no secret that he's not much of a reader, left me unimpressed, and I abandoned it.  More recently, out of a sense of duty to a cultural icon more than genuine interest, I downloaded (for free) a copy of Swann's Way, the first volume of Marcel Proust's encyclopedic multivolume Remembrance of Things Past.  Proust wins my nomination for Greatest Introspector of the Nineteenth Century Award, but I'm afraid I've abandoned him too, somewhere in his childhood garden among his maiden aunts and the eccentric visitor Mr. Swann. 

The only books I've managed to finish on the thing were a couple of mass-produced page-turners written for young adults.  They managed to keep me turning the electronic pages, all right, but after I finished the last one I felt a little like you might feel after binge-watching five recorded episodes in a row of some trashy TV series—I had to ask myself, "Was that really the best use of my time?" 

Despite numerous prophecies that the days of the printed book are numbered, e-books have not yet done to the paper-book publishing business what hand-held electronic calculators did to the slide-rule business.  Electronic calculators were so obviously superior to slide rules in nearly every way that only die-hard traditionalists clung to their slide rules, which took a one-way trip to the museum and never came back.  That is not happening with paper books.

Once the market stabilized on a few common platforms such as Kindle, e-book sales took off and increased steadily for several years.  Some of the biggest sales boosts came from mass-market fiction series such as the hugely popular Hunger Games franchise.  But in the last year or so, e-book sales have flattened out, while paper-book sales are seeing increases, both in the U. S. and worldwide, that in many cases show faster growth than e-books.  A report on the Digital Book World website says that U. S. sales of e-books through August 2013 were $647 million, about a 5% increase from the previous year, while hardcover printed books accounted for sales of $778 million, up nearly 12% from a year earlier.  This trend is continuing in 2014, and is not the picture of a situation where one medium is simply being dropped for a newer one. 

Instead, it's beginning to look like the book medium one chooses will depend on the message it carries.  This is a familiar phenomenon in other fields—music, for example.  Take two music lovers.  One is a busy college student whose part-time job is standing in front of a tax office waving a big arrow sign.  He wants something to listen to while doing this mindless task.  The other is a professional music critic with exquisite taste and highly discriminating ears, wishing to evaluate the latest recordings of a particular Mozart string quartet.  The college student will be happy with an iPod (or smartphone) with earbuds, while the music critic will want to listen in a quiet room through a high-dollar stereo system and speakers.  Different kinds of messages are just naturally suited to different kinds of media, and the same may be true of book publishing going forward.

So will e-books destroy the paper-book publishing business?  No, but they will change the makeup of what gets published on paper.  Books with mainly transient value—what an acquaintance of mine once called "nonce books," meaning books of interest for the nonce but not much longer—will probably show up as e-books.  Fiction mega-hits that masses of otherwise non-literary folk gobble up are perfectly suited to the e-book format, which makes it easy for the reader to plow through in a straight line as fast as he or she can read.  But for more scholarly publications that someone might want to keep around for reference or contemplation, I think the paper format is more suitable, and current sales statistics say that paper books are not on the verge of immediate extinction.

If you think about it, there is a physical connection, however tenuous, between a person holding a mechanically typeset book in his hands, and the original author, no matter how long ago the author lived.  If you pick up a copy of Aristotle printed before about 1960, the chain goes like this:  from handwritten manuscript to medieval scribes, to nineteenth-century editor, to typist copying the editor's manuscript, to the Linotype operator setting the type, to the stereotype plates that impressed the ink into the very paper you hold in your hands.  

Maybe some computer geek can figure out the analogous path for an e-book, but I'm not sure I want to hear about it.

I think one of the most profound differences between the natures of the two media is that paper books are inclined to permanence, while e-books are suited to transience.  In the nature of things, I expect that today's e-books will not be readable by future generations of machines, or if they are, it will become a bigger and bigger hassle to do so as time goes on, just as it is probably hard for you right now to recover files on a computer you used more than a decade ago.  But unless the ink has faded to invisibility or the paper has crumbled to dust, we can still read writings that were penned thousands of years ago. 

There is a story, possibly apocryphal, that the only copy of the writings of Aristotle, upon whose ideas much of Western civilization is based, lay forgotten in some heir's basement for a couple of hundred years before being rediscovered.  Good thing they were written on paper, because if Aristotle had used a Kindle, in two centuries the batteries would have died and the operating system would have been, well, ancient history.

Sources:  I referred for statistics on U. S. publishing of print and e-books to the websites http://www.digitalbookworld.com/2013/adult-ebooks-up-slightly-in-2013-through-august-hardocovers-up-double-digits/ and http://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/62031-print-digital-settle-down.html, and for worldwide sales to http://www.publishingtechnology.com/2013/07/year-on-year-ebook-sales-fall-for-the-first-time-says-nielsen-research/.  The popular fiction I read on Kindle was the first two books in the "Airel" series by Aaron Patterson and Chris White.  The story of the rediscovery of Aristotle's works is reported by at least two ancient historians, according to the Wikipedia article on Aristotle.   

Monday, May 26, 2014

The Dendrites Made Me Do It: Free Will and Morality


Back in the 1970s, TV comedian Flip Wilson liked to play a character named Geraldine, whose Church of What's Happening Now taught her to say, whenever she did something bad, "the devil made me do it!"  Research over the last decade or two has given new strength to the argument that our will is determined not by the devil, but by certain neurons, whose axons and dendrites transmit characteristic nerve impulses that appear to precede our conscious decisions to do a thing by a substantial fraction of a second.

Both research scientists and the general public tend to conclude from brain science that there is no such thing as free will.  If free will is an illusion, then so is responsibility, and it doesn't matter what we do.  And thinking we don't have free will actually affects the way we act, according to a body of research pointed out by Azim Shariff, a psychologist, and Kathleen Vohs, a business-school professor. 

Writing in the June issue of Scientific American, Shariff and Vohs describe a series of experiments in which researchers first primed subjects to think critically of free will.  This priming took various forms.  Reading an essay explicitly criticizing the concept of free will was one way.  But popular-science articles describing brain research, without explicitly mentioning free will, appeared to be about as effective.  However the researchers brought up the topic, it tended to make their subjects less likely to act nice, and less likely to punish others for not-nice behavior.  In one study, volunteers who read an anti-free will article put almost twice as much hot sauce on tortilla chips they were asked to prepare for another volunteer (secretly in cahoots with the researchers), who had previously made it well known that he didn't like hot sauce.  Other subjects shown anti-free will material then proceeded to cheat on an academic test more than subjects who read about an unrelated matter.  The idea that we are simply complex material objects reacting in an entirely predetermined way to our environment appears to lead people to behave irresponsibly, and to judge others as having less responsibility for their actions as well. 

The essay by Shariff and Vohs is an outstanding example of what I would call scientific fence-sitting.  Nowhere do they say what their personal views are on whether free will exists.  Instead, they cite studies of what happens to people when they are exposed to the idea of determinism, either directly or indirectly, and they find that mostly, the results are not good, except that people tend to ease up on the idea of punishment as revenge.  Shariff and Vohs seem to think that modern societies are gradually abandoning the idea of free will, and that this might lead to trouble, although if things get too bad after we leave the concept behind, we "might have to reinvent it."  But they write as though science is the only way of knowing anything for sure, and because science can't resolve the question of whether free will really exists, there's no point in talking about it directly.  It doesn't seem to occur to them that science is not the only way to know things.

I have a couple of modest proposals in the form of hypothetical questions.  Dr. Shariff is an assistant professor, meaning that in the normal course of events, he will go up for tenure some time in the next few years and be evaluated by his peers in the department.  Dr. Vohs is Land O'Lakes Professor of Excellence in Marketing, which presumably means she owes her living to the success of Land O' Lakes Inc. in selling lots of margarine.  Dr. Shariff, what would you think (not feel, not act, but think) if your personnel committee came up to you after you turned in your application for tenure and they said, "Well, Azim, we were going to grant you tenure, but hey, we were talking and discovered that we're all determinists and it doesn't matter what we do, so to save money, we're going to let you go instead."  And Dr. Vohs, what if the people in charge of the Land O' Lakes Endowment or whatever it is that pays your salary sent you a letter saying, "Dear Dr. Vohs, reading your article in Scientific American convinced us of the truth of determinism, so being freed of the burden of responsibility for our actions, we're taking the endowment that pays your salary and are all going on an extended vacation to the French Riviera." 

I am fairly confident that both injured parties in these situations would think that their supervisors made wrong decisions, and would appeal to rules that apply to their jobs.  These rules spell out responsibilities for both professors and administrators.  You can take the position that, as a practical matter, societies have to pretend that things like free will and the responsibility of moral agents exist, because otherwise we'd degenerate into a state of anarchy and chaos, like Somalia is today. 

But there is a problem here.  I thought science was the search for truth.  Not pretense, not convenient fictions that we live by in order to survive, but truth.  The sense I get from the Scientific American essay (I'm reading between the lines here) is that the authors don't personally believe in free will, but recognize that if we didn't act like it existed, we'd all be in a lot of trouble, both individually and collectively. 

There are two related lessons here for the engineer, and anyone else for that matter, who is looking to behave ethically, both on the job and elsewhere. 

First of all, be careful what you read.  I'm not saying that you shouldn't read articles on brain science, but be sure your reading is at least a balanced diet of a variety of viewpoints, because your reading may influence your actions even if you don't think it does. 

Next, guess what?  Free will exists.  I'm not just pretending that it exists, but it really does.  I'm not enough of a philosopher to trot out all the arguments in favor of it, but I can point to people like Aristotle and Aquinas who were, as well as plenty of modern philosophers.  And while it is technically what theologians call a "mystery," meaning we can understand some of it but never all of it, free will is compatible with the idea that God is in ultimate control of the universe.  Why, there are even some philosophers, called compatibilists, who argue that free will is compatible with atheistic determinism!  So you really can decide to do the right thing in your work, in your personal life, and in deciding what you think of a couple of professors who won't even say whether they believe in free will, even though they spend years researching it.

Sources:  The essay "The World Without Free Will" appeared on pp. 76-79 of the June 2014 issue of Scientific American.  I referred to the Stanford Encyclopedia of Philosophy at http://plato.stanford.edu on free will, and to Wikipedia for articles on Flip Wilson and dendrites.  Also, I found out that Land O' Lakes (www.landolakesinc.com) makes other things besides margarine—Purina animal feeds, for example. 

Friday, July 26, 2013

The Medieval Wisdom of Google’s “Don’t Be Evil”


Back in 2000, when the founders of Google were discussing ways to express their core philosophy, Paul Buchheit (employee No. 23) suggested “Don’t be evil.”  At the time, he was simply trying to contrast the way Google did business with the less salutary practices of some of their competitors.  Nobody dared to disagree with the principle of not being evil, so the phrase was adopted and down to today remains one of Google’s official core values.  Along the way it has acquired another phrase, so the complete statement is “Do the right thing; don’t be evil.”  In promulgating this notion, Google has (perhaps unwittingly) taken a stand on the side of Aristotle, St. Thomas Aquinas, and countless other ancient sages against much of what today passes for acceptable moral principles.  It would surprise me, however, to discover that more than a few Google employees are aware of this.

Many of them, in fact, would probably subscribe to the notion that no one should impose one’s moral principles on another person.  Even Google doesn’t explicitly recommend its “do good, avoid evil” principle for everybody; the most it is saying is that Google employees will try to live up to it.  If you like doing evil, fine, just don’t go to work for Google.  But as physicist Anthony Rizzi points out in his book The Science Before Science, the advice not to impose one’s moral views on another is itself a moral view.

If I see an adult male in a shopping mall beating up a two-year-old, and I rush to intervene, and the man says, “Leave us alone, you’ve got no business imposing your morality on me,” I could respond with, “Sir, that itself is a moral principle which you are trying to impose on me.”  (What I would really do is call the cops, but that’s another matter.)  And in any event, as Rizzi points out, no one consistently acts as though all moral principles are simply matters of personal preference, even though they may give lip service to the idea in academic papers, for example.  If the chair of a philosophy department read a paper by one of his philosophers claiming that all morality is relative, and called the author up one day and said, “Because all morality is relative and I don’t like your looks, I’m reducing your pay by half,” I seriously doubt that the philosopher would calmly accept this as a logical consequence of his own philosophical position.  So even if some people say morality is relative, on matters that affect them personally they usually don’t act like they really believe it.

So where does that leave us?  It begins to look as though there really may be some objective moral principles “out there” so to speak, independent of whatever we say or think about them.  And behind them all, at the head of the logical chain of reasoning where first things must always be, stands the principle embraced by Google:  “Do the right thing; don’t be evil.”  You can’t derive that principle from anything else.  It is one of those self-evident statements that can’t come from another more basic notion.  As it stands, of course, it needs development before it can help you live your life.  But all other moral principles can be logically derived from what Rizzi calls “the first principle of ethics”:  do good and avoid evil.

Ah, but what is good and what is evil?  In a thousand-word column, I obviously can’t do justice to that question.  The short answer is, good is that which fulfills one’s purposes, and evil is the absence of such good.  One reason there is so much evil in the world is that, while every person does what seems good at a particular time and place, what seems good at the time may not really help one to fulfill one’s purposes.  It may seem good to an alcoholic to take one more drink, even if it’s the one that makes him so drunk he gets in his car and causes the death of another driver.  It’s not always easy to figure out what the true good is, which is one reason why ethics can get complicated—so complicated that the analytically-minded tend to throw up their hands and say it’s all hopeless. 

But it’s not hopeless.  Most people figure out what good to do, and what evil to avoid, with a good bit of success every day.  The lapses happen when our emotions or our hasty judgments lead us astray.  It requires just as much thought and attention, if not more, to be a good person as it does to be a good engineer.  But the technical and the ethical sides of engineering start from different foundations.

When Mr. Buchheit hit on “Don’t be evil” to guide what would become one of the greatest corporations of the twenty-first century, he was saying more than he knew.  Neither Google (through whose facilities this blog appears, by the way) nor any other firm can completely live up to their core principles, including that one.  But having it out there to shoot for is a start.  And in having that core principle to live up to, all the Googleites are following in the footsteps of medieval thinkers such as St. Thomas Aquinas, who clearly saw that the first logical step in being good is to admit there are such things as universal moral principles, and that the one to start with is “do good and avoid evil.” 

Sources:  Anthony Rizzi is a practicing research physicist at the Institute for Advanced Physics at Baton Rouge, Louisiana (www.iapweb.org) and author of The Science Before Science:  A Guide to Thinking in the 21st Century (IAP Press, 2004).  Of all books that I’ve read about scholastic philosophy (which is the term for the type of philosophy done in the High Middle Ages by St. Thomas Aquinas), Rizzi’s does the best job of defining terms and explaining concepts in ways that the average non-philosopher can understand.  I also referred to the Wikipedia articles on Paul Buchheit and “Don’t be evil.” 

Monday, November 12, 2012

A Purely Nominal Problem

 
My father didn’t like to spend money when he didn’t have to, so when my mother expressed a wish for an automatic dishwasher, one day he showed up with an old portable unit that some friends of ours got rid of when they bought a newer model.  It was a big floor-model box on rollers, and you ran one hose to the kitchen sink and another to the sink drain and plugged it into a wall outlet.  It worked fine for a few weeks.  Then one day it refused to drain.  We opened the door and saw all this dirty dishwater, so we bailed it out and I volunteered to fix it.  Because I was cheaper than calling a repairman, my father agreed to let me tear into the thing.  After a lot of gross and messy work, I found the problem:  a toothpick had lodged between the drain pump impeller and the housing.  That little toothpick had jammed the pump, and as a result the whole washer couldn’t drain.

I learned several things from that experience (not the least of which was to avoid appliance repair as a future career).  But the most important one was that fairly small, common, almost unnoticeable things can have big negative effects.  And the things don’t need to be physical ones at all.  In fact, immaterial things can make a lot more difference than any physical object, especially if they are so widespread that you don’t notice them, like fish who don’t realize it’s water that they’re swimming in.  The little thing I’d like to draw your attention to is nominalism.

The word “nominal” is often used by engineers to mean “typical” or “according to the specifications.”  But its original meaning is “relating to names.”  Nominalism is a philosophical position first proposed by William of Ockham (~1288 A. D. - ~1348).  Until he came along, most philosophers thought the word “apple,” for example, referred to a real and essential, though immaterial, “appleness” that is shared by all things properly called apples.  However, William of Ockham claimed that there was no such thing as appleness—the essence of what it is to be an apple.  Instead, “apple” is just a name for certain kinds of objects that we, in our human wisdom, have decided to call apples.  In other words, he denied that there are any universals—that is, essences of things.  There are just a lot of round red fruits out there that, for convenience, we have decided to group under the name of “apple,” but in reality, all apples are different individuals and there is nothing more to the word than the sum of all things called apples.

After William of Ockham proposed nominalism, the other philosophers had to think of a name to call themselves, and the term they chose was “realists.”  A realist, in this technical sense, thinks that there is indeed a universal concept, objective and independent of our minds, which in English is denoted by the word “apple.”  These concepts, which the moderate realist Aristotle called essences, are as objectively real as a bank account.  A bank account is not a material thing, though there may be material records of it.  A bank account is a non-material concept, and so are the concepts of “apple,” “tree,” “horse,” and “man.”

Unless you are aware of this historical controversy, as a typical 21st-century person you probably think and act as a nominalist most of the time.  For example, if you agree with the words of the 1992 U. S. Supreme Court decision in Planned Parenthood vs. Casey that “At the heart of liberty is the right to define one's own concept of existence, of meaning, of the universe, and of the mystery of human life,” you are a nominalist, because defining is what a nominalist does.  First comes the name, then come the items to be grouped under that name.  But the namer is always in charge, and things can be arbitrarily regrouped by the namer to suit one’s convenience.  As the philosopher Richard Weaver has pointed out, an important consequence of nominalism is that “if words no longer correspond to objective realities, it seems no great wrong to take liberties with words.”  So, for example, the genocide of Jews by Nazi Germany in World War II is euphemized to “the final solution.”

Engineers are perhaps less apt to fall into the grosser errors of nominalism, because we have frequent encounters with objective reality.  If a computer chip you design doesn’t work, calling it by a different name isn’t going to make it start working.  But even the way engineers use logic has been affected by nominalism.  The digital logic that all digital computers use is based on the symbolic logic devised by George Boole, a nineteenth-century mathematician whose hope was to reduce all logic to symbols.  The trouble is, symbolic logic assumes that nominalism is true, and throws out a great deal of material that traditional Aristotelian logic relied on, including the notion that understanding is a uniquely human power essential to right thinking.  But if everybody uses nominalist logic that can be expressed by Boole’s “Boolean algebra,” we have reduced our thought processes to those that can be done by computers.  This is an important source of the idea put forth by artificial intelligence proponents that the brain is really nothing more than an advanced wet computer.  If we can’t make computers act like humans, we’ll reduce humans to the point that they act like computers.
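
To make the contrast concrete, here is a minimal Python sketch of what purely symbolic logic does capture: a proposition becomes a symbol that is either true or false, and the whole “meaning” of an expression is exhausted by its truth table.  The particular expression below is an arbitrary example of mine:

    from itertools import product

    # A symbolic-logic view of a proposition: nothing but a mapping from
    # the truth values of its inputs to a truth value for the output.
    def expr(a, b, c):
        return (a and not b) or c    # an arbitrary example expression

    print(f"{'a':6} {'b':6} {'c':6} | result")
    for a, b, c in product([False, True], repeat=3):
        print(f"{a!s:6} {b!s:6} {c!s:6} | {expr(a, b, c)}")

Everything a machine does with such an expression is contained in that table; the act of understanding what the terms mean—the part Aristotelian logic insists on—is simply not represented.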

Let’s hope that doesn’t happen.  Regular readers of this blog may have noted that I try to approach philosophical matters from an Aristotelian perspective that moves from the real, objective world to the world of thoughts.  Nominalism tempts us to do the opposite:  to define things the way we want them to be, and then look for pieces of reality that fit our preconceived notions.  I think engineers of all people should be aware of the dangers of nominalism.  Realism is more than just being practical; it means realizing that there is more to the world than we can possibly understand or control, and the proper attitude toward nature is one of humility.  Otherwise, like that toothpick in the dishwasher, nominalism can throw a whole culture out of whack.

Sources:  The quotation by Richard Weaver is from his 1948 book Ideas Have Consequences (Univ. of Chicago Press), p. 7.  I was inspired to write about nominalism and realism by reading one of the few logic textbooks in print which employ realist Aristotelian logic rather than symbolic logic as its basis:  Peter Kreeft’s Socratic Logic (South Bend, Indiana:  St. Augustine’s Press, 2004).