Monday, April 02, 2018

The Legend of King Minsky: An AI Parable


Once upon a time, long ago but not that far away, there lived a king named Minsky.  His kingdom was prosperous and his citizens were contented, for the most part, but that was more than you could say for the king.  He had servants galore—chancellors of this and that, ladies and gentlemen in waiting, butlers, maids, footmen, all the way down to the scullery boy who carried out the trash.  But his servants never quite measured up to the king's expectations somehow.  The whole point of having servants, the king liked to say, was so you didn't have to worry about things.  He had enough to worry about already, because his wife the queen had died some time ago, leaving him with a young daughter to raise.  But the more servants he hired, the more problems he had with them.  His grand banquet was spoiled when the kitchen ran out of roast pig.  And the annual ball was a flop because the steward forgot to hire an orchestra.  So one day, when an itinerant magician came to the castle and offered to solve all the king's servant problems, the king was ready to listen.

"For one low price," the magician said, "I can give you the power to change your servants into perfectly obedient machines.  They'll look just like they do now, but you won't have to feed them or let them sleep or rest.  And they will do your every bidding exactly the way you want."

"Hmm," said the king.  "Sounds too good to be true."

"I have references!" said the magician.  And he pulled out a sheaf of letters written by kings of nearby kingdoms, some of whom King Minsky even knew.  They all swore by the magician's abilities and said they were delighted with what he was offering.

"Well, all right, how would it work?"

"We have several options."  After looking at the magician's brochure, the king chose the magic-touch option. 

"Excellent choice!  You won't be disappointed!"  And the king called for his treasurer, paid the high price asked by the magician, and duly received the power named in the contract.

Once the magician got his money, he seemed in a hurry to leave.  Before he went out the door, he called over his shoulder, "Don't forget to read the instructions!  Bye now!"  But the king never was much for reading instructions, and he couldn't wait to try out his new power.

The first place he went was the scullery, where he found the surly, dirty-faced scullery boy.  The king had never spoken to the boy and knew him only by sight.  But this time he walked right up to him and said, "Let me shake your hand!"  The boy held out a soiled hand for a handshake.

As soon as the king's hand touched the boy, something about the boy's face changed.  The surliness left it, but so did anything human.  "Boy," said the king, "I want you to wash your face and hands and do everything the cook tells you, without dawdling around."

In a toneless voice the boy replied, "Yes, Sire."  And the king was pleased to see that the change in the boy's behavior from that moment on was nothing short of miraculous.  Soon the whole kitchen was spotless because the formerly lazy scullery boy not only carried out the trash, but spent all his time cleaning up after everyone.

When he saw this change, the king couldn't wait to shake hands with the cooks and the butlers and the maids, one after the other.  The same thing happened to them.  Each one became the ideal servant.  The butler never dropped a plate again.  The cook never ran out of food, and the steward always remembered everything he needed to.  The treasurer quit making math errors in the accounts.  The king was very pleased with the results overall, although he wondered if he would miss the jokes that the treasurer was in the habit of telling.

Well, you can see where this is going.  Earlier that day, the governess had taken the king's daughter outside the castle for a picnic lunch.  The daughter's name was Persephone, and she was five years old.  Whenever Persephone saw her father, she'd raise her arms up and ask to be picked up, and he'd lift her up and put her on his shoulder for a while.  So that afternoon, the king was standing at his desk talking with the treasurer when the governess brought in Persephone.  King Minsky didn't have time to turn around before his daughter ran up behind him, saying, "Pick me up!" and grabbed him by the hand.

. . . I leave it to the reader's imagination to finish the story.  Needless to say, it doesn't end well, for either the king or his daughter.

At the present time, artificial intelligence (AI) is enjoying an unprecedented boom.  Corporations and governments worldwide are pouring billions of dollars into AI R&D, and products are hitting the market that promise to revolutionize life as we know it, from Siri-like assistants to self-driving cars and more.  The parable is intended to address not so much any particular AI application as the mostly unspoken philosophy on which much AI work is based.

This philosophy treats human beings as simply "meat computers" that are no different in principle from a silicon-based computer.  The problem with this philosophy is that it is false. 

Beliefs issue in actions.  If I believe that you differ only in degree, and not in kind, from my cellphone, I am bound to treat you differently than if I believe (as I do) that there is a radical, provable difference between human beings and every other physically manifested being—animal, vegetable, or mineral.  This difference has many aspects, but in the space remaining I will concentrate on only one:  the ability of our intellects to form universal concepts.

No computer will ever understand freedom, for example.  AI systems may some day imitate the conversation of an erudite scholar discussing freedom, but that does not mean, and cannot mean, that the computer understands the universal concept of freedom.  The proof of this point is too lengthy to give here, but is contained in Michael Augros' book The Immortal In You.  And I assure you, it is a proof that approaches the mathematical in its rigor.

Remembering this essential difference will be vital to all those who deal with AI innovations, products, and proposals in the future.  And forgetting this difference may land us in the same unenviable position King Minsky was in after his daughter grabbed his hand.

Sources:  If by some mischance you have never heard of the legend of King Midas, whose touch turned everything to gold, the Wikipedia entry about King Midas will remedy that defect in your education.  Michael Augros' The Immortal In You was published in 2017 by Ignatius Press.  For a brief summary of the argument for the immateriality of the intellect (which is why computers can't understand freedom), see the online resource by the late philosopher Mortimer Adler at
http://selfeducatedamerican.com/2011/09/06/is-intellect-immaterial/.  And in case you are not familiar with famous names in AI, King Minsky is named for the early AI proponent and general gadfly Marvin Minsky (1927-2016).

Monday, August 07, 2017

Giulio Tononi and His Consciousness Meter


If you're reading this, you're conscious of reading it.  Consciousness is something most of us experience every day, but for philosophers, it has proved to be a tough nut to crack.  What is it, exactly?  And more relevant for engineers, can machines—specifically, artificially intelligent computers—be conscious? 

Until recently, questions like this came up only in obscure academic journals and science fiction stories.  But now that personal digital assistant devices like Siri are enjoying widespread use, the issue has fresh relevance both for consumers and for those developing new AI (artificial intelligence) systems.

Philosophers of mind such as David Chalmers point out that one of the more difficult problems relating to consciousness is explaining the nature of experiences.  Take the color red, for example.  Yes, you can point to a range of wavelengths in the visible-light spectrum that most people will call "red."  But the redness of red isn't just a certain wavelength range.  A five-year-old child who knows his colors can recognize red, but unless he's unusual he knows nothing about light physics and wavelengths.  Yet when he sees something red, he is conscious of seeing something red.

One popular school of thought about the nature of consciousness is the "functionalist" school.  These people treat a candidate for consciousness as a black box and imagine having a conversation with it.  If its answers convince you that you're talking with a conscious being, well, that's as much evidence as you're going to get.  By this measure, some people probably already think Siri is conscious.

Now along comes a neuroscientist named Giulio Tononi, who has been working on something he calls "integrated information theory" or IIT.  It has little to do with the kind of information theory familiar to electrical engineers.  Instead, it is a formal mathematical theory that starts from some axioms that most people would agree on concerning the nature of consciousness.  Unfortunately, it's pretty complicated and I can't go into the details here.  But starting from these axioms, he works out postulates and winds up with a list of characteristics that any physical system capable of supporting consciousness should have.  The results, to say the least, are surprising.
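
Tononi's actual phi calculation is far too involved to reproduce here, but a toy sketch can at least convey the flavor of "integration."  The Python snippet below is my own drastic simplification, not Tononi's formula: it only computes the mutual information between two parts of a tiny two-part system, showing that a tightly coupled whole carries information the separated parts do not, which is the intuition phi tries to make rigorous.

```python
# Toy illustration only: mutual information between two parts of a tiny system.
# This is NOT Tononi's phi, just a far simpler quantity that captures the idea
# that "integration" means the whole carries information the parts alone lack.
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of part X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of part Y
    nz = joint > 0                           # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Two binary "parts" that are tightly coupled: they almost always agree.
coupled = np.array([[0.45, 0.05],
                    [0.05, 0.45]])

# Same marginals, but the two parts are statistically independent.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print("coupled system:     %.3f bits" % mutual_information(coupled))      # ~0.53
print("independent system: %.3f bits" % mutual_information(independent))  # 0.0
```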

For one thing, he says that while current AI systems that are implemented using standard stored-program computers can give a good impression of conscious behavior, IIT shows that their structure is incapable of supporting consciousness.  That is, if it walks like it's conscious and quacks like it's conscious, it isn't necessarily conscious.  So even if Siri manages to convince all its users that it's conscious, Tononi would say it's just a clever trick.

How can this happen?  Well, philosopher John Searle's "Chinese room" argument may help in this regard.  Suppose a man who knows no Chinese is nevertheless in a room with a computer library of every conceivable question one can ask in Chinese, along with the appropriate answers that will convince a Chinese interrogator outside the room that the entity inside the room is conscious.  All the man in the room does is take the Chinese questions slipped under the door, use his computer to look up the answers, and send the answers (in Chinese) back to the Chinese questioner on the other side of the door.  To the questioner, it looks like there's somebody who is conscious inside the room.  But a reference library can't be conscious, even if it's computerized, and the only candidate for consciousness inside the room—the man using the computer—can't read Chinese, and so he isn't conscious of the interchange either.  According to Tononi, every AI program running on a conventionally designed computer is just like the man in the Chinese room—maybe it looks conscious from the outside, but its structure keeps it from ever being conscious.
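
To caricature Searle's setup in a few lines of code: the "room" below answers by pure table lookup (the phrases are invented for illustration), and nothing in it understands Chinese, or anything else.

```python
# A caricature of Searle's Chinese room: sensible-looking replies produced
# purely by table lookup.  Nothing here understands Chinese.
# (The entries are invented for illustration.)
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "It's nice today."
}

def chinese_room(question: str) -> str:
    """Return a canned answer; apologize politely if the question isn't in the book."""
    return PHRASEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))      # looks conscious from outside the door...
    print(chinese_room("你会思考吗？"))  # ...but it's only a lookup table.
```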

On the other hand, Tononi says that the human brain—specifically the cerebral cortex—has just the kind of interconnections and ability to change its own form that is needed to realize consciousness.  That's good news, certainly, but along with that reassurance comes a more profound implication of IIT:  the possibility of making machines whose consciousness would not only be evident to those outside, but could be proven mathematically.

Here we get into some really deep waters.  IIT is by no means universally accepted in the neuroscience community.  As one might expect, it's rather unpopular among AI workers who either think consciousness is an illusion, or that brains and computers are basically the same thing and consciousness is just a matter of degree rather than a difference in kind. 

But suppose that Tononi's theory is basically correct, and we get to the point where we can take a look at a given physical system, whether it's a brain, a computer, or some as-yet-uninvented future artifact, and measure its potential to be conscious rather like you can measure a computer's clock speed today.  In an article co-written with Christof Koch in the June 2017 IEEE Spectrum, Tononi concludes that "Such a neuromorphic machine, if highly conscious, would then have intrinsic rights, in particular the right to its own life and well-being.  In that case, society would have to learn to share the world with its own creations." 

In a sense, we've been doing exactly that all along—ask any new parent how it's going.  But Tononi's "creation" isn't another human—it would be some kind of machine, broadly speaking, whose consciousness would be verified by IIT.  There has been talk about robot rights for some years, fortunately so far entirely on the hypothetical level.  But if Tononi's theory comes to be more widely accepted and turns out to do what he claims it will do, we may some day face the question of how to treat entities (I can't think of another word) that seem to be as alive as you or me, but depend for their "lives" on Pacific Gas and Electric, not the grocery store.  

Well, I don't have a good answer to that one, except that we're a long way from that consummation.  People are trying to design intelligent computers that are actually built the way the brain is built, but they're way behind the usual AI approach of programming and simulating neural networks on regular computer hardware.  If Tononi is right, the conventional AI approach leads only to what I was pretty sure was the case all along—a fancy adding machine that can talk and act like a person, but is in fact just a bunch of hardware.  But if we ever build a machine that not only acts conscious, but is conscious according to IIT, well, let's worry about that when it happens.

Sources:  Christof Koch and Giulio Tononi's article "Can We Quantify Machine Consciousness?" appeared on pp. 65-69 of the June 2017 issue of IEEE Spectrum, and is also available online at http://spectrum.ieee.org/computing/hardware/can-we-quantify-machine-consciousness.  I also referred to the Wikipedia article on integrated information theory and the Scholarpedia article at http://www.scholarpedia.org/article/Integrated_information_theory.

Monday, December 19, 2016

Are We Ready For an AI World?


The other day I was making some hotel reservations, and set them up with two different hotel chains.  One is universally pet-friendly (we often travel with a dog), and you can call the hotel you want to stay at and talk with the desk clerk directly to make your reservation.  The clerk gets into their reservation system and takes your information and usually there's no problem, although if you call at a busy time it can be a little stressful on the clerk. 

The other chain makes all phone reservations through a centralized phone system—if you call the individual motel, the desk clerk transfers you to the same reservation number you can call directly.  Recently this chain transitioned to a computerized voice-recognition system—your voice is unheard by human ears when you dial the number.  It didn't go well.

I suppose those familiar with the robotic phone-tree industry could name the company that makes this system by the way it sounds.  It has a friendly female voice saying, "Okay, what can we do for you?  Tell me if you want to make a reservation," etc.  At first I hoped I'd eventually get to talk with a live human, because my experience with these robot voices has been mixed at best.  Maybe it's my tone of voice, maybe it's my Southern background, but unless the computer is asking for simple yes-or-no answers, I don't have much luck with them. 

It asked me for the place I wanted to stay and what day and how many nights.  I tried to tell it—twice, in fact—but all I got back was this peculiar fast clicking ("pip-pop-pip-pop") which I have to believe is what the system puts on the line instead of Muzak while it's trying to puzzle out what you said, and then it asked the same question all over again.  Finally I hung up and used the chain's website to make the reservation, which may be what they want people to do anyway—I'm sure it's a lot less trouble to them than their robot telephone operators. 

This is an up-close and personal encounter with something that is only going to get worse—or better, depending on your point of view—in the future.  I'm talking about the replacement of people with technology in a wide variety of jobs.  In a recent issue of The New Yorker magazine, Elizabeth Kolbert reviews a number of books concerned with the recent advances in artificial intelligence (AI), and the effects this is going to have on the job market, the economy, and society in general. 

This isn't going to happen overnight.  Paradoxically, it's easier to program a computer to diagnose certain types of diseases with expert systems than it is to teach one how to fold towels.  Kolbert cites an experiment at U. C. Berkeley with a robot that learned to fold a towel—after practicing, it got its time down to twenty-five minutes per towel.  In that regard, at least, Rosie the Robot isn't going to replace hotel housemaids any time soon. 

On the other hand, if you work in a phone-answering "boiler room," you have reason to be worried, although my own experience with the robotic reservation clerk shows there is still a place for humans on the other end of the line.  Kolbert classifies jobs into four types:  manual routine jobs (e.g. folding towels or working on an assembly line), cognitive routine jobs (e.g. keeping track of a warehouse inventory), manual nonroutine jobs (e.g. home health care or brain surgery), and cognitive nonroutine jobs (e.g. developing a new AI system).  Both types of routine jobs, where you can basically write an algorithm about what to do in any given situation, are ripest for replacement by robots and AI software.
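
Just to make the taxonomy concrete, here is a minimal Python sketch that encodes Kolbert's four categories using her own examples.  The "exposure" flag simply restates her observation that routine work, manual or cognitive, is the most automatable; it is not a forecast.

```python
# A minimal encoding of Kolbert's four job categories.  The exposure flag
# restates her point that routine work -- whether manual or cognitive -- is
# the ripest for automation; it is an illustration, not a prediction engine.
JOBS = {
    "assembly-line work":   ("manual", "routine"),
    "towel folding":        ("manual", "routine"),
    "warehouse inventory":  ("cognitive", "routine"),
    "home health care":     ("manual", "nonroutine"),
    "brain surgery":        ("manual", "nonroutine"),
    "designing AI systems": ("cognitive", "nonroutine"),
}

def automation_exposure(job: str) -> str:
    """High exposure for routine work, low for nonroutine, per Kolbert's scheme."""
    kind, routine = JOBS[job]
    return "high" if routine == "routine" else "low"

for job in JOBS:
    print(f"{job:22s} -> {automation_exposure(job)} exposure")
```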

The fear that humans will lose their jobs to machines goes back at least to the 1700s, when mechanical looms and spinning jennies began to replace weavers and the one-person spinning wheel.  But until recently, industrialization produced at least as many new jobs as the old ones it eliminated, if not more. 

The problem now is that many new firms that attract billions in capital operate with essentially nobody on the payroll.  Kolbert cites an extreme example:  the messaging firm WhatsApp, with its fifty-five employees, was bought by Facebook in 2014 for twenty-two billion dollars.  That's four hundred million dollars per employee.  When I told my wife about it, she said, "Well, I hope they didn't lose their jobs when they got bought out."  I hope not either.  Maybe the janitor did, but you can rest assured that some of that twenty-two billion found its way into the pockets of at least a few of those people. 

Leaving lottery-like occurrences aside, the point is that both software-based and manufacturing enterprises are finding ways to do what they need to do with fewer and fewer warm bodies who are not in the upper echelon of the cognitive non-routine class.  The few people they still need—lawyers, managers, creative people, and other "symbolic manipulators," in George Gilder's phrase—may form the future ruling class of what software developer Martin Ford calls "techno-feudalism." 

But even feudal lords needed their serfs to work their lands.  The ruling class of the future will have no need for anyone not in their class, except as consumers.  Most of the authorities Kolbert cites figure that the best we can do with the vast majority of us ordinary mortals who have no aptitude for programming, management, the law, or high finance, is to pension us off with guaranteed incomes, or something that amounts to that, and hope we don't decide to up and storm the castle some day.

Next week I plan to look at an alternate view of the same problem, written during the depths of the Great Depression, but I've run out of space today.  In the meantime, if you have a job, be grateful for it, and share some of what you have with those less fortunate.   

Sources:  Elizabeth Kolbert's piece "Rage Against the Machine:  Will Robots Take Your Job?" begins on p. 114 of the Dec. 19 & 26, 2016 issue of The New Yorker magazine.