Monday, August 14, 2017

The Ethical Spin on Spinners


The first time I saw one in a store, I couldn't figure out what it was for and I had to ask my wife.  "Oh, that's a fidget spinner," she said.  "You don't need one."  She was right about that.

As most people under 20 (and a few people over 60) know, fidget spinners are toys that you hold between your finger and thumb and spin.  That's it—that's the whole show.  When the fad showed signs of getting really big, somebody rushed battery-powered, Bluetooth-enabled spinners into production.  My imagination obviously doesn't run in mass-marketing directions, because I couldn't think of what adding Bluetooth to a spinner could do.  Well, a quick Amazon search turns up spinners with little speakers in each of the three spinning lobes (playing music from your Bluetooth-enabled device), spinners with LEDs embedded in them and synced to the rotation somehow so that when you spin one, it spells out "I LOVE YOU," and spinners with color-organ-style LEDs that light in time to music—you name it, somebody has crammed the electronics into a spinner to do it.
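
For what it's worth, the "somehow" is a persistence-of-vision trick: a sensor (typically a magnet and a Hall-effect switch) marks each revolution, the firmware measures the rotation period, and a radial row of LEDs is flashed in timed slots so that successive flashes paint the columns of letters in mid-air.  Below is a minimal Python sketch of the timing arithmetic only; the tiny font, the message, and the once-per-revolution sensor pulse are illustrative assumptions, not details of any actual product.

```python
# A toy simulation of the persistence-of-vision trick: one revolution is
# divided into angular slots, and a radial row of LEDs is flashed with a
# different bit pattern in each slot so the message appears to float in air.
# The 3x5 font and the Hall-sensor timing model are illustrative assumptions,
# not taken from any actual spinner firmware.

FONT = {  # each glyph: 3 columns of 5 bits, bottom bit = innermost LED
    "H": [0b11111, 0b00100, 0b11111],
    "I": [0b10001, 0b11111, 0b10001],
    " ": [0b00000, 0b00000, 0b00000],
}

def column_schedule(message, period_s):
    """Return (time_offset_seconds, led_bits) pairs for one revolution.

    period_s is the measured time per revolution, e.g. from a Hall-effect
    sensor that pulses once each time a magnet passes it.
    """
    columns = []
    for ch in message:
        columns.extend(FONT[ch])
        columns.append(0b00000)   # one blank column between letters
    slot = period_s / len(columns)
    return [(i * slot, bits) for i, bits in enumerate(columns)]

if __name__ == "__main__":
    # A spinner turning at 20 revolutions per second:
    for t, bits in column_schedule("HI", period_s=1 / 20):
        print(f"t = {t * 1e3:6.3f} ms  LEDs = {bits:05b}")
```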

But all these electronics need super-compact batteries, and where there are batteries, there is the possibility of fire.  Already, there have been a couple of reports of Bluetooth-enabled spinners catching fire while charging.  No deaths or serious injuries have resulted, but the U.S. Consumer Product Safety Commission (CPSC) has put out a nannygram, as you might call it:  don't overcharge the spinner, don't plug it in and leave it unattended, don't use a charger that wasn't designed for it, and so on.  I am not aware that teenagers are big fans of the CPSC website, but nobody can say the bureaucrats haven't done their job on this one.

The Wikipedia article on spinners discounts claims that they are good for people with attention-deficit disorder, hyperactivity, and similar things.  Seems to me that holding a spinning object in your hand would increase distraction rather than the opposite, and some high schools have agreed with me to the extent of banning the devices altogether. 

As a long-time manual tapper (no equipment required), I think I can speak to that aspect of the matter from personal experience.  Ever since I was a teenager or perhaps before, I have been in the habit of tapping more or less rhythmically on any available surface from time to time.  My wife is not exactly used to it—she will let me know now and then when it gets on her nerves—but it's no longer a huge issue between us.  Often when she asks me to stop, it's the first time I've fully realized I'm doing it, and that's part of the mystery of tapping or doing other habitual, useless things with your hands.

The most famous manual fidgeter in fiction was a character in Herman Wouk's World War II novel The Caine Mutiny, Captain Philip F. Queeg, who had the habit when under stress of taking two half-inch ball bearings out of his pocket and rolling them together.  (Queeg lived in an impoverished age when customized fidget toys were only a distant dream, so he had to use whatever fell to hand, so to speak.)  During the court-martial that forms the heart of the novel, a psychiatrist is called to the stand to speculate on the reasons for Queeg's habit of rolling balls.  The doctor's comments ranged from the sexual to the scatological, and will not be repeated here.  But it appears that psychology has made little progress in the decades since toward finding out why some people simply like to make meaningless motions with their hands.  That hasn't kept a lot of marketing types from making money off of them.

Fidget spinners are yet another example of the power of marketing to get people to buy something they didn't know they wanted till they saw one.  I don't know what the advertising budget was for the companies that popularized the toy, but I suspect it was substantial.  For reasons unknown to everyone but God, the thing caught on, and what with Bluetooth-enabled ones and so on, the marketers are riding the cresting fad wave for all it's worth before it spills on the beach and disappears, as it will.  Somehow I don't think we're going to see eighty-year-olds in 2100 taking their cherished mahogany spinners out of felt-lined boxes for one last spin before the graveyard.

Like most toys, fidget spinners seem to be ethically benign, unless one of them happens to set your drapes on fire.  Lawsuits are a perpetual hazard of the consumer product business, but the kind of people who market fad products are risk-takers to begin with, so it's not surprising they cut a few corners on product safety before rushing to the stores with their hastily designed gizmos.  By the time the cumbersome government regulatory apparatus gets in gear, the company responsible for the problematic spinners may have vanished.  Here's where the Internet and its users' fondness for exciting bad news can help even more than government regulation.  When hoverboards started catching fire a year or two ago, what kept people from buying more of the bad ones wasn't the government so much as the bad publicity the defective board makers got on YouTube.  And that's a good thing:  consumers who get burned (sometimes literally) can warn others of the problem.

As for Bluetooth-enabled spinners, well, if you want one, go get one while you can.  They'll be collectors' items pretty soon.  And those of us who learned how to cope with tension the old-fashioned way by drumming on a tabletop can at least rest assured that they aren't going to take our fingers or tabletops away.  But they might tell us to stop tapping.

Sources:  Slate's website carried the article "New Fidget Spinner Safety Guidelines Prove We Can’t Have Nice Things" by Nick Thieme at http://www.slate.com/blogs/moneybox/2017/08/11/cpsc_just_released_fidget_spinner_safety_guidelines_proving_we_can_t_have.html.  I also referred to the Wikipedia article on fidget spinners.  Herman Wouk's Pulitzer-Prize-winning novel The Caine Mutiny was published in 1952 and led to a film of the same name starring a considerably miscast Humphrey Bogart.

Monday, August 07, 2017

Giulio Tononi and His Consciousness Meter


If you're reading this, you're conscious of reading it.  Consciousness is something most of us experience every day, but for philosophers, it has proved to be a tough nut to crack.  What is it, exactly?  And more relevant for engineers, can machines—specifically, artificially intelligent computers—be conscious? 

Until recently, questions like this came up only in obscure academic journals and science fiction stories.  But now that personal digital assistant devices like Siri are enjoying widespread use, the issue has fresh relevance both for consumers and for those developing new AI (artificial intelligence) systems.

Philosophers of mind such as David Chalmers point out that one of the more difficult problems relating to consciousness is explaining the nature of experiences.  Take the color red, for example.  Yes, you can point to a range of wavelengths in the visible-light spectrum that most people will call "red."  But the redness of red isn't just a certain wavelength range.  A five-year-old child who knows his colors can recognize red, but unless he's unusual he knows nothing about light physics and wavelengths.  Yet when he sees something red, he is conscious of seeing something red.

One popular school of thought about the nature of consciousness is the "functionalist" school.  Functionalists treat a candidate for consciousness as a black box and imagine having a conversation with it.  If its answers convince you that you're talking with a conscious being, well, that's as much evidence as you're going to get.  By this measure, some people probably already think Siri is conscious.

Now along comes a neuroscientist named Giulio Tononi, who has been working on something he calls "integrated information theory" or IIT.  It has little to do with the kind of information theory familiar to electrical engineers.  Instead, it is a formal mathematical theory that starts from some axioms that most people would agree on concerning the nature of consciousness.  Unfortunately, it's pretty complicated and I can't go into the details here.  But starting from these axioms, he works out postulates and winds up with a list of characteristics that any physical system capable of supporting consciousness should have.  The results, to say the least, are surprising.
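
That said, a toy calculation can convey the flavor of the central idea, which is that a conscious system must generate more information as a whole than its parts do separately.  The Python sketch below is emphatically not Tononi's Φ, which is defined over the cause-effect structure of a system's dynamics; it merely asks, over every way of cutting a set of binary units in two, how much information the weakest cut would destroy.  The function names and the two-bit examples are my own illustrative assumptions.

```python
# A drastically simplified, purely illustrative "integration" measure in the
# spirit of IIT (this is NOT Tononi's actual phi).  We ask: over all ways of
# cutting a set of binary units into two parts, what is the smallest amount
# of information the cut destroys?  A system whose weakest cut still destroys
# information is "integrated" in this toy sense.

from itertools import product
from math import log2

def marginal(joint, keep):
    """Marginalize a joint distribution over binary units down to `keep`."""
    out = {}
    for state, p in joint.items():
        sub = tuple(state[i] for i in keep)
        out[sub] = out.get(sub, 0.0) + p
    return out

def info_across_cut(joint, part_a, part_b):
    """KL divergence between the joint and the product of the two parts'
    marginals: the information lost if the cut severed all interaction."""
    pa, pb = marginal(joint, part_a), marginal(joint, part_b)
    total = 0.0
    for state, p in joint.items():
        if p == 0:
            continue
        qa = pa[tuple(state[i] for i in part_a)]
        qb = pb[tuple(state[i] for i in part_b)]
        total += p * log2(p / (qa * qb))
    return total

def toy_phi(joint, n):
    """Minimize the information destroyed over all bipartitions."""
    cuts = []
    for mask in range(1, 2 ** (n - 1)):        # each bipartition once
        part_a = [i for i in range(n) if mask >> i & 1]
        part_b = [i for i in range(n) if not mask >> i & 1]
        cuts.append(info_across_cut(joint, part_a, part_b))
    return min(cuts)

if __name__ == "__main__":
    # Two perfectly correlated bits: the only cut destroys a full bit.
    correlated = {(0, 0): 0.5, (1, 1): 0.5}
    # Two independent bits: the cut destroys nothing.
    independent = {s: 0.25 for s in product((0, 1), repeat=2)}
    print("correlated :", toy_phi(correlated, 2))   # 1.0
    print("independent:", toy_phi(independent, 2))  # 0.0
```

Run it and the two perfectly correlated bits score one bit of integration while the two independent bits score zero, which is at least the right qualitative ordering:  a system is integrated only if no cut is harmless.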

For one thing, he says that while current AI systems that are implemented using standard stored-program computers can give a good impression of conscious behavior, IIT shows that their structure is incapable of supporting consciousness.  That is, if it walks like it's conscious and quacks like it's conscious, it isn't necessarily conscious.  So even if Siri manages to convince all its users that it's conscious, Tononi would say it's just a clever trick.

How can this happen?  Well, philosopher John Searle's "Chinese room" argument may help in this regard.  Suppose a man who knows no Chinese is nevertheless in a room with a computer library of every conceivable question one can ask in Chinese, along with the appropriate answers that will convince a Chinese interrogator outside the room that the entity inside the room is conscious.  All the man in the room does is take the Chinese questions slipped under the door, use his computer to look up the answers, and send the answers (in Chinese) back to the Chinese questioner on the other side of the door.  To the questioner, it looks like there's somebody who is conscious inside the room.  But a reference library can't be conscious, even if it's computerized, and the only candidate for consciousness inside the room—the man using the computer—can't read Chinese, and so he isn't conscious of the interchange either.  According to Tononi, every AI program running on a conventionally designed computer is just like the man in the Chinese room—maybe it looks conscious from the outside, but its structure keeps it from ever being conscious.
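
The argument is easy to make concrete.  The sketch below is an entire "Chinese room" in a dozen lines of Python: a lookup table mapping questions to canned replies (the entries are invented for illustration).  Scale the table up as far as you like; its structure never changes, and nothing in it understands anything.

```python
# The Chinese room, reduced to its skeleton: a lookup table that maps each
# incoming question to a canned reply.  The phrasebook entries are invented
# for illustration; a real "room" would need a vastly larger table, but
# nothing about its structure would change.

PHRASEBOOK = {
    "How are you today?": "Very well, thank you for asking.",
    "Are you conscious?": "Of course I am.  Aren't you?",
}

def the_room(question: str) -> str:
    """Answer by pure lookup.  Nothing here understands the question, any
    more than Searle's man in the room understands Chinese."""
    return PHRASEBOOK.get(question, "Could you rephrase that?")

if __name__ == "__main__":
    print(the_room("Are you conscious?"))
```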

On the other hand, Tononi says that the human brain—specifically the cerebral cortex—has just the kind of interconnections, and the ability to change its own form, that are needed to realize consciousness.  That's good news, certainly, but along with that reassurance comes a more profound implication of IIT:  the possibility of making machines whose consciousness would not only be evident to those outside, but could be proven mathematically.

Here we get into some really deep waters.  IIT is by no means universally accepted in the neuroscience community.  As one might expect, it's rather unpopular among AI workers who think either that consciousness is an illusion, or that brains and computers are basically the same thing and consciousness is just a matter of degree rather than a difference in kind.

But suppose that Tononi's theory is basically correct, and we get to the point where we can take a look at a given physical system, whether it's a brain, a computer, or some as-yet-uninvented future artifact, and measure its potential to be conscious rather like you can measure a computer's clock speed today.  In an article co-written with Christof Koch in the June 2017 IEEE Spectrum, Tononi concludes that "Such a neuromorphic machine, if highly conscious, would then have intrinsic rights, in particular the right to its own life and well-being.  In that case, society would have to learn to share the world with its own creations." 

In a sense, we've been doing exactly that all along—ask any new parent how it's going.  But Tononi's "creation" isn't another human—it would be some kind of machine, broadly speaking, whose consciousness would be verified by IIT.  There has been talk about robot rights for some years, fortunately so far entirely on the hypothetical level.  But if Tononi's theory comes to be more widely accepted and turns out to do what he claims it will do, we may some day face the question of how to treat entities (I can't think of another word) that seem to be as alive as you or me, but depend for their "lives" on Pacific Gas and Electric, not the grocery store.  

Well, I don't have a good answer to that one, except that we're a long way from that consummation.  People are trying to design intelligent computers that are actually built the way the brain is built, but those efforts are way behind the usual AI approach of programming and simulating neural networks on regular computer hardware.  If Tononi is right, the conventional AI approach leads only to what I was pretty sure was the case all along—a fancy adding machine that can talk and act like a person, but is in fact just a bunch of hardware.  But if we ever build a machine that not only acts conscious, but is conscious according to IIT, well, let's worry about that when it happens.

Sources:  Christof Koch and Giulio Tononi's article "Can We Quantify Machine Consciousness?" appeared on pp. 65-69 of the June 2017 issue of IEEE Spectrum, and is also available online at http://spectrum.ieee.org/computing/hardware/can-we-quantify-machine-consciousness.  I also referred to the Wikipedia article on integrated information theory and the Scholarpedia article at http://www.scholarpedia.org/article/Integrated_information_theory.