Showing posts with label Scientific American. Show all posts

Monday, May 27, 2019

Can We Trust Alexa? Wade Roush Hopes So


Wade Roush is a journalist who writes a column on innovation for the prestigious Scientific American monthly.  In the June issue, he looks at the future of increasingly smart and omnipresent artificial-intelligence (AI) agents that you can talk with—Apple's Siri, Google's Assistant, Amazon's Alexa, Microsoft's Cortana, and so on.  Apple has installed a Siri app in its AirPods so all you have to do is say, "Hey, Siri" and she's right there in your ear canals.  (Full disclosure:  I don't use any of these apps, except for a dumb talk-only GPS assistant we've named Garmina.) 

True to his column's marching orders, Roush came up with a list of five protections that he says users should "insist on in advance" before we go any further with these smart electronic assistants.  Don't get me wrong, it's a good list.  But the chances of any of the five taking hold or being realized in any substantial way are, in my view, way smaller than a snowball's chances in you-know-where. 

Take his first item:  privacy.  Inevitably, AI interactions are cloud-based because of the heavy-duty processing required.  Therefore, he calls for end-to-end encryption so even the companies running the AI assistants can't tell what's going on.  This is a contradictory requirement.  Of course they have to know what you're asking, because otherwise how are they going to respond to requests for information?  Maybe Roush is thinking of something like the old firewall idea that used to be maintained between the editorial and advertising divisions of a news organization.  But there are huge holes in those walls now even in the most traditional news outlets, and I don't see how any company could both remain ignorant of what's going on between its AI system and the user, and have the AI system do anything useful.

The next protection he asks for is unprecedented, so I will quote it directly:  "AI providers must be up front about how they are handling our data, how customer behavior feeds back into improvements in the system, and how they are making money, without burying the details in unreadable, 50-page end-user license agreements."  If any of the AI-assistant firms manage to do this, it will be the first time in recorded history.  Especially the part about how they make money.  That's called a firm's business strategy, and it's one of the most closely guarded secrets that most firms have. 

Next, he calls for every link in the communication chain to be "hacker-proof."  Good luck with that.  Hacker-resistant, I can see.  But not hacker-proof.

Next, he says the assistants must draw on "accurate data from trusted sources."  This is a hard one.  If you ask Alexa a question like, "What do you mean, an Elbonian wants my help in transferring millions out of his country?" what's she going to say in response?  The adage "garbage in, garbage out" still applies to AI systems just as it did to IBM System/360s in the 1960s.  And if we're truly talking about artificial intelligence, with no human intervention, I don't see how AI systems will filter out carefully designed phishing attacks or Russian-sponsored political tweets any better than humans do, which is to say, not very well.

And I've saved the best for last.  He calls for autonomy, for AI assistants to give us more agency over our lives:  "It would be a disaster for everyone if they morphed into vehicles for selling us things, stealing our attention or stoking our anxieties." 

Excuse me, but those three actions are how most of the Internet works.  If you took away all the activity that was designed to sell us things, the Internet would dwindle back down to a few academics sending scientific data back and forth, which is how it began in the 1980s.  If you told designers not to try stealing our attention, and turned off all the apps and sites designed to do so—Facebook, Instagram, all the online games, Twitter, newsfeeds—all that stuff would disappear.  Facebook designers are on public record as having said that their explicit conscious intention in designing the system was to make using it addictive.  And as for stoking our anxieties—well, that's a good capsule description of about 80% of all the news on the Internet.  Take that away, and maybe you'll have some good stories about rainbows, butterflies, and flowers, but only till the sponsoring companies go bankrupt for lack of business.

I have no personal animus against Mr. Roush, and in dealing with a new technology he has to say something about it.  And there's no harm in holding up an ideal for people to approach in the future, even if they don't have much of a chance of approaching it very closely.  But it's strange to see a supposedly savvy technology writer call for future protections on any high-tech innovation that are so ludicrously idealistic, not to say contradictory on some points. 

Perhaps a page from the historians of technology would be helpful here.  They make a distinction between an internalist view of history and an externalist view.  I'm radically simplifying here, but basically, an internalist (I would count Roush in that number) takes the general assumptions of a field for granted and looks at things in a we-can-do-this way.  And in principle, if you take the promises of smart-AI proponents at face value, we could in fact achieve the five goals of protection that Roush outlined.

But an externalist views a situation more broadly, in the context of what has happened before both inside and outside a given field.  In saying that the protections Roush calls for are unlikely to be realized fully, I rely on the history of how high-tech companies and other actors have behaved up to this point, which is to fall far short of every protection that Roush calls for, at one time or another. 

I hope that this time it will be different, and talking with your trusted invisible AI assistant will be just as worry-free as talking with your most trusted actual human friend on the phone.  But after writing that sentence, I'm not even sure that I want that to happen.  And if it does, I think we will have lost something in the process.

Sources:  Wade Roush's column "Safe Words for Our AI Friends" appeared on p. 22 of the June 2019 print issue of Scientific American.

Monday, December 28, 2015

The Ironies of Carbon Capture Technology


In a recent article in Scientific American, reporter David Biello summarizes the current state of carbon-capture technology, and it's not good.  If a negative view of carbon capture appeared in some obscure climate-change-denier publication, it could be dismissed as biased reporting.  But the elite-establishment Scientific American has been at the forefront of the fight against climate change, and so for such an organ to publish such bad news means that we would do well to take it seriously.

The basic problem is that capturing a gas like carbon dioxide, compressing it, and injecting it deep enough underground that it won't come out again for a few thousand years is not cheap.  And the worst fossil-fuel offenders—coal-fired power plants—make literally tons of the stuff every second.  It would be hard enough to transport and bury tons of solid material (and coal ash is a nasty enough waste product), but we're talking about tons of a gas, not a solid.  Just the energy required to compress it is huge, and the auxiliary operations (cleaning the gas, drilling wells, finding suitable geologic structures to hold it underground) add millions to billions to the cost of an average-size coal-fired plant.  Worst of all, the goal for which all this effort is expended—slowing carbon-dioxide emissions—is a politically tinged goal whose merit is doubted by many, and which is being ignored wholesale by some of the world's worst offenders in this regard, namely China and India. 
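Just how huge is the compression energy?  A back-of-envelope calculation (the pressures, temperature, and emission rate below are my own illustrative assumptions, not figures from Biello's article) puts a thermodynamic floor under it.  The ideal-gas isothermal work gives the absolute minimum energy needed to squeeze a tonne of CO2 up to pipeline pressure:

```python
import math

# Back-of-envelope minimum: ideal-gas isothermal work to compress CO2
# from atmospheric pressure to an assumed pipeline pressure of 100 bar.
# (Real plants use multi-stage intercooled compressors and real-gas
# behavior, so actual energy use is higher than this floor.)
R = 8.314            # J/(mol*K), universal gas constant
T = 300.0            # K, assumed near-ambient temperature
P1, P2 = 1.0, 100.0  # bar, assumed inlet and outlet pressures
molar_mass = 0.044   # kg/mol for CO2

moles_per_tonne = 1000.0 / molar_mass             # about 22,700 mol
work_per_mole = R * T * math.log(P2 / P1)         # J/mol, isothermal minimum
work_per_tonne = work_per_mole * moles_per_tonne  # J per tonne of CO2

kwh_per_tonne = work_per_tonne / 3.6e6            # roughly 70-75 kWh/tonne
print(f"Minimum compression work: {kwh_per_tonne:.0f} kWh per tonne of CO2")
```

Real compressors fall well short of this ideal, and at a rate of roughly a ton of CO2 per second, even the thermodynamic minimum of about 73 kWh per tonne works out to a continuous parasitic load on the order of a couple hundred megawatts, a sizable bite out of a plant's own output before any of the other capture steps are counted.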

However, shrinking the U. S. carbon footprint is regarded by many as a noble cause, and a few years ago Mississippi Power got on the bandwagon by designing a new lignite-burning power plant to capture its own carbon-dioxide emissions and send them into a nearby oil field, where they force out oil that is, uh, eventually burned to make more carbon dioxide.  Here is the first irony.  Evidently, among the few large-scale customers for large quantities of carbon dioxide are oil companies, who send it underground (good) to make more oil come to the surface (not so good). 

The second irony is an economic one.  It is the punishment meted out by economics to the few good corporate citizens in a situation where most citizens are not being so good.

Currently in the U. S., there is no uniform, rational, and legally enacted set of rules regarding carbon-capture requirements.  So far, the citizenry as a whole has not risen up and said, "In our constitutional role as the supreme power in the U. S., we collectively decide that capturing carbon dioxide is worth X billion a year to us, and we want it done pronto."  Instead, there is a patchwork of voluntary feel-good individual efforts, showcase projects here and there, and large-scale operations such as the one Mississippi Power got permission to do from the state's utility commission, as long as they didn't spend more than $2.88 billion on the whole thing.

So far, it's cost $6.3 billion, and it's still not finished.  This means big problems for the utility and its customers, in the form of future rate hikes.  Capturing carbon is not a profitable enterprise.  The notion of carbon-trading laws would have made it that way, sort of, but for political reasons it never got off the ground in the U. S., and unless we get a world government with enforcement powers, such an idea will probably never succeed on an international level.  So whatever carbon capturing is going to be done will be done not because it is profitable, but for some other reason.

The embarrassment of Mississippi Power's struggling carbon-capture plant is only one example of the larger irony, which is that we don't know what an appropriate amount is to spend on carbon capture, because we don't know exactly, or even approximately, what it will cost if we don't, and who will pay.  Probably the poorer among us will pay the most, but nobody can be sure.  (There's a lot of very expensive real estate on coasts around the world, and sometimes I wonder if that influences the wealthy class to support anti-global-warming efforts as much as they do.)  

The time factor is a problem in all this as well.  Nearly all forecasts of global-warming tragedies are long-term things with timelines measured in many decades.  That is good in the sense that we have a while to figure out what to do.  But in terms of making economic decisions that balance profit against loss—which is what all private firms have to do—such long-run and widely distributed problems are chimerical and can't be captured by any reasonable accounting system.  Try to put depreciation on an asset you plan to own from 2050 to 2100 on your income-tax return, and see how far you get. 
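The time-factor problem can be made concrete with a quick discounting sketch (every number here is my own hypothetical assumption, not from the article): under ordinary present-value accounting, a cost incurred in 2100 is worth almost nothing on a 2019 balance sheet, no matter what discount rate a firm picks.

```python
# Hypothetical illustration: present value of a $1-billion climate cost
# incurred in 2100, discounted back to 2019 at a few assumed rates.
cost = 1e9            # dollars, hypothetical future cost
years = 2100 - 2019   # 81 years out

pv = {}
for rate in (0.02, 0.05, 0.08):
    # standard present-value formula: PV = cost / (1 + r)^n
    pv[rate] = cost / (1 + rate) ** years
    print(f"at {rate:.0%}: present value is about ${pv[rate]:,.0f}")
```

At an 8% discount rate the billion-dollar cost shrinks to roughly two million present-day dollars, which is exactly why no conventional accounting system gives century-scale climate costs any real weight in a firm's profit-and-loss decisions.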

So in many places the only way large-scale carbon capture will happen is by government fiat.  A dictatorial government such as China's could do this tomorrow if it wanted to, but as the recent Paris climate-accord meeting showed, it doesn't want to—not for a long time yet, anyway.  In a nominal democracy such as the United States, the political will is strong in some quarters, but the unilateral non-democratic way the present administration has been trying to implement carbon limits has run into difficulties, to say the least.

My sympathies to residents of Mississippi who face the prospect of higher electric bills when, and if, their carbon-capturing power plant goes online.  Whatever else the project has done, it has revealed the problems involved in building a hugely expensive engineering project for a payoff that few of those living today may ever see.

Sources:  The article "The Carbon Capture Fallacy" by David Biello appeared on pp. 58-65 of the January 2016 edition of Scientific American.