The Institute of Electrical and Electronics Engineers (IEEE) is probably the largest society of engineering professionals in the world, with more than 300,000 members. Its Code of Ethics has a little-known clause in which IEEE members agree to "improve the understanding of technology, its appropriate application, and potential consequences." My father sometimes used to greet me as I came home from school with the question, "And what did you do to make the world a better place today?" I could equally well ask engineers, "What did you do to improve the public's understanding of technology today?"
People called applications engineers do that all the time, but strictly in the context of helping their firm's customers use its products. But I don't think that's all the drafters of the Code had in mind. By virtue of our specialized knowledge, engineers are under an obligation to the public to spread the truth about technology and to counter fraud and fakery wherever found. This may be one reason you don't find more engineers in politics.
In fairness to politicians, many of them try their hardest to understand technical concepts with important political implications, and to express what they see as their essentials to the public. One such attempt that I think succeeded pretty well was published in the Jan. 30 Austin American-Statesman as an editorial by U. S. Rep. Silvestre Reyes (D-El Paso). The occasion was a plan promoted by the Republican governor of Texas to build 18 more coal-fired power plants in the state. Hold on a minute, says Rep. Reyes: we have better things in store, being developed right here at Ft. Bliss, where the Army has laboratories engaged in something called "Power the Army!" The exclamation point must mean they're serious.
If you've ever been to West Texas, you will know that the ironically named Ft. Bliss is a good place to test systems that need to work well in dry, hot, desert conditions. Today's electronics-intensive military can't just find the nearest wall outlet to plug its equipment into. Traditionally, soldiers have had to lug along heavy, expensive, noisy, inefficient diesel generators and the thousands of gallons of fuel needed to run them. So the Army has perhaps a greater motivation than the rest of us to find ways to make electric power from solar energy, of which there is plenty in dry deserts.
Most solar power research has focused on bringing down the cost of the solar cells themselves, which despite much progress over the years still produce electricity at roughly twice the cost of conventional sources. Judging by their website, the "Power the Army!" project engineers have turned to a neglected aspect of solar electric power, what is technically termed "power conditioning."
Like most other commodities, electric power has to meet certain standards to be used. Voltage is an important characteristic for power: if your car battery voltage falls below a certain point, your car won't start. If the voltage delivered to your house changes more than a percent or so suddenly, your lights flicker. It turns out that the raw electric power from solar cells is not in very good shape: it varies from moment to moment with cloud cover, from day to day with solar angle, and depends on temperature and other factors. Until recently, developers of solar panels more or less took what they could get, but evidently the Army initiative is working to develop very sophisticated power-conditioning modules that are small enough to fit on each yard-square panel, and are centrally computer-controlled for optimum efficiency. Together with DC-to-AC inverters of improved design, the Army hopes to deliver solar power at half the cost that prevails today.
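To make the optimization idea concrete, here is a minimal sketch of "perturb and observe" maximum-power-point tracking, one well-known trick a power-conditioning module can use to hold a panel at its most productive operating voltage as conditions shift. The panel model, the numbers, and the function names are all invented for illustration; nothing here is drawn from the Army's actual design.

```python
# Hedged illustration only: a toy solar panel and a "perturb and observe" tracker.

def panel_current(voltage, irradiance=1.0):
    """Toy photovoltaic I-V curve: current stays near the short-circuit value,
    then falls off sharply as the voltage approaches open circuit."""
    i_sc = 5.0 * irradiance   # short-circuit current in amps (made up)
    v_oc = 21.0               # open-circuit voltage in volts (made up)
    if not 0.0 < voltage < v_oc:
        return 0.0
    return i_sc * (1.0 - (voltage / v_oc) ** 8)

def track_maximum_power(v_start=12.0, step=0.1, iterations=200):
    """Nudge the operating voltage and keep moving in whichever direction
    last increased the delivered power."""
    v = v_start
    last_power = v * panel_current(v)
    direction = 1
    for _ in range(iterations):
        v += direction * step
        power = v * panel_current(v)
        if power < last_power:      # that nudge hurt, so reverse course
            direction = -direction
        last_power = power
    return v, last_power

if __name__ == "__main__":
    v_mpp, p_mpp = track_maximum_power()
    print(f"Operating point settles near {v_mpp:.1f} V, delivering about {p_mpp:.0f} W")
```

The point of the sketch is the design philosophy: instead of assuming fixed conditions, the module keeps experimenting with the operating point and holds on to whatever works at that moment.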
That explanation is roughly how an electrical engineer writing for the public would put it. Now read how Rep. Reyes says essentially the same thing:
"The program uses three components: the extractor, which extracts electrons from solar panels rather than the sun having to push them out of the panels; an inverter, which converts direct current (DC), which solar panels provide, into alternating current (AC), which we actually use, at very high efficiency; and a control system to regulate the process."
How do you like that? I think it's great. The bit about "extracting" electrons instead of making the sun push them out, technically speaking, is close to nonsense. But it gets the overall point across, which is that the system works better by doing something actively which up to now has been accomplished passively. And it was written (or commissioned—Rep. Reyes probably had some help) by a former immigration official with a degree in criminal justice who has taken the trouble to learn enough about an important technical matter to bring it to the public's attention.
Few engineers go into fields where they communicate routinely with the general public. But some of those who do have done quite well. The civil engineer Henry Petroski has written many books that make the practice of engineering at least comprehensible, and sometimes interesting and even dramatic. The independent journalist Keith Snow was once a student of mine in electrical engineering, and although his work no longer relates only to technology, the honesty and attention to detail he learned in school have served him well in his present position. An engineering education can be used for a variety of things besides straight design engineering. Perhaps the world would understand more about what engineers do if more engineers decided to obey that obscure clause in the code of ethics about helping the public understand technology.
Sources: The editorial by Rep. Reyes appeared on p. A9 of the print edition of the Austin American-Statesman. The "Power the Army!" project has a website at http://gina.nps.navy.mil/Projects/PowerTheArmy/tabid/61/Default.aspx. The IEEE Code of Ethics is available at http://www.ieee.org/portal/pages/about/whatis/code.html.
Tuesday, January 30, 2007
Wednesday, January 24, 2007
Googling Fame: Who's In Charge?
First, I will heed the proverbial warning not to bite the hand that feeds you, or in this case, the company that provides my blog free of charge. Google, that huge, somewhat mysterious entity run by a couple of thirty-somethings who are (I read recently) two of the most admired people in America, said they would let me blog here for free, and would provide easy-to-use facilities for setting up my blog and running it. Almost without exception, they have kept their word, whoever they are. I don't have to have ads on my blog unless I choose to, the system is as easy to use as they said, and in sum, my limited experience with the organization has been almost uniformly positive. And to make things even better, after nearly a year of blogging here, I find that if you type "engineering ethics blog" into Google's search engine, the first thing that comes up is this blog. Not only that, but among the next few results are references to this blog at the University of Illinois at Urbana-Champaign and the Illinois Institute of Technology. (If you type just "engineering ethics," it shows up too, but not till the fourth page.)
Now before I start preening in public, I should let you know that I have friends at UIUC and IIT, and I'm almost certain that these friends are the reasons for the references to my blog at those institutions, not the fact that Google points here. But why would you or I or anybody else care about the fact that something you write shows up on Google's search engine?
The answer is obvious to anyone who is at all familiar with the way search engines work these days. In contrast to the early days five years or so ago, when a query for "dog houses" would turn up everything from frankfurters to Manhattan real estate, search engines today use techniques that not only turn up the most relevant results first, but also rank them according to popularity. Popularity is easily measured by the frequency with which people go to certain sites referred to by the search engine, and possibly by other means of which the non-computer scientist writing this blog is ignorant. (It's amazing—and sometimes a little frightening—what people can know about your web habits with the right software.)
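As a purely illustrative sketch of the idea, and emphatically not of Google's actual, largely undisclosed algorithms, here is a toy ranker that orders pages by a crude text-match score weighted by how often users have visited each page before. The URLs, page texts, and click counts are all made up for the example.

```python
# Hypothetical sketch: rank pages by relevance to the query, weighted by popularity.

def rank_results(query, pages, click_counts):
    """pages: {url: text}; click_counts: {url: past visits}. Returns urls, best first."""
    query_terms = query.lower().split()

    def score(url):
        text = pages[url].lower()
        relevance = sum(text.count(term) for term in query_terms)   # crude text match
        popularity = 1 + click_counts.get(url, 0)                   # visit-count weight
        return relevance * popularity

    return sorted(pages, key=score, reverse=True)

pages = {
    "hotdogs.example/franks": "dog houses? no, just frankfurters and hot dog buns",
    "realty.example/manhattan": "manhattan real estate listings, town houses",
    "pets.example/doghouses": "dog houses for every breed: build or buy a dog house",
}
clicks = {"pets.example/doghouses": 120, "hotdogs.example/franks": 5}
print(rank_results("dog houses", pages, clicks))
```

Run on the "dog houses" example above, the frankfurter and real-estate pages fall behind the page people actually visit, which is the whole point of weighting by popularity.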
In the nature of things, with the staggering number of interactions Google handles each minute, the vast majority of what it does must be automated, in the sense that no human being is directly aware of or dealing with the activity. Somewhere on top of all the software is a cadre of superintendents who set policy for the system, but who surely can't deal with it down at the level of individual rankings of individual search items, unless some kind of crisis or legal problem requires manual intervention.
In the pre-web days, the closest analogy I can think of to this kind of thing is newspaper and magazine columns. Back then, real money had to be involved, either as payment from a publisher or as a self-publishing venture, before a person could set himself up to give advice in print to the public. This was a large barrier, but it also spared the public (at least in non-Communist countries) from stuff that nobody wanted to read. (I make an exception for Communist countries, because if, for example, Kim Jong-Il wishes to enlighten his citizens with a five-page editorial or a three-hour TV speech, nobody can stop him.) The keepers of these barriers were editors, people who had some judgment about what might attract readers and what ought to be put before the public.
Things are different now, sort of. Take for example a blog that you locate through Google's search engine. Instead of a newspaper editor who judiciously (or sometimes injudiciously) places before your breakfast ham and eggs a carefully selected column, in searching for a blog on a given subject you turn the task of discrimination over to whoever—or whatever—at Google decides how things are ranked in a search. Because Google is not (and probably couldn't be) totally forthcoming about how they do this, or who is responsible, you just have to take what you get. Of course you don't have to be satisfied with it if you don't like it, and it's not like you've paid anything (although you will be exposed to ads somewhere along the way—Google has to pay the bills somehow). But at least in principle, if you disagreed with an editor's choice of column, or choice of words in an editorial, you could write a letter to the editor in the time-honored way, and maybe he would print it. If you don't like what a search engine does, especially if it's Google, I'm not sure what recourse you could find, other than hiring a lawyer. And that is so trite nowadays.
How is this related to engineering ethics? I'm simply pointing out that engineers (software engineers, yes, but they like to be called engineers too) have created a new mass medium with fundamentally different rules. Communications technologies frequently get a free ride in engineering ethics courses because of the idea that communication between people is the responsibility of the people, not the medium. That is true up to a point. But when a technical medium is used by millions of people every day and exerts a powerful influence on what they read and how they view the world, the engineers in charge are making ethical choices in the way they design search engines, whether they realize it or not.
In an earlier column (Mar. 30, 2006), I raked Google, Yahoo, and Microsoft over the coals (gently) for bending their rules about freedom of speech to fit the constraints imposed by the People's Republic of China in order to operate there. Clearly, suppressing blogs on freedom and democracy in China is an extreme example of the power of software engineers to manipulate public opinion. And it's very unlikely (although possible) that anything to do with a search engine will result in deaths or injuries, which is generally what it takes for an engineering ethics matter to make headlines. But the power is there, and software engineers at Google and everywhere should give some thought as to how to use it responsibly.
Sources: I thought I could find a reference confirming what I read somewhere about Google founders Larry Page and Sergey Brin being some of the most admired heroes by people under thirty, but Google has failed me—for once. Or maybe they're just being modest.
Wednesday, January 17, 2007
The Electric Car Arrives—Again?
In 1990, General Motors Chairman Roger Smith announced that his firm was developing an all-electric car for the consumer market, partly in response to a California law mandating the sale of zero-emission vehicles in the future. Six years later, the EV1 made its debut in California and Arizona. Only about a thousand were made, and technically you could never own one: GM allowed only leases. In 2002, concluding that the program had failed, GM demanded the return of the vehicles, much to the dismay of some loyal EV1 drivers, who saw the move as a back-door way to show that electric vehicles were still impractical. Just last week, GM announced at the Detroit International Auto Show that it plans to get back into the electric-car business with the Chevrolet Volt, a home-chargeable battery-operated model that carries a small gasoline engine. Should we believe them this time?
In fairness to GM, whose well-known financial woes have more to do with pensions and a glut in the world auto market than with missing advances in technology, selling electric cars to everybody will be hard. Technologically, it is an oversimplification to think of cars as either "electric" or "gasoline." A better way is to ask what percentage of the total stored energy on board is in the battery rather than the gas tank. Any car that doesn't have to be cranked by hand is slightly "electric" in this sense: what's that battery for, if not to supply stored energy to start the engine? The hybrids that Toyota and Honda have marketed with great success raise the battery-energy percentage into the 20-30% range. If you run out of gas in a Prius, you won't get very far, but you'll get farther than you would in an Edsel. The new Volt that GM announced moves most of the way toward all-electric: its large battery will store perhaps as much as 50% of the total energy on board. GM expects that normal commuter usage will draw only on the energy stored in the battery, with the gasoline engine kicking in only for long trips. This will allow people to charge the car overnight at home from the electric grid, which has great systemic advantages over conventional hybrids. Eventually, we may see cars with onboard fuel cells that circumvent the thermodynamic limit on efficiency that internal combustion engines suffer. These could use hydrogen or possibly biofuels, and would go most of the way toward eliminating harmful tailpipe emissions.
If electric cars are so great, why aren't we all driving them? Historically, for as long as the electric-car idea has been around, the chief obstacle to progress has been the battery. Pound for pound, gasoline contains nearly five hundred times as much energy as a fully charged lead-acid battery. And even the most advanced (and expensive) nickel-metal-hydride batteries are only about four times better than lead-acid, leaving gasoline way ahead.
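To put those ratios in perspective, here is the back-of-the-envelope arithmetic, using the figures above plus one outside assumption (that a gallon of gasoline weighs about six pounds). It ignores the fact that electric drivetrains use their stored energy more efficiently than combustion engines do, so it overstates the gap somewhat, but the general point stands.

```python
# Rough arithmetic on the energy-density gap, using the ratios quoted above.

GALLON_OF_GASOLINE_LB = 6.0          # approximate weight of one gallon (assumption)
GASOLINE_TO_LEAD_ACID_RATIO = 500    # energy per pound, from the figures above
NIMH_TO_LEAD_ACID_RATIO = 4

lead_acid_lb = GALLON_OF_GASOLINE_LB * GASOLINE_TO_LEAD_ACID_RATIO
nimh_lb = lead_acid_lb / NIMH_TO_LEAD_ACID_RATIO

print("Battery weight needed to hold the energy of one gallon of gasoline:")
print(f"  lead-acid:            about {lead_acid_lb:,.0f} lb")
print(f"  nickel-metal-hydride: about {nimh_lb:,.0f} lb")
```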
That's the technology in a nutshell. Now, what should engineers be doing with it? Recent advances in materials science and engineering have improved batteries to the point that they are practical, but still expensive, in hybrid vehicles like the Prius. We will have to wait and see whether GM, or anyone else, can make batteries good, reliable, and cheap enough to provide the main source of energy for a commuter-type vehicle that is charged overnight. But one factor is growing in importance to the point of overshadowing these technical ones: the human-appeal factor.
The human-appeal factor has to do not with the technology itself, but with how people perceive it. For example, you can show through chemical analysis that some organically grown food products are scientifically indistinguishable from their non-organic counterparts. Knowing this, some people will still buy organic products. You can view their purchases as a kind of vote in the marketplace for a certain way of living. The human-appeal factor is also in play when people bypass clothing made under sweatshop conditions for essentially the same quality of clothes (at higher prices) made under better labor conditions.
With all the problems in the Mideast and other oil-producing regions, more people are making the connection between the kind of car they drive and the international political situation. Engineers who ignore this objective, testable fact (if poll results can be said to be objective and testable!) and concentrate only on some engineering-friendly factor such as efficiency or cost will find themselves missing a few boats on down the line, if not right away.
Should all engineers be political wonks instead? By no means! Generally speaking, the kind of personality who finds delight in making and dealing with things is not all that well suited to a life in politics, although there are exceptions. But a technologist who ignores the desires and perceptions of the marketplace, and the political and social effects of a technology, is missing an important part of the picture, a part no less important than the technical aspects.
Good people can differ over the questions of whether electric cars should be in our future, whether the marketplace or the legislatures should decide this question, and whether GM is serious this time or just has another trick up its collective sleeve. But to ignore all but the technical aspects of the questions is to lose a little of your humanity, and to become a little more like the machines you are designing.
Sources: An article on the introduction of the Volt and related electric-car news was written by John O'Dell of the Los Angeles Times, and appeared in the Boston Globe online edition on Jan. 14, 2007 at http://www.boston.com/cars/news/articles/2007/01/14/vehicles_of_the_future_likely_to_be_more_plugged_in/. An advocacy group for electric vehicles maintains a website at www.pluginamerica.com. The data on the comparable energy content of batteries and gasoline was obtained from a table at http://everything2.com/index.pl?node=energy%20density. You can see a picture of the Smithsonian's EV1 at http://americanhistory.si.edu/ONTHEMOVE/collection/object_1303.html.
Thursday, January 11, 2007
I Spend, Therefore I'm Spied Upon?
The 17th-century philosopher René Descartes' most famous dictum was, "I think, therefore I am." While Descartes was a military man for a time, he lived long before an age when simply carrying money around in your pocket made you vulnerable to espionage. A recent Associated Press report carried in the San Francisco Examiner online edition describes "spy coins" that have been found on contractors doing classified U. S. government business in Canada. According to the report, these Canadian coins carried tiny radio transmitters that could conceivably have been used to track the contractors' movements. No details were given about who the contractors were, what work they were doing, or even what denomination of coin was used. One of the security experts consulted by the reporter said that the technique didn't seem to make a lot of sense, because there is nothing to keep a person from spending a spy coin almost as soon as he or she receives it. My guess is that it's a scheme cooked up by North Korea, whose counterfeiting activities are already well known. It would be consistent with that country's old-style cold-war mentality to attempt something so outlandish that nobody would think of it, even if it didn't have a great chance of producing useful results.
Unless you do classified work for the U. S. and travel to Canada a lot, this news probably won't make you look more closely at the change you get at your next visit to the coffee shop. But it brings up a much broader issue, which is that in the near future, devices very much like the Canadian spy coins will appear in millions of consumer products. Radio-frequency identification (RFID) tagging is a technology that has been in the works for decades, and it is poised to go public in a big way in the next few years. You have probably heard of systems like the New York State Thruway's "E-Z Pass," which uses an RFID device in one's car and allows the driver to pass through a toll booth without stopping. The RFID system notes the time and place and sends a bill at the end of the month.
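For a sense of how little bookkeeping such a system needs, here is a hedged sketch of the E-Z Pass-style record-keeping just described: each time a roadside reader hears a tag, it logs the time and place, and the charges are totaled at billing time. The tag ID, plaza names, and toll rates are invented for the example; a real tolling system is of course far more elaborate.

```python
# Hypothetical sketch of toll-tag bookkeeping: log each crossing, then total the bill.

from collections import defaultdict
from datetime import datetime

TOLL_RATES = {"Exit 24 plaza": 1.25, "Tappan Zee": 4.75}   # invented rates
crossings = defaultdict(list)   # tag id -> list of (timestamp, plaza) events

def record_crossing(tag_id, plaza, when=None):
    """Called whenever a roadside reader hears this tag respond."""
    crossings[tag_id].append((when or datetime.now(), plaza))

def monthly_bill(tag_id):
    """Total the charges for every crossing logged against the tag."""
    return sum(TOLL_RATES[plaza] for _, plaza in crossings[tag_id])

record_crossing("TAG-0042", "Exit 24 plaza")
record_crossing("TAG-0042", "Tappan Zee")
print(f"Amount due for TAG-0042: ${monthly_bill('TAG-0042'):.2f}")
```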
RFID applications like that have no apparent ethical downsides, unless maybe somebody steals your E-Z Pass. Notifying the authorities of the theft will allow them to disable that particular unit, and even to nab the thief if he happens to be stupid enough to try to use it himself. But other applications of RFID, including its use as a replacement for bar-code labels on consumer products, can get into some ethical gray areas pretty quickly.
The basic RFID technology works by means of a two-way exchange of information through radio waves between the tag and another transceiver. In a grocery store, for example, RFID may eventually allow you to simply roll your supermarket cart through a kind of portal similar to the ones used at airport screening checkpoints, and a few seconds later the receipt would come out of the cash register ready for payment. Like many developments in retail-related technology, this will be good news for consumers and not so good news for the checkout people, who will now simply pack things into bags and take payment. But that trend has already started with the hands-off do-it-yourself checkout stations at many supermarkets and hardware stores.
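Here is a similarly hypothetical sketch of the checkout-portal idea: the reader inventories every tag that answers inside the portal and turns the list into a receipt. The product codes and prices are made up, and real RFID air protocols (anti-collision schemes and so on) are much more involved than a simple table lookup.

```python
# Illustration only: turn a list of tag responses into an itemized receipt.

PRICE_LOOKUP = {                 # hypothetical tag codes -> (item name, price)
    "3001-0001": ("coffee, 1 lb", 7.49),
    "3001-0002": ("bread", 2.29),
    "3001-0003": ("milk, 1 gal", 3.19),
}

def scan_cart(tag_codes):
    """Pretend every tag in the cart responded to the portal's query."""
    total = 0.0
    lines = []
    for code in tag_codes:
        name, price = PRICE_LOOKUP[code]
        lines.append(f"{name:<15} ${price:5.2f}")
        total += price
    lines.append(f"{'TOTAL':<15} ${total:5.2f}")
    return "\n".join(lines)

print(scan_cart(["3001-0001", "3001-0003", "3001-0003"]))
```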
What is of more concern is the possibility of a personal RFID tag. This might easily be built into your driver's license, for example, or anything else you typically carry with you at all times. Depending on who is authorized to access it and the availability and cost of the necessary technology, a personal RFID tag would enable whoever runs the system to know where you are, anytime you were in range of a transceiver. And eventually, that could be a lot of places. Already in this country, and especially in Great Britain, we've gotten used to the ubiquitous security cameras that monitor our every move in many public and private places. But a person's identity, Social Security number, and other vital information are not immediately available simply from one's image on a security camera, so the privacy threat from that technology is not as extensive as it is from the potential abuse of a personal RFID tag.
Of course, any time you use a credit or debit card, your financial institution has a near-real-time bit of information about your location and activities, and occasionally this data becomes of interest to law enforcement authorities, or becomes a means of identity theft. We can expect that if personal RFID tags become either necessary or desirable, someone somehow will find a way to hack the system. One can imagine a hacker-stalker who uses his ill-gotten data to hound his victim.
Developers of RFID systems are aware of at least some of these problems, but the technology deserves close scrutiny as it makes its way into increasing numbers of stores, warehouses, and other public and private locations. In the meantime, at least now you know what RFID means the next time you see it in print. And don't take any Canadian spy coins.
Sources: The article on Canadian spy coins was carried by the San Francisco Examiner on Jan. 11, 2007 at http://www.examiner.com/a-502598~U_S__Warns_About_Canadian_Spy_Coins.html.
Tuesday, January 02, 2007
Science, Engineering, and Ethical Choice: Who's In Charge?
Every now and then it's a good idea to look at the foundations of a field, the usually hidden and unspoken assumptions that everybody knows, but few ever talk about. A recent New York Times essay by Dennis Overbye on free will addressed the question of whether our choices are really choices, or whether we are really just "meat computers" executing a program of which we are unaware. What has that got to do with engineering ethics? Only everything.
You can put this issue in the form of a paradox. Modern engineering got where it is today by being based on science. From the many reputable scientists interviewed by Overbye, we learn that, as far as science can tell so far, everything in the universe is either determined by physical law (in which case we can predict it) or random (which is another way of saying we can't predict it, and may never be able to, even in principle). This includes the behavior of all physical systems, including the human brain. And if choices and decisions can be said to come from any physical object, they come from the human brain.
Now engineering ethics is all about making the right choices. But what if the idea of choice is false? If we only think we choose something when the reality is that we're just following a hugely complex but possibly predictable program, what does it mean to make the right choice, or indeed any choice at all? According to some of the scientists Overbye talked with, not much.
The view that all our supposed choices are really determined by external factors is called determinism. Daniel Dennett, a philosopher of science, thinks free will and determinism are compatible, even mutually dependent. According to Dennett, strict causality ". . . makes us moral agents. You don't need a miracle to have responsibility." On the other hand, medical researcher Mark Hallett limits the idea of free will to the perception, not the absolute fact. "People experience free will," he says. "They have the sense that they are free. The more you scrutinize it, the more you realize you don't have it."
Dr. Hallett spends his days pondering the inner workings of the brain, and understandably tends to view it as a complex system that may one day yield all of its secrets to science, which is to say, to other brains. Overbye is diligent enough to note that while a system may be deterministic, it nonetheless may not be predictable. Citing the mathematicians Kurt Gödel and Alan Turing, he points out that any moderately complex mathematical system cannot prove its own consistency, and that there will always be statements you can make within it that you can neither prove nor disprove. The philosopher and historian of science Stanley Jaki has used this fact to argue that the scientist's dream of a mathematically complete "final theory" that would predict everything (all physical constants, all deterministic activity down to the end of time) is only a dream. So it seems that science itself tells us there are things we will never know about the world, in the objective, testable, scientific sense.
So does this mean that a truly consistent scientific engineer will disregard ethics as an illusion and act however he or she pleases? Here is where the engineer's famed pragmatism comes into play. Most engineers I know are eminently practical people, wanting to get the job done and impatient with what they regard as hairsplitting philosophical discussions about the ultimate meaning of this or that. Most engineers would immediately realize that disregarding right and wrong simply because some philosophers and scientists say choice is an illusion would be fatal both to their careers and quite possibly to the people served by their engineering. And death is a bad thing.
These common-sense notions do not come from science. In their more sober moments, most scientists—and many philosophers—will admit that science cannot pass judgment on questions of value. The stated goal of science is knowledge, not guidance or moral instruction. But to allow a scientific conclusion about the source of free will to abolish one's ethics would be to allow science to dictate morality, or rather, the lack thereof.
Conspicuously absent from Overbye's list of interviewees was anyone who spoke for the religious viewpoint, which takes free will and the reality of moral agency seriously. While there are philosophical puzzles about how God can allow free will in a universe of which he has perfect foreknowledge, at least that picture makes sense morally. The issue that Overbye sidles up to, but never quite broaches, is the one that Dostoevsky made plain when he wrote in Notes from the Underground, "For what is man without desires, without free will, and without the power of choice but a stop in an organ pipe?" In other words, a passive piece of machinery whose sound and fury signifies nothing. All the shilly-shallying of the philosophers who say, in effect, "Well, we don't really have it, but we think or feel that we do, and so it doesn't make much difference," simply evades the logical conclusions of their positions, conclusions many of them are afraid to espouse openly.
Engineering is not philosophy, and most engineers are not trained philosophers. But every engineer who thinks about the reasons for professional actions must sooner or later ask, "What do I think the right thing is?" and "Can I really choose freely?" Many engineers, including yours truly, have a religious answer to these questions. And we are not bound by the dicta of scientists or philosophers to decide otherwise—especially if we couldn't decide!
Sources: The New York Times article "Free Will: Now You Have It, Now You Don't" appeared in the Jan. 2, 2007 online edition at http://www.nytimes.com/2007/01/02/science/02free.html?pagewanted=1&8dpc. The Dostoevsky quotation is from About.com's section on classic literature by Esther Lombardi at http://classiclit.about.com/od/dostoyevskyf/a/aa_fdostquote.htm.