Monday, September 16, 2019

Facing Google In Your Living Room

An article on cnet.com recently described how Google's new smart display, the Google Nest Hub Max, uses facial recognition technology to tell who is talking with it.  This feature has raised privacy concerns, as Google has admitted that it reserves the right to upload facial data from the device to the cloud to help improve "product experience."  But whatever Google does legitimately, a hacker might be able to do too, and so we are approaching a time when the telescreens of George Orwell's dystopian novel Nineteen Eighty-Four have become a reality—not because of the unilateral command of a totalitarian government (at least not in the U. S.), but because we want what they can do.

For those unfamiliar with the novel, Orwell's book was a warning to the free world about what a dictatorship could do with the communications technologies of the future.  Telescreens were two-way televisions that transmitted propaganda from a dictator known only as Big Brother, and sent images of whoever was watching back to the Party's central headquarters.  Orwell was simply extrapolating the efforts of regimes such as the Nazis of the 1930s and the Soviet Union of the 1940s to spy on their populaces twenty-four hours a day to enforce total obedience to the regime.

At the time the novel was published in 1949, no one took the telescreen-spying idea very seriously, because it would take a huge number of human monitors to spy on a significant number of people.  Carried to its logical extreme, the only way the government could watch everybody would be if half the population spied on the other half, and then took turns. 

But neither Orwell nor anybody else at the time reckoned on the development of advanced artificial-intelligence (AI) systems using facial recognition technology.  In China, the government is deploying many thousands of cameras and taking facial data from millions of people with the intention of developing a Social Credit rating that reflects how well you measure up to the regime's model of the ideal citizen.  If computers have caught you on camera going to suspicious places or meetings, your Social Credit score could go in the tank, making it hard to travel, get a job, or even stay out of jail.

None of that is happening in the U. S., but the fact that a large corporation will now have electronic access to views in millions of private residences should at least give us pause. 

Leaving the hardware aside for the moment, let's examine the difference in motives between a government, such as the one in Nineteen Eighty-Four, spying on its citizens for the purpose of controlling behavior, and a commercial entity such as Google using images to sell both its own services and advertising for others.  Government spying is motivated by suspicion and fear of what people might be doing while the government isn't watching them.  Whatever the regime sets out as an ideal of behavior, it watches for deviations from that ideal and punishes those who deviate.  Participation is not voluntary, and people have to go to great lengths to avoid being spied on.

Now contrast that with a benign-looking thing such as the Google Nest Hub Max.  Nobody is going to make you buy one.  And if you do, there are ways of turning off the facial-recognition feature, though the device will be less convenient to use.  Besides, the device is intended to serve you, not the other way around.  It's sold with the vision portrayed in so many TV ads of people happily using it to make their lives better, not as a means of social control like Orwell's telescreens.

But maybe the differences are not as great as they first appear.  Both the telescreen and the Nest Hub Max are intended to change behavior.  If they don't, they have failed.  True, the ideal behavior that a totalitarian government wants and the ideal behavior that a company wants are two different things.  But neither ideal is the way the citizen-consumer was before the telescreen or the Nest Hub showed up:  namely, unwatched, and unbenefited by the products or services that the company wants to sell.

Nobody should read this blog and then go around saying "Ahh, Stephan's saying Google is Big Brother and they're trying to take over our lives!"  That's not the comparison I'm making.  My point is that the mere fact of being watched by someone, or something that can inform someone about us, is going to change our behavior.  And that change by itself is significant.

Now, the change may not necessarily be bad.  Already, voice-assistant devices such as Amazon's Alexa have been used in criminal cases when bad actors set them off, either by accident or on purpose, and the data thus generated has proved to be incriminating.  Though this is ancient history, I am told that in the days when some middle-class and upper-class people had servants, families tended to behave better when the servants were around, although I'm sure there were exceptions.  Alexa isn't Jeeves the butler, but as virtual assistants play a more significant role in domestic life, it's not beyond imagination to think that some of the worst behavior in homes—domestic abuse, for example—might be mitigated if the victim could call 911 just by shouting instead of having to pull out a phone.

I'm not necessarily crying doom and gloom here.  Millions of people are already using virtual assistants with few if any problems, and adding two-way video to the mix will only increase the devices' capabilities.  But we are entering new territory of connectivity here, and it's bound to have some effects that nobody has predicted yet.  Perhaps it's not too helpful to predict that there will be unpredictable effects, but right now that's all I can do.  Let's hope that the security features of the Nest Hub Max are good enough to prevent nefarious use, and that people who buy them are truly happier with them than they were before.

Sources:  The article "Google collects face data now.  Here's what it means and how to opt out" appeared on Sept. 11, 2019 at https://www.cnet.com/how-to/google-collects-face-data-now-what-it-means-and-how-to-opt-out/#ftag=CADf328eec.  I also referred to Wikipedia articles on Nineteen Eighty-Four and Google Home.  I thank my wife for pointing this article out to me.

Monday, September 09, 2019

Vaping Turns Deadly


At this writing, three people have died and hundreds more have become ill from a mysterious lung ailment that is connected with certain types of e-cigarettes.  The victims typically have nausea or vomiting at first, then difficulty breathing.  Many end up in emergency rooms and hospitals because of lung damage.

Most of the sufferers are young people in their teens and twenties, and all were found to have been using vaping products in the previous three months.  Many but not all were using e-cigarettes laced with THC, the active ingredient in marijuana.  Others were vaping only nicotine, but some early analysis indicates that a substance called vitamin E acetate was found in many of the users' devices.  It's possible that this oily compound is at fault, but investigators at the U. S. Centers for Disease Control (CDC) and the Food and Drug Administration (FDA) have not reached any conclusions yet.

In fact, the two agencies have released different recommendations in response to the crisis.  The CDC is warning consumers to stay away from all e-cigarettes, but the FDA is limiting its cautions to those containing THC.  Regardless, it looks like someone has put a damper on the vaping party, and that may change a lot of things.

So far, vaping and the e-cigarette industry are largely unregulated, unlike the tobacco industry.  E-cigarettes found their first mass market in China in the early 2000s.  The technology was made possible by the development of high-energy-density lithium batteries, among other things.  While vaporizers for medical use have been around since at least the 1920s, it wasn't possible to squeeze everything needed into a cigarette-size package until about fifteen years ago.

Since then, vaping has taken off among young people.  A recent survey of U. S. 12th-graders shows that about 20% of them have vaped in the last 30 days, up from only about 11% in 2017, the sharpest two-year increase in the use of any drug that the National Institutes of Health has measured in its forty-some-odd years of conducting such surveys.

The ethical question of the hour is this:  has vaping become popular enough, mature enough, and dangerous enough, that some kind of regulation (either industrial self-policing or governmental oversight) is needed?  The answer doesn't hinge only on technical questions, but on one's political philosophy as well.

Take the extreme libertarian position, for example.  Libertarians start out by opposing all government activity of any kind, and then grudgingly allow certain unavoidable activities that are needed for a nation to be regarded as a nation:  national defense, for instance.  It's not reasonable to expect every household to defend itself against foreign aggression, so most libertarians admit the necessity of maintaining national defense in a collective way. 
           
But on an issue such as a consumer product, the libertarian view is "caveat emptor"—let the buyer beware.  If you choose to buy an off-brand e-cigarette because it promises to have more THC in it than the next guy's does, that's your business.  And if there's risk involved, well, people do all sorts of risky things that the government pays no attention to:  telling your wife "that dress makes you look fat" is one example that comes to mind. 

On the opposite extreme is the nanny-state model, favored generally by left-of-center partisans who see most private enterprises, especially large ones, as the enemy, and feel that government's responsibility is to even out the unfair advantage that huge companies have over the individual consumer.  These folks would regulate almost anything you buy, and have government-paid inspectors constantly checking for quality and value and so on. 

It's impractical to run your own bacteriological lab to inspect your own hamburgers and skim milk, so the government is supposed to do that for you.  Arguably, it's also impractical for vapers to take samples of their e-cigarette's goop and send it to a chemical lab for testing, and then decide on the basis of the results whether it's safe to use that particular product. 

My guess at this point is that sooner or later, probably sooner, the e-cigarette industry is going to find itself subject to government standards of some kind.  Exactly what isn't clear yet, because we do not yet know what is causing the mysterious vaping illnesses and deaths.  But when we do, you can bet there will be lawsuits, and, at a minimum, calls for regulation of the industry.

Whether or not those calls are heeded will depend partly on the way the industry reacts.  Juul, currently the largest maker of vaping products, is about one-third owned by Altria, the corporate entity formerly known as Philip Morris Companies.  In other words, the tobacco makers have seen the vaping handwriting on the wall, and are moving into the new business as their conventional tobacco product sales flatten or decline.

The tobacco companies gained a prominent place in the Unethical Hall of Fame when they engaged in a decades-long campaign of disinformation to combat the idea that smoking could hurt or kill you, despite having inside information that it very well could.  In the face of an ongoing disaster such as the vaping illness, this ploy doesn't work so well.  But they could claim that only disreputable firms would sell vaping products that cause immediate harm, and pay for studies showing that vaping is better than smoking and harmless for the vast majority of users.

Sometimes the hardest thing to do is be patient, and that's what we need to do right now, rather than rushing to conclusions that aren't supported by clinical evidence.  Investigators should eventually figure out what exactly is going on with the sick and dying vapers, and once we know that, we'll at least have something to act on.  Until then, if by chance anyone under 30 is reading this blog, take my advice:  leave those e-cigarettes alone. 

Monday, September 02, 2019

Lawyers in Space?


A recent Washington Post article highlights what would normally be a humdrum domestic dispute alleging identity theft.  The unusual feature of the dispute is that the party who allegedly accessed a bank account without permission did it from the International Space Station, and thus may have committed the first legally recognized crime in space.

Anywhere humans go, the lawyers can't be far behind.  While Shakespeare probably got a laugh in his play Henry VI, Part 2 when the criminal type Dick the Butcher said, "The first thing we do, let's kill all the lawyers," the context was not a sober discussion of how to make a better society.  Dick and his rebel friends were imagining a fantasy world made to their liking, where all the beer barrels would be huge, all the prices low, and naturally, there wouldn't be any lawyers to get people like them into trouble.

Law-abiding citizens need have no fear of well-ordered laws, and so it's only right that there are treaties and international agreements that govern humans and human-made artifacts in space.

The foundational agreement is something called the Outer Space Treaty, which over 100 countries have signed, including every nation currently capable of putting hardware into orbit.  The treaty entered into force in 1967, and its most prominent theme is that space is for peaceful uses only.  It therefore prohibits stationing nuclear weapons in space.  It also forbids any country from claiming sovereignty over any part of outer space, including any celestial body.  So when the U. S. planted its flag on the moon, it was just a symbolic gesture, not the first step in creating a fifty-first state with zero population.

Right now, there are companies making quite serious plans to do space mining, build space hotels, and engage in other profit-making activity that would involve substantial amounts of investment of both hardware and human capital.  There's a concern that the Outer Space Treaty is silent on the question of individual or corporate ownership of space property, and unless we get more specific on what the rules are, such developments may be stifled.

I don't see any critical problem here, because we have abundant precedent in a similar situation:  the international laws governing ocean-going vessels.  The Disney Company puts millions of dollars into what amount to floating hotels, and it quite happily copes with the fact that while it owns the ships, they travel in international waters and dock at various ports owned by other countries.  Of course, there are hundreds of years of tradition bound up in the law of the sea, and the same isn't true of space law.  But the fact that ocean-going commerce goes on quite smoothly for the most part shows such things can be done, and so that doesn't concern me at all.

What could throw the whole situation into doubt is if somebody finds a fantastically lucrative space-based enterprise.  Diamonds on the moon sounds like something that Edgar Rice Burroughs would cook up, but there are quite serious organizations out there planning to do things like mining asteroids.  And depending on what they find, we might see something like the rush of the Old World explorers to the New World, where they in fact did discover gold.  Like most naive fantasies, that discovery didn't work out quite as nicely as the explorers hoped, what with the abysmal treatment of Native Americans and the disastrous inflation that the introduction of huge amounts of gold caused in the economies of Europe. 

It's hard to imagine something similar happening as a result of a space-based discovery, but stranger things have happened.  The optimist in me, along with any number of Silicon Valley types who seem to think that space colonization is not only possible but inevitable, and represents the last best hope of humanity, would like to see the future of space exploration and settlement as another chance for us to get some things right.  After all, something like that was the motivation for many Europeans to make the arduous journey to the New World, where unknown hardships awaited them.  Overall I'm glad they did, or else I would not have the opportunity to live in Central Texas today.

But the identity-theft case on the International Space Station reminds us that no matter what idealistic plans we make, we will take all our mental and behavioral baggage with us wherever we go.  That is why we will always need lawyers, whether in San Marcos or on a moon base:  a certain number of us will misbehave beyond the boundaries that ordinary people around us can deal with, and the law will have to get involved.  As it turns out, all the parties to the identity-theft dispute are U. S. citizens, so U. S. law applies.  But in the future, when space colonies (for lack of a better phrase) may want to set up their own independent governments, things may get considerably more complicated.  And complications mean lawyers.

One thing I haven't mentioned is the question of militarization in space.  President Trump recently announced the establishment of a Space Command, which is apparently a kind of umbrella under which the space activities of the various branches of the military will be gathered.  While the Outer Space Treaty prohibits "weapons of mass destruction" in space, it does nothing to stop nations from testing weapons or placing military personnel in space. 

It is perhaps inevitable that rivalries on the ground will end up being played out in space as well.  But we can hope that for the near future, anyway, the need for lawyers and law in space will be limited to minor issues such as the identity-theft case, and that we can view space as a place where, for the time being, people can just get along.  But if they don't, I'm sure lawyers will find a way to get involved.

Sources:  Deanna Paul's article "Space:  The Final Legal Frontier" appeared on Aug. 31 on the Washington Post website at https://www.washingtonpost.com/technology/2019/08/31/space-final-legal-frontier/.  I also referred to Wikipedia articles on the Outer Space Treaty and "Let's kill all the lawyers." 

Monday, August 26, 2019

This Business of Engineering


Early Sunday morning, Aug. 5, 1888, a 39-year-old woman named Bertha Benz set off for her mother's house in Pforzheim, some sixty-six miles (106 km) away from Mannheim, Germany, where she lived with her husband Karl and their two teenage sons.  She took the boys along for the ride.  Visiting her mother was not unusual.  But the way she planned to get there was.

For several years, Karl had been developing what he called a Patent-Motorwagen—what we would today call an automobile.  Its one-cylinder engine burned an obscure solvent called ligroin, obtainable only at pharmacies.  It had wooden brakes and only two gears, low and high.  Bertha came from a wealthy family, and she had put a considerable amount of money into her husband's invention.  But like many inventors, Karl was content to make incremental improvements to his machine and treated it gingerly, never driving it more than a few miles from home on short test drives.  Besides, there were laws regulating such machines, and to drive one a long distance legally, he would have had to get permission from various local authorities along the way.  It was much easier just to tinker with it in his shop and drive it only around town.

But Bertha had had enough of this.  She knew Karl's invention was good, but people had to know what it was capable of.  Without telling her husband, she and her two boys left Mannheim on the rutted wagon roads leading to Pforzheim.  On steep hills, the boys had to get out and push the underpowered vehicle.  At one point the fuel line clogged, and Bertha unplugged it with a hatpin.  A chain broke, and she managed to find a blacksmith willing to work on a Sunday to fix it.  The brakes proved inadequate, so she stopped at a cobbler's shop and had him cut leather strips to fit onto them, thus inventing the world's first brake pads.  A little after sunset the same day, she and her boys arrived in Pforzheim, no doubt to the great surprise of her mother.  She sent her husband a telegram announcing her successful trip, and by the time she drove back several days later, reports of her exploit were in newspapers all over the country.  Which was exactly what she wanted.  Karl's invention, and Bertha's exploit, were foundational steps in the creation of the worldwide automotive industry.

Somehow I had gotten to my present advanced age without learning about Bertha Benz's first-ever auto trip.  But this was just the most interesting of many such anecdotes about engineering and business that I encountered in a new book by Matt Loos:  The Business of Engineering. 

Loos is a practicing civil engineer in Fort Worth who realized, after a few years in the working world, that many of the most important skills he was using every day had little or nothing to do with what he learned in engineering school.

This is not to disparage engineering education (which is what I do for a living); it simply reflects the fact that the technical content of engineering is so voluminous these days that there isn't much room in a nominal four-year curriculum for what are (perhaps unfortunately) called "softer" subjects such as management techniques, ethical issues, and coping with the dynamics of rapidly changing technical fields.

A newly-graduated engineer could do a lot worse than to pick up The Business of Engineering and read it to find out what the late radio commentator Paul Harvey called "the rest of the story."

If a person is going to claim to be able to use specialized technical knowledge to do something of value, they must have mastered that technical knowledge.  That fundamental requirement is the reason behind the extensive and challenging technical content of engineering undergraduate courses.  But as Loos points out in numerous ways—through anecdotes like Bertha Benz's story, through recent statistics and facts drawn from a variety of technical fields, and from his own personal experience—knowing your technical stuff by itself will not make you a successful engineer.  And even the definition of success depends on what sort of business you are in and how your own personal goals fit in with the directions that the industry is moving. 

I have to say that if I had read and taken to heart what Loos says in his book when I was, say, twenty-four, my career might have been very different.  At the time, I had a very simplistic and immature notion that all an engineer had to do was to come up with brilliant technical stuff, and the world would beat a path to his door.  But in thinking that, I was acting like Karl Benz, happily tinkering away in his shop but afraid to try his pet invention out in the real world.  The lesson I needed to learn was that if nobody but you cares about what you're doing, nothing much good will come of it.  Working engineers need to be engaged in the world around them, not only on a purely technical level, but also at the levels of economics, social relations, and ethics, to mention only a few.

This is Loos's first book, and as with most things, one's first efforts occasionally lack the polish that long experience can give.  But it is still highly readable, even if you don't read it for anything but the stories.  One of the strengths of the book is that Loos is realistic about how an engineer's personal habits can make the difference between success and something considerably below success:  things like attention to details, ability to organize one's time, problem-solving skills, and so on.  Now and then I come across a student who has more than adequate brain power to do engineering problems.  But when he confronts a problem he's not familiar with, he will simply sit there and appear to wait for inspiration.  And if inspiration doesn't come, well, it's just too bad.  The better way is to follow the advice of G. K. Chesterton (this isn't in Loos's book), who said anything worth doing is worth doing badly.  Even trying something that doesn't work will probably tell you something about what will work, and it's better than just passively waiting for something to happen.  Engineers make things happen—not always the best thing, but something that moves the process along.

Loos's book is now available on Amazon, and I recommend it especially to graduating engineers who can benefit from the experience and the stories that The Business of Engineering collects.

Sources:  The Business of Engineering by Matthew K. Loos, P. E. is available on Amazon at
https://www.amazon.com/gp/product/0998998788?pf_rd_p=183f5289-9dc0-416f-942e-e8f213ef368b&pf_rd_r=2V03WN39CX0ASA8E4VGJ.  Mr. Loos kindly provided me with a free review copy.  I enhanced the Bertha Benz story with some details from the Wikipedia page on her.

Monday, August 19, 2019

Should Social Media Be Regulated?


Last month, the youngest U. S. Senator, Josh Hawley, a freshman Republican from Missouri, filed a bill called the Social Media Addiction Reduction Technology (SMART) Act.  The purpose of the act is to do something about the harmful effects of addiction to social media.

What would the bill do?  I haven't read it, but according to media reports it would change the ways companies like Facebook, Twitter, and Google deal with their customers.  The open secret of social media is that they are designed quite consciously and intentionally to be habit-forming.  So-called "free" media make their money by selling advertising, and advertising is worthless unless someone looks at it.  So their bottom line depends on how firmly and how long they glue your eyeballs to their sites.  And they have scads of specialists—psychologists, media experts, and software engineers—whose full-time job is to squeeze an extra minute or two of attention from you every day, regardless of whatever else is going on in your life. 

Writing on the website of the religion-and-public-life journal First Things, Jon Schweppe says the SMART Act may not be a one-stop cure-all for our social media problems, but it's a step in the right direction.  It would prohibit certain practices that are currently commonplace, apparently including one that has always reminded me of what life might be like in Hell:  the infinite webpage.

It used to be that a web page could scroll only so far.  You could always get to the bottom of it, where you might find useful things like who wrote it or other masthead-and-boilerplate information.  Well, that doesn't always happen anymore.  The infinite webpage pits the pitiful finite mortal human against the practically unlimited resources of the machine to come up with more eye candy, as much as you want.  As long as you keep scrolling, it will keep showing you new stuff.
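To make the mechanism concrete, here is a minimal sketch in Python (my own toy illustration, not any social-media company's actual code) of the logic behind an infinite feed:  the server treats the page as a cursor into an endless ranked stream, so a request for "more" can never come back empty.

```python
import itertools

def ranked_stream():
    """Stand-in for a recommender that can always produce one more item."""
    for n in itertools.count(1):
        yield f"post #{n}, chosen to keep you looking"

def next_page(stream, page_size=10):
    """What a scroll-to-the-bottom event fetches: another page, always."""
    return list(itertools.islice(stream, page_size))

feed = ranked_stream()
first_screen = next_page(feed)  # the content you land on
more_content = next_page(feed)  # you hit bottom, and there is always more
```

The only thing in that loop that ever runs out is the reader.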

This particular feature reminds me of a passage from C. S. Lewis's The Lion, the Witch, and the Wardrobe featuring the candy called Turkish Delight.   The wicked-witch Queen of Narnia offered the boy Edmund his favorite type of candy to convince him to betray his friends.  The candy she offered him was enchanted so that whoever ate it always wanted more, and "would even, if they were allowed, go on eating it till they killed themselves."  No matter how much time you waste on an infinite website, there's always more.

The SMART bill would also give users a realistic option to voluntarily limit their own use of social media with daily timers, would prohibit "badge" systems (evidently a kind of special-privilege feature that rewards heavy users and encourages them even more), and would prohibit or modify other addictive features.

The Federalist's John Thomas sees the SMART bill as the first step in what may be a turning point in the history of social media.  He likens it to the Parisian reaction to brightly-colored advertising posters enabled by the then-new lithography process in the 1860s.  Pretty soon, a good percentage of all available vertical flat surfaces were covered with posters, and the town fathers decided to regulate how and where posters could be displayed. 

This may be the point at which the U. S. citizenry stops merely wringing its hands and saying there's nothing you can do in the face of rising teen depression and other ill effects of social media, and starts to take action.  As Thomas points out, though, there are few grass-roots organizations taking up the control-social-media banner. 

This may be because the dangers social media pose for mental health are insidious and gradual rather than abrupt and catastrophic.  Suppose that every person had an intrinsic social-media limit:  say after viewing X hours of social media (and X would be different for each person), your brain would literally explode and you'd die.  Well, you can bet that after two or three of these incidents, governments would come down on Facebook, Google, and company like a ton of bricks with all sorts of restrictions, up to and including an outright ban.

But nobody's brain literally explodes from doing too much Facebook.  The negative consequences of social-media use are much less obvious than that, but are nonetheless real.  Even the most tragic cases of teen suicides that result from peer persecution over social media can be blamed not just on the media, but on the cruelty of other teens.  Nevertheless, the nominal anonymity and ease of use that social media offer can turn what might be fairly well-behaved peers in person into abominable monsters on Facebook. 

Some writers oppose the SMART Act and similar legislation on the free-market principle that government is more likely to make things worse with legislation than to improve them.  While that can happen, it is foolish to take the hyper-libertarian position that if a good or service is bad, people just shouldn't use it.  Back when ordinary glass was used for automobile windshields, a crash would turn it into long razor-sharp shards that decapitated numerous drivers, and Congress invited Henry Ford to testify about a proposed law that would require the more expensive safety glass in windshields.  Reportedly (and this is from memory), Ford said, "I'm in the business of making cars, I'm not in the business of saving lives."

When Mark Zuckerberg testified before Congress not too long ago, he was self-controlled enough not to say anything that harsh.  But if the day has at last arrived when our elected officials are finally going to do something about the harmful effects of social media, one of two things (or perhaps a combination) is going to happen.  Either the social-media companies will have to get ahead of the proposed legislation and enact real, quantifiable reforms of their own and prove that they work, or they will have to change their ways in accordance with regulatory laws that they brought upon themselves. 

My own hope is that the companies will figure out a transparent and effective way to self-regulate.  But the choice is theirs, and if they brush off the SMART Act and think they have the raw power to squash such regulation, they may be in for a painful surprise.

Sources:  Jon Schweppe's article "Big Addiction" appeared on the First Things website on Aug. 13, 2019 at https://www.firstthings.com/web-exclusives/2019/08/big-addiction.  John Thomas's article "Hawley's SMART Act Is the Beginning Of the Revolt Against Big Tech" is on the Federalist website at https://thefederalist.com/2019/08/13/hawleys-smart-act-beginning-revolt-big-tech/. 

Monday, August 12, 2019

USB-Crummy


About a year ago, my old Mac laptop died and I had to buy a new one.  I was pleased overall with my new machine, as far as the software and operating characteristics went.  And at first I wasn't too concerned that the only physical ports it had were one 3.5-mm phone jack for a headphone and four USB-C jacks, two on each side. 

I wasn't familiar with USB-C, although like anybody else who's dealt with computers, I knew about USB (Universal Serial Bus) connectors in general.  I had nothing in my possession that would work with a USB-C connector—no printer cables, no external hard drive cables, no headphones.  So I went to Best Buy and bought a docking station that promised to solve all my interface problems.

It has a single USB-C connector plug on a short cable that goes to a flat aluminum box that has nearly every kind of jack you can think of:  an old-fashioned VGA (Video Graphics Array) connector for your ancient video projector, large and small HDMI (High-Definition Multimedia Interface) connectors that work with our medium-screen TV, an Ethernet cable port for hard-wired networking, an SD/MMC jack for flash-drive cards, a micro-SD for the teeny flash drive cards, three USB-2 (regular size) jacks, and another USB-C jack in case you need it.  And for most of a year after I bought the new laptop, I used these ports on the docking station whenever I wanted to connect anything to the laptop other than the USB-C power supply that came with it.

Then a month or so ago, I began to have problems.  First it was an issue with downloading data to a flash drive.  I would be downloading something and then all of a sudden I'd get a message on my Mac criticizing me for removing the flash drive before ejecting it.  Only I hadn't touched a thing.  I could take it out and put it back in and it would start working—sometimes.  And sometimes it would drop out again without warning.

That made me wonder if something was wrong with the docking station.  So I bought a simple adapter cable with a USB-C connector on one end and a regular-size USB-2 jack on the other, and used another adapter to get to a flash drive.  That worked for a while, but then I began to have the same problem.

The worst thing was when I would do a backup to an external hard drive.  That takes a while, and in the middle of the transfer I'd get an error saying I'd removed the drive without ejecting it.  Only I hadn't.

Finally, I went online to see if other people were having these problems.  Turns out they were.  And amid all the wonderful stuff that USB-C is supposedly capable of doing (it has six modes of operation, covering everything from external-display support to power delivery and 20-gigabit-per-second data transfer), there is a smelly fly in the ointment:  mechanical unreliability.

If you've never looked closely at a USB-C connector, get a magnifying glass and do so.  Inside that tiny rigid plug there are twenty-four pins, twelve on each side.  And apparently, for high-speed data transfer to work, a good number of them (either four or eight, as best I can tell from online information) have to make perfect contact with their mating members in the socket you plug into.  Otherwise it looks to the hardware like you've jerked out the connector, and you get an error.

From various online forums I read, it appears that the mechanical design of the consumer-grade USB-C jack is unreliable.  I saw tales of people with Macs like mine who had to take theirs in to get the USB-C jacks replaced because they simply wouldn't work for high-speed data transfer anymore.  I also use them (of necessity) for low-speed stuff like connecting to my keyboard and my printer, and I've never had any problems with those functions, because evidently they use different contacts than the high-speed ones.  But even if you get new jacks, they're just as unreliable as the old ones, and you're likely to have the same problem show up again in a few months.

As one online forum writer commented, every connector engineer knows that other things being equal, the more contacts you put in a connector, the less reliable it becomes.  Squeezing twenty-four pins into a tiny USB-C connector and expecting all of them to work all the time was one of the dumbest standards decisions I've come across in a long time.
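A little back-of-the-envelope arithmetic shows why.  The sketch below uses made-up numbers (an assumed per-contact reliability, not any measured USB-C figure), but the principle holds:  if every contact in the data path must mate correctly, the connector's chance of working is the product of the individual contact reliabilities, and it drops fast as the pin count climbs.

```python
def connector_reliability(per_contact: float, contacts_needed: int) -> float:
    """Probability that every required contact mates correctly on one insertion."""
    return per_contact ** contacts_needed

p = 0.995  # assumed chance that any single contact seats properly
for n in (1, 4, 8, 24):
    print(f"{n:2d} contacts needed: {connector_reliability(p, n):.3f}")
# prints roughly 0.995, 0.980, 0.961, and 0.887 respectively
```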

The proverb saying a chain is only as strong as its weakest link applies in spades to connectors, and I am now stuck with a system that has a known weak link:  the USB-C connectors, which are the only practical physical way I have to get data in and out of my Mac. 

Now that I know it's a mechanical problem, I can do things like reserving one of the four USB-C ports for the occasional backup and nothing else, and tiptoeing around any time a long data transfer is happening, for fear I will set up vibrations that break one of the eight vital connections and ruin the whole process.  This is not progress.

It may be too much to hope for, but maybe whoever devises the next standard after USB-C will come up with a fail-safe approach, or at least one as reliable as the larger USB-2 standard was.  The old Bell System, which for much of the twentieth century relied on electromechanical relays for all of its network switching functions, found that the only way to make a relay reliable was to duplicate every one of its contacts, so that if a piece of dust got into one contact, the other one would still work.  Whoever designed the USB-C connector evidently forgot that hard-earned piece of wisdom.  I hope the next standards committee working on whatever comes after USB-C will not forget, but it looks like we may have to wait a while before that happens.
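The arithmetic behind the Bell System trick is just as simple; again, these are assumed numbers rather than anything from Bell's records.  Two contacts in parallel drop the connection only if both fail at once, so (assuming the failures are independent) duplication squares an already small failure probability.

```python
def failure_single(q: float) -> float:
    """Chance that a lone contact fails to make connection."""
    return q

def failure_duplicated(q: float) -> float:
    """Two paralleled contacts fail only if both fail together."""
    return q * q

q = 0.005  # assumed per-contact failure chance, as in the sketch above
print(f"single contact:     {failure_single(q):.6f}")     # 0.005000
print(f"duplicated contact: {failure_duplicated(q):.6f}")  # 0.000025, 200 times better
```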

Sources:  I referred to the Wikipedia article on USB-C connectors and the website of the USB Implementers Forum (usb.org), as well as several online discussion boards about the unreliability of USB connectors. 

Monday, August 05, 2019

Should Engineers Get Their Fingernails Dirty?


Up the road from me in Austin, Texas, it turns out that Apple has been building the Mac Pro, a desktop model, in a Flextronics manufacturing plant since at least 2013.  But a recent news item in the Austin American-Statesman says that the company will soon shut down its Mac Pro manufacturing here in Texas and move it to China.  Apple is nevertheless moving ahead with plans to open a new billion-dollar campus in Austin, which will increase the number of Apple workers there from around 7,000 currently to as many as 15,000.  But you can be pretty sure that most of those workers won't be holding soldering irons or screwdrivers—they'll be sitting at computers typing code.

The image of the engineer has changed radically over the years since the profession first attained significant public recognition, which came about in the late 1800s.  In 1900, artistic portrayals of an engineer would show a rugged, muscular man who might be holding an engineer's hammer in one hand (something halfway between a regular hammer and a sledgehammer), and adjusting a surveyor's transit as he squints through the eyepiece.  Most engineers prior to about 1920 were civil engineers, engaged in laying out railroad tracks or roads, installing water and sewer systems, and making sure bridges and buildings didn't fall down after they were built.

Then the era of scientific engineering came along.  The slide rule replaced the hammer, and engineers were depicted in 1950s hiring ads as lab-coated intellectuals, wearing horn-rimmed glasses and looking through microscopes or fiddling with flasks of chemicals.  The habitat of the 1950s engineer was inside, not outside, but he (always a "he" in the images of the time) was still engaged in working with exotic equipment or machinery that required special training and even special clothing.

Then the computer came along, banishing the slide rule.  Companies quickly learned it was cheaper to let engineers try new designs in software models rather than actually building prototypes and finding out that most of them didn't work.  So as much of the engineering knowledge that formerly resided in engineers' brains moved into computer programs, the typical engineer wound up sitting at a desk in front of a computer monitor.  Sometimes she just writes code, and sometimes she works with drawings of actual stuff.  But whatever subject is displayed on the screen, that is often as close as the engineer gets to the actual thing that is built. 

The end product, whether it is a microchip, a car, or an airplane, is often made far away from where the designer sits, possibly in another country.  The people who actually get their hands dirty to make the products know no more about them than they have to in order to do their jobs right.  There is nothing intrinsically wrong with this—it is one more application of the great economist Adam Smith's principle that each person, organization, or nation should specialize in what they are best at, and swap their products with those who are best at other things.  If engineers are best at designing with software, why, that's what they should do, and not waste their expensive time on a workbench actually building things that lower-paid and less-educated technicians can build. 

I grew up and learned engineering at a time when math and models could get you only so far, and there were imponderable and incalculable factors that had to be worked out on the workbench.  In my specialization of RF engineering, that meant that any engineer worth his salt had to know how to solder and troubleshoot actual hardware, and we did.  And things got built—maybe not the absolutely most optimized designs, but good enough to go out the door and make money for the company (most of the time, anyway). 

But these days, things that had to be figured out on the workbench thirty-five years ago can be modeled in the much more sophisticated software that is available today.  And so almost every kind of engineer these days, whether chemical, electrical, mechanical, civil, or environmental, ends up spending most of their working time in front of a computer.  There are exceptions, of course, but it is in the interests of most firms to see that their engineers spend as little time as possible fiddling with hardware and as much time as possible doing what they are paid the big bucks to do. 

Engineering is an intensely practical business, and if most engineering firms succeed in satisfying their customers with the services of engineers who never touch hardware, I can't see anything to criticize in that.  The lingering suspicion I have that something is missing that may cause trouble down the road may be nothing more than an old guy's prejudice in favor of the way things used to be. 

In my own teaching, I try to make students deal with hardware when it's practical to do so.  This fall I will be teaching a course in analog design.  Software is available that lets you build a whole circuit on the computer screen and test it with software "scopes," getting results that are much more precise than anything you can do in the lab.  But after the students have done that, I will require them to get real parts, whose values they really have to read, put them into a real prototype "breadboard" circuit, and show me that it really works.

Most students don't complain about this.  In fact, over the years I have had positive comments along the lines of "I never knew what that oscilloscope was for until I had to use it in this class," and so on.  I'm sure some of them feel that shoving little wires into just the right holes is beneath them, but a little humbling is good for the soul.

Next spring, if all goes according to plan, I will teach the first course in power electronics ever taught on this campus.  Power electronics involves things like controlling giant 1,000-horsepower motors in steel mills.  Much as I would like to have a lab that used a 1,000-horsepower motor, all the labs in this course will use software.  It's much cheaper and safer to have a software 750-kW motor blow up than a real one.  And that's perhaps the way it should be. 

All the same, engineers should never forget that no matter how nice things look on paper or the computer screen, physical reality reigns.  And sometimes, it bites back.

Sources:  The report on Apple's moving production out of Austin appeared at https://www.statesman.com/news/20190628/report-apple-moving-mac-pro-production-out-of-austin.