Monday, March 20, 2023

Trying Out ChatGPT


Since the artificial-intelligence laboratory OpenAI made its latest major project, ChatGPT, available to the public last fall, the chatbot's popularity, not to say notoriety, has soared.  Chatbots—software that responds to human-typed inputs with conversation-like output—are nothing new, but the combination of speed, apparent knowledge, and polish with which ChatGPT responds to a huge variety of "prompts"—basically, commands to write something about a subject—has attracted probably millions of users, a ton of publicity, and expressions of concern.


One of the most understandable concerns is that students will simply take any given writing assignment, put it into ChatGPT, and cut-and-paste the result into their homework.  Plagiarism is a chronic problem in education, and universities across the world have been holding special meetings to deal with the advent of ChatGPT and to figure out how to detect and prevent such cheating. 


I wish them luck, because when I tried the system this morning on a topic that's very familiar to me, it came up with verbiage of such high quality that I wouldn't hesitate to use it as the lead section of a research proposal, for instance.  That is, if I didn't mind the fact that I was using some computer's synthetic prose rather than my own. 


In case you want to judge for yourself how ChatGPT did, here's a sample.  The prompt I gave it was this:  "Describe ball lightning in two paragraphs or less (under 250 words) and quote experts in the field." 


The response begins, "Ball lightning is a rare and mysterious phenomenon in which a glowing sphere of light appears during thunderstorms and floats through the air for several seconds to several minutes before disappearing."  So far, so good.  It goes on for 136 words, which is under 250, and quotes only one expert, John Abrahamson.  The quote itself is a long one—57 words—and seems to be taken from an interview that I was not immediately able to identify by typing it into Google, a favorite trick I used to pull with student essays that I suspected of being copied wholesale from the Internet.  Either Google doesn't do that type of search very well anymore, or ChatGPT used some obscure transcription of a radio or TV interview; either way, not even part of the original two sentences shows up in my search.  So I simply have to take ChatGPT's word for it that it's accurately quoting Prof. Abrahamson, a New Zealand chemical engineer who published a well-publicized theory of ball lightning around 2000.


And that points out one of the big problems with some forms of AI:  they behave like black boxes, and figuring out how they work and where they get their information can be difficult or simply impossible.  I suppose I could go back and ask ChatGPT where it got the quote, but then I wouldn't have time to finish this column. 


So is access to powerful software such as ChatGPT a threat to the integrity of education and the livelihood of copywriters and grant writers everywhere, or on the other hand a great boon to the millions of people who can't put two coherent sentences together?  To some extent, I'd have to say "all of the above." 


Whatever else the ChatGPT developers have done, I have to congratulate them on the generally flawless grammar in all ChatGPT outputs I've seen so far.  They must have come up with some way of assessing the grammatical quality of sources and picking only the best ones, because believe me, there is a lot of bad English grammar out there, especially in the reams of technical publications that attract authors whose first language isn't English.  So that's the good news.


What is perhaps not so good news is that lots of us could become dependent on ChatGPT and its successors.  Now, is this a dependence that is harmless, like our dependence on pocket calculators instead of doing long division by hand?  Or is it a malignant dependence such as some people have for porn or alcohol or video games, distorting their lives and inhibiting human flourishing? 


My first impression is that the main hazard so far of using ChatGPT is that of letting the machine do one's writing and thinking too.  Now, technically, I let my pocket calculator do my thinking when I use it, but the kind of thinking it does is extremely mechanical—that's why mechanical calculators were successful—and it's no loss to my mental integrity to outsource the taking of square roots to a machine. 


But expressing a complicated original idea in clear prose is something that has thus far been reserved for humans.  If I take out the word "original," it appears that ChatGPT can do as well as or better than your average human being at expressing complicated ideas clearly.  And of course, "original" is a relative term, as nobody can come up with a fourth primary color, for example.  We quickly get into philosophical waters here, but I will leave it with the Christian observation that God is the only Person who can truly originate things from nothing.  All so-called human inventions and discoveries are the unearthing or understanding of things and ideas that have always been latent in the universe, waiting for us to find them. 


I don't know whether some puckish mathematician has yet typed into ChatGPT, "Prove Goldbach's conjecture true or false."  Goldbach's conjecture is the proposition that every even number greater than 2 is the sum of two primes.  It's one of those things that seems like it ought to be true, and nobody can find a counterexample, but nobody so far has been able to prove it one way or the other.  From everything I understand about ChatGPT, it would come up with a lot of verbiage, and maybe equations, but as it simply pulls from whatever is already out there on the Internet (and according to its developers, it's skimpy on anything after 2021), if a proof isn't out there it's not likely to come up with one.
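For what it's worth, checking the conjecture for small even numbers is trivial by brute force; the hard part is proving it for all of them.  A quick Python sketch (my own, not ChatGPT's) shows the idea:

```python
def goldbach_pair(n):
    """Return the first pair of primes summing to even n > 2, or None."""
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k ** 0.5) + 1))
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # None for any even n > 2 would disprove the conjecture

# The conjecture holds for every even number from 4 up to 10,000
assert all(goldbach_pair(n) for n in range(4, 10001, 2))
print(goldbach_pair(100))  # → (3, 97)
```

No amount of this kind of checking proves anything, of course, which is exactly the point: verification is mechanical, proof is not.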


So the mathematicians are safe.  For the rest of us, I'm not so sure.


Sources:  A good description of ChatGPT and instructions on how to use it were published on the website Digital Trends at  I also referred to a list of ten hardest unsolved math problems at (You don't think I really go around worrying about Goldbach's conjecture, do you?)

Monday, March 13, 2023

Justice and Artificial Intelligence


Historians of technology are familiar with the problem of technological advances that outstrip the legal system, leading to situations that are clearly unfair but leave some people with no legal recourse.  In a recent article in The Dispatch, artificial intelligence (AI) expert Matthew Mittelstaedt calls for a case-by-case approach to the problem of AI advancing beyond the borders of the law.


In the article, reporter Alec Dent cited the case of someone who recently filed for a copyright on an AI-generated piece of artwork.  The U. S. Copyright Office rejected the application, which was filed on behalf of the AI program itself, because it lacked the "human authorship" needed to create a valid copyright claim.  We don't know what would have happened if the programmer or software developer had filed for a copyright on his or her own behalf, saying that the AI program was just a tool like an artist's brush.  But as AI systems take over more and more formerly creative jobs done by humans, the distinction may become harder to make.


ChatGPT, the powerful AI chatbot developed by the OpenAI firm, has attracted a lot of attention in academia for its ability to come up with plausible-sounding paragraphs of English text on virtually any topic, including those typically assigned as essay questions in the humanities.  It's not exactly like students weren't cheating before ChatGPT—there are numerous online stores where essays and even completed lab reports and exams can be purchased.  But ChatGPT creates work that, if not exactly original, hasn't existed in exactly the same form before, and thus falls between two stools:  previously existing work that is merely copied, and truly original work that a human mind has originated. 


The original intent of copyright law, and anti-cheating rules at universities for that matter, was to allocate the rewards (or penalties) due to a piece of original work fairly.  If Mr. A wrote an essay on the downfall of the Soviet Union, whether it was for his history class or for The Atlantic, Mr. A should get the credit (or blame) for it, not some word-processing software or AI program. 


One big problem I foresee will arise from the defective anthropology that prevails in our modern culture.  Human beings are different from other animals and from machines due to a difference in kind, not merely in degree.  In the rush to embrace ever more impressive applications of AI, a lot of people have lost sight of this fact, if they ever believed it in the first place. 


The people at the U. S. Copyright Office seem to believe it, at least so far, but the person who applied for a copyright in the name of an AI program seems to think that there's no essential difference between human intelligence and AI.  Unfortunately, that view is very popular in high places—much of academia, the Silicon Valley world, and even in parts of government. 


A degraded view of humanity results when one assumes that there is no essential distinction between humans and advanced AI programs.  Of course there is a practical distinction, so far—AI systems still make stupid mistakes that a normally intelligent five-year-old wouldn't make.  But the AI optimists see this as merely a temporary condition that will disappear with further advances in the field.


Saying that human intelligence and AI are basically the same drags humans down to the level of machines.  And all the consequences of treating humans as machines will result from that attitude.  No AI expert wants to be treated like a machine.  But allowing AI programs to hold copyrights or be in charge of decisions that formerly required human input does exactly that to the subjects or patients of the AI system in question.


Justice is something that happens among human beings, and human beings are not machines.  People advocating for robot rights and similar policies that attempt to attribute human-like characteristics to AI programs are not exalting robots—they are degrading humans in a subtle and indirect way, a way that selectively degrades some humans more than others. 


We have already heard of cases in which AI-informed medical or legal decisions turned out to be highly discriminatory against certain minority groups.  The AI developers are rightly exercised about such problems, but it takes human beings to recognize that other human beings are being treated unfairly.  The whole body of the law can be regarded as a huge algorithm for executing justice among people—after all, what are laws but a set of rules and procedures for carrying them out? 


But as of yet, we have not handed over the execution of the laws to machines.  Lawyers, judges, and juries do that.  The institution of the jury trial, as rare as it's getting to be in criminal justice, is a common-law recognition that ordinary people deserve to be judged by other ordinary people, not just experts, whether the experts are human beings or computers. 


Turning over works of creativity and judgment to AI systems may be efficient.  It may even be fairer in a human sense than leaving such actions in the fallible hands of human beings.  But beyond a certain point—that point to be judged by humans, not by machines—it becomes a dereliction of duty, just as a student typing in his history assignment to ChatGPT and handing in the AI program's answer is a dereliction of his duty to think for himself.


The law will eventually catch up to today's AI innovations, as it always does.  Of course, by then we will have new advances, and so for a time at least we will see a kind of legal-AI arms race with AI leading and the law lagging behind.  But we will all be losers if legislators and AI developers forget that human beings are different in kind, not just degree, from AI programs.  If we forget that critical fact, we may deserve what happens.


Sources:  Alec Dent's article "The Gaps Between the Law and Artificial Intelligence" was published on Mar. 8, 2023 at  I also referred to Wikipedia articles on ChatGPT and OpenAI.


NOTE:  There used to be an RSS feed that readers who wished to be notified of new blog articles, which are issued every Monday morning, could subscribe to.  A reader recently pointed out to me that this feature was no longer functioning.  I am currently trying to repair the problems and add an automatic email notification system, but it may take a while, so I ask readers interested in these features to be patient.

Monday, March 06, 2023

Is Twitter a Wholly Owned Subsidiary of the FBI?


When Elon Musk took over Twitter last October, he made available to reporters a large number of internal company emails relating to content moderation, deplatforming, and other interventions that the firm has done at the request of, or under the influence of, the U. S. government.  Like most people, I was dimly aware of these revelations, but the news coverage of them was intermittent and depended greatly on the political orientation of the media outlet reporting it.  And I'm sure that remains the case today.


But recently I came across one report that summarizes the facts in a chilling and alarming way.  If what this report says is true, we indeed have a major problem that involves not only electronic social media, but the government and fundamental constitutional issues. 


In all such cases, one should consider the source.  The source of this report is John Daniel Davidson, a senior editor at The Federalist, a conservative website which Wikipedia says has carried false and misleading information at times.  The particular report I refer to did not appear in that website, but in a newsletter called Imprimis issued by Hillsdale College, a private college that is one of the few serious colleges in the U. S. that refuses to take federal funds on principle.  Adapted from a talk Davidson gave at the college, the report is entitled "The Twitter Files Reveal an Existential Threat."


Davidson details three examples of how the FBI, working both on its own behalf and as a liaison between a number of other federal agencies and Twitter, directed the firm to flag, suppress, or suspend numerous accounts such as those of the New York Post, whose offense was to break the news of the Hunter Biden laptop; President Trump, whose suspension after the January 6, 2021 Capitol riot was sui generis in its disregard for internal suspension policies; and during the COVID-19 epidemic, in which Twitter was asked to, and did, squelch information that did not follow the official line on the pandemic that prevailed at the time.


The main point of Davidson's article is summed up in these words toward the end of the piece:  ". . . the entire concept of 'content moderation' is a euphemism for censorship by social media companies that falsely claim to be neutral and unbiased."  Davidson presents evidence that in 2017, Twitter publicly announced that all content moderation took place "at [Twitter's] sole discretion," but internally, they would censor anything that "U. S. intelligence identified as a state-sponsored entity conducting cyber-operations," whether the intelligence community was right or not.  As later events proved, the suspected Russian influence on U. S. elections was largely a smokescreen for allowing the federal government to suppress a wide variety of actors, most of which were not sponsored by any state, in direct violation of the First Amendment.


Currently, the U. S. Supreme Court is considering two cases that involve Section 230 of the Communications Decency Act.  The basic thrust of the section is to allow social-media companies to claim immunity from prosecution regarding material posted on their sites by third parties—namely, anybody but the company itself.  It also exempts the companies from lawsuits involving content moderation as long as the company can show such moderation was a good-faith effort to remove "objectionable" material. 


This law was passed in the very early days of social media, when it was not at all clear that internet-based systems such as Facebook and Twitter would ever make money.  Those days are long gone, and the pipsqueak upstarts of the 1990s have become the 900-pound gorillas of the 2020s. 


Far from being a minor sideshow in the ways the public learns what their elected officials and the rest of the government are up to, Twitter is arguably the primary source of breaking news from officialdom, equivalent to the Associated Press wire service of the long-ago day when news really traveled mainly over copper wires to teletype machines.  As publishers, the newspapers, radio, and TV outlets of yore (yore being anytime before about 1980) knew that they were legally responsible for what they printed or broadcast, and made careful distinctions between what was news and what was analysis or opinion.  They had the freedom to print what they wanted to print, courtesy of the First Amendment, which prohibits the federal government from "abridging the freedom of speech, or of the press."  But they also had the responsibility of standing behind what they printed as facts, and so they stressed fact-checking and accuracy, plus an effort to present all the significant news and suppress none of it, no matter how far it strayed from the newspaper's own political position.


Granted, this was an ideal that was only approached in practice.  But if you transpose what Twitter has done in the last few years to the register of how news was produced in, say, 1970, the results can be shocking.


Suppose the 1970s Watergate break-in, Deep Throat's revelations, and the secretly recorded Nixon White House tapes had been systematically expunged from all newspaper, radio, and TV coverage through the intervention of the FBI, on the grounds that it was all a plot by the Russians?  And after Nixon told the news media that they wouldn't have him to kick around anymore following his 1962 loss to Pat Brown in the California governor's race, suppose all the networks had agreed to ban him from ever appearing on radio or television again, again at the behest of the federal government? 


I am no fan of Richard Nixon.  But my point is that none of these acts of censorship happened back then, because the reigning media companies kept their distance from the government, no matter who was running it.


Needless to say, the situation is different now.  Davidson's summary of the Twitter Files is an indictment of the hand-in-glove way that the federal government, using the channel of the FBI, has succeeded in manipulating the media landscape to suit its purposes, and not the best interests of the American people at large.  It is far past time to restore a responsible distance between social media and the government, but doing that will require a well-informed public, and the media we have may not be up to the job.


Sources:  John Daniel Davidson's article "The Twitter Files Reveal an Existential Threat" appeared in Vol. 62, No. 1 (Jan. 2023) of Imprimis, a publication of Hillsdale College.  I also referred to a report on the Supreme Court Section 230 cases at and Wikipedia articles on The Federalist and Richard Nixon's November 1962 news conference. 

Monday, February 27, 2023

East Palestine, Ohio: An Abnormal Accident


When a wheel bearing overheated on a train carrying five tank cars of vinyl chloride through East Palestine, Ohio on Feb. 3, a sequence of events began that turned into a major accident with nationwide political implications. 


Overheated bearings (so-called "hot boxes") are not new to railroading.  With the thousands of wheel bearings in use every day, it's almost inevitable that some of them will fail by running hot and basically grinding themselves to pieces.  What is new in the last few decades is a number of safety devices that the railroads have installed to deal with overheated bearings. 


The Ohio train's failing bearing was sensed by three track-side hot-bearing detectors.  The National Transportation Safety Board (NTSB) has determined that one sensor noted the suspect bearing was warm (38 F above ambient, which was a cold 10 F) about twenty minutes before the derailment occurred.  Ten miles and 12 minutes or so later, it had heated up to 103 F above ambient.  The detector system's threshold was set so that these readings did not set off an audible alarm in the cab.  But the third sensor's reading of 253 F above ambient did.
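The logic these detectors implement boils down to comparing each reading's rise above ambient against a fixed alarm setpoint.  Here is a minimal sketch in Python; the 200 F setpoint is my assumption for illustration only (any value between the 103 F reading that didn't alarm and the 253 F one that did would be consistent with the NTSB account):

```python
# Illustrative hot-box detector logic.  The 200 F alarm setpoint is an
# assumption: any value between 103 and 253 F above ambient is consistent
# with the readings the NTSB reported.
ALARM_RISE_F = 200.0

def hot_box_alarm(bearing_temp_f, ambient_f, alarm_rise_f=ALARM_RISE_F):
    """Return True if the bearing's rise above ambient reaches the setpoint."""
    return (bearing_temp_f - ambient_f) >= alarm_rise_f

ambient = 10.0  # the cold ambient temperature that day, per the NTSB
rises = [38.0, 103.0, 253.0]  # the three detectors' readings above ambient
print([hot_box_alarm(ambient + r, ambient) for r in rises])  # → [False, False, True]
```

Only the third detector trips the alarm, which is exactly the sequence that played out in Ohio.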


By that time, it was too late.  Trains up to a mile or more long can't be stopped on a dime.  Three crew members were on the train, all in the lead engine.  By the time the engineer stopped the train, they could see fire and smoke behind them.  The train dispatcher authorized the crew to set hand brakes on the lead cars and move the front engines a mile or so down the track to safety. 


A total of 38 railcars had left the tracks, and the ensuing fire damaged 11 more.  Firefighters determined during the following two days that some intact tanks of vinyl chloride were heating up.  This chemical, which is used to make the ubiquitous plastic PVC (polyvinyl chloride), can undergo a spontaneous polymerization reaction that can cause an explosion.  To avoid that dire consequence, firefighters released the vinyl chloride into ditches they had dug nearby and burned it. 


Subsequent news coverage has dealt with the environmental consequences of the release of vinyl chloride and other toxic chemicals into the air, water, and soil around East Palestine.  One news report says monitors have estimated that about 43,000 animals perished, but some 38,000 of that number were non-endangered minnows.  Residents have complained of odd tastes in the drinking water and air, and extreme measures have been and will be taken to clean up the extraordinary mess that several tank cars of toxic chemicals can make.


In an interview, NTSB chair Jennifer Homendy said that the accident was "100% preventable."  On the face of it, that makes railroad operator Norfolk Southern look pretty bad.  But exactly how could this accident have been prevented?


Certain safety requirements such as advanced braking systems for "high-hazard" trains that the Biden White House said the Trump administration removed would not have made a difference in this case, according to Homendy.  So attempts to turn the accident into a political football seem to amount to more smoke than fire, so to speak.


Norfolk Southern may rethink its thresholds for hot-box detectors.  This particular bearing seems to have gone from barely detectably warm to disintegrated in less than 30 minutes.  The spacing and sensitivity of such detectors is a matter of engineering judgment, and accidents have a remarkable way of leading designers to reconsider safety precautions and adjust them so that the previous accident, at any rate, will not happen again. 


Too low a threshold on the detectors would mean that the trains are stopping needlessly, as a bearing may run warm for some unknown time before it fails.  Only the railroads have the statistics on these kinds of problems, but now that we know what can go wrong if the detection system doesn't stop the train in time, there may be a lot of readjusting going on in the future.


Some statistics from the Federal Railroad Administration cited by National Review say that train accidents caused by axle and bearing-related failures have fallen 59 percent from 1990 to 2019, largely due to the use of hotbox detectors.  So far from doing nothing about the problem, the railroads have been steadily improving their performance in this regard.


Sociologist Charles Perrow devised the term "normal accident" to express the type of mishap that very complex systems such as nuclear reactors can produce when multiple interacting parts do something that is very hard to predict, let alone forestall.  The East Palestine derailment was not that complicated.  But the system that should have prevented it failed in this case, and because of the particular cargo being carried, the consequences were awful. 


Despite an apocalyptic-looking crash scene, no one was killed or even injured in the derailment or fire.  This accident was far less consequential in that sense than the one, also involving derailed tank cars, that devastated the town of Lac-Mégantic, Quebec, on July 6, 2013 and killed 47 people.  So as bad as the East Palestine situation is, it could have been much worse.


Norfolk Southern is going to be paying for the consequences of this accident for a long time.  Already, millions of gallons of contaminated firefighting water have been shipped to Deer Park, Texas, where a specialist firm will inject it deep underground.  While some would question the propriety of this type of disposal method, Deer Park is in the middle of the most concentrated cluster of oil refineries and petrochemical plants in the U. S., and a little vinyl-chloride-contaminated water way beneath their feet will trouble Deer Park residents not at all. 


Someone will have to pay for all the contaminated dirt to be dug up and shipped somewhere else, and so East Palestine is getting more national attention and commerce, although of an undesirable kind, than the mayor and city council ever dreamed of.  While I hope that life returns to whatever passes for normal in eastern Ohio, East Palestine will never be the same again.


Sources:  I referred to the following articles:  the NTSB preliminary report on the accident at, a CNN piece on the report at, a USA Today report at, a National Review article at, and the Wikipedia article on "Normal Accidents." 

Monday, February 20, 2023

The Tesla Self-Driving-Software Recall


Innovators tend to disrupt established procedures and shake things up generally.  No matter how good a new idea is, there are usually lots of people who will be inconvenienced or worse if the innovation takes hold and spreads, and the innovator has to push hard just to get a hearing. 


The way Tesla under Elon Musk has introduced semi-autonomous cars is a great example of this principle.  Readers of this blog are familiar with numerous cases in which Tesla drivers have been injured or killed under circumstances that point to careless use of the car's self-driving feature.  But until now, the National Highway Traffic Safety Administration (NHTSA) has not taken definite far-reaching actions against the firm. 


That changed this week when Tesla, under pressure from the NHTSA, issued a voluntary recall of some 360,000 of its cars that have the so-called "Full Self-Driving" mode installed.  This option, which according to one report costs $15,000, reportedly does all the work necessary to move the car safely:  steering, acceleration, and braking, under the control of cameras and artificial-intelligence (AI) systems.  An Associated Press article quotes Raj Rajkumar, a computer-science professor at Carnegie-Mellon, as saying Teslas don't use radar or laser systems in addition to cameras, and thus can miss important environmental clues that such systems provide.


In its recall, the NHTSA refers to Tesla's self-driving system as a Level 2 SAE type.  Some years back the Society of Automotive Engineers established a six-level ranking system (Levels 0 through 5) for autonomous-car features.  Warning-only features make a car Level 0, as the driver is still doing all the actual work.  Level 2 can control braking, acceleration, and steering, but the driver "must constantly supervise these support features; you must steer, brake or accelerate as needed to maintain safety." 
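As a rough aide-mémoire, the levels can be summarized like this (my paraphrase, not SAE's official J3016 wording):

```python
# SAE driving-automation levels, paraphrased for illustration only;
# see SAE J3016 for the official definitions.
SAE_LEVELS = {
    0: "No automation: warnings at most; the driver does all the work",
    1: "Driver assistance: steering OR speed-control support, not both",
    2: "Partial automation: steering AND speed control; driver must supervise",
    3: "Conditional automation: car drives, but driver must take over on request",
    4: "High automation: no driver takeover needed within a limited domain",
    5: "Full automation: the car handles everything, everywhere",
}

def driver_must_supervise(level):
    """Through Level 2, a human must constantly monitor the driving task."""
    return level <= 2

print(driver_must_supervise(2))  # → True: Tesla's system is classified Level 2
```

The dividing line the NHTSA cares about falls between Levels 2 and 3: below it, the human is still legally the driver, whatever the marketing name of the feature says.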


Technically, the software being recalled is a beta version, meaning that the user in some sense agrees to be a guinea pig and try out something that may still have bugs in it.  The bugs cited by the NHTSA in its recall notice include things like ignoring speed zones, rolling through stop signs without stopping, and going straight in a turn-only lane.  These things sound like typical careless-driver problems, but as Rajkumar points out, upgrading the software to solve these issues may not be a simple or quick fix.


The recall itself is peculiar, in that no hardware has to be idled or replaced.  Tesla will simply issue an over-the-air upgrade at no cost to the car owners.  Musk tweeted that calling such an upgrade a recall was "anachronistic and just flat wrong!"  He has a point, in that over-the-air software upgrades are a routine part of doing business, but are usually not mandated or pressured into execution by a federal agency.


This recall is not the only concern that the NHTSA has with self-driving Teslas.  The agency is investigating numerous incidents that pose serious safety concerns, such as the tendency of some autonomous-mode Teslas to crash into emergency vehicles—some 14 such crashes have been recorded, as well as the deaths of 19 people in accidents where self-driving features may have been involved. 


Musk claims that the safety record of self-driving Teslas is better than old-fashioned hand-driven cars, but detailed statistics to back up his claim are lacking.  The autonomous-car project has always suffered from a chicken-and-egg problem.  To develop good self-driving systems, you need to get extensive testing in all sorts of real-world situations, but fielding a system that isn't yet perfected—whatever that might mean—entails some level of risk for both the people riding in the autonomous vehicles and for everybody else around them too. 


Here in Central Texas, as I drive around the stretch of I-35 between Austin and San Antonio, sightings of Teslas have gone from remarkably rare—maybe one every few weeks—to almost routine, as a couple of them are now usually parked in the same lot I use at work.  I have not yet seen one rolling down the road while the driver was reading a book or kissing his girlfriend, but I'm sure that happens.  The one Tesla driver I've spoken to about the autonomous system—our piano tuner, of all people—says he only uses it on I-35 and takes over once he gets off the freeway.  So for every careless driver who ignores the instructions to "constantly supervise" the self-driving mode, there are many responsible Tesla owners who learn the limitations of the system and act accordingly.


As a federal agency, the NHTSA seems to have its act together a lot better than, say, the FBI.  It largely stays out of politics and sticks to its mandate to safeguard the nation's highways.  The Tesla recall could have been much more drastic, as the NHTSA has the power to order carmakers to tell car owners to stop using the vehicle in question until the recall is installed.  That would have been an unnecessary move.  While nineteen fatalities possibly involving a new technology are of course tragic, that number pales in comparison to the estimated 42,915 people who died in traffic accidents in 2021, which was a 16-year high. 


Ideally, autonomous cars will contribute to a decline in traffic casualties, not an increase.  Overall, the NHTSA seems to be doing its job as watchdog, not cutting off a given manufacturer at the knees, so to speak, but not ignoring problems either.  The recall mechanism indicates to me that the NHTSA thinks Tesla may be moving a bit too fast and carelessly in its beta-testing of so-called Full Self-Driving systems, and Musk knows he is playing a game that could cripple his car business if he and his engineers are not careful.  But being too careful in an innovative industry leaves you behind, so it will be interesting to see how, and whether, the industry as a whole approaches the prize of truly autonomous Level 5 driving, in which the driver neither knows nor cares what is going on around him.  But we are by no means there yet.


Sources:  The AP article "Tesla Recalls 'Full Self-Driving' To Fix Unsafe Actions" appeared on the AP website on Feb. 16, 2023 at  The NHTSA recall notice can be found at  The SAE autonomous driving levels are listed at  The statistic on 2021 U. S. driving fatalities was from 

Monday, February 13, 2023

Is There A 3D-Printed Concrete House In Your Future?


In a recent issue of The New Yorker, reporter Rachel Monroe describes the efforts of an Austin entrepreneur named Jason Ballard to revolutionize the construction industry the way the electronics business has been revolutionized by the introduction of integrated circuits. 


In some ways, the analogy is appealing.  Probably the most complicated consumer-electronics item in the early 1960s was a television set.  You can find online videos showing how TV sets were made back then:  dozens of women (the assemblers were almost always women) hand-wired chassis with individual components, one part at a time.  While there are probably some hand-assembly steps in the production of iPads or laptops today, virtually all the "wiring" happens without any human intervention in a series of automated photographic and chemical processes.  That is a big reason why your smartphone doesn't cost ten million dollars and doesn't have to be rolled around on a cart.


Ballard hopes to apply the technology of 3D printing to houses.  He has developed a special kind of concrete that stays in place when it's squirted out of a precisely positioned nozzle at the end of a giant 3D printing frame that builds up each wall of a house layer by layer, the same way smaller 3D-printed structures are made. 


So far, except for the manufactured-housing market, the residential construction industry in the U. S. has been stubbornly resistant to automation.  Most houses built today go up the old-fashioned way:  grading the site, laying the foundation, erecting the frame, closing it in, and doing the wiring and plumbing and interior finish work.  All these are labor-intensive manual operations, which means labor costs make up a good fraction of the total cost of new housing.  That is why Ballard has high hopes for lowering the cost of housing worldwide with his 3D-printing idea. 


Housing is one of the three legs on the stool of humankind's material necessities:  food, clothing, and shelter.  So anything that promises to make better or cheaper housing is worth looking into.  However, there are reasons to believe that Ballard's company may not achieve all that he's hoping it will.


One reason is the regulatory environment.  As Monroe points out, building codes are highly localized, and what is legal in one locality may not be elsewhere.  I think it's significant that Ballard set up shop in Texas, which is one of the most union-unfriendly states in the Union.  I can easily picture picket lines forming around any attempt to erect a 3D-printed house in, say, New Jersey, where construction unions are part of the landscape. 


Another is history.  Speaking of New Jersey, that state harbors several all-concrete-construction houses built by none other than Thomas Edison.  As kind of a sideline to his unsuccessful iron-ore business, in 1899 Edison founded the Edison Portland Cement Company and started to manufacture cement.  To increase sales, he began experimenting with the idea of an all-concrete house made with complicated molds.  He filed patents on his ideas and attracted the attention of a philanthropist named Henry Phipps Jr., who hoped to solve New York City's housing crisis with Edison-designed houses.


The farthest Edison's experiments in concrete living got was the construction of a few two-story concrete bungalows in New Jersey, with a few more in Indiana.  The New Jersey houses are still occupied and in reasonably good shape. 


The problem was not in the quality of the resulting house, but in the complex molds needed to form an entire house in one pour.  After trying a few test houses, Edison realized there was no way to mass-produce houses with the ridiculously intricate molds he needed.  So Edison's concrete houses stand today, a mute testimony to yet another great idea that had an unfortunate encounter with reality.


It's an open question whether Ballard's automated 3D-printing approach will cheapen the cost of housing enough to be attractive to a wide range of customers.  Monroe points out that it's most likely to be used in places where labor is extremely scarce, such as the moon.  If we ever establish a space colony on the moon or other planets, it will be cheaper by far to send up a bunch of machinery that will build housing rather than getting the members of Local 310 of the New Jersey Carpenters' Union to the moon safely and back. 


But the moon is not Ballard's only goal, although he has been in discussions with NASA about extraterrestrial construction.  He hopes that the combination of cheaper construction and the new possibilities that 3D-printed construction opens up—it's just as cheap to print a curved wall as a straight wall, for example—will make his method the next big thing in construction.


Edison's experience may be instructive, in that he, like Ballard, tried to substitute a one-time capital expenditure (molds in Edison's case, the 3D-printing machine in Ballard's case) for ongoing labor expenses.  Sometimes this works, but sometimes it doesn't.  The basic structure of a house is only part of the total construction cost.  Items such as interior walls and doors, wiring and plumbing, and finishes (not everybody will like the coarse-stucco effect that results from the unvarnished 3D-printing process) are still hand-labor jobs, and no mention is made in the article of how the roof is dealt with.  Icon, Ballard's company, shows example houses on its website, and they appear to have conventional metal roofs, not poured-concrete ones. 

So it begins to look like 3D printing may find a few niche markets where labor costs are high or peculiar construction requirements make that technique advantageous. But as for taking over the entire construction industry, I don't think the union carpenters in New Jersey, or anywhere else, should be shaking in their workboots for fear Icon will take away their jobs.


I wish Ballard well, and perhaps he can overcome the problems that Edison faced and revolutionize the way we build houses.  But he may stand a better chance in places where building codes and unions haven't shown up yet, and the moon certainly qualifies.


Sources:  Rachel Monroe's article "Build Better" appeared on pp. 24-29 of the Jan. 23, 2023 issue of The New Yorker. The story of Edison's concrete houses is told well at  Icon's website is

Monday, February 06, 2023

Is Your HP Printer Really Yours?


In choosing to make Charlie Warzel's printer stop working because of an expired credit card, the corporate giant HP picked on the wrong consumer.  Warzel happens to be a staff writer at The Atlantic.  In "My Printer Is Extorting Me," Warzel excoriates HP and the general tendency of Big Tech to keep long strings attached to items we thought we'd bought, only to find that the old notion of "fee simple title," meaning total possession of a thing, is history as far as digital stuff is concerned.


It happened this way.  Back in 2020 during the pandemic, Warzel decided his family had to have an inkjet printer, and he went online—practically the only option then—and bought an HP model for more than $200.  Without really knowing what he was doing, but making what was a rational decision at the time, he also signed up for an HP program called Instant Ink.  Allegedly, Instant Ink monitors your ink cartridges, and when one of them is getting low, it automatically generates a shipping order and delivers the requisite cartridge to your door before you even know you need it.  At least, that's how it's supposed to work.


The cost for this service was $5.99 a month, based on the number of pages you typically print in a month (100 pages for $5.99).  I checked the HP website for this service, and it's unclear what happens if you sign up for the 100-page plan and unintentionally print, say, 101 pages in February, the shortest month.  Does that bump you up to the next highest level? 


Anyway, that and many other aspects of the program were not clear to Warzel, who received ink cartridges without incident until the day the credit card he'd used for the service expired.


If your credit card number expires or has to be changed, one of the vicissitudes of modern life is trying to remember all the different places you've given it to, so you can hunt them up and tell them the new number.  When this happened to Warzel, HP was one of the places he forgot to notify.  But they reminded him quickly.


One day not long after his card expired, he went to print something and found that his printer had quit working.  Looking into the matter, he was dismayed to discover that the reason it had quit was that his credit card had expired, and "the company had effectively bricked my device in response."


Apparently, all he had to do to get it running was to go out into the real world and seek a set of genuine (not imitation!) HP printer cartridges, and his printer would start working again.  He doesn't say whether he did that or resolved to go back to quill pens and foolscap.  But the type of problem he experienced is increasingly common, and Warzel discovered several other examples of situations in which tech companies exert eerie and disturbing control over things that consumers thought they had purchased outright.


As Warzel points out, HP is only engaging in a particularly nasty form of commerce that traces its roots back to at least the early 1900s.  The first consumers who installed light bulbs and later discovered that, unlike a kerosene lamp, an incandescent lamp burns out with alarming frequency, may have felt something close to the outrage that Warzel felt when his printer quit.  It may have been the electric-lighting industry that gave rise to the business saying, "The weakness of the goods is the strength of the trade."  Back in the early days of electric lighting, you might buy only a few lamp fixtures, but you'd buy hundreds of light bulbs over the years, furnishing a steady revenue stream for GE or whoever was making bulbs at the time.  Kodak sold cheap cameras for the same reason:  the company made back any money it lost on camera sales by selling the film for them. 


The digital printer industry is only the latest version of this kind of system, which I hesitate to call a scam.  But by using its ability to trace each individual cartridge and exert remote control over it, and over the printer that uses it, HP may have pushed its advantage a bit too far.  It was too far for Warzel, at any rate.


The Polish philosopher and sociologist Zygmunt Bauman (1925-2017) wrote a book entitled Liquid Life.  I haven't read it, but the notices of his work I have come across attribute to him the perception that one of modern life's strong tendencies is to blur formerly clear-cut distinctions.  Take privacy, for example.  As recently as the 1970s, when you made a phone call, you could be reasonably assured that nobody was listening other than the person at the other end of the line, and that if your phone rang, it was another person who had a legitimate personal reason to call you.  Nowadays, of course, our very conversations, on and off the phone, are monitored by digital spies who start throwing ads for printer cartridges at us if we so much as mention printers in casual conversation. 


And there was an invisible barrier a typewriter crossed, say, when you walked out of the store with it.  Up to that moment it belonged to the store.  But once you paid for it, it belonged to you—you could type with it, use it for a paperweight, or take it out to the lake and use it as a boat anchor (some of the old IBM Selectrics would have served this purpose admirably).  And nobody—not the store, not IBM—could have stopped you.


Perhaps we'll just have to get used to the leaky liquid nature of digital ownership, but Warzel seems to think we will have lost something important in the process. 


Sources:  Charlie Warzel's article "My Printer Is Extorting Me" appeared on The Atlantic's website at  I also referred to the HP Instant Ink website at and the Wikipedia article on Zygmunt Bauman.