Monday, September 26, 2022

TikTok and the Chinese Connection


One of the newest stars in the social-media constellation is TikTok, a free video-sharing app that is very popular among young users.  Ostensibly, you have to be at least 13 to join TikTok, but such age limits are notoriously easy to evade.  Like other social-media apps, the TikTok app has various ways of making money, including advertising, contests, and in-app purchases.  It was the first non-Meta app (i. e. not Facebook or its ilk) to reach the threshold of three billion downloads worldwide, even with the handicap of being banned in India.


The reason India banned TikTok in 2019, only two years after it went global, was that an Indian court viewed it as a source of pornographic content and a medium likely to be used by sexual predators.  In 2020, the Indian Ministry of Electronics and Information Technology issued a permanent ban, citing national security concerns.  Although there have been efforts to ban TikTok in the U. S., they have been unsuccessful so far.


TikTok is a wholly-owned subsidiary of ByteDance, a company based in China.  The U. S. division of TikTok recently made the news when five executives resigned after facing interference by ByteDance in the U. S. organization's internal workings.  According to an article originally published in National Review, one executive complained that "A lot of our guidance came from HQ, and we weren't necessarily part of strategy building. . . . I don't want to be told what to do."  Coming after earlier reports of a leaked strategy document from ByteDance that ordered subsidiaries to "play down the China connection," one wonders just how tight a rein ByteDance holds on its foreign TikTok operations.


Nothing major goes on in China without at least the passive acquiescence of the government.  So we can be sure that Chinese government leaders are aware of what ByteDance is doing.  This is one reason that the TikTok app itself is not available in China.  Instead, a modified version called Douyin is available there.  But the leaked document urges PR people to respond to questions about Chinese control of TikTok by saying that "TikTok is not available in China."  It's the truth, it's nothing but the truth, but it's not the whole truth.


The question of how much control a central ownership hub should exert over foreign subsidiaries is nothing new.  Dodgy things were done during World War II with regard to American-owned properties in Nazi Germany, at least up to the point when Germany blocked American assets there once the U. S. declared war on Germany. 


The U. S. isn't at war with China.  But one could be pardoned for wondering why China remains a major source of the fentanyl and fentanyl precursors consumed illegally in the U. S.  Perhaps it's in revenge for the Opium Wars, a sordid episode in the relationship between China and the West that forced China to open its doors to opium imported from British-colonized India in the 1800s.  Whatever the reason, some people doubt that China has the best interests of the U. S. at heart, and look with suspicion on the way ByteDance is consolidating control of TikTok in China.


Starting in the 1990s, globalizing free trade became a worldwide goal and, just as Adam Smith would have predicted, raised the living standards of billions of people around the globe.  Most of these were in the developing world, but Walmart wouldn't be able to sell most of its stuff as cheaply as it does if it weren't for China, so to that extent the U. S. benefited as well. 


But lately, we are seeing a variety of ways in which the pernicious effects of social media are becoming increasingly obvious—in the toxicity of political discourse, in the soaring rates of depression and suicide among young people, and in the general distractedness of the U. S. population.  A purely U. S. version of TikTok might not be much better than the one we have, but the fact that its strings are being pulled by Chinese masters adds a sinister look to an already fraught situation.


If sovereignty means anything, it means that a sovereign government can control the kind of activities and commerce that foreign-owned and foreign-operated enterprises conduct.  So as a theoretical matter, the U. S. would be entirely within its rights to ban TikTok outright, as India in fact has.  Yes, there would be a howl, but people would get over it.  And probably something similar to TikTok would spring up overnight and try to evade the ban. 


But that presumes a unity and coherence of action on the part of government which is notably absent today.  As with everything else, a serious movement to ban TikTok would become politicized, with Republicans (probably) favoring it and Democrats (probably) opposing the ban on account of free speech, or possibly even just because the Republicans favor it and they're opposed to whatever Republicans are in favor of.  And then the outcome would depend on which party controls the levers of power, unless there is a stalemate. 


That is the good old small-d democratic way, but social media itself has thrown numerous monkey wrenches in the formerly smooth operations of democratic governance.  I have never viewed TikTok, but by its reputation it doesn't seem that political.  (I wouldn't put it past the Beto for Texas Governor campaign to put an ad on it, though—he reportedly joined TikTok in March of 2020, just in time for COVID-19.) 


I begin to wonder whether we are ever going to get back to the former compromising and horse-trading that went on when U. S. politicians knew how to condemn the other side in fiery speeches and then join their opposite-aisle colleagues at the bar after work for a friendly chat about how to wrangle out legislation that would leave most parties at least partly satisfied.  The current style of take-no-prisoners scorched-earth politics may make for entertaining sound bites, but it doesn't get much done.  Including banning TikTok, if in fact that is what we ought to do.


Sources:  The article "TikTok Execs Resign After Being Asked to Take Orders from Parent Company" originally appeared in National Review and was republished elsewhere.  I also referred to an article at Gizmodo, the website GoBankingRates, and the Wikipedia articles "TikTok" and "Censorship of TikTok." 

Monday, September 19, 2022

Is Nowhere Going Somewhere?


Back in August, I blogged on the reportedly poor esthetics of Meta's metaverse, the enterprise that Mark Zuckerberg has poured billions into.  Unlike most of my blogs, it attracted the attention of several people, including one Katie Rosin, who works as a publicist for an alternative metaverse company called Nowhere.  She offered me a chance to try out their platform, so last week I spent half an hour Nowhere in particular—literally. 


According to Rosin, Nowhere began when the event developer firm Windmill Factory faced a crisis caused by COVID-19.  "Event developer" is my phrase for describing an organization that, in their words, "manufactures sublime experiences."  For a younger generation that values experiences over objects, that's probably a good business to be in going forward.  Judging by the list of experiences on their website, they specialize in getting people together in a defined space to do something cool—hear a concert, maybe, or experience some sort of light-show-assisted event.  Without actually attending one of their events, I can't say what it's like, but I assume they don't use the word "sublime" lightly. 


Anyway, the Windmill people were planning their next big effort for April 2020 when you-know-what happened, and they faced the problem of doing events without people.  Thus was born the idea that ultimately became Nowhere.


The goal of Nowhere is to create interpersonal connections in a safe space.  Regarding safety, everybody who enters Nowhere has to be identifiable at least by a legitimate email and a password.  And your real name is available to anybody who happens to meet you in Nowhere. 


What does it take to get into Nowhere—special VR goggles?  No.  My middle-aged MacBook Pro running Chrome and a pair of headphones were all it took.  That's a big plus in my book already.


What do you see when you get there?  First there's a black screen with moving stars on it, sort of like the old Star Wars titles.  This is a screensaver to give the software time to boot up.  Then one of the stars gets bigger and you enter one of their many spaces. 


The first one I got to wasn't that impressive.  It was totally black except for things around a square perimeter that looked like electronic billboards.  I could move my point of view around with some simple key commands or mouse movements. 


When Katie showed up, she took the form of a "nonagram" (nine-sided roundish screen) inside of which I had a view like you would see on Zoom, just a real video of her seated at a desk.  But the way she entered the space reminded me of how Glinda the Good Witch arrives in Oz to rescue Dorothy—her nonagram zoomed in from a tiny spot and showed up in front of me.  What took MGM thousands of dollars of optical-printer equipment and hours of time can now be yours for free. 


A nonagram viewed from the side is shaped like a thin slice of a sphere, as if you cut the end off a tomato.  The flat part holds the screen and the back is just a black surface.  It floats above the ground—or whatever is beneath it—and its position and heading (in technical terms, four degrees of freedom: x, y, z, and yaw) are under the control of whoever's nonagram it is. 
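For readers who think in code, such a pose can be sketched as a tiny data structure.  This is purely my own illustration—the names and defaults below are invented and have nothing to do with Nowhere's actual software:

```python
from dataclasses import dataclass


@dataclass
class NonagramPose:
    """Hypothetical 4-degree-of-freedom pose: position plus yaw, no pitch or roll."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0        # height above whatever surface is below
    yaw_deg: float = 0.0  # heading; with no pitch or roll, the screen stays upright

    def turn(self, delta_deg: float) -> None:
        # Wrap into [0, 360) so repeated turns don't accumulate without bound.
        self.yaw_deg = (self.yaw_deg + delta_deg) % 360.0


pose = NonagramPose()
pose.turn(450.0)
print(pose.yaw_deg)  # -> 90.0
```

Leaving out pitch and roll is a sensible design choice for a video avatar: you can glide around and face any direction, but you can never end up upside down.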


One thing I wish they'd install is a mirror, so you could see your own nonagram.  Maybe there is one, but I didn't get to it.


Katie took me to a couple of spaces that were a lot nicer than the black one with the billboards.  One had what looked like a grassy hill with a giant apple on it, surrounded by blue skies.  She informed me it was supposed to be a cherry, not an apple.  Then we went to a forest (there was a selection of about six or eight such places on a pop-up panel at the bottom of the screen), which had a sort of wooden platform that she guided me on to.  The background there was very peaceful—trees in a fog, and bird sounds.


What that place reminded me of was the old video game Myst, which is somewhat legendary among persons of a certain age (it came out around 1993).  Myst consisted almost entirely of static scenes that the user could move through in a limited way—but what scenes!  The whole thing was a work of art that portrayed a kind of heightened reality that was the product of much study of how the real world really looks and sounds. 


To the extent that Nowhere's developers have followed that principle, they have my vote.  It's rather exciting being in on the early stages of yet another medium, if metaverses can be called that.  Zuckerberg obviously wants to dominate the new medium, but we may see a kind of PC-versus-Mac thing happen, in that the old, existing dominant platform may not be agile enough to come up with innovative approaches that lots of people prefer. 


I would much rather show up to other people as a kind of videoscreen on a tomato slice than as some crude animated caricature of myself.  And I'd also like to know who it is I'm talking with—really—rather than taking my chances on some wacko pretending to be a CEO or what have you. 


It's vitally important to get certain things right when an enterprise is small and starting out, and doesn't yet have a ton of legacy issues to deal with.  In principle, I suppose, we could redesign the Internet so that everybody on it could be tracked down instantly.  But such a redesign is a practical impossibility, now that the huge infrastructure is in place that allows anonymity at such a large scale. 


I wish the developers of Nowhere well, and hope that their good ideas about beauty, if not their actual platform, will positively influence whatever the metaverse becomes for most people most of the time.  Beauty is one of the three transcendentals, the other two being truth and goodness.  Anyone who ignores it in making something that could be beautiful is going against the design of the universe, and will face the consequences. 


Sources:  The looping video on Nowhere's website gives you a good idea of what it's all about.  I also referred to the website of the Windmill Factory.


Monday, September 12, 2022

You Don't Compute—Or Do You?


Technology leaders from Bill Gates to Elon Musk and others have warned us in recent years that one of the biggest threats to humanity is uncontrolled domination by artificial intelligence (AI).  In 2017, Musk said at a conference, "I have exposure to the most cutting edge AI, and I think people should be really concerned about it."  And in 2019, Bill Gates stated that while we will see mainly advantages from AI initially, ". . . a few decades after that, though, the intelligence is strong enough to be a concern."  And the transhumanist camp, led by such zealots as Ray Kurzweil, seems to think that the future takeover of the universe by AI is not only inevitable, but a good thing, because it will leave our old-fashioned mortal meat computers (otherwise known as brains) in the junkpile where they belong. 


So in a way, it's refreshing to see a book come out whose author stands up and, in effect, says "Baloney" to all that.  The book is Non-Computable You:  What You Do that Artificial Intelligence Never Will, and the author is Robert J. Marks II. 


Marks is a practicing electrical engineer who has made fundamental contributions in the areas of signal processing and computational intelligence.  After spending most of his career at the University of Washington, he moved to Baylor University in 2003, where he now directs the Walter Bradley Center for Natural and Artificial Intelligence.  His book was published by the Discovery Institute, which is an organization that has historically promoted the concept of intelligent design. 


That is neither here nor there, at least to judge by the book's contents.  Those looking for a philosophically nuanced and extended argument in favor of the uniqueness of the human mind as compared to present or future computational realizations of what might be called intelligence, had best look elsewhere.   In Marks's view, the question of whether AI will ever match or supersede the general-intelligence abilities of the human mind has a simple answer:  it won't. 


He bases his claim on the fact that all computers do nothing more than execute algorithms.  Simply put, algorithms are step-by-step instructions that tell a machine what to do.  Any activity that can be expressed as an algorithm can in principle be performed by a computer.  Just as important, any activity or function that cannot be put into the form of an algorithm cannot be done by a computer, whether it's a pile of vacuum tubes, a bunch of transistors on chips, quantum "qubits," or any conceivable future form of computing machine. 
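As a concrete illustration (mine, not the book's), Euclid's method for finding a greatest common divisor is the canonical example of an algorithm: a finite list of fully specified steps that any machine can carry out without understanding anything about what it is doing.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one mechanical step until done."""
    while b != 0:
        a, b = b, a % b  # each step is completely specified; no judgment involved
    return a


print(euclid_gcd(48, 18))  # -> 6
```

Whether the steps are executed by vacuum tubes, transistors, or qubits makes no difference to Marks's argument; what matters is that the entire task reduces to such a recipe.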


Some examples Marks gives of things that can't be done algorithmically are feeling pain, writing a poem that you and other people truly understand, and inventing a new technology.  These are things that human beings do, but according to Marks, AI will never do. 


What about the software we have right now behind conveniences such as Alexa, which gives the fairly strong impression of being intelligent?  Alexa certainly seems to "know" a lot more facts than any particular human being does. 


Marks dismisses this claim to intelligence by saying that extensive memory and recall doesn't make something intelligent any more than a well-organized library is intelligent.  Sure, there are lots of facts that Alexa has access to.  But it's what you do with the facts that counts, and AI doesn't understand anything.  It just imitates what it's been told to imitate without knowing what it's doing. 


The heart of Marks's book is really the first chapter, entitled "The Non-Computable Human."  Once he makes clear the difference between algorithmic tasks and non-algorithmic tasks, it's just a matter of sorting.  Yes, computers can do this better than humans, but computers will never do that. 


There are lots of other interesting things in the book:  a short history of AI, an extensive critique of the different kinds of AI hype and how not to be fooled by them, and numerous war stories from Marks's work in fields as different as medical care and the stabilization of power grids.  But these other matters are mostly a lot of icing on a rather small cake, because Marks is not inclined to delve into the deeper philosophical waters of what intelligence is and whether we understand it quite as well as he thinks we do.


As a Christian, Marks is well aware of the dangers posed to both Christians and non-Christians by a thing called idolatry.  Worshipping idols—things made by one's own hands and substituted for the true God—was what got the Hebrews into trouble time and again in the Old Testament, and it continues to be a problem today.  The problem with an idol is not so much what the idol itself can do—carved wooden images tend not to do much of anything on their own—but what it does to the idol-worshipper.  And here is where Marks could have done more of a service in showing how human beings can turn AI into an idol, and effectively worship it. 


While an idol-worshipping pagan might burn incense to a wooden image and figure he'd done everything needed to ensure a good crop, a bureaucracy of the future might take a task formerly done at considerable trouble and expense by humans—deciding on how long a prison sentence should be, for example—and turn it over to an AI program.  Actually, that example is not futuristic at all.  Numerous court systems have resorted to AI algorithms (there's that word again) to predict the risk of recidivism for different individuals, basing the length of their sentences and parole status on the result. 
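Real sentencing tools are proprietary, so here is a purely hypothetical toy sketch of the pattern: every feature name and weight below is invented for illustration and bears no relation to any actual system.  The point is that the court sees only the final number, not the reasoning behind it.

```python
# Toy model only: feature names and weights are invented for illustration
# and bear no relation to any real sentencing tool.
WEIGHTS = {"prior_offenses": 0.6, "age_under_25": 0.3, "unemployed": 0.1}


def toy_risk_score(features: dict) -> float:
    """Weighted sum squashed into [0, 1); the rationale stays hidden in the weights."""
    raw = sum(WEIGHTS[name] * value for name, value in features.items())
    return raw / (1.0 + raw)


score = toy_risk_score({"prior_offenses": 3, "age_under_25": 1, "unemployed": 1})
print(round(score, 2))  # -> 0.69
```

Even in this three-line toy, a defendant handed "0.69" has no way to tell which input drove the number—and commercial systems bury the weights inside models vastly more opaque than a weighted sum.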


Needless to say, this particular application has come in for criticism, and not only by the defendants and their lawyers.  Many AI systems are famously opaque, meaning even their designers can't give a good reason for why the results are the way they are.  So I'd say in at least that regard, we have already gone pretty far down the road toward turning AI into an idol.


No, Marks is right in the sense that machines are, after all, only machines.  But if we make any machine our god, we are simply asking for trouble.  And that's the real risk we face in the future from AI:  making it our god, putting it in charge, and abandoning our regard for the real God.


Sources:  Robert J. Marks II's book Non-Computable You was published in 2022 by the Discovery Institute.  The Musk and Gates quotes and the story about AI used for sentencing guidelines come from news reports.  I also referred to Wikipedia for biographical information on Marks. 

Monday, September 05, 2022

The Thin Line of Trust: China Eastern Airlines Flight 5735


Back in March, we blogged about China Eastern Airlines Flight 5735, which crashed on March 21 during a flight from Kunming to Guangzhou, killing all 132 people on board.  At the time, it was too early to draw any conclusions, as the investigations had just begun and the flight data recorders had not yet been recovered.  Within days, however, the voice and data recorders were found, and the data recorders were sent to the U. S. National Transportation Safety Board (NTSB) for analysis.


In April, rumors began to circulate in China that the crash was caused deliberately by someone on the flight deck.  These rumors were substantiated when several U. S. news outlets, including the Wall Street Journal and ABC News, reported in May that U. S. officials had determined that someone in the cockpit had pushed the control column forward to initiate the dive from 29,000 feet that led to the crash.  The Civil Aviation Administration of China (CAAC) has neither confirmed nor denied these reports, while grumbling that "unofficial speculation" can interfere with the ongoing investigation.  Nevertheless, until further official information is made available, it looks like deliberate action on the part of someone in the cockpit may well have caused the crash.


The Wikipedia article on the crash lists the three members of the flight crew:  Captain Yang Hongda, who had been a Boeing 737 pilot since 2018; First Officer Zhang Zhengping, an award-winning commercial pilot with more than a decade of experience, including the training of 100 other pilots; and Ni Gongtao, a trainee with less than 600 hours of flight experience whose official duties were simply to observe the more experienced pilots. 


The psychology of a flight crew is a somewhat neglected but vital aspect of the smooth functioning of the team, who must cooperate effectively under both routine and emergency conditions.  Any time there is more than one person involved in a situation, there will be questions of authority and precedence.  That is why the very titles of the flight crew indicate a precedence of authority, the captain being in charge of both first and second officers. 


An excessively rigorous adherence to the priorities of rank can be detrimental, as the 1997 crash of Korean Air Flight 801 illustrates.  Despite errors the captain of that flight made in his approach to the Guam airport, he was not challenged by the other two members of the flight crew until six seconds before the crash, by which time it was far too late to do anything.  Since that time, Korean Air and other flight organizations have emphasized that the authority of the captain is not absolute, and if the other members of the flight crew see that the captain has made a mistake, they should take positive action to correct it.


But in the case of Flight 5735, it would be hard to believe that deliberate action to crash the plane would be taken by more than one of the three flight-crew members.  If we assume that only one of the three men on the flight deck decided to crash the plane, that raises several hard questions.


First, one would think that two men determined to save themselves and the passengers could overpower one man bent on destroying the plane.  While I have no details of how a 737 cockpit is arranged, it's hard to imagine a way that one man could impose his will on the others and remain at the controls, if the other two were determined to stop him.


As long as we're imagining things, suppose the suicide pilot, as we'll call him, somehow smuggled a firearm along with him and threatened to shoot anyone who interfered with him?  That would be awkward, but conceivable.  And it's not clear whether pilots go through the same security checks that passengers do, and if they do, how easy it would be to evade them in order to carry a gun on board.


Neither of those scenarios seems very credible.  An interesting fact from the record of the flight before the crash is that it briefly leveled off around 8,000 feet before continuing its plunge into the mountains.  This might indicate a temporary turn for the better in the cockpit battle for the controls. 


Another possibility is that the suicide pilot shot or otherwise disabled the other two crew members before implementing his flight to doom.  This almost makes more sense, but it still leaves open the question of how he was able to disable them:  a gun?  Some kind of spray?  A struggle would still have to take place.


A second question is, which of the three flight crew members may have done it?  The award-winning Zhang Zhengping would seem least likely, having invested his life in his career.  The CAAC investigated the backgrounds of all three of the crew and found nothing unusual such as outstanding debt or personal troubles that would obviously account for suicidal intent. 


A third possibility is that someone from the passenger area broke into the cockpit and forced the plane to the ground.  This involves the question of how mechanically difficult such a feat might be. 


A cursory Internet search reveals that there are no private bathrooms in airliner cockpits, meaning that the door to the passenger area has to be open to allow pilots to answer calls of nature.  Updated regulations after 9/11 mandate that at least two crew members must be in the cockpit at all times, so for example, only one pilot on Flight 5735 could leave the cockpit at a time.  A patient terrorist with a first-class seat having a view of the cockpit door could therefore wait until the door opened and make a threatening move, perhaps holding a flight attendant hostage at knifepoint (assuming he could smuggle a knife on board).  But he would still have to overpower three determined flight crew members to do the dastardly deed.


Well, I think we've had enough of these dismal speculations for one column.  Suffice it to say that deliberate human action looks like the most likely explanation for the fate of Flight 5735.  We may never know much more than that, unless there are clues in the cockpit voice recorders that remain to be unveiled.  Despite all the modern technology that is deployed to ensure air safety, as long as people fly the planes, we have to trust those people.  And once in a very great while, someone decides to betray that trust.


Sources:  I referred to the article "Flight data suggests China Eastern plane deliberately crashed:  Wall Street Journal report" posted on May 18, 2022.  I also referred to the Wikipedia article on Flight 5735.   

Monday, August 29, 2022

Tesla Service Is In a Fix


A recent article reports that about a fifth of Tesla owners who took their vehicles in for servicing were unhappy with how long the process took.  And investigative reporters at Vox have perused over a thousand complaints about Tesla to the Federal Trade Commission, revealing a variety of problems with a type of vehicle that was supposed to all but do away with automotive repair shops.


The Better Business Bureau has received over 9,000 reports on Tesla, many having to do with faulty or delayed service, inadequate supplies of spare parts, poor communication with the customer, and poor manufacturing quality in the first place. 


One factor that could mitigate bad service is that electric vehicles (EVs) supposedly need a lot less service than internal-combustion (IC) cars do.  And while that may be the case eventually, Vox quotes a representative of the consumer-research firm J. D. Power as saying Tesla owners currently need service about as frequently as owners of conventional cars do.  That doesn't help if the experience with Tesla service is a bad one.


In Tesla's defense, tallies of its service centers may not include the mobile units that make service calls at the customer's home or business.  And because so much of the Tesla functionality is computerized, remote software upgrades, diagnosis, and repairs are often done without any need to bring the physical car into a shop.  Even so, there are enough purely mechanical or electrical problems that necessitate a trip to the service center to warrant many times the number of locations that Tesla presently operates.


Some independent garages are starting to work on Teslas, but this can cause problems with the car's warranty, and Elon Musk has gone on record as opposing the right-to-repair movement that seeks to break up manufacturers' monopolies on service.


As every general knows, maintenance and repair are a vital part of any mechanized military effort.  A lack of spare parts can defeat an army as effectively as enemy fire.  Tesla is in a unique and transient situation, as they have exploited a cultural trend against fossil fuels that is especially popular among the upper classes who can afford electric cars, and done an end run around the established automakers by reinventing carmaking from scratch and dominating the U. S. EV market. 


But people who wear Rolexes and drive Teslas are not happy when they have to wait a month for a service appointment and then find it's canceled at the last minute.  At the present time and for some time in the future, most people will look at the prospect of an all-electric car and decide the questionable advantages are not worth the extra cost and the other problems:  limited range, scarcity of charging stations, and now, unreliable service.  So while Tesla has done a good job exploiting the low-hanging market fruit of relatively wealthy and environmentally-conscious customers, that market may be close to saturation.


To people who simply want a reliable and economical means of transportation, service is at least as important as features.  And no matter how well-built a machine is initially, sooner or later something will go wrong and a qualified person will have to get their hands on the car to fix it. 


I will confess to some feeling of nostalgia as I take my seventeen-year-old Element in to the independent shop that is used to seeing me every few months as yet another piece of it reaches the end of its service life, and thinking, "Gee, with the coming of EVs, all these garages will go the way of the village blacksmith before the Model T."  Now I'm not so sure. 


For one thing, electric vehicles are really hard on tires.  The sudden torque strains and the added battery weight will destroy ordinary car tires in short order, so EV tires have to be specially designed to take the added punishment.  And most Teslas don't have spare tires—don't ask me why, but they don't.  Maybe the car's too heavy to safely change a tire on the road. 


While the electronics industry has gotten us used to the concept—for good or ill—of using it and throwing it away when it breaks or a software upgrade renders it useless, a $45,000 investment can't be treated like that.  Lots of people out there keep their cars for five, ten, or fifteen years, and a century's worth of progress in auto manufacturing has made that kind of longevity possible. 


Even if everything else on a Tesla worked perfectly forever, the battery has a known and limited lifetime.  So far, the relative scarcity and prestige of Teslas make it economical to replace worn-out batteries, but this may not always be the case.  Life-cycle engineering takes a holistic view of a product from inception to end-of-life disposal or preferably recycling.  With Tesla's current focus on getting cars out the door, it looks like the company has neglected the maintenance part of the cycle.  This may well be a temporary condition, though.


If I were a young gearhead wanting to open my own automotive service center, I might well choose to specialize in Teslas and take my chances that Tesla wouldn't shut me down.  Because the demand is certainly there, and it doesn't sound like Tesla is anywhere close to meeting it with their own service centers.


Sources:  The Vox article "Missing parts, long waits, and a dead mouse:  the perils of getting a Tesla fixed" by Rebecca Heilweil appeared on Aug. 24, 2022.  

Monday, August 22, 2022

Averse to the Metaverse


The company Meta, formerly known as Facebook, sparked some criticism after it opened its flagship "Horizon Worlds" virtual-reality (VR) platform to France and Spain last week.  According to an article in Slate, new European users were disappointed in the graphics, to say the least.  And judging by the screen shots provided in the article, they have a point.  Adjectives like "bland," "cartoonish," and "slightly weird" come to mind.  In common with many VR systems, only the upper bodies of human avatars appear.  I've never understood the reason for this myself, but one user speculated that it was so that nobody online can have sex. 


At any rate, it doesn't sound like the French are going to start holding meetings of the Académie Française in Horizon Worlds any time soon.  But Mark Zuckerberg can live without the Academy's forty members if he can get several million mere mortals to join.


Meta has put a huge chunk of its colossal resources into its metaverse venture, some $13 billion, and you can be sure the firm is not doing that for fun.  Its vision is that as Internet bandwidth and access increase, you will be able to don a VR headset and basically live life online:  working, playing, even exercising (I suppose, but this might present problems unless you're doing it on some physical treadmill tied to your VR system).  As for sex, there's plenty of that on the Internet already, so maybe Meta is staying out of that area purely for business reasons. 


There is a theory in the history-of-technology field called "technological determinism."  It basically says that technology has its own built-in direction, and once a technology develops into a feasible, marketable form, there's no stopping it.  Its development path and growth are intrinsic to the nature of the technology, and human factors and influences count for nothing.  The term was coined mainly by people who didn't believe in it, as a way of criticizing those who appeared to support it, and I've never encountered a pure card-carrying technological determinist. 


Nevertheless, a lot of stories of how technologies developed tend to make their success seem inevitable in retrospect.  I think this betrays a lack of imagination on the part of the storyteller, and some of the best histories of technology I have encountered look at failed technologies that might have succeeded if certain almost random factors had gone the other way. 


With "Horizon Worlds" we are witnessing the first baby steps of a new technology which Meta, at least, hopes will inevitably dominate the Internet and become as much a part of our lives as mobile phones—and all they can do—have become today.  And, as any good public corporation will try to do, Meta intends to turn this into cash—lots of cash.


The Slate article quotes a University of Virginia professor's dark prophecy that Meta hopes to "monitor, monetize, and manage everything about our lives."  While this is no doubt an exaggeration, it's hard to deny that the picture conjured up by proponents of VR, especially the Meta-style of VR, seems to aspire toward a kind of totalizing, all-inclusive situation in which people would take off their VR headsets only to attend to physical needs, like eating, going to the bathroom, and (maybe) sex.  And sleep, unless we manage to genetically engineer our way out of that little necessity.


In promulgating the Meta vision, Zuckerberg and his colleagues suffer from a problem they have in common with many tech-savvy leaders who combine awesome technical and business skills with the philosophical understanding of ten-year-old boys.  There are various answers to the question, "What are humans for?"  Different cultures and religions come up with different answers, but the worst response of all to that question is to ignore it altogether. 


Any entity whose business operations affect millions or even billions of people, as Facebook/Meta's do, should consider seriously what its model for human flourishing is.  And the bigger the firm is, the more seriously it should consider that question. 


Unfortunately, this rarely happens.  Instead, a business like Meta is in practice steering between two guardrails on opposite sides of a wide road.  One guardrail is profitability:  if the firm ceases to make money, something has to be done before losses kill the firm.  The other guardrail is a combination of law and public criticism.  Even a profitable business can go out of existence if its leaders end up in jail or become social pariahs, as the Weinstein Company did when its head Harvey Weinstein was accused (and eventually convicted) of sexual misdeeds.  As long as a firm avoids hitting these two guardrails, its leaders will consider it a success, and will keep doing whatever they feel is necessary to keep going, regardless of the firm's effects on the souls of its millions of customers.


Aesthetics is more than just a minimum standard that has to be met before people will use a platform like "Horizon Worlds."  Aesthetics is another word for beauty.  In the past, advances in technology have led to the creation of beautiful things that make the world a better place.  Advances in building technology led to artistic creations such as the cathedrals of Chartres in France and Burgos in Spain.  Cathedrals were designed for the ordinary human, just like "Horizon Worlds" is.  But those who built the cathedrals of the Middle Ages were guided by a very specific vision of human purpose:  to encounter, and eventually to love, the Divine.  Their creations gained thereby a timelessness that motivates even a secular culture such as that of France's today to reconstruct Notre Dame after its disastrous fire, at a cost of hundreds of millions of dollars.


The metaverse can be a place of truth, beauty, and goodness.  But those values will be realized only if the people designing it make them intentional goals.  On the other hand, if their guides are only the two guardrails of profit and avoiding jail or pariah status, they are likely to forge an erratic path that may lead millions of people to places they wish they hadn't gone to.


Sources:  The article "Why the Metaverse Has to Look So Stupid" by Nitish Pahwa appeared on Slate's website on Aug. 19, 2022 at 

Sunday, August 14, 2022

AI Illustrator Still Looking for Work: The Shortcomings of DALL-E 2


The field of artificial intelligence has made great strides in the last couple of decades, and any time a new AI breakthrough is announced, critics voice concerns that yet another field of human endeavor has fallen victim to automation and will disappear from the earth when machines replace the people who do it now. 


Until recently, the occupation of art illustrator seemed reasonably safe from assault by AI innovations.  Only a human, it seemed, could start with a set of ordinary-language instructions and come up with a finished work of art that fulfills those instructions.  And that was mostly true, until AI researchers began tackling the problem.


The AI research lab OpenAI has publicized its work in this area using a "transformer" type of program that has gone through two versions so far, DALL-E and now DALL-E 2.  I'm sure the late surrealist artist Salvador Dali would be pleased at the honor of having a robot artist named after him, but the connection between his works and the productions of DALL-E 2 is perhaps closer than the researchers would like.  In an article in IEEE Spectrum, journalist Eliza Strickland highlights the shortcomings of DALL-E 2's productions and speculates on whether such systems will ever be generally useful.


As is the case with most such human-task-imitating systems, the first step the researchers took was to compile a large number of examples (650 million in the case of DALL-E 2) to train the software about what illustrations look like.  The images and their accompanying descriptive texts were all from the Internet, which has its own biases, of course.  The researchers have learned some lessons from previous fiascos with AI software that allowed random users to request sketchy or offensive products, so they have been very careful about pre-screening the training images and (in some cases) manually censoring what DALL-E 2 comes up with.  They have also not released the program for general use yet, but have allowed only carefully selected users to try it under controlled conditions.  If you just let your imagination roam with the scenario of letting some randy teenage boys loose with a program that will make a picture of whatever they describe to it, you can see the potential for abuse.


For certain purposes, DALL-E 2 does fine.  If you want a generic type of picture that would look good as a filler for a brochure about a meeting, and you just want to show some people in a corporate setting, DALL-E 2 can do that.  But so can tons of free clip-art websites.  If you want an image that conveys specific information, however—a diagram, say, or even text—DALL-E 2 tends to fall flat on its digital face.  The Spectrum journalist was privileged to make a few text requests to DALL-E 2 for specific images.  She asked for an image of "a technology journalist writing an article about a new AI system that can create remarkable and strange images."  She got back three photo-like pictures, but they were all of guys, and only one seemed to have anything to do with AI.  Then she asked for "an illustration of the solar system, drawn to scale."  All three results had a sun-like thing somewhere, and planet-like things, and white circular lines on a black background showing the orbits, but the number of planets and what they looked like were pretty random.  So technical illustrators (those who are left after software like Adobe Illustrator has made every man or woman his or her own illustrator) need not file for unemployment insurance right away.  DALL-E 2 has a way to go yet.


There is a fundamental question lurking in the background of AI exploits like this.  It can be phrased a number of ways, but it basically amounts to this:  will AI software ever show something that amounts to human-like general intelligence?  And believe it or not, your humble scribe, along with Gyula Klima, a philosopher then at Fordham University, recently published a paper addressing just that question, and we concluded that the answer was "No."


As you might guess when a philosopher gets involved, the details are somewhat complicated.  But we began with the notion that the intellect, which is a specific power of the human mind, relies on the use of concepts.  In the limited space I have, I can best illustrate concepts with examples.  The specific house I live in is a particular thing.  There is only one house exactly like mine.  I can remember it, I can form a mental image of it, and I can even imagine it with a different color of trim than it actually has.  And software programs can do what amounts to these sorts of mental operations as well.  In my mind, my house is a perception of a real, individual thing.


By contrast, take the concept of "house."  Not my house or your house, just "house."  Any mental image you have that is inspired by "house" is not identical to "house"—it's only an example of it.  The idea or concept denoted by the word "house" is not reducible to anything specific.  The same goes for ideas such as freedom or conservatism.  You can't draw a picture of conservatism, but you can draw a picture of a particular conservative. 


In our paper, we gave strong evidence in favor of the notion that because concepts cannot be reduced to representations of individual things, AI programs will never be able to use them.  In the article, we used examples from an art-generating AI program that in some ways resembles DALL-E 2, in that it was trained on thousands of artworks and then made to generate artwork-like images.  The results showed the same kind of fidelity to superficial details and total absence of underlying coherence that DALL-E 2's productions showed.  As one of the OpenAI researchers quoted in the Spectrum article noted, "DALL-E doesn't know what science is . . . . [S]o it tries to make up something that's visually similar without understanding the meaning."  That is, without having any concept of what it's doing.


The big question is whether further research in AI will produce programs that truly understand concepts, and use that understanding to guide their production of art, text, or what have you.  Klima and I think not, and you can look up our article to understand why.  But we may be wrong, and only time and more AI research will tell.


Sources:  Eliza Strickland's article "DALL-E 2's Failures Reveal the Limits of AI" appeared on pp. 5-7 of the August 2022 print issue of IEEE Spectrum.  "Artificial intelligence and its natural limits" by Karl D. Stephan and Gyula Klima appeared in vol. 36, no. 1, pp. 9-18, of AI & Society in 2021.