Monday, October 31, 2016

Zombie Cameras On the Internet of Things

On Friday, Oct. 21, millions of Internet users trying to access popular websites including Twitter, Netflix, the New York Times, and Wired suddenly saw them stop working.  The reason was that for a few hours, a massive distributed-denial-of-service (DDOS) attack hit Dyn, a domain-name-system (DNS) company based in New Hampshire.  As I mentioned in last week's blog, DNS companies provide a sort of phone-book service that turns human-readable URLs into the machine-readable addresses that connect the person requesting a website to the server that hosts it.  They are a particularly vulnerable part of the Internet, because a single DNS server can handle requests for thousands of websites, so taking that one machine down cuts off access to all of those websites for as long as it stays out of service.
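The phone-book analogy is easy to see from any programming language's standard library.  Here is a minimal sketch in Python; the hostname is just an illustration, and any name your browser can reach would work the same way:

```python
# A DNS lookup in one call: the operating system asks a resolver
# (ultimately a DNS server like the ones Dyn operates) to translate
# a human-readable hostname into the numeric address computers use.
import socket

def lookup(hostname):
    """Return the IPv4 address the resolver reports for hostname."""
    return socket.gethostbyname(hostname)

print(lookup("localhost"))  # typically 127.0.0.1
```

If the resolver that handles a name is knocked out, this call simply fails, and so does every connection that depends on it.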

DDOS attacks are nothing new, but the Oct. 21 attack was the largest yet to use primarily Internet-of-Things (IoT) devices in its "botnet" of infected devices.  The Internet of Things is the proliferation of small sensors, monitors, and other devices less fancy than a standard computer that are connected to the Internet for various purposes. 

Here's where the zombie cameras come in.  Say you buy an inexpensive security camera for your home and get it talking to your wireless connection.  If you're like millions of other buyers of such devices, you don't bother to change the default password or otherwise enhance the security features that would prevent unauthorized access to the device, as you might do if you bought a new laptop computer.  Security experts have known for some time about a type of malware called Mirai that takes over poorly protected always-on IoT devices such as security cameras and DVRs.  When the evil genius who sent out the Mirai malware sends a signal to the infected gizmos, they all start spewing requests at the targeted DNS server, which immediately gets buried in requests and can't respond to anybody.  That is what a DDOS attack is.
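The way Mirai-style malware finds its victims is nothing clever: it simply tries a short dictionary of factory-default logins against devices it finds online.  A defensive version of the same idea, checking whether your own device is still using a known default, can be sketched in a few lines of Python.  The credential list below is purely illustrative, not Mirai's actual dictionary:

```python
# Hypothetical defensive check: is a device still using a factory-default
# login of the kind Mirai-style malware tries?  The pairs below are
# illustrative examples only, not taken from any real malware.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
    ("user", "user"),
}

def is_vulnerable(username, password):
    """Return True if the login pair matches a known factory default."""
    return (username, password) in DEFAULT_CREDENTIALS

print(is_vulnerable("admin", "admin"))     # True: still on the default
print(is_vulnerable("admin", "x7#kQ9!b"))  # False: owner changed it
```

A device that fails this check can be conscripted by the first scanner that finds it.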

As the victim learns the nature of the requests, programmers can mount a defense, but skillful attackers can foil these defenses too, for a time, anyway.  The attackers went away after three attacks that day, each lasting a couple of hours, but by then the damage had been done.  The attacks made significant dents in the revenue streams of a number of companies.  And perhaps most importantly, we learned from experience that the much-ballyhooed Internet of Things has a dark side.  The question now is, what should we do about it?

Sen. Mark Warner, a Democrat from Virginia, has reportedly sent letters to the FCC and other relevant federal agencies asking that same question.  According to a report on the website Computerworld, Warner has a background in the telecom industry and recognizes that government regulation may not be the best answer.  For one thing, Internet technology can change so fast that by the time a legislative or administrative process finally produces a regulation, it can be outmoded even before it's put into action.  Warner thinks that the IoT industries should develop some kind of seal of security approval or rating system that consumers could use to compare prospective IoT devices before they buy.

This may get somewhere, and then again it may not.  The reason is that an IoT device that can be used in a DDOS attack, but otherwise functions normally as far as the consumer is concerned, is a classic case of what economists call an "externality."

A more familiar externality involves the air-pollution abatement devices on cars:  catalytic converters, the diesel exhaust fluid that truckdrivers now have to buy, and all that stuff.  None of it makes your car run better; in fact, cars can get better mileage or performance without the anti-pollution equipment working, as Volkswagen knew when it purposely disabled the anti-pollution function on some of its diesel models and turned it on only to pass government inspections.  The pollution your car would cause without that equipment is the externality.  If you don't have anti-pollution stuff on your car, you're adding a tiny bit to the air pollution that everybody in your city has to breathe.  Your own contribution is so small that you won't notice it; only when you add up the contributions of the millions of cars in a city does it become a problem.  It's that involuntary aspect, the fact that other people are put at a disadvantage because of your action (or inaction), that makes it an externality.

The vulnerability of IoT devices to being used in DDOS attacks is an externality of a similar kind.  When you buy and install a security camera, or rent a DVR from your cable company, and the device doesn't have enough security built in to prevent it from being used in a DDOS attack, you're raising the risk of such an attack for everybody on the Internet.  And they don't have a choice in the matter.

Historically, externality problems such as air and water pollution have been resolved only when the government gets involved at some level.  When an externality problem is strictly local, local political pressures can sometimes resolve it, but the Internet is by its nature a global thing (although for reasons that are not entirely clear, the Oct. 21 attacks affected mainly East Coast users).  So my guess is that fixing this issue will require national or international governmental cooperation to set some rules and fix minimum standards for IoT devices regarding this specific problem.

The solutions are not that hard technically:  things like attaching a unique username and password to each IoT device and designing them to receive security updates.  These measures are already in place for conventional computers, and as IoT devices get more sophisticated, the additional cost of these security measures will decline to the point that it will be a no-brainer, I hope. 
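One of those measures, a unique password for each device rather than one shared factory default, takes only a few lines to sketch.  This is a hypothetical illustration of what a manufacturer could do at the factory, not a description of any vendor's actual process:

```python
# Sketch of "unique password per device": generate an unguessable
# random password at manufacture time, instead of shipping every unit
# with the same shared default that malware dictionaries already know.
import secrets

def factory_password(nbytes=9):
    """Generate a random, URL-safe per-device password (9 bytes -> 12 chars)."""
    return secrets.token_urlsafe(nbytes)

pw1, pw2 = factory_password(), factory_password()
print(pw1, pw2)  # two different random strings, one per device
```

Printed on a sticker on each unit, such a password costs the manufacturer almost nothing and defeats the default-credential scanning that botnets rely on.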

But right now there are millions of the gizmos out there that are still vulnerable, and it would be very hard to get rid of them by any means other than waiting for them to break or get replaced by new ones.  So we have created a serious security problem that somebody, somewhere has figured out how to take advantage of.  Let's hope that the Oct. 21 attack was the last big one of this kind.  But right now that's all it is—just a hope.

Sources:  I referred to the article "What We Know About Friday's Massive East Coast Internet Outage" by Lily Hay Newman of Wired, and the Computerworld article "After DDOS attack, senator seeks industry-led security standards for IoT devices" by Mark Hamblen.  I also referred to the Wikipedia articles on "externality" and "Mirai" (which means "future" in Japanese).

Monday, October 24, 2016

The Day The Internet Goes Down

This hasn't happened—yet.  But Bruce Schneier, an experienced Internet security expert with a track record of calling attention to little problems before they become big ones, is saying he's seeing signs that somebody may be considering an all-out attack on the Internet.  In an essay he posted last month called "Someone Is Learning How to Take Down the Internet," he tells us that several Internet-related companies which perform essential functions such as running domain-name servers (DNS) have come to him recently to report a peculiar kind of distributed denial-of-service (DDOS) attack.

For those who may not have read last week's blog about ICANN, let's back up and do a little Internet 101.  The URLs you use to find various websites end in domain names—for example, .com or .org.  One company that has gone public on its own with some limited information about the attacks is Verisign, a Virginia-based firm whose involvement with the Internet goes back to the 1990s, when it served for a while as a kind of Internet telephone book for every domain ending in .com, before ICANN, now an internationally-governed nonprofit organization, took over that job.  Without domain-name servers, networked computers can't figure out how to find websites, and the whole Internet communication process pretty much grinds to a halt.  So the DNS function is pretty important.

As Schneier explains in his essay, companies such as Verisign have been experiencing DDOS attacks that start small and ramp up over a period of time.  He likens them to the way the old Soviet Union used to play tag with American air defenses and radar sites in order to see how good they were, in case it ever had to mount an all-out attack.  From the victim's point of view, a DDOS attack would be like being an old-fashioned telephone switchboard operator and having all your incoming-call lights light up at once—for hours, or however long the attack lasts.  It's a battle of bandwidths, and if the attacker generates enough dummy requests over a wide enough bandwidth (meaning more servers and more high-speed Internet connections), the attack overwhelms the victim's ability to keep answering the phone, so to speak.  Legitimate users of the attacked site are blocked out and simply can't connect as long as the attack is effective.  If a critical DNS server is attacked, there's a good chance that most of the domain names it serves will also disappear for the duration.  That hasn't happened yet on a large scale, but some small incidents have occurred along these lines recently, and Schneier thinks that somebody is rehearsing for a large-scale attack.
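The switchboard analogy can be put in toy-model form.  The sketch below is a deliberately crude assumption, that a server answers a fixed number of requests per second and picks them without regard to who sent them, but it shows why sheer volume is enough to shut legitimate users out:

```python
# Toy model of a DDOS: a server can answer only `capacity` requests
# per second.  When total traffic exceeds capacity and requests are
# served in proportion to their share of the traffic, legitimate
# users get through only rarely.
def fraction_served(legit_rate, attack_rate, capacity):
    """Fraction of legitimate requests answered per second."""
    total = legit_rate + attack_rate
    if total <= capacity:
        return 1.0          # quiet day: everyone gets through
    return capacity / total  # attack: each request has an equal chance

print(fraction_served(1_000, 0, 10_000))        # 1.0 on a normal day
print(fraction_served(1_000, 990_000, 10_000))  # about 0.01 under attack
```

In this model the defender's only options are to grow `capacity` or to recognize and drop the attack traffic, which is exactly the arms race Schneier describes.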

The Internet was designed from the start to be robust against attack, but back in the 1970s and 1980s, the primary fear was an attack on the physical network, not one using the Internet itself.  Nobody goes around chopping up fiber cables in hopes of bringing down the Internet, because it's simply not that vulnerable physically.  But it's likely that few if any of the originators thought of the possibility that the Internet's strengths—universal access, global reach—would be turned against it by malevolent actors.  It's also likely that few of them believed in original sin, but that's another matter.

Who would want to take down the Internet?  For the rest of the space here I'm going to engage in a little dismal speculation, starting with e-commerce.  Whatever else happens if the Internet goes down, you're not going to be able to buy stuff that way.  Schneier isn't sure, but he thinks these suspicious probing attacks may be the work of a "state actor," namely Russia or China.  Independent hackers, or even criminal rings, seldom have access to entire city blocks of server farms, and high-bandwidth attacks like these generally require such resources.

If one asks the simple question, "What percent of retail sales are transacted over the Internet for these three countries:  China, the U. S., and Russia?" one gets an interesting answer.  It turns out that as of 2015, China transacted about 12.9% of all retail sales online.  The U. S. was next, at about 8.1%.  Bringing up the rear is Russia, at around 2%, which is where the U. S. was in 2004.  Depending on how it's done, a massive attack on DNS sites could be designed to damage some geographic areas more than others, and without knowing more details about China's Internet setup I can't say whether China could manage to cripple the Internet in the U. S. without messing up its own part.  But there is so much U. S.-China trade that Chinese exports would start to suffer pretty fast anyway.  So there are a couple of reasons that if China did anything along these lines, they would be shooting themselves in the foot, so to speak.

Russia, on the other hand, has much less in the way of direct U. S. trade, and while it would be inconvenient for them to lose the use of the Internet for a while, their economy, such as it is, would suffer a much smaller hit.  So based purely on economic considerations, my guess is that Russia would have more to gain and less to lose in an all-out Internet war than China would.

A total shutdown of the Internet is unlikely, but even a partial shutdown could have dire consequences.  Banks use the Internet.  Lots of essential utility services, ranging from electric power to water and natural gas, use the Internet for what's called SCADA (supervisory control and data acquisition) functions.  The Internet has gradually become a critical piece of infrastructure whose vulnerabilities have never been fully tested in an all-out attack.  It's not a comfortable position for a country to be in, and in these days of political uncertainty and the waning of dull, expert competence in the upper reaches of government, you hope that someone, somewhere has both considered these possibilities in detail and figured out some kind of contingency plan to act on in case it happens.

If there is such a plan, I don't know about it.  Maybe it's secret and we shouldn't know.  But if it's there, I'd at least like to know that we have it.  And if we don't, maybe we should make plans on our own for the Day The Internet Goes Down.

Sources:  Bruce Schneier's essay "Someone Is Learning How to Take Down the Internet" is available online.  I obtained the statistics on the percentage of retail sales transacted online in the U. S., China, and Russia from several e-commerce statistics websites.  I also referred to the Wikipedia article on Verisign.

Monday, October 17, 2016

Internet Technical Governance: ICANN Says "I can," But Can It?

At a time when politics seems to have gotten into everything, like sand after a trip to the beach, it's not too surprising to hear that Senator Ted Cruz and some state attorneys general have seized upon a largely technical issue involving the Internet domain name system (DNS), specifically the transfer of supervision from the U. S. Department of Commerce to an independent non-profit organization called ICANN (Internet Corporation for Assigned Names and Numbers).  This matter highlights a little-known fact about engineers:  they often handle political matters a good deal better than many politicians do.

I still think the best definition of politics is one I heard from my eighth-grade civics teacher:  "Politics is just the conduct of public affairs."  In the nature of something as widespread and influential as the Internet, in one sense every issue affecting its operation and integrity is political, in that it could potentially affect every user.  But that is not the usual sense in which the word is used.

The facts of the issue are these.  When you type in a URL that uses letters that stand some chance of being understood by a normal human, the Domain Name System acts as a sort of phone book in which networked computers look up the numbers, linked to that URL, that computers actually use.  Up until a couple of weeks ago (Oct. 1, to be exact), certain operations pertaining to the assignment of domain names and other more technical matters were performed under the supervision of the U. S. Department of Commerce's National Telecommunications and Information Administration (NTIA), through a contract with the already-existing ICANN, a nonprofit organization based in California.  This tie to the U. S. government was viewed by some as a liability, in that it has led in the past to calls from Russia and China to transfer supervision of ICANN to a United Nations agency called the International Telecommunications Union (ITU).  (You can tell there are engineers involved by the number of alphabet-soup outfits in this piece.)  Partly to counter this, for many years both Democratic and Republican presidential administrations have been moving to cut the last formal ties between the Department of Commerce and ICANN, and finally a date was set:  October 1 of this year.

For reasons known best to themselves, but possibly having to do with businesses which were not happy with how domain-name disputes turned out, the attorneys general of the states of Arizona, Nevada, Oklahoma, and Texas filed suit to block the transfer.  But a federal judge denied the request, and for two weeks now ICANN has been running without its former Department of Commerce supervision.  I for one have not noticed any big changes, but it was never the kind of thing that was supposed to lead to the sudden appearance of massive censorship on the Internet in the first place.

While the assigning of Internet domain names and keeping them straight on the "root servers" could conceivably be manipulated for devious or sinister purposes, I am unaware of any major instances of this.  As numerous reports pointed out, Internet censorship of the type that goes on in China or Egypt from time to time is committed by the host governments, not ICANN, and there's nothing ICANN can do about it if a sovereign government chooses to pull their Internet plug.  I won't say that the concerns of Sen. Cruz and company are entirely without merit, but it's one of those things that can't be predicted in advance. 

So far, ICANN, and many other technical matters pertaining to the Internet, seem to have been run in a way that is familiar to many engineers, but little known outside the engineering community.  There is not a single term that describes this process, but the phrases "consensus," "just-in-time governance," and "ad-hoc committees" pertain to it.  It is most prominent in the development of engineering standards, which the Internet vitally depends on.

Many times in the course of engineering history a need for a standard has arisen.  Technology gives rise to a new capability—precisely-machined screw threads, or radio transmissions, or computer networks—but it will work in a widespread way only if the parties making and using the technology agree on certain standards, so that everybody's screws will fit, or everybody's computer can talk to the others without a lot of fuss.  So engineers have learned to form standards committees whose members bring both technical knowledge and an awareness of the interests of the private and public entities concerned with the new technology.  These committees are very lean organizations—usually the members' firms or departments pay for their participation, so there is little or nothing in the way of staff, buildings, or tax money involved.  The committee meets as long as it takes to figure out a standard, agrees on it, and then publishes its results, in effect saying, "If you want to play this new game, here are the rules."  The committee often disbands, and life goes on, only better than before, because now there's a new standard that engineers can use to implement a new technology.

Because these standards committees work almost entirely out of the public eye, most people don't even know they exist.  But without them, we wouldn't have, well, most of the highly sophisticated technology we have.  Wireless networks depend on standards.  The Internet depends on standards.  Electric power depends on standards (the battle of Westinghouse's AC versus Edison's DC was in large part an issue of standards).  And all these things get done almost invisibly, without much publicity or public expense.

Some political scientists have floated the idea of adapting the engineering-standard style of governance to more public matters, and they may have a point.  As anyone who has attended standards meetings can attest, they are not without controversy.  But by and large, standards organizations and technical outfits such as ICANN operate in this mode successfully and efficiently.  And unless future events prove otherwise, it's likely that the fears of Sen. Cruz and company will turn out to be groundless.

I hope ICANN can keep doing its generally good job without the Department of Commerce looking over its shoulder any more.  Instead of politicians making politics out of what looks to be a smoothly-functioning situation, perhaps we could encourage them to ask how engineers deal with technical matters that have political aspects, and learn something about how to get work done.  But at this point in history, it might be too much to ask.

Sources:  CNET carried two stories to which I referred on the transfer of ICANN to an independent status, one on Sept. 16 and one on Oct. 1.  I also referred to the Wikipedia article on ICANN.

Monday, October 10, 2016

115 Years Young?

Vannevar Bush, the head of the U. S. Office of Scientific Research and Development during World War II, tells the story of how during the war he was trying to gain more funding from Congress for medical research.  Hoping to further his cause, he convinced A. Newton Richards, President Roosevelt's chairman of the Committee on Medical Research, to testify in favor of more funding before a Congressional committee.  As Bush recounts, "It was towards the end of the war, and Richards was feeling tired and a bit old.  One of the congressmen asked him, 'Doctor, will all these researches you are carrying on tend to lengthen the span of human existence?'

'God forbid,' said Richards, smack into the record."

While not every medical researcher shares Dr. Richards's reluctance to extend human longevity, Dr. Jan Vijg of the Albert Einstein College of Medicine thinks he has discovered the true limit to how long humans can live.  It's about 115 years, he says.

According to a recent New York Times report, Dr. Vijg and his colleagues studied the mortality records of a number of countries to see which age group experienced the most rapid growth in recent decades.  As the general level of health care in industrialized countries has improved, the average lifespan has increased, but Dr. Vijg guessed that if there was a natural limit, it would show up first in the leveling off of the age of the fastest-growing group of old people.  For example, in the 1920s in France, 85-year-olds were the fastest-growing group, but by the 1990s that honor belonged to 102-year-olds.  In the last decade or so, the trend has stagnated at about 115.  There are exceptions, of course, such as Jeanne Calment, who before her death in France in 1997 at age 122 was fond of retelling her recollections of meeting Vincent Van Gogh.  But statistically, Dr. Vijg has strong evidence that no matter what specific diseases we conquer, we have a built-in expiration date of about 115 years.
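The descriptive statistic at the heart of the study, which age cohort is growing fastest, is simple enough to sketch.  The numbers below are invented for illustration and have nothing to do with the study's actual data:

```python
# Sketch of the study's approach with made-up numbers: find the age
# whose death count grew by the largest factor between two periods.
# A rising maximum-lifespan would keep pushing this age upward; a
# natural limit would make it level off.
def fastest_growing_age(deaths_then, deaths_now):
    """Return the age with the largest relative growth in deaths."""
    return max(deaths_now,
               key=lambda age: deaths_now[age] / deaths_then.get(age, 1))

deaths_1990 = {100: 200, 105: 80, 110: 10, 115: 2}   # invented counts
deaths_2010 = {100: 400, 105: 240, 110: 40, 115: 4}  # invented counts
print(fastest_growing_age(deaths_1990, deaths_2010))  # 110 (4x growth)
```

In the real data, this fastest-growing age climbed steadily through the twentieth century and then stalled near 115, which is the leveling-off Dr. Vijg's group reported.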

Lots of people disagree with Dr. Vijg, of course.  One of the scientists who collected data used in Dr. Vijg's study deplores his conclusion, calling it a "travesty."  But we should distinguish between a descriptive study, whose purpose is simply to give us insight into what is in fact happening, and a claim of proof.  Not even Dr. Vijg is claiming to have proved nobody can live longer than 115, if for no other reason than the fact of Ms. Calment's achievement.  What he presents is persuasive statistical data that, unless we discover the root causes of aging and get a handle on how to manipulate them, we are unlikely to push the maximum lifespan higher than 115.

It's curious, but a friend of mine has been going around for some time claiming approximately the same thing on the basis of a sermon he heard.  I didn't hear the sermon, but evidently the preacher was looking forward to living to 120 on the basis of a Bible verse in the Old Testament.

He was probably talking about Genesis 6:3, which reads in the King James Version, "And the Lord said, 'My spirit shall not always strive with man, for that he also is flesh:  yet his days shall be an hundred and twenty years.'"

Now, this passage occurs in the midst of a number of other sayings that are, to say the least, hard to interpret—things about the sons of God marrying the daughters of men, giants in the earth, and so on.  Some interpreters say this passage has nothing to do with a limit on human lifespans; rather, it refers to the time during which God put up with man's increasing misbehavior before he decided to put an end to it with the Flood, which only Noah and his fellow shipmates survived. 

Whatever the meaning of the Genesis passage, it has historically been a truism that everyone's going to die sooner or later, and society has been arranged with that assumption in mind.  Dr. Vijg's claim that we shouldn't expect to live longer than 115 years or so just confirms what nearly everyone assumes, and puts a number on it.

However, these ideas are rejected by a small but vociferous group called transhumanists, who increasingly put their faith in the idea that humanity is shortly going to figure out how to extend useful, fruitful life indefinitely.  They don't always say "forever," but many of them mean that.  One type of transhumanist, the "immortalists," seems to think in particular that we can figure out how to live forever.  While I understand why a person, especially one who doesn't believe in God, would get interested in extending human life—it's the only show in town, on their view—the danger in this movement is that in trying to move us toward a glorious paradisiacal future, they will unwittingly turn the present into Hell on earth.  This sort of thing went on during the Cold War, and continues in some places today, as millions were subjugated by Communist regimes which promised a wonderful future of abundance at the price of sacrificed freedoms now.  Lest we dismiss the transhumanists as a powerless fringe group, one of their leading lights, Ray Kurzweil, currently holds a high-level position at Google.

Perhaps the best thing is not to focus just on how long we can live, but how long we can live well.  Dr. Vijg takes this approach, saying that even if there is a natural limit of 115, there's a lot we can still do to prevent diseases such as Alzheimer's, osteoporosis, and other age-related maladies from robbing us of the enjoyment of those later years.  So the news that we can't live past 115 is not a counsel of despair, by any means.  Still, for those who think death is the end, it's not good news either.

However long we live, the length matters less than what we do with it.  Old paintings of philosophers would sometimes show a human skull prominently displayed in the philosopher's study.  The skull was a reminder that life is limited, and every minute is one of a finite number of minutes we will have, so we should make the most of them.  Some of the best advice along these lines, for believers and non-believers alike, is from Psalm 90, which says, "So teach us to number our days, that we may apply our hearts unto wisdom."

Sources:  The New York Times article describing Dr. Vijg's work appeared on Oct. 5, 2016.  The quotation from Dr. Richards is from Vannevar Bush's book of memoirs Pieces of the Action (NY:  William Morrow, 1970), p. 130.  The patience-of-God interpretation of Genesis 6:3 can be found online.  I also referred to the Wikipedia articles on transhumanism and Ray Kurzweil.

Monday, October 03, 2016

Freeing Information—But How?

Last week I attended a talk by philosopher William "Trey" Brant III on a crisis in the academic publishing world involving either massive data piracy, or a blow for the freedom of information, depending on your point of view.  I was particularly interested in this topic because earlier in the day, I'd found out that the electronic version of one of my books had been "cracked," meaning people could now download it for free. 

The phrase "information wants to be free" was coined by Whole Earth Catalog author Stewart Brand in a conversation with Steve Wozniak in 1984.  Brand was a techno-optimist who participated in one of the first online communities in the San Francisco Bay area, and hoped that computer networks would foster a kind of egalitarian new age of togetherness and sharing.  That was before big money got involved.

As Brant described, and as I have been able to confirm since, the world of academic publishing, specifically scientific and technical papers, is now an exceedingly profitable one.  Here is the way it works.  People who write academic papers are not often great entrepreneurs, although there are exceptions.  They are typically in a situation where they have to publish a certain number of papers a year to keep their jobs—the old "publish or perish" paradigm, which has taken hold in more and more universities and colleges throughout the world.  In the old pre-Internet days, there were only so many journals, because the expense of printing and shipping pieces of paper around was nontrivial, and so the whole enterprise had a kind of natural limit.  The main subscribers to academic journals were either libraries (which had to subscribe to all the big ones) or individual academics (who typically subscribed to only one or two journals that matched their specialized interests).  If you go back far enough, say before World War II, academic publishing of journals was a very small business, not worth the time it would take for any major publisher to fool with. 

But then big money came to the hard sciences, professional organizations grew, and the Internet came along.  Now all you need to set up a journal is a website and connections to some academics willing to edit the thing.  And many of the established journals have now been liberated from the hard-copy page limits and can easily publish 10,000 pages a year—it's just more bits, not more trees.  So it's gotten a lot cheaper to run an academic journal, but it hasn't gotten any cheaper to subscribe.

Publishers such as Elsevier (full disclosure:  my latest technical paper was published through an Elsevier journal) have their profitable cake and get to eat it too.  The academics who write their content send their papers in for free, or in some cases even pay the publishers page charges.  The other academics who review the papers also usually review for free.  And then the publisher gets to charge five-figure subscription fees to the libraries of the same institutions where the papers were generated.  It is indeed good to be an academic publisher these days.  At least until Alexandra Elbakyan came along.

In 2011, Elbakyan was a neurotechnology graduate student in Kazakhstan, dismayed by the charges she had to pay per research article to places like Elsevier unless she was affiliated with a university whose library had a subscription.  (Anybody can get individual articles from these publishers, but unless you have a connection to a research library, they will ask you to pay on the order of $30 US per article.  Academics such as myself are shielded from these costs, which are borne by the libraries.)  So she found a willing hacker or two and started something called Sci-Hub.  According to Dr. Brant, you can find millions of documents on it that you'd otherwise have to pay for, and outside of North America and Northern Europe, especially in Russia and China, nobody fools with the official academic publishers any more.  They just go to Sci-Hub.

This is not a stable situation.  No matter what treaties and agreements say, if Country A harbors some folks who are violating Country B's copyright laws, and Country A doesn't want to cooperate, there's not much that anybody in Country B can do.  If everybody switches to using Sci-Hub, the academic publishers' revenue streams dry up and the whole system collapses.  It hasn't happened yet, but the house of cards is teetering.

The question is, who should pay for academic publishing, and how much is fair to charge for it?  Saying the free market will decide isn't going to work, because we're not talking about a commodity like oil or wheat.  Each academic paper is unique, and anyway, there are studies showing that only two or three other people ever read a typical journal paper.  But if it's not in principle available to everybody, you can't say it's really published, so the availability has to be there.  In a way, academic publishing is a kind of vanity press, but one in which the writers don't typically pay up front.  Recently the concept of open-access publishing has made some headway, in which the author pays a lump sum (typically on the order of $1000 US), and the publisher promises to keep the paper on the Internet forever.  To my mind, that goes too far the other way, in that your typical English professor at a lowbrow college is not going to have that kind of money, and neither is his department.

I don't know what the solution is.  Obviously, the peer-review process is still necessary, in which qualified experts pass judgment on what should be published, and it costs something to organize that and put papers in shape to go online.  But I seriously doubt that it costs as much as places like Elsevier are charging.  Maybe the pressure brought to bear by free-access sites such as Sci-Hub will lead to lower prices.  Or some mechanism or international agreement may be found in which people will still have access to information they need, but at a price which reflects something closer to the true cost of the service, and not just whatever the traffic bears.

And as for my book, well, there's something to be said for paper after all. 

Sources:  I thank Trey Brant for bringing my attention to this matter.  I referred to the Wikipedia articles on "Information wants to be free" and "Sci-Hub."  And no, I'm not going to tell you where to find my book for free, or what Sci-Hub's current domain name is.  You have to find those on your own.