Odi et Amo: Programming Edition

One of the things about being a programmer is that you use a lot of different tools, where “tool” is roughly defined as a discrete program that accomplishes some particular task. IDEs are tools; the command line is many tools accessed through a common interface; programming languages themselves are tools, which are typically surrounded by a larger ecosystem of other tools that are necessary to get things done; and so on. Most of these tools can be combined with one another, so a given user’s toolchain can, in theory, comprise any one of an exponentially exploding number of tool combinations.

The tragedy of the situation is how bad so many of these tools are.

There’s a phenomenon, which I think is more prevalent in the software development world than in other technical disciplines, that licenses half-assed work. There are a lot of contributing factors to this attitude. For example, the fact that most software isn’t running critical operations means that bugs are, relatively speaking, low priority. It’s just not that important to get it right the first time, unlike, say, building a bridge or engineering a car. Combine that with the valorization of hacking and you get a recipe for lots of shoddy software. Furthermore, there’s a legacy element to software that makes it hard to correct old mistakes or fix bad old tools; unlike physical tools, which wear down and have to be replaced, code is basically forever. So if a bad tool makes its way into the chain by e.g. having been constructed earlier than others, it’s near-impossible to get rid of it.

The other attitudinal problem that software development has as a community is a predisposition towards “tough it out” thinking. This manifests as a tendency to disregard such factors as user experience and design quality and to shift the blame for any problems with the tool onto the user. These attitudes usually don’t appear (or are repressed) in environments where user experience is actually important, i.e. where money changes hands, but those are situations in which software developers are selling their product to non-developers or the general public, rather than creating tools for each other. When it comes to tools that we actually use, the conventional wisdom is that it’s the user’s responsibility to adapt to the tool, rather than the opposite. This problem is exacerbated by the fact that tools which have become entrenched from early adoption in the days when almost no tools existed have only marginal incentive to improve.

As a result of these factors, we’re left using a lot of extremely shitty tools to perform development tasks. The various shells that originated with Unix and have been passed down to its spiritual descendants are uniformly terrible. Shell scripts have a barely-comprehensible syntax while providing a fraction of the power of real programming languages. And yet these pieces of legacy code are used to maintain entire operating systems. That seems clearly absurd, but in a world where no scripting languages existed, a shitty scripting language was better than none.

Of course, when real scripting languages appeared they only improved the situation marginally. Perl was still shitty, but it allowed you to break so many more things and it had regexes as first-class objects for some reason so you didn’t have to pipe your text to awk (and learn yet another syntax), so people jumped all over it. Ignoring the fact that it initially had almost none of the useful abstractions of more advanced programming languages like Common Lisp; ignoring that its syntax was cobbled together from an unholy mixture of C, awk, and shell; ignoring that Larry Wall, for reasons that can only be assumed to be sadistic, designed it to mimic natural language; the internet went bonkers over it and adopted it wholesale. And now, even though it’s slowly dying out, every once in a while a programmer logs into some old system and discovers some legacy Perl scripts whose author is long gone and which are utterly incomprehensible. The old saw about Perl being the glue of the internet is apt: it’s a nasty, tacky substance which gets into small niches from which it is impossible to extract. Perl is built on the philosophy that there’s more than one way to do it and you should never be prevented from picking the wrong way. Perl sucks.

Autotools sucks. Autotools started out as a script that some guy wrote to manage his Makefiles, which also suck. Autotools is conceptually a correct idea, namely, that you shouldn’t have to write per-configuration Makefiles by hand, but it goes about it in a bizarre way. Take a look at the truly byzantine dependency graph on the Autotools Wikipedia page. There are a ton of moving parts, each one with its own configuration logic. Naturally all of this is written in shell script instead of a real language, so Cthulhu help you if you ever have to get down into the weeds of Autotools. Most people run it like a ritual invocation; you just do the minimal amount necessary to get your project to build and hope nothing ever breaks. Actually, Autotools is written in at least two languages, because it uses the m4 macro processor, which Kernighan and Ritchie wrote for C back in the Paleolithic. Hmm, what other language do I know of that is useful for writing domain-specific languages because of its advanced macro capabilities? But of course using Common Lisp would have been too obvious, so m4, which, again, has a totally incomprehensible syntax and no real programming language functionality to speak of, is what gets used. As a result of a lot of talented people wasting a substantial portion of their lives, Autotools has been brought to a level where people can actually use it, with the result that this nightmarish rats’ nest of code has become irreplaceable basically forever. Autotools is the abyss.
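If you’ve never had the pleasure, the ritual invocation looks something like this. This is a minimal sketch with a made-up project name, not anyone’s real build, but the macros are the genuine articles:

    # configure.ac -- the incantation, written in m4-flavored autoconf macros
    AC_INIT([myproject], [0.1])     # hypothetical package name and version
    AM_INIT_AUTOMAKE([foreign])     # pull automake into the picture
    AC_PROG_CC                      # find a C compiler
    AC_CONFIG_FILES([Makefile])     # generate Makefile from Makefile.am
    AC_OUTPUT

    # Makefile.am -- automake's input, in yet another mini-language
    bin_PROGRAMS = myproject
    myproject_SOURCES = main.c

You run autoreconf --install, then ./configure, then make, and you pray. If it works, you never look at any of it again.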

PHP is a fractal of bad design. Created by someone who wasn’t really interested in programming for people who also apparently aren’t interested in programming, or consistency, or reliable behavior, or really any other normal markings of a functional piece of software, PHP caught on like consumption at a gathering of 19th century Romantics, because it allowed you to make terrible web pages, which is what everyone in the 90s wanted to do. However many revisions later, people are still phasing out shitty old features of the original language in the hopes of someday creating something one-third as pleasant to use as Python.

Make is terrible and confusing. Autotools was created so that you wouldn’t have to write Makefiles by hand, which tells you something about what a pleasant experience that was. Make was initially purely a rule-based system, but at some point it dawned on folks that perhaps they’d like to have things like “iteration” and “conditionals” in their build process, so naturally those got grafted onto Make during, I assume, some sort of Witches’ Sabbath, with Satan’s presence consecrating the unholy union. Despite being created contemporaneously with many of the tools mentioned above, Make does not share syntax with them. In order to avoid writing Makefiles, a complicated tool called cmake was invented, which allows you to write the files that write the Makefiles in yet another syntax which in comprehensibility is somewhere between make itself and a shell script. As per Greenspun’s 10th Rule, Make almost certainly contains at least a portion of a working Common Lisp interpreter. Make sucks.
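To see just how bolted-on the “programming” features feel, here’s a sketch of what a modern GNU Makefile ends up looking like. The file names are made up, but the constructs are real:

    # The original idea: declarative rules.
    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    # The stuff grafted on later: conditionals...
    ifeq ($(DEBUG),1)
      CFLAGS += -g -O0
    else
      CFLAGS += -O2
    endif

    # ...and "iteration", via functions expanded at parse time.
    dirs    := src lib tests
    sources := $(foreach d,$(dirs),$(wildcard $(d)/*.c))
    objects := $(sources:.c=.o)

    all: $(objects)

That’s three different syntaxes in a dozen lines: rules, directives, and function calls, none of which resemble each other or anything else on your system.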

All these terrible and weird legacy pieces of code have survived down the generations from early times when they were nothing more than convenient hacks that made it possible to automate things. Over the years, they’ve accreted corrections and version numbers and functionality and eventually the process of using them was either made somewhat tolerable or most users were insulated from the messy core by layers and layers of supporting infrastructure. Because replacing old stuff is hard, and because code doesn’t wear out the way that hardware does (and also because most of the costs of usability fall on the developers themselves), these tools just persist forever. Any discussion of their terrible usability or their shortcomings is met with, at best, indifferent shrugs (“It’s too bad, but who’s going to take on that job?”) or outright hostility. People become habituated to their tools and view any suggestion that they might be inadequate as a personal attack. Just check out that PHP post, in which a bunch of people in the comments defend PHP on the grounds that “it’s weird but we’ve gotten used to it!” Well, you can get used to driving a car with a faulty alignment or driving nails with a microscope, but that doesn’t mean you should. If you bring up Perl’s syntax or its weird referencing rules, you’ll be told that you should just memorize these things and once you do it’s not that big of a deal. Suggestions that perhaps knowledge of modern programming practices should be put to good use by creating replacements for tools that behave in opaque and hard-to-understand ways are greeted with incredulity at the heresy.

As developers, I think we do ourselves no favors this way. We should demand, and work to build, better tools. We should have build systems that can be configured in a language that’s easy to parse and understand. We should make use of the strengths of the languages we do have, so that when we need a macro expander, we have one in Lisp or one of its variants. We should have languages that don’t confuse us with unnecessary visual clutter and which are easy to read. We should not be afraid of abandoning old tools because they’re old and were created by esteemed personages at the dawn of programming. We should, above all, pay lots of attention to human factors and usability studies because human time is precious but programming time is cheap. We should, in the end, not be afraid of change, of learning from past mistakes, and of abandoning rather than perpetuating legacy code. That’s my presidential platform; write me in next year in November.

Some Thoughts About Amazon

A recent New York Times article examining the alleged problems with Amazon’s work culture has been making waves all week. Depending on whom you want to believe, Amazon is either the province of the damned, chained to their cubicles and forced to work while being whipped by demons, or a glorious utopia of technological innovation where no one is ever unhappy. This unresolvable war of cross-firing anecdotes is impossible to adjudicate from the outside, for the simple reason that only Amazon could even collect the necessary data to do that, and it wouldn’t make them public in any case. So anyway, this prompted in me a few loosely-connected observations, presented in roughly ascending order of how interesting I find them:

  1. Large organizations are like the rainforests they’re sometimes named after: if you go looking for something, you’re likely to find either that thing or a reasonable facsimile thereof. If what you’re looking for is team dysfunction and people being drummed out of the company for having had the temerity to get cancer, you’ll find that; if you’re looking for a functional team of normal adults who treat each other well and all go home satisfied at the end of the day, I bet you could find that as well. Interviews with newspaper reporters aren’t nothing, but they’re not company-wide statistics, and neither are anecdotes from some guy who really loves it there. It wouldn’t be impossible to set up an experiment that attempted to describe at a macro level the effects of Amazon’s internal culture, but it would require a pretty serious resource investment from Amazon itself, which, despite their claims of being very data-driven, I doubt Amazon would actually undertake.

  2. One theme that sounds throughout the Amazonians’ replies to the NYT article is that the high-criticism stack-ranking culture just has to be the way it is in order for Amazon to be at its most awesomest. The natural question this raises is: how do they, or anyone, know that? Has Amazon ever experimented with any other system? What, put simply, is the control group for this comparison? Without this information, justifications of ostensibly bad culture practices are nothing more than post hoc rationalizations by the survivors. Clearly this hazing made me into a superlative soldier/frat brother/programmer, so suck it up! Also recognizable as the kind of justification offered by people who beat their children. You’d think that an organization as allegedly devoted to data gathering as Amazon would have done some controlled studies on these questions, but my guess is that Amazon gives precisely zero fucks about whether its culture is poisonous or not, except insofar as it affects their public image. There’s basically no incentive to care, since there’s always another fresh-out-of-college 23-year-old programmer to hire.

  3. Another common theme that Amazon’s defenders (and the tech world’s agitprop more generally) play again and again is that of SOLVING THE VERY CHALLENGINGEST OF PROBLEMS. Here’s a thing that a grown-up person actually wrote:

    Yes. Amazon is, without question, the most innovative technology company in the world. The hardest problems in technology, bar none, are solved at Amazon.

    This, of course, is totally fucking ludicrous, and yet no one seems to ever question these claims. Obviously Amazon has some fairly serious problems that need solving; that would be true of almost any organization of its scale and scope. But in the end, those problems are about how to make the delivery of widgets slightly more efficient, so you can get your shit in two days instead of three. This, of course, twins with the tech world’s savior complex: not only are we solving the most challenging problems but they also happen to be the most pressing ones and also the ones that will result in the greatest improvements to standards of living/gross national happiness/overall karmic state of the universe. It’s never enough to merely deliver a successful business product if that product doesn’t come with messianic pretensions. So it is with Amazon, which must sell itself as the innovatingest innovator that ever innovated if it hopes to keep attracting those 23-year-olds. These grandiose claims are hard to square with the reality that marginal improvements in supply chain management and customer experience, while good for the bottom line (or, I guess in Amazon’s case, investors) and certainly not technically trivial, ain’t the fucking cure for cancer or even a Mars rover. If your shit gets here in three days after all, you’ll survive. Or to put it another way, Bell Labs invented C and UNIX and also won eight Nobel Prizes in Physics. That’s what actual innovation looks like.

Sports Still not a Morality Play

The St. Louis Cardinals’ inept illegal access of the Houston Astros’ database is a hilarious sports scandal for many reasons. As an IT professional, I am giddy with inappropriate excitement over the Astros’ terrible password policies, but as a hater of cheap sentiment and unctuous mythmaking, I’m super-delighted that this happened to the Cards.

I don’t follow baseball at all, but if you read any sort of sports media, it’s impossible to escape the cult that the Cardinals have wrapped themselves in. Not content to be merely one of the most successful teams of all time, the Cardinals’ PR machine puts out endless reams of propaganda about how the organization “wins the right way” and is just such a moral paragon. That this has now backfired on them in the worst way possible (federal indictments might be coming!) is just the most delicious of ironies.

Here’s the thing: we routinely conflate external characteristics with internal virtue, or lack thereof. Not just in sports, but in society generally. Rich and attractive people are perceived to be more virtuous than poor and ugly ones, despite the fact that there’s no connection whatsoever between these things. Still, sports is particularly bad at this; there’s no more tired sports cliche than the assertion that on-field performance reflects personal worth, even though it’s manifestly untrue. What this story should teach us, but won’t, is that winning and being a good person are totally unconnected. Winning is a function of team or individual performance in a contest of skill, and being a good person is, well, a song from an entirely different opera, as my people like to say. Teams should, but will not, stop wrapping themselves in moralistic language and pretending that their sports triumphs are indicative of anything other than their performance in those contests. Sports teams aren’t moral undertakings; they’re businesses designed for entertainment, and if they succeed at entertaining us, that ought to be enough.

It turns out that good people often lose and bad people often triumph, and there’s no real rhyme or reason to it. It’s nice when “good guys” win, but being a good guy guarantees nothing. You know, kinda like life.

English and the Political Language

Among the strangest phenomena of American political life is one politician accusing another of “playing politics.” This terrible locution is bipartisan, employed as often by liberals as by conservatives, and I don’t know of another area of human activity in which practice partly consists of denying the existence of the very activity you are engaged in. To accuse a basketball coach of “playing basketball” or an engineer of “playing engineering” would be nonsensical, and yet in politics we routinely hear such accusations leveled.

Like any piece of widely employed nonsense, this phrasing does, of course, carry a certain kind of semantic content, one conveyed not so much by the phrase itself as by the fact of it being uttered. What does it mean, to “play politics?” That depends on where and how you split the phrase. In its naive usage, “playing politics” is normally used to signify that one’s opponent has taken a “non-political” question and rendered it political, somehow. For example, liberals are often accused of “playing politics with the troops” when either attempting to curb American warmaking abroad or provide some support for returning soldiers domestically; by the same token, conservatives will be called out for, say, “playing politics with women’s lives,” when attempting to enact limits on reproductive rights.

The paradoxical nature of the “playing politics” maneuver is its ubiquitous deployment by political actors engaged in the political process of achieving political goals. What is the question of, say, reproductive rights, if not a political issue? The actions of politicians carried out in the course of their professional work are almost definitionally “politics,” and the attempt to prevent the political success of an opponent is, again, definitionally political. So: what purpose does it serve? On my reading, one operation accomplished by the accusation of “playing politics” or “politicization” is the suggestion that politics itself is a kind of alien enterprise that no one should engage in. At the same time, by deploying this rhetoric, its user seeks to position themselves on the ground of consensus: all reasonable non-political people acknowledge the universal rightness of my position, and it is only the political operative who disagrees. Thus: to be political is to stand in fundamental disagreement with a presumed rightness. And more: to be political, to politicize, is to acknowledge conflict where the accuser demands recognition of trans-political necessary truth. It’s not just that the personal is not held to be political, but even the political itself is transformed into a dishonorable practice.

That’s the “politics” fork of “playing politics.” What about the “playing?” To accuse someone of playing is, firstly, to accuse them of a sort of insincerity. You are not truly a fan of 1960s avant garde Czech cinema; you are merely playing at being one for nefarious purposes (hipster cred, presumably). In politics, that translates as follows: you are not really concerned about the issue that you claim to be concerned about; you are merely putting on a sort of act by pretending concern. While it’s certainly true that political debates are full of what might generously be described as concern-trolling, we do have a language for calling bullshit on those things: we merely say that the speaker is lying. Whether true or not, an accusation of lying is at least intelligible and, presumably, open to some sort of independent adjudication with reference to the facts at hand. But “playing politics” is precisely the kind of slippery non-phrase that can never be proven or disproven. Are we truly concerned, or is our political face merely an actor’s mask worn over the face we present in everyday life? How can you tell the dancer from the dance? This of course is an unanswerable question, with unanswerability being just the point: the goal is not to establish a fact but to sow doubt.

A secondary, complementary meaning of the accusation of “playing” is to imply that the accused regards the process as a kind of game, games being the sorts of things you play. In other words: the accused may or may not really care about the issue at hand, but is really employing it as a kind of point-scoring maneuver in a game that has no purpose beyond itself. This dovetails neatly with the first fork, which seeks to convey the sense of politics as a fundamentally alien activity. If politics is, in fact, alien, that is, if it has no real relevance to our lives, then of course any political engagement can only be understood not as an expression of particular principles, but rather as just another game in which the goal is not to achieve any particular end, but rather to “defeat” whatever opponent stands in your way. Couple that to the accusation of insincerity, and more doubt is sown. The irony of this reading is that there really does exist an entire class of people for whom politics really is something of a social game; it’s just that this class overwhelmingly comprises various pundits and other political hangers-on (e.g. David Brooks, Tom Friedman, Maureen Dowd, etc.) for whom actual political practice would entail, well, too much work. But the people actually doing the work, whether you deem that work good or bad, are not playing but practicing.

The reason I object so strongly to the use of this formulation is because, like all euphemisms, it crowds out meaningful understanding of its subject. To insinuate that politics is something apart from life is to mistakenly assume that it can be bracketed off from your existence; to accuse an opponent of being engaged in a kind of sophisticated pretense is to misjudge their motivations and the strength of their convictions. The accusation of “playing politics” serves to conceal the existence of genuine, perhaps ultimately irreconcilable conflicts by removing those conflicts to a realm of seeming abstraction inhabited by people who are not engaged in anything real.

Unfortunately, American political discourse is fundamentally infantile, conducted on a level that should be embarrassing to a sixth-grader, much less to grown adults. So we get constructions like this, in which the very act of achieving a political end takes the form of denying that politics exists at all. Our political language is in quite a bad way.

Stupid People Arguing About Stupid Things

Earlier today I was listening to yesterday’s podcast of the Diane Rehm Show on which the panel was discussing what the Amtrak accident means in light of our decaying infrastructure. Unfortunately, as is often the case with discussions of public transit, the debate got bogged down in the end in a very stupid Republican talking-point. Basically, any time Republicans encounter government money being spent on something they don’t like (as opposed to Good And True things like bombing Middle Eastern countries), they’ll complain about those things being “subsidized.” Why are we subsidizing Amtrak passengers?! cries Rep. Andrew Harris of Maryland, idiot.

Ed Rendell, a person who seems to have something resembling a functional nervous system, sensibly replied that all transit systems everywhere are subsidized. Unfortunately, while getting the particulars right, Rendell neglected to defend the larger principle. Ignore for the moment the fact that automotive transport has been the beneficiary of innumerable government subsidies for decades, not least of which is the actual interstate highway system, whose imminent collapse is going to kill us all presently because we won’t spend the money to repair it.

The larger principle that Rendell should have defended, but which apparently cannot be uttered in polite company, is that sometimes it makes sense to subsidize stuff. We “subsidize” public education, for example; we do it poorly and often reluctantly, and usually in racially inequitable ways, but we do do it. There are undertakings that we, as a society, deem worthwhile, and that means that we can choose to spend public resources on them. There’s nothing wrong with that determination! Rendell’s hemming on the issue serves to obscure this basic point, but it’s just as true of alternative energy or education as it is of infrastructure or public transit. There’s no magic way to get something you want without paying for it, and yet the inability to openly acknowledge this basic fact continues to hamper the ability to push for necessary public works.

These are the fruits of decades of well-poisoning on the part of conservatives with regard to any notion of the public good. Even people who ostensibly favor such public efforts cannot bring themselves to say with a straight face that yes, these things are good, and we can and should spend money to achieve them. “Subsidy” is not a dirty word; it’s an integral part of development throughout the history of this country.

Tolerable Cruelty

If you want to read a sad, sad story of how miserably our standard approaches to drug addiction have fared, check out this long investigation into the lives and deaths of heroin and prescription opioid users in Kentucky. It takes a long time to get through; I think I needed an uninterrupted hour, at least, to finish reading it. The picture painted therein is not so much grim as nearly hopeless. I will spare you the suspense: we have in our toolbox drugs that could, very possibly, eliminate the threat of relapse and subsequent deaths from overdose for most addicts, and we refuse to use them on preposterous “moral” grounds.

There’s simply too much good reporting in the linked piece for me to be able to summarize it in a way that does it justice, but a basic theme keeps emerging again and again: there’s a conflict between what we know works from a scientific and medical standpoint, and what facilities and people who are nominally charged with caring for addicts are actually dispensing. What we know works is something that blocks the withdrawal symptoms and eliminates the cravings, preferably without making the user sick. That something is called suboxone, and it is, as the article notes, pretty much the “standard of care” for treating opioid addiction.

What gets meted out to addicts, on the other hand, is best described as moralistic bullshit. Interview after interview cited in the article has people saying things like suboxone is “not sobriety… it’s being alive but you’re not clean and sober.” Or: “[treatment] is a drug-free model. There’s kind of a conflict between drug-free and suboxone.” Or, and this for me is maybe the worst of all because not only is it scientific ignorance but in my view actually judicial malpractice, the case of Judge Karen Thomas, who literally orders addicts off suboxone if they want a sentencing reduction. It’s hard to imagine the callousness required to utter the following:

“I understand they are talking about harm reduction,” Thomas said. “Those things don’t work in the criminal justice system.” In a subsequent interview, the judge added, “It sounds terrible, but I don’t give them a choice. This is the structure that I’m comfortable with.”

This is where we are as a society: the comfort of a judge taking precedence over medical standards of care.

Our model of thinking about addiction is, unfortunately, skewed because, as the article points out, addiction treatments got under way before we really understood anything about how it affects the brain. But the problem goes deeper than that. Consider the language used by those who speak negatively of suboxone, and you find the same words and phrases making an appearance across the board: “clean”, “abstinence”, “drug-free.” Why do these particular locutions have any moral weight? After all, we would not say that a cancer patient must remain “clean” or “drug-free.” We understand that cancer is a disease and that those who have it are not morally culpable for it[1]. We generally accept that treatment of diseases frequently involves the consumption of various drugs; all the talk about purity goes out the window when you come down with pneumonia.

Unfortunately, we routinely fail to extend this understanding to mental illness. Our folk theory of mind is terribly suited for talking about mental illness as actual illness. Or, if you prefer, the scientific image is not nearly as appealing as the manifest image. To suggest that an addict is sick rather than wicked seems to remove the possibility of condemnation, and if there’s one thing we’re desperately attached to in this country, it’s the ritual of condemning people for moral laxity. To use Judge Thomas’ terms, we just aren’t comfortable with a medical model of the brain, and our comfort clearly should take precedence over people’s real lives.

Cleanliness, purity, abstinence: whence the moral valence of these terms? They suggest a kind of “natural” state, uncorrupted by external influences. The mind as unsullied Eden, so to speak. Where the moral valence of that comes from, I don’t need to tell you. Out of this obsession with the rhetoric of the purge comes the idea that if addicts fail, it’s because they want to fail; if they had wanted to succeed, they would have. A circularly self-justifying chain of reasoning that admits no breaks into which some notion of medical effectiveness could penetrate. Cheap moralism, all the cheaper for the fact that the moralists never need justify themselves, operating as they do against a backdrop of erroneous assumptions about the nature of health and illness and about the mind’s relation to the body. Cartesian dualism is a hell of a drug, as deadly in its own way as any opioid.

It’s not an accident, of course, that blue Minnesota has seen successes where red Kentucky has failed. As usual, liberal states are much more willing to move from moralistic scolding to an attempt to actually do something about the problem[2]. Massachusetts and Maryland have had some success as well.

Ever since I learned about it from Rorty’s Contingency, Irony, and Solidarity, I’ve loved Judith Shklar’s definition of a liberal as someone who thinks that being cruel is the worst thing that one can do. And what is the denial of medical treatment but the most abject cruelty, visited by the state on some of its most vulnerable members, in service of a misguided attachment to a moral language it can barely articulate? This is the damage that the short-circuiting rhetoric of purity can do, measured in actual human lives.

[1] Or, at least, most of us understand this. There’s no shortage of people in the world more than happy to take to task a cancer patient for not having lived an appropriately “clean” life, but they tend to occupy the fringe rather than the mainstream.

[2] Although, unfortunately, not as willing as they should be: far too many liberals ascribe unnecessary moral properties to “purity” and “cleanliness,” as the anti-vax and anti-GMO movements readily demonstrate.

Melian Dialogues

I don’t have anything terribly original to say on the topic of the current events in Baltimore. Anyone with two eyes, a few neurons to rub together, and a sense of history can understand for themselves that, whatever you think of riots as a political or moral phenomenon, it is impossible to detach those events from their manifestation as the reaction of a brutalized populace. Take a few minutes to read Ta-Nehisi Coates on this topic, then come back if you feel like it.

So now, a few words about the rhetoric of (non-)violence. Hardly anything in American political life is so reliable as the Grave Concerns of Very Serious People whenever more than three black people show up in the same spatiotemporal vicinity to express some degree of dissatisfaction with their treatment by the American police state; the ritual bemoaning of violence is metronomic in its regularity. Curiously, those same Very Serious People are somehow very quiet when it comes to the violence perpetrated upon those very same black people. Instead, we are subjected to the standard casual racism disguised as responsibility politics: that guy shouldn’t have run, this woman shouldn’t have spoken up, this other guy should have complied. Ad nauseam, ad infinitum.

There just isn’t any way of reconciling this double-standard that’s actually fair to the facts at hand, which is why any conversation on this topic turns into a prime example of goalpost-moving and evasion. “But destruction of property is still wrong!” “But you should still comply with police orders!” “But what about black-on-black crime?” Anything to avoid the unpleasant fact that police are enabled by the state to take lives with virtually no repercussions whatsoever, and that the lives taken are disproportionately black ones. It’s Thucydides meets Weber: the powerful exact what they can, and the weak grant what they must, coupled with the state’s monopoly on violence and discretion in its distribution. No one familiar with the history of race in America ought to be surprised when this lethal mixture distributes that violence disproportionately onto African-Americans.

The Grave Concerns present an insurmountable double-bind. On the one hand, no one will speak for you if the police decide they like you better dead; on the other hand, no public expressions of outrage are allowed, lest you be labeled “violent.” Usually at this stage the Very Serious People suggest that the way to reform is through the voting booth, which only serves to remind everyone that none of these Very Serious People have ever lived as members of a politically disempowered community. The VSPs tend to have a romantic notion of that decidedly un-romantic Weberian formulation, “the strong and slow boring of hard boards,” mostly because it allows an escape into cliche rather than obligating someone to actually do something, the way a real moral outrage would. After all, it’s not them who will have to do the actual boring, it’s other people, and for other people, especially other black people, justice can wait. Forever, if need be. Never mind that not being subjected to the arbitrary lethal power of the state manifested in its police force is one of those pretty basic things that one would think wouldn’t require “reform” to make happen.

This vicious circle will continue until it becomes an accepted fact in American politics that black lives are worth as much as white ones, and that a system of racial terror imposed on black communities is morally untenable. As long as that system persists, we’ll see more Baltimores for the simple reason that public protests are the only visible way that black communities have to protest against this. All the pearl-clutching over destroyed property and violence inappropriately issuing from rather than being directed at black people is just a way of avoiding that basic realization.

Ruminations on Science

The text below was modified slightly from a comment I left over at Crooked Timber. After writing it, I thought it held up ok as a separate piece of writing, divorced from the comment thread, so I’m just posting it here with minimal alterations:

Science is hard. It’s just really difficult to even achieve a small amount of mastery in an area of your own alleged expertise. There’s just so much of it, and so much more appearing every day. There are varying responses to this problem. One response is to just write off any results that disagree with conclusions that one has already reached by other means. Another is to set up institutions in which legitimate queries after truth can actually be carried out and debated. That’s a great meta-solution, in my view, but unsurprisingly it comes with its own meta-problems. Now you’ve got this whole other layer of professional scientists that, to the untutored observer, appear interposed priestlike between you and the truth. As with any sufficiently complex (i.e. involving more than 5 people) institution, mystification sets in. If you’re already predisposed to disregard what the scientists are saying in the first place, what is in reality an imperfect mechanism for adjudicating truth claims begins looking like a conspiracy to suppress your great uncle’s naturopathic cure for cancer. And the thing about conspiracies is that they can never be disproven; any evidence counter to the conspiracist conclusion is merely additional proof that those who offer the evidence are in on the conspiracy.

In the right (wrong) sorts of circumstances, this problem becomes a horrible vicious circle. It can only be resolved by taking a step back and trying to understand science as a human institution and scientists as human practitioners; in other words, trying to figure out what scientists are doing and why. That is also very hard, especially if you come from outside a scientific discipline, because you’ll be entering into discussions in which you lack the requisite terminology for understanding all the little details. That’s why scientific communication is a two-way street: if the average person holds some responsibility for trying to understand how science gets done, then scientists have commensurate responsibility to explain that process in a way that’s understandable. Sadly, scientists have often failed at this task; those who can do it well, like Carl Sagan, Neil deGrasse Tyson, and P.Z. Myers, are worth their weight in gold because they’re quite rare.

The problem with people like global warming deniers and the anti-vax crowd is that everything they do undermines these institutions. If you only care about being right instead of getting it right (parsing the distinction is left as an exercise for the reader), then all this stuff like peer review and independent verification is just so much cruft that you can discard when it runs up against something you want badly to be true. The danger of that is that sooner or later you’ll cut down the very tree you sit in, as the Russian expression goes, and when you actually require those mechanisms and institutions to function properly because they impact your own life, they won’t.

Toothpicks and Bubblegum, Software Edition, Iteration 326

There’s nothing like working with an old *nix utility to remind you how brittle software is. Case in point: I’m trying to use flex and bison to design a very simple grammar for extracting some information from plaintext. Going by the book and everything, and it just doesn’t work. Keeps telling me it caught a PERIOD as its lookahead token when it expected a WORD and dies with a syntax error. I killed a whole day trying to track this down before I realized one simple thing: the order of token declarations in the parser (that’s your .y file) must match the order of token declarations in the lexer (your .l file). If it doesn’t, neither bison nor flex will tell you about this, of course (and how could they, when neither program processes files intended for the other?). It’s just that your program will stubbornly insist, against all indications to the contrary, that it has indeed caught a PERIOD when it expected a WORD and refuse to validate perfectly grammatical text.


I was so angry when this was happening and now I think I might be even angrier. Keep in mind that this fantastically pathological behavior is not documented anywhere, so I found myself completely baffled by what was happening. Where was PERIOD coming from? Why didn’t it just move on to the next valid token? Of course the correct thing is to include the tab.h file in the lexer, but I had written my definition down explicitly in the lexer file so I didn’t think to do that.
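For anyone who wants to avoid losing the day I lost: the sane arrangement is to declare the tokens once, in the grammar, and have the lexer include the header that bison generates, so both tools agree on the token codes. Here’s a minimal sketch (hypothetical file names, nothing like my actual grammar):

    /* parser.y -- tokens are declared here, and only here */
    %token WORD PERIOD
    %%
    sentence: words PERIOD ;
    words:    WORD
            | words WORD ;
    %%

    /* lexer.l -- include the header that `bison -d parser.y` generates
       (parser.tab.h) instead of re-declaring the tokens by hand */
    %{
    #include "parser.tab.h"
    %}
    %%
    [A-Za-z]+    { return WORD; }
    \.           { return PERIOD; }
    [ \t\n]+     { /* skip whitespace */ }
    %%

Declare the tokens separately in both files, as I did, and the two programs will happily hand each other mismatched integer codes without so much as a warning, which is exactly the failure mode described above.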

What’s ludicrous about this is that the flex/bison toolchain has to go through yet another auxiliary tool, m4, just to do its thing. m4, if you don’t know, is a macro language with a terrible, incomprehensible syntax that was invented for the purposes of text transformation, thereby proving years before its formulation Greenspun’s 10th rule, according to which any sufficiently advanced C project will end up reimplementing, badly, some subset of Common Lisp.

I have the utmost respect for Dennis Ritchie, but m4 is a clusterfuck that should have never survived this long. Once a language like Lisp existed, which could actually give you code and DSL transformations at a high level of abstraction, m4 became superfluous. It has survived, like so many awful tools of its generation, through what I can only assume is inertia.

Five Years in the Future

Oh gosh, it has been a long time, hasn’t it? My deepest apologies, sports fans. You know how life is, always getting in the way. Perhaps this will spur an uptick in verbal output, but it’s just as likely that it’ll be a once-per-year salvo. Don’t get used to anything nice, my mother always told me.

Anyway, this too-prolix production has been made possible by a friend soliciting my input on the following article. That shit is long, so take a good thirty minutes out of your day if you plan to read it all, and then take another thirty to read this response, which I know you’re going to do because you love me that much.

I’ll save you some of the trouble by putting my thesis front and center so you can decide whether or not you want to continue reading or leave an angry comment: I think the linked piece is premised on some really flimsy assumptions and glosses over some serious problems, both empirical and logical, in its desire to attain its final destination. This is, sadly, par for the course in popular writing about AI; even very clever people often write very stupid things on this topic. There’s a lot of magical thinking going on in this particular corner of the Internet; much of it, I think, can be accounted for by a desire to believe in a bright new future about to dawn, coupled with a complete lack of consequences for being wrong about your predictions. That said, let’s get to the meat.

There are three basic problems with Tim Urban’s piece, and I’ll try and tackle all three of them. The first is that it relies throughout on entirely speculative and unjustified projections generated by noted “futurist” (here I would say, rather, charlatan, or perhaps huckster) Ray Kurzweil; these projections are the purest fantasy premised on selective interpretations of sparsely available data and once their validity is undermined, the rest of the thesis collapses pretty quickly. The second problem is that Urban repeatedly makes wild leaps of logic and inference to arrive at his favored result. Third, Urban repeatedly mischaracterizes or misunderstands the state of the science, and at one point even proposes a known mathematical and physical impossibility. There’s a sequel to Urban’s piece too, but I’ve only got it in me to tackle this one.

Conjectures and Refutations

Let me start with what’s easily the most objectionable part of Urban’s piece: the charts. Now, I realize that the charts are meant to be illustrative rather than precise scientific depictions of reality, but for all that they are still misleading. Let’s set aside for the moment the inherent difficulty of defining what exactly constitutes “human progress” and note that we don’t really have a good way of determining where we stand on that little plot even granting that such a plot could be made. Urban hints at this problem with his second “chart” (I guess I should really refer to them as “graphics” since they are not really charts in any meaningful sense), but then the problem basically disappears in favor of a fairly absurd thought experiment involving a time traveler from the 1750s. My general stance is that in all but the most circumscribed of cases, thought experiments are thoroughly useless, and I’d say that holds here. We just don’t know how a hypothetical time traveler retrieved from 250 years ago would react to modern society, and any extrapolation based on that idea should be suspect from the get-go. Yes, the technological changes from 1750 to today are quite extreme, perhaps more extreme than the changes from 1500 to 1750, to use Urban’s timeline. But they’re actually not so extreme that they’d be incomprehensible to an educated person from that time. For example, to boil our communication technology down to the basics, the Internet, cell phones, etc. are just uses of electrical signals to communicate information. Once you explain the encoding process at a high level to someone familiar with the basics of electricity (say, Ben Franklin), you’re not that far off from explicating the principles on which the whole thing is based, the rest being details. Consider further that in 1750 we are a scant 75 years away from Michael Faraday, and 100 years away from James Clerk Maxwell, the latter of whom would understand immediately what you’re talking about.

We can play this game with other advances of modern science, all of which had some precursors in the 1750s (combustion engines, the germ theory of disease, etc.). Our hypothetical educated time traveler might not be terribly shocked to learn that we’ve done a good job of reducing mortality through immunizations, or that we’ve achieved heavier-than-air flight. However surprised they might be, I doubt it would rise to the level of killing them. The whole “Die Progress Unit” is, again, a tongue-in-cheek construct of Urban’s, meant to be illustrative, but rhetorically it functions to cloak all kinds of assumptions about how people would or would not react. It disguises a serious conceptual and empirical problem (just how do we define and measure things like “rates of progress”?) behind a very glib imaginary scenario that is not meant to be taken seriously and yet functions as justification for the line of thinking that Urban pursues later in the piece.

The idea that ties this first part together is Kurzweil’s “Law of Accelerating Returns.” Those who know me won’t be surprised to learn that I don’t think much of Kurzweil or his laws. I think Kurzweil is one part competent engineer and nine parts charlatan, and that most of his ideas are garbage amplified by money. The “Law” of accelerating returns isn’t any such thing, certainly not in the simplistic way presented in Urban’s piece, and relying on it as if it were some sort of proven theorem is a terrible mistake. A full explanation of the problems with the Kurzweilian thesis will have to wait for another time, but I’ll sketch one of the biggest objections below. Arguendo I will grant an assumption that in my view is mostly unjustified, which is that the y-axis on those graphics can even be constructed in a meaningful way.

A very basic problem with accelerating returns is that it very much depends on what angle you look at it from. To give a concrete example, if you were a particle physicist in the 1950s, you could pretty much fall ass-backwards into a Nobel Prize if you managed to scrape together enough equipment to build yourself a modest accelerator capable of finding another meson. But then a funny thing happened, which is that every incremental advance beyond the gathering of low-hanging fruit consumed disproportionately more energy. Unsurprisingly, the marginal returns on increased energy diminished greatly; the current most powerful accelerator in the world (the LHC at CERN) has beam energies that I believe will max out at somewhere around 7 TeV, give or take a few GeV. That’s one order of magnitude more powerful than the second-most powerful accelerator (the RHIC at Brookhaven), and it’s not unrealistic to believe that the discovery of any substantial new physics will require an accelerator another order of magnitude more powerful. In other words, the easy stuff is relatively easy and the hard stuff is disproportionately hard. Of course this doesn’t mean that all technologies necessarily follow this pattern, but note that what we’re running up against here is not a technological limit per se, but rather a fundamental physical limit: the increased energy scale just is where the good stuff lies. Likewise, there exist other real physical limits on the kind of stuff we can do. You can only make transistors so small until quantum effects kick in; you can only consume so much energy before thermodynamics dictates that you must cook yourself.

The astute reader will note that this pattern matches quite well (at least, phenomenologically speaking) the logistic S-curve that Urban draws in one of his graphics. But what’s really happening there? What Urban has done is to simply connect a bunch of S-curves and overlay them on an exponential, declaring (via Kurzweil) that this is how technology advances. But does technology really advance this way? I can’t find any concrete argument that it does, just a lot of hand-waving about plateaus and explosions. What’s more, the implicit assumption lurking in the construction of this plot is that when one technology plays itself out, we will somehow be able to jump ship to another method. There is historical precedent for this assumption, especially in the energy sector: we started off by burning wood, and now we’re generating energy (at least potentially) from nuclear reactions and sunlight. All very nice, until you realize that the methods of energy generation that are practical to achieve on Earth are likely completely played out. We have fission, fusion, and solar, and that’s about it for the new stuff. Not because we aren’t sufficiently “clever” but because underlying energy generation is a series of real physical processes that we don’t get to choose. There may not be another accessible S-curve that we can jump to.

Maybe other areas of science behave in this way and maybe they don’t; it’s hard to know for sure. But admitting ignorance in the face of incomplete data is a virtue, not a sin; we can’t be justified in assuming that we’ll be able to go on indefinitely appending S-curves to each other. At best, even if the S-curve is “real,” what we’re actually dealing with is an entire landscape of such curves, arranged in ways we don’t really understand. As such, predictions about the rate of technological increase are based on very little beyond extrapolating various conveniently-arranged plots; it’s just that instead of extrapolating linearly, Kurzweil (and Urban following after him) does so exponentially. Well, you can draw lines through any set of data that you like, but it doesn’t mean you actually understand anything about that data unless you understand the nature of the processes that give rise to it.

You can look at the just-so story of the S-curve and the exponential (also the title of a children’s book I’m working on) as a story about strategy and metastrategy. In other words, each S-curve technology is a strategy, and the metastrategy is that when one strategy fails we develop another to take its place. But of course this itself assumes that the metastrategy will remain valid indefinitely; what if it doesn’t? Hitting an upper or lower physical limit is an example of a real barrier that is likely not circumventable through “paradigm shifts” because there’s a real universe that dictates what is and isn’t possible. Kurzweil prefers to ignore things like this because they throw his very confident pronouncements into doubt, but if we’re actually trying to formulate at least a toy scientific theory of progress, we can’t discount these scenarios.

1. p → q;
2. r
3. therefore, q

Since Kurzweil’s conjectures (I won’t dignify them with the word “theory”) don’t actually generate any useful predictions, it’s impossible to test them in any real sense of the word. I hope I’ve done enough work above to persuade the reader that these projections are nothing more than fantasy predicated on the fallacious notion that the metastrategy of moving to new technologies is going to work forever. As though it weren’t already bad enough to rely on these projections as if they were proven facts, Urban repeatedly mangles logic in his desire to get where he’s going. For example, at one point, he writes:

So while nahhhhh might feel right as you read this post, it’s probably actually [sic] wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect.

It’s hard to see why the skeptics are the ones who are “probably actually wrong” and not Urban and Kurzweil. If we’re being “truly logical” then, I’d argue, we aren’t making unjustified assumptions about what the future will look like based on extrapolating current non-linear trends, especially when we know that some of those extrapolations run up against basic thermodynamics.

That self-assured gem comes just after Urban commits an even grosser offense against reason. This:

And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

is not an argument. In the words of Wolfgang Pauli, it isn’t even wrong. This is a sequence of words that means literally nothing and no sensible conclusion can be drawn from it. To write this and to reason from such premises is to do violence to the very notion of logic that you’re trying to defend.

The entire series contains these kinds of logical gaps that are basically filled in by wishful thinking. Scales, trends, and entities are repeatedly postulated, then without any particular justification or reasoning various attributes are assigned to them. We don’t have the faintest idea of what an artificial general intelligence or super-intelligence might look like, but Urban (via Kurzweil) repeatedly gives it whatever form will make his article most sensational. If for some reason the argument requires an entity capable of things incomprehensible to human thought, that capability is magicked in wherever necessary.

The State of the Art

Urban’s taxonomy of “AI” is likewise flawed. There are not, actually, three kinds of AI; depending on how you define it, there may not even be one kind of AI. What we really have at the moment are a number of specialized algorithms that operate on relatively narrowly specified domains. Whether or not that represents any kind of “intelligence” is a debatable question; pace John McCarthy, it’s not clear that any system thus far realized in computational algorithms has any intelligence whatsoever. AGI is, of course, the ostensible goal of AI research generally speaking, but beyond general characteristics such as those outlined by Allen Newell, it’s hard to say what an AGI would actually look like. Personally, I suspect that it’s the sort of thing we’d recognize when we saw it, Turing-test-like, but pinning down any formal criteria for what AGI might be has so far been effectively impossible. Whether something like the ASI that Urban describes can even plausibly exist is of course the very thing in doubt; it will not surprise you, if you have not read all the way through part 2, that having postulated ASI in part 1, Urban immediately goes on to attribute various characteristics to it as though he, or anyone else, could possibly know what those characteristics might be.

I want to jump ahead for a moment and highlight one spectacularly dumb thing that Urban says at the end of his piece that I think really puts the whole thing in perspective:

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us.

This scenario is impossible. Not only does it violate everything we know about uncertainty principles, but it also effectively implies a being with infinite computational power; this is because even if atoms were classical particles, controlling the position of every atom logically entails running forward in time a simulation of the trajectories of those atoms to infinite precision, a feat that is impossible in a finite universe. Not only that, but the slightest error in initial conditions will accumulate exponentially (here, the exponential stuff is actually mathematically valid), so that every extension of your forecast horizon demands exponentially more precision in your initial data, and correspondingly more computational power.
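To put a rough face on that last point, here’s the standard back-of-the-envelope sensitivity argument, with λ standing in for the largest Lyapunov exponent of whatever chaotic dynamics you care about, δ₀ the error in your initial data, and Δ the error you’re willing to tolerate at forecast horizon T:

    % error growth under chaotic dynamics
    \delta(t) \approx \delta_0 \, e^{\lambda t}
    % so keeping the error below \Delta out to horizon T requires
    \delta_0 \lesssim \Delta \, e^{-\lambda T}

The precision you need in your starting data grows exponentially with the horizon, so the number of bits you have to track (and compute with) grows without bound as T does. “At any time” means T is unbounded, which means the required precision is infinite; no amount of being 1 billion times smarter buys you out of that.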

This might seem like an awfully serious takedown of an exaggerated rhetorical point, but it’s important because it demonstrates how little Urban knows, or worries, about the actual science at stake. For example, he routinely conflates raw computational power with the capabilities of actual mammalian brains:

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level.

But of course this is nonsense. We are not “beating” the mouse brain in any substantive sense; we merely have machines that do a number of calculations per second that is comparable to a number we imagine the mouse brain is also doing. About the best we’ve been able to do is to mock up a network of virtual point neurons that kind of resembles a slice of the mouse brain, maybe, if you squint from far away, and run it for a few seconds. Which is a pretty impressive technical achievement, but saying that we’ve “beaten the mouse brain” is wildly misleading. “Affordable, widespread AGI-caliber hardware in ten years” is positively fantastical even under the most favorable Moore’s Law assumptions.
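
To see how flimsy the underlying number is, here is the sort of back-of-envelope arithmetic these comparisons rest on, written out as a toy calculation. Every constant in it is an assumption I am supplying for illustration, and each can plausibly be off by an order of magnitude or more:

    # Toy version of the arithmetic behind "computer X matches brain Y" claims.
    # Every constant below is an assumption, not a measurement.
    neurons = 7.0e7             # rough mouse-brain neuron count (assumed)
    synapses_per_neuron = 1e3   # assumed average connectivity
    firing_rate_hz = 10         # assumed average spike rate
    ops_per_synaptic_event = 1  # the most contestable assumption of all

    mouse_ops_per_sec = (neurons * synapses_per_neuron
                         * firing_rate_hz * ops_per_synaptic_event)
    print(f"'mouse brain' estimate: {mouse_ops_per_sec:.1e} ops/sec")

Nudge any one of those numbers and the “comparable to a $1,000 computer” conclusion appears or vanishes, which is precisely the problem.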

Of course, even with that kind of hardware, AGI is not guaranteed; it takes architecture as much as computational power to get to intelligence. Urban recognizes this, but his proposed “solutions” to this problem again betray a misunderstanding of both the state of the science and our capabilities. For example, his “emulate the brain” solution is basically bog-standard connectionism. Not that connectionism is bad or hasn’t produced some pretty interesting results, but neuroscientists have known for a long time now that the integrate-and-fire point neuron of connectionist models is a very, very, very loose abstraction that doesn’t come close to capturing all the complexities of what happens in the brain. As this paper on “the neuron doctrine” (PDF) makes clear, the actual biology of neural interaction is fiendishly complicated, and the simple “fire together, wire together” formalism is a grossly inadequate (if also usefully tractable) simplification. Likewise, the “whole brain simulation” story fails to take into account the real biological complexities involved in faithfully simulating neuronal interactions. Urban links to an article which claims that whole-brain emulation of C. elegans has been achieved, but while the work done by the OpenWorm folks is certainly impressive, it’s still a deeply simplified model. It’s hard to gauge from the video how closely the robot-worm’s behavior matches the real worm’s; it likely exhibits at least some of the behaviors the real worm does, but I doubt that even its creators would claim ecological validity for their model. At the very best, it’s a proof of principle regarding how one might go about doing something like this in the future; and keep in mind that this is a 300-neuron creature whose connectome is entirely known.
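
For readers who have not met it, the abstraction in question really is this thin: a leaky integrate-and-fire point neuron fits in a dozen lines. The sketch below uses generic textbook-style parameter values of my own choosing, not anything from the paper or from Urban:

    # A leaky integrate-and-fire "point neuron": the entire cell reduced to one
    # membrane voltage. No dendrites, no ion-channel dynamics, no neuromodulation,
    # no glia: exactly the complaint above.
    def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-65.0, resistance=10.0):
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Euler step of tau * dv/dt = -(v - v_rest) + R * I
            v += dt / tau * (-(v - v_rest) + resistance * i_in)
            if v >= v_thresh:        # threshold crossing: emit a spike...
                spike_times.append(step * dt)
                v = v_reset          # ...and reset instantly (no refractory period)
        return spike_times

    # One simulated second of constant drive (arbitrary units).
    print(len(simulate_lif([2.0] * 1000)), "spikes")

That single decaying voltage is the building block the “emulate the brain” strategies propose to scale up; the gap between it and an actual neuron is exactly the gap being glossed over.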

Nor are genetic algorithms likely to do the trick. Overall, the track record of genetic algorithms in actually producing useful results is decidedly mixed. In a recent talk I went to, Christos Papadimitriou, a pretty smart guy, flat-out claimed that “genetic algorithms don’t work” (PDF, page 18). I do not possess sufficient expertise to judge the truth of this statement, but I think the probability that genetic algorithms will provide the solution is low. It does not help that we only “know” what we’re aiming for in the loosest sense; in truth we have no idea what we’re optimizing for, and our end-goal is something of the “we know it when we see it” variety, which isn’t something that lends itself terribly well to a targeted search. Evolution, unlike humans, optimized for certain sorts of local fitness maxima (to put it very, very simply), and wound up producing something that couldn’t possibly have been targeted for in such an explicit way.
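
In skeleton form, a genetic algorithm makes the difficulty plain: the entire search is steered by a fitness function you can actually write down, which is exactly what we lack for intelligence. The toy objective below (count the 1 bits in a string) is a placeholder of my own, not a proposal:

    import random

    # Skeleton genetic algorithm. Everything hinges on `fitness`; replace the toy
    # objective with "how intelligent is this genome?" and there is nothing to write.
    def fitness(genome):
        return sum(genome)  # toy objective: maximize the number of 1 bits

    def evolve(pop_size=50, genome_len=32, generations=100, mutation_rate=0.01):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)      # one-point crossover
                child = a[:cut] + b[cut:]
                child = [bit ^ (random.random() < mutation_rate) for bit in child]
                children.append(child)
            population = children
        return max(population, key=fitness)

    print(fitness(evolve()))  # converges to (or near) the all-ones string

Evolution never had to state its objective in advance; a genetic algorithm does, and that is the part nobody knows how to write for intelligence.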

All of this is to say that knowing the connectome and having some computing power at your disposal is a necessary but not sufficient condition for replicating even simple organismic functionality. Going from even a complete map of the human brain to a model of how that brain produces intelligence is not a simple mapping, nor is it just a matter of how many gigaflops you can execute. You have to have the right theory, or your computational power isn’t worth that much. A huge problem one runs into when speaking with actual neuroscientists is that there is a real dearth of theoretical machinery that even begins to accurately represent intelligence as a whole, and it isn’t for lack of trying.

The concluding discussion of what an AI might look like in relation to humans is hopelessly muddled. We barely have any coherent notion of how to quantify existing human intelligence, much less a possible artificial one. There’s no particular reason to think that intelligence follows some kind of linear scale, or that “170,000 times more intelligent than a human” is any sort of meaningful statement, rather than a number thrown out into the conversation without any context.

The problem with the entire conversation surrounding AI is that it’s almost entirely divorced from the realities of both neuroscience and computer science. The boosterism that emanates from places like the Singularity Institute and from people like Kurzweil and his epigones is hardly anything more than science fiction. Their projections are mostly obtained by drawing straight lines through some judiciously selected data, and their conjectures about what may or may not be possible are mostly based on wishful thinking. It’s disappointing that Urban’s three weeks of research have produced a piece that reads like an SI press release, rather than any sort of sophisticated discussion of either the current state of the AI field or the tendentious and faulty logic driving the hype.

None of this is to say that we should be pessimists about the possibility of artificial intelligence. As a materialist, I don’t believe that humans are somehow imbued with any special metaphysical status that is barred to machines. I hold out hope that some day we will, through diligent research into the structure of existing brains, human and otherwise, unravel the mystery of intelligence. But holding out hope is one thing; selling it as a foregone conclusion is quite another. Concocting bizarre stories about superintelligent machines capable of manipulating individual atoms through, apparently, the sheer power of will, is just fabulism. Perhaps no more succinct and accurate summary of this attitude has ever been formulated than that written by John Campbell of Pictures for Sad Children fame:

it’s flying car bullshit: surely the world will conform to our speculative fiction, surely we’re the ones who will get to live in the future. it gives spiritual significance to technology developed primarily for entertainment or warfare, and gives nerds something to obsess over that isn’t the crushing vacuousness of their lives

Maybe that’s a bit ungenerous, but I find that it’s largely true. Obsession about AI futures is not even a first world problem as much as a problem for a world that has never existed and might never exist. It’s like worrying about how you’ll interact with the aliens that you’re going to find on the other side of the wormhole before you even know how to get out of the solar system without it taking decades.