Missives from Beyond the Galactic Rim

Ok, maybe not the galactic rim. Maybe not even the Oort cloud. Let’s say the asteroid belt because how could you tell the difference anyway?

One of my problems is that I like to start things but not finish them. That applies to writing too: plenty of half-completed musings sit around on my hard drive, awaiting the day when I return and end them with a pithy punchline. But then the punchline never materializes, and it gets harder and harder to make myself wrap up my thoughts.

So I’ll be trying a slightly different tack, moving to a more discursive and meandering and less focused approach. I’m thinking that without the pressure to come to a neat conclusion, I might be more willing to just get my thoughts out. That means that the things you might read in this space in the near future may not have a tight thesis with three supporting paragraphs or whatever; they may not have any particular purpose at all other than as an exploration of whatever particular idea I’m entertaining at the moment. But since I’m not writing for an academic audience (actually: what audience am I writing for? So far as I can tell, it consists of me), concision and focus be damned. Reviewer #2 is unwelcome in these parts.

Enjoy. Or don’t. But better if you do.

Odi et Amo: Programming Edition

One of the things about being a programmer is that you use a lot of different tools, where “tool” is roughly defined as a discrete program that accomplishes some particular task. IDEs are tools; the command line is many tools accessed through a common interface; programming languages themselves are tools, which are typically surrounded by a larger ecosystem of other tools that are necessary to get things done; and so on. Most of these tools can be combined with one another, so a given user’s toolchain can, in theory, comprise any one of an exponentially exploding number of tool combinations.

The tragedy of the situation is how bad so many of these tools are.

There’s a phenomenon, which I think is more prevalent in the software development world than in other technical disciplines, that licenses half-assed work. There are a lot of contributing factors to this attitude. For example, the fact that most software isn’t running critical operations means that bugs are, relatively speaking, low priority. It’s just not that important to get it right the first time, unlike, say, building a bridge or engineering a car. Combine that with the valorization of hacking and you get a recipe for lots of shoddy software. Furthermore, there’s a legacy element to software that makes it hard to correct old mistakes or fix bad old tools; unlike physical tools, which wear down and have to be replaced, code is basically forever. So if a bad tool makes its way into the chain by, e.g., having been constructed earlier than others, it’s near-impossible to get rid of it.

The other attitudinal problem that software development has as a community is a predisposition towards “tough it out” thinking. This manifests as a tendency to disregard such factors as user experience and design quality and to shift the blame for any problems with the tool onto the user. These attitudes usually don’t appear (or are repressed) in environments where user experience is actually important, i.e. where money changes hands, but those are situations in which software developers are selling their product to non-developers or the general public, rather than creating tools for each other. When it comes to tools that we actually use, the conventional wisdom is that it’s the user’s responsibility to adapt to the tool, rather than the opposite. This problem is exacerbated by the fact that tools which became entrenched through early adoption, in the days when almost no tools existed, have only marginal incentive to improve.

As a result of these factors, we’re left using a lot of extremely shitty tools to perform development tasks. The various shells that originated with Unix and have been passed down to its spiritual descendants are uniformly terrible. Shell scripts have a barely-comprehensible syntax while providing a fraction of the power of real programming languages. And yet these pieces of legacy code are used to maintain entire operating systems. That seems clearly absurd, but in a world where no scripting languages existed, a shitty scripting language was better than none.

Of course, when real scripting languages appeared they only improved the situation marginally. Perl was still shitty, but it allowed you to break so many more things, and it had regexes as first-class objects for some reason, so you didn’t have to pipe your text to awk (and learn yet another syntax); people jumped all over it. Ignoring the fact that it initially had almost none of the useful abstractions of more advanced programming languages like Common Lisp; ignoring that its syntax was cobbled together from an unholy mixture of C, awk, and shell; ignoring that Larry Wall, for reasons that can only be assumed to be sadistic, designed it to mimic natural language; the internet went bonkers over it and adopted it wholesale. And now, even though it’s slowly dying out, every once in a while a programmer logs into some old system and discovers some legacy Perl scripts whose author is long gone and which are utterly incomprehensible. The old saw about Perl being the glue of the internet is apt: it’s a nasty, tacky substance which gets into small niches from which it can never be extracted. Perl is built on the philosophy that there’s more than one way to do it and you should never be prevented from picking the wrong way. Perl sucks.

Autotools sucks. Autotools started out as a script that some guy wrote to manage his Makefiles, which also suck. The idea behind Autotools is correct, namely, that you shouldn’t have to write per-configuration Makefiles by hand, but it goes about it in a bizarre way. Take a look at the truly byzantine dependency graph on the Autotools Wikipedia page. There are a ton of moving parts, each one with its own configuration logic. Naturally all of this is written in shell script instead of a real language, so Cthulhu help you if you ever have to get down into the weeds of Autotools. Most people run it like a ritual invocation; you just do the minimal amount necessary to get your project to build and hope nothing ever breaks. Actually, Autotools is written in at least two languages, because it uses the m4 macro processor, which Kernighan and Ritchie wrote for C back in the Paleolithic. Hmm, what other language do I know of that is useful for writing domain-specific languages because of its advanced macro capabilities? But of course using Common Lisp would have been too obvious, so m4, which, again, has a totally incomprehensible syntax and no real programming language functionality to speak of, is what gets used. Thanks to a lot of talented people wasting a substantial portion of their lives, Autotools has been brought to a level where people can actually use it, with the result that this nightmarish rats’ nest of code has become irreplaceable basically forever. Autotools is the abyss.

PHP is a fractal of bad design. Created by someone who wasn’t really interested in programming for people who also apparently aren’t interested in programming, or consistency, or reliable behavior, or really any other normal hallmarks of a functional piece of software, PHP caught on like consumption at a gathering of 19th century Romantics, because it allowed you to make terrible web pages, which is what everyone in the 90s wanted to do. However many revisions later, people are still phasing out shitty old features of the original language in the hopes of someday creating something one-third as pleasant to use as Python.

Make is terrible and confusing. Autotools was created so that you wouldn’t have to write Makefiles by hand, which tells you something about what a pleasant experience that was. Make was initially purely a rule-based system, but at some point it dawned on folks that perhaps they’d like to have things like “iteration” and “conditionals” in their build process, so naturally those got grafted onto Make during, I assume, some sort of Witches’ Sabbath, with Satan’s presence consecrating the unholy union. Despite being created contemporaneously with many of the tools mentioned above, Make does not share syntax with them. In order to avoid writing Makefiles, a complicated tool called cmake was invented, which allows you to write the files that write the Makefiles in yet another syntax whose comprehensibility falls somewhere between that of make itself and a shell script. As per Greenspun’s 10th Rule, Make almost certainly contains at least a portion of a working Common Lisp interpreter. Make sucks.

All these terrible and weird legacy pieces of code have survived down the generations from early times when they were nothing more than convenient hacks that made it possible to automate things. Over the years, they’ve accreted corrections and version numbers and functionality, and eventually either the process of using them was made somewhat tolerable or most users were insulated from the messy core by layers and layers of supporting infrastructure. Because replacing old stuff is hard, and because code doesn’t wear out the way that hardware does (and also because most of the cost of usability falls on the developers themselves), these tools just persist forever. Any discussion of their terrible usability or their shortcomings is met with, at best, indifferent shrugs (“It’s too bad, but who’s going to take on that job?”) or outright hostility. People become habituated to their tools and view any suggestion that they might be inadequate as a personal attack. Just check out that PHP post, in which a bunch of people in the comments defend PHP on the grounds that “it’s weird but we’ve gotten used to it!” Well, you can get used to driving a car with a faulty alignment or driving nails with a microscope, but that doesn’t mean you should. If you bring up Perl’s syntax or its weird referencing rules, you’ll be told that you should just memorize these things, and that once you do it’s not that big of a deal. Suggestions that perhaps knowledge of modern programming practices should be put to good use by creating replacements for tools that behave in opaque and hard-to-understand ways are greeted with incredulity at the heresy.

As developers, I think we do ourselves no favors this way. We should demand, and work to build, better tools. We should have build systems that can be configured in a language that’s easy to parse and understand. We should make use of the strengths of the languages we do have, so that when we need a macro expander, we have one in Lisp or one of its variants. We should have languages that don’t confuse us with unnecessary visual clutter and which are easy to read. We should not be afraid to abandon old tools just because they’re old and were created by esteemed personages at the dawn of programming. We should, above all, pay lots of attention to human factors and usability studies, because human time is precious but programming time is cheap. We should, in the end, not be afraid of change, of learning from past mistakes, and of abandoning rather than perpetuating legacy code. That’s my presidential platform; write me in next November.

Some Thoughts About Amazon

A recent New York Times article examining the alleged problems with Amazon’s work culture has been making waves all week. Depending on whom you want to believe, Amazon is either the province of the damned, chained to their cubicles and forced to work while being whipped by demons, or a glorious utopia of technological innovation where no one is ever unhappy. This unresolvable war of cross-firing anecdotes is impossible to adjudicate from the outside, for the simple reason that only Amazon could even collect the necessary data to do that, and it wouldn’t make them public in any case. So anyway, this prompted in me a few loosely-connected observations, presented in roughly ascending order of how interesting I find them:

  1. Large organizations are like the rainforests they’re sometimes named after: if you go looking for something, you’re likely to find either that thing or a reasonable facsimile thereof. If what you’re looking for is team dysfunction and people being drummed out of the company for having had the temerity to get cancer, you’ll find that; if you’re looking for a functional team of normal adults who treat each other well and all go home satisfied at the end of the day, I bet you could find that as well. Interviews with newspaper reporters aren’t nothing, but they’re not company-wide statistics, and neither are anecdotes from some guy who really loves it there. It wouldn’t be impossible to set up an experiment that attempted to describe at a macro level the effects of Amazon’s internal culture, but it would require a pretty serious resource investment from Amazon itself, which, despite its claims of being very data-driven, I doubt it would actually undertake.

  2. One theme that sounds throughout the Amazonians’ replies to the NYT article is that the high-criticism stack-ranking culture just has to be the way it is in order for Amazon to be at its most awesomest. The natural question this raises is: how do they, or anyone, know that? Has Amazon ever experimented with any other system? What, put simply, is the control group for this comparison? Without this information, justifications of ostensibly bad culture practices are nothing more than post hoc rationalizations by the survivors. Clearly this hazing made me into a superlative soldier/frat brother/programmer, so suck it up! Also recognizable as the kind of justification offered by people who beat their children. You’d think that an organization as allegedly devoted to data gathering as Amazon would have done some controlled studies on these questions, but my guess is that Amazon gives precisely zero fucks about whether its culture is poisonous or not, except insofar as it affects its public image. There’s basically no incentive to care, since there’s always another fresh-out-of-college 23-year-old programmer to hire.

  3. Another common theme that Amazon’s defenders (and the tech world’s agitprop more generally) play again and again is that of SOLVING THE VERY CHALLENGINGEST OF PROBLEMS. Here’s a thing that a grown-up person actually wrote:

    Yes. Amazon is, without question, the most innovative technology company in the world. The hardest problems in technology, bar none, are solved at Amazon.

    This, of course, is totally fucking ludicrous, and yet no one seems to ever question these claims. Obviously Amazon has some fairly serious problems that need solving; that would be true of almost any organization of its scale and scope. But in the end, those problems are about how to make the delivery of widgets slightly more efficient, so you can get your shit in two days instead of three. This, of course, twins with the tech world’s savior complex: not only are we solving the most challenging problems but they also happen to be the most pressing ones and also the ones that will result in the greatest improvements to standards of living/gross national happiness/overall karmic state of the universe. It’s never enough to merely deliver a successful business product if that product doesn’t come with messianic pretensions. So it is with Amazon, which must sell itself as the innovatingest innovator that ever innovated if it hopes to keep attracting those 23-year-olds. These grandiose claims are hard to square with the reality that marginal improvements in supply chain management and customer experience, while good for the bottom line (or, I guess in Amazon’s case, investors) and certainly not technically trivial, ain’t the fucking cure for cancer or even a Mars rover. If your shit gets here in three days after all, you’ll survive. Or to put it another way, Bell Labs invented C and UNIX and also won eight Nobel Prizes in Physics. That’s what actual innovation looks like.

Sports Still Not a Morality Play

The St. Louis Cardinals’ inept illegal access of the Houston Astros’ database is a hilarious sports scandal for many reasons. As an IT professional, I am giddy with inappropriate excitement over the Astros’ terrible password policies, but as a hater of cheap sentiment and unctuous mythmaking, I’m super-delighted that this happened to the Cards.

I don’t follow baseball at all, but if you read any sort of sports media, it’s impossible to escape the cult that the Cardinals have wrapped themselves in. Not content to be merely one of the most successful teams of all time, the Cardinals PR machine puts out endless reams of propaganda about how the organization “wins the right way” and is just such a moral paragon. That this has now backfired on them in the worst way possible (federal indictments might be coming!) is just the most delicious of ironies.

Here’s the thing: we routinely conflate external characteristics with internal virtue, or lack thereof. Not just in sports, but in society generally. Rich and attractive people are perceived to be more virtuous than poor and ugly ones, despite the fact that there’s no connection whatsoever between these things. Still, sports is particularly bad at this; there’s no more tired sports cliche than the assertion that on-field performance reflects personal worth, even though it’s manifestly untrue. What this story should teach us, but won’t, is that winning and being a good person are totally unconnected. Winning is a function of team or individual performance in a contest of skill, and being a good person is, well, a song from an entirely different opera, as my people like to say. Teams should, but will not, stop wrapping themselves in moralistic language and pretending that their sports triumphs are indicative of anything other than their performance in those contests. Sports teams aren’t moral undertakings; they’re businesses designed for entertainment, and if they succeed at entertaining us, that ought to be enough.

It turns out that good people often lose and bad people often triumph, and there’s no real rhyme or reason to it. It’s nice when “good guys” win, but being a good guy guarantees nothing. You know, kinda like life.

English and the Political Language

Among the strangest phenomena of American political life is one politician accusing another of “playing politics.” This terrible locution is bipartisan, employed as often by liberals as by conservatives, and I don’t know of another area of human activity in which practice partly consists of denying the existence of the very activity you are engaged in. To accuse a basketball coach of “playing basketball” or an engineer of “playing engineering” would be nonsensical, and yet in politics we routinely hear such accusations leveled.

Like any piece of widely employed nonsense, this phrasing does, of course, carry a certain kind of semantic content, one conveyed not so much by the phrase itself as by the fact of it being uttered. What does it mean, to “play politics?” That depends on where and how you split the phrase. In its naive usage, “playing politics” is normally used to signify that one’s opponent has taken a “non-political” question and rendered it political, somehow. For example, liberals are often accused of “playing politics with the troops” when either attempting to curb American warmaking abroad or provide some support for returning soldiers domestically; by the same token, conservatives will be called out for, say, “playing politics with women’s lives,” when attempting to enact limits on reproductive rights.

The paradoxical nature of the “playing politics” maneuver is its ubiquitous deployment by political actors engaged in the political process of achieving political goals. What is the question of, say, reproductive rights, if not a political issue? The actions of politicians carried out in the course of their professional work are almost definitionally “politics,” and the attempt to prevent the political success of an opponent is, again, definitionally political. So: what purpose does it serve? On my reading, one operation accomplished by the accusation of “playing politics” or “politicization” is the suggestion that politics itself is a kind of alien enterprise that no one should engage in. At the same time, by deploying this rhetoric, its user seeks to position themselves on the ground of consensus: all reasonable non-political people acknowledge the universal rightness of my position, and it is only the political operative who disagrees. Thus: to be political is to stand in fundamental disagreement with a presumed rightness. And more: to be political, to politicize, is to acknowledge conflict where the accuser demands recognition of trans-political necessary truth. It’s not just that the personal is not held to be political, but even the political itself is transformed into a dishonorable practice.

That’s the “politics” fork of “playing politics.” What about the “playing?” To accuse someone of playing is, firstly, to accuse them of a sort of insincerity. You are not truly a fan of 1960s avant-garde Czech cinema; you are merely playing at being one for nefarious purposes (hipster cred, presumably). In politics, that translates as follows: you are not really concerned about the issue that you claim to be concerned about; you are merely putting on a sort of act by pretending concern. While it’s certainly true that political debates are full of what might generously be described as concern-trolling, we do have a language for calling bullshit on those things: we merely say that the speaker is lying. Whether true or not, an accusation of lying is at least intelligible and, presumably, open to some sort of independent adjudication with reference to the facts at hand. But “playing politics” is precisely the kind of slippery non-phrase that can never be proven or disproven. Are we truly concerned, or is our political face merely an actor’s mask worn over the face we present in everyday political life? How can you tell the dancer from the dance? This of course is an unanswerable question, with unanswerability being just the point: the goal is not to establish a fact but to sow doubt.

A secondary, complementary meaning of the accusation of “playing” is to imply that the accused regards the process as a kind of game, games being the sorts of things you play. In other words: the accused may or may not really care about the issue at hand, but is really employing it as a kind of point-scoring maneuver in a game that has no purpose beyond itself. This dovetails neatly with the first fork, which seeks to convey the sense of politics as a fundamentally alien activity. If politics is, in fact, alien, that is, if it has no real relevance to our lives, then of course any political engagement can only be understood not as an expression of particular principles, but rather as just another game in which the goal is not to achieve any particular end, but rather to “defeat” whatever opponent stands in your way. Couple that to the accusation of insincerity, and more doubt is sown. The irony of this reading is that there really does exist an entire class of people for whom politics really is something of a social game; it’s just that this class overwhelmingly comprises various pundits and other political hangers-on (e.g. David Brooks, Tom Friedman, Maureen Dowd, etc.) for whom actual political practice would entail, well, too much work. But the people actually doing the work, whether you deem that work good or bad, are not playing but practicing.

The reason I object so strongly to the use of this formulation is that, like all euphemisms, it crowds out meaningful understanding of its subject. To insinuate that politics is something apart from life is to mistakenly assume that it can be bracketed off from your existence; to accuse an opponent of being engaged in a kind of sophisticated pretense is to misjudge their motivations and the strength of their convictions. The accusation of “playing politics” serves to conceal the existence of genuine, perhaps ultimately irreconcilable conflicts by removing those conflicts to a realm of seeming abstraction inhabited by people who are not engaged in anything real.

Unfortunately, American political discourse is fundamentally infantile, conducted on a level that should be embarrassing to a sixth-grader, much less to grown adults. So we get constructions like this, in which the very act of achieving a political end takes the form of denying that politics exists at all. Our political language is in quite a bad way.

Stupid People Arguing About Stupid Things

Earlier today I was listening to yesterday’s podcast of the Diane Rehm Show, on which the panel was discussing what the Amtrak accident means in light of our decaying infrastructure. Unfortunately, as is often the case with discussions of public transit, the debate eventually got bogged down in a very stupid Republican talking point. Basically, any time Republicans encounter government money being spent on something they don’t like (as opposed to Good And True things like bombing Middle Eastern countries), they’ll complain about those things being “subsidized.” Why are we subsidizing Amtrak passengers?! cries Rep. Andrew Harris of Maryland, idiot.

Ed Rendell, a person who seems to have something resembling a functional nervous system, sensibly replied that all transit systems everywhere are subsidized. Unfortunately, while getting the particulars right, Rendell neglected to defend the larger principle. Ignore for the moment the fact that automotive transport has been the beneficiary of innumerable government subsidies for decades, not least of which is the interstate highway system itself, whose imminent collapse is going to kill us all presently because we won’t spend the money to repair it.

The larger principle that Rendell should have defended, but which apparently cannot be uttered in polite company, is that sometimes it makes sense to subsidize stuff. We “subsidize” public education, for example; we do it poorly and often reluctantly, and usually in racially inequitable ways, but we do do it. There are undertakings that we, as a society, deem worthwhile, and that means that we can choose to spend public resources on them. There’s nothing wrong with that determination! Rendell’s hemming on the issue serves to obscure this basic point, but it’s just as true of alternative energy or education as it is of infrastructure or public transit. There’s no magic way to get something you want without paying for it, and yet the inability to openly acknowledge this basic fact continues to hamper the ability to push for necessary public works.

These are the fruits of decades of well-poisoning on the part of conservatives with regard to any notion of the public good. Even people who ostensibly favor such public efforts cannot bring themselves to say with a straight face that yes, these things are good, and we can and should spend money to achieve them. “Subsidy” is not a dirty word; it’s an integral part of development throughout the history of this country.

Tolerable Cruelty

If you want to read a sad, sad story of how miserably our standard approaches to drug addiction have fared, check out this long investigation into the lives and deaths of heroin and prescription opioid users in Kentucky. It takes a long time to get through; I think I needed an uninterrupted hour, at least, to finish reading it. The picture painted therein is not so much grim as nearly hopeless. I will spare you the suspense: we have in our toolbox drugs that could, very possibly, eliminate the threat of relapse and subsequent deaths from overdose for most addicts, and we refuse to use them on preposterous “moral” grounds.

There’s simply too much good reporting in the linked piece for me to be able to summarize it in a way that does it justice, but a basic theme keeps emerging again and again: there’s a conflict between what we know works from a scientific and medical standpoint, and what facilities and people who are nominally charged with caring for addicts are actually dispensing. What we know works is something that blocks the withdrawal symptoms and eliminates the cravings, preferably without making the user sick. That something is called suboxone, and it is, as the article notes, pretty much the “standard of care” for treating opioid addiction.

What gets meted out to addicts, on the other hand, is best described as moralistic bullshit. Interview after interview cited in the article has people saying things like suboxone is “not sobriety… it’s being alive but you’re not clean and sober.” Or: “[treatment] is a drug-free model. There’s kind of a conflict between drug-free and suboxone.” Or, and this for me is maybe the worst of all because it is not only scientific ignorance but, in my view, actual judicial malpractice, the case of Judge Karen Thomas, who literally orders addicts off suboxone if they want a sentencing reduction. It’s hard to imagine the callousness required to utter the following:

“I understand they are talking about harm reduction,” Thomas said. “Those things don’t work in the criminal justice system.” In a subsequent interview, the judge added, “It sounds terrible, but I don’t give them a choice. This is the structure that I’m comfortable with.”

This is where we are as a society: the comfort of a judge taking precedence over medical standards of care.

Our model of thinking about addiction is, unfortunately, skewed because, as the article points out, addiction treatments got under way before we really understood anything about how it affects the brain. But the problem goes deeper than that. Consider the language used by those who speak negatively of suboxone, and you find the same words and phrases making an appearance across the board: “clean”, “abstinence”, “drug-free.” Why do these particular locutions have any moral weight? After all, we would not say that a cancer patient must remain “clean” or “drug-free.” We understand that cancer is a disease and that those who have it are not morally culpable for it[1]. We generally accept that treatment of diseases frequently involves the consumption of various drugs; all the talk about purity goes out the window when you come down with pneumonia.

Unfortunately, we routinely fail to extend this understanding to mental illness. Our folk theory of mind is terribly suited for talking about mental illness as actual illness. Or, if you prefer, the scientific image is not nearly as appealing as the manifest image. To suggest that an addict is sick rather than wicked seems to remove the possibility of condemnation, and if there’s one thing we’re desperately attached to in this country, it’s the ritual of condemning people for moral laxity. To use Judge Thomas’ terms, we just aren’t comfortable with a medical model of the brain, and our comfort clearly should take precedence over people’s real lives.

Cleanliness, purity, abstinence: whence the moral valence of these terms? They suggest a kind of “natural” state, uncorrupted by external influences. The mind as unsullied Eden, so to speak. Where the moral valence of that comes from, I don’t need to tell you. Out of this obsession with the rhetoric of the purge comes the idea that if addicts fail, it’s because they want to fail; if they had wanted to succeed, they would have. A circularly self-justifying chain of reasoning that admits no breaks into which some notion of medical effectiveness could penetrate. Cheap moralism, all the cheaper for the fact that the moralists never need justify themselves, operating as they do against a backdrop of erroneous assumptions about the nature of health and illness and about the mind’s relation to the body. Cartesian dualism is a hell of a drug, as deadly in its own way as any opioid.

It’s not an accident, of course, that blue Minnesota has seen successes where red Kentucky has failed. As usual, liberal states are much more willing to move from moralistic scolding to an attempt to actually do something about the problem[2]. Massachusetts and Maryland have had some success as well.

Ever since I learned about it from Rorty’s Contingency, Irony, and Solidarity, I’ve loved Judith Shklar’s definition of a liberal as someone who thinks that being cruel is the worst thing that one can do. And what is the denial of medical treatment but the most abject cruelty, visited by the state on some of its most vulnerable members, in service of a misguided attachment to a moral language it can barely articulate? This is the damage that the short-circuiting rhetoric of purity can do, measured in actual human lives.

[1] Or, at least, most of us understand this. There’s no shortage of people in the world more than happy to take to task a cancer patient for not having lived an appropriately “clean” life, but they tend to occupy the fringe rather than the mainstream.

[2] Although, unfortunately, not as willing as they should be: far too many liberals ascribe unnecessary moral properties to “purity” and “cleanliness,” as the anti-vax and anti-GMO movements readily demonstrate.

Melian Dialogues

I don’t have anything terribly original to say on the topic of the current events in Baltimore. Anyone with two eyes, a few neurons to rub together, and a sense of history can understand for themselves that, whatever you think of riots as a political or moral phenomenon, it is impossible to detach those events from their manifestation as the reaction of a brutalized populace. Take a few minutes to read Ta-Nehisi Coates on this topic, then come back if you feel like it.

So now, a few words about the rhetoric of (non-)violence. Hardly anything in American political life is so reliable as the Grave Concerns of Very Serious People whenever more than three black people show up in the same spatiotemporal vicinity to express some degree of dissatisfaction with their treatment by the American police state; the ritual bemoaning of violence is metronomic in its regularity. Curiously, those same Very Serious People are somehow very quiet when it comes to the violence perpetrated upon those very same black people. Instead, we are subjected to the standard casual racism disguised as responsibility politics: that guy shouldn’t have run, this woman shouldn’t have spoken up, this other guy should have complied. Ad nauseam, ad infinitum.

There just isn’t any way of reconciling this double standard that’s actually fair to the facts at hand, which is why any conversation on this topic turns into a prime example of goalpost-moving and evasion. “But destruction of property is still wrong!” “But you should still comply with police orders!” “But what about black-on-black crime?” Anything to avoid the unpleasant fact that police are enabled by the state to take lives with virtually no repercussions whatsoever, and that the lives taken are disproportionately black ones. It’s Thucydides meets Weber: the powerful exact what they can, and the weak grant what they must, coupled with the state’s monopoly on violence and discretion in its distribution. No one familiar with the history of race in America ought to be surprised when this lethal mixture distributes that violence disproportionately onto African-Americans.

The Grave Concerns present an insurmountable double bind. On the one hand, no one will speak for you if the police decide they like you better dead; on the other hand, no public expressions of outrage are allowed, lest you be labeled “violent.” Usually at this stage the Very Serious People suggest that the way to reform is through the voting booth, which only serves to remind everyone that none of these Very Serious People have ever lived as members of a politically disempowered community. The VSPs tend to have a romantic notion of that decidedly un-romantic Weberian formulation, “the strong and slow boring of hard boards,” mostly because it allows an escape into cliche rather than obligating someone to actually do something, the way a real moral outrage would. After all, it’s not them who will have to do the actual boring, it’s other people, and for other people, especially other black people, justice can wait. Forever, if need be. Never mind that not being subjected to the arbitrary lethal power of the state manifested in its police force is one of those pretty basic things that one would think wouldn’t require “reform” to make happen.

This vicious circle will continue until it becomes an accepted fact in American politics that black lives are worth as much as white ones, and that a system of racial terror imposed on black communities is morally untenable. As long as that system persists, we’ll see more Baltimores, for the simple reason that public protests are the only visible way that black communities have of pushing back against it. All the pearl-clutching over destroyed property and over violence inappropriately issuing from rather than being directed at black people is just a way of avoiding that basic realization.

Ruminations on Science

The text below was modified slightly from a comment I left over at Crooked Timber. After writing it, I thought it held up ok as a separate piece of writing, divorced from the comment thread, so I’m just posting it here with minimal alterations:

Science is hard. It’s just really difficult to achieve even a small amount of mastery in an area of your own alleged expertise. There’s just so much of it, and so much more appearing every day. There are varying responses to this problem. One response is to just write off any results that disagree with conclusions that one has already reached by other means. Another is to set up institutions in which legitimate queries after truth can actually be carried out and debated. That’s a great meta-solution, in my view, but unsurprisingly it comes with its own meta-problems. Now you’ve got this whole other layer of professional scientists who, to the untutored observer, appear interposed priestlike between you and the truth. As with any sufficiently complex (i.e. involving more than 5 people) institution, mystification sets in. If you’re already predisposed to disregard what the scientists are saying in the first place, what is in reality an imperfect mechanism for adjudicating truth claims begins looking like a conspiracy to suppress your great uncle’s naturopathic cure for cancer. And the thing about conspiracies is that they can never be disproven; any evidence counter to the conspiracist conclusion is merely additional proof that those who offer the evidence are in on the conspiracy.

In the right (wrong) sorts of circumstances, this problem becomes a horrible vicious circle. It can only be resolved by taking a step back and trying to understand science as a human institution and scientists as human practitioners; in other words, trying to figure out what scientists are doing and why. That is also very hard, especially if you come from outside a scientific discipline, because you’ll be entering into discussions in which you lack the requisite terminology for understanding all the little details. That’s why scientific communication is a two-way street: if the average person holds some responsibility for trying to understand how science gets done, then scientists have commensurate responsibility to explain that process in a way that’s understandable. Sadly, scientists have often failed at this task; those who can do it well, like Carl Sagan, Neil deGrasse Tyson, and P.Z. Myers, are worth their weight in gold because they’re quite rare.

The problem with people like global warming deniers and the anti-vax crowd is that everything they do undermines these institutions. If you only care about being right instead of getting it right (parsing the distinction is left as an exercise for the reader), then all this stuff like peer review and independent verification is just so much cruft that you can discard when it runs up against something you want badly to be true. The danger of that is that sooner or later you’ll cut down the very tree you sit in, as the Russian expression goes, and when you actually require those mechanisms and institutions to function properly because they impact your own life, they won’t.

Toothpicks and Bubblegum, Software Edition, Iteration 326

There’s nothing like working with an old *nix utility to remind you how brittle software is. Case in point: I’m trying to use flex and bison to design a very simple grammar for extracting some information from plaintext. I’m going by the book and everything, and it just doesn’t work. It keeps telling me it caught a PERIOD as its lookahead token when it expected a WORD and dies with a syntax error. I killed a whole day trying to track this down before I realized one simple thing: the order of token declarations in the parser (that’s your .y file) must match the order of token declarations in the lexer (your .l file). If it doesn’t, neither bison nor flex will tell you about this, of course (and how could they, when neither program processes files intended for the other?). It’s just that your program will stubbornly insist, against all indications to the contrary, that it has indeed caught a PERIOD when it expected a WORD and refuse to validate perfectly grammatical text.

OH. MY. GOD.

I was so angry when this was happening and now I think I might be even angrier. Keep in mind that this fantastically pathological behavior is not documented anywhere, so I found myself completely baffled by what was happening. Where was PERIOD coming from? Why didn’t it just move on to the next valid token? Of course the correct thing is to include the generated tab.h file in the lexer, but I had written my token definitions out explicitly in the lexer file, so I didn’t think to do that.
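For anyone who trips over the same thing, here’s a minimal sketch of both the trap and the fix. The file names (parser.y, lexer.l) and the toy grammar are made up for illustration; the only point is the mechanism described above: bison assigns token codes in declaration order, so a lexer that defines its own codes in a different order returns numbers that the parser decodes as the wrong tokens.

    /* parser.y -- hypothetical minimal grammar. bison numbers the tokens in
       declaration order (here WORD first, then PERIOD). */
    %{
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
    %}
    %token WORD PERIOD
    %%
    sentence: words PERIOD ;
    words:    WORD
            | words WORD ;
    %%
    int main(void) { return yyparse(); }

    /* lexer.l -- the trap and the fix. */
    %option noyywrap
    %{
    /* The trap: hand-rolling the token codes in the opposite order, e.g.
           enum { PERIOD = 258, WORD = 259 };
       means every '.' the lexer returns carries the number the parser
       decodes as WORD. The fix: use the codes bison actually generated. */
    #include "parser.tab.h"
    %}
    %%
    [A-Za-z]+   { return WORD; }
    "."         { return PERIOD; }
    [ \t\n]+    { /* skip whitespace */ }
    %%

Generate the parser with bison -d (that’s what produces parser.tab.h), rebuild both halves together, and the phantom PERIOD disappears, because lexer and parser finally agree on what the numbers mean.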

What’s ludicrous about this is that the flex/bison toolchain has to go through yet another auxiliary tool, m4, just to do its thing. m4, if you don’t know, is a macro language with a terrible, incomprehensible syntax that was invented for the purposes of text transformation, thereby proving, years before its formulation, Greenspun’s 10th rule, according to which any sufficiently advanced C project will end up reimplementing, badly, some subset of Common Lisp.

I have the utmost respect for Dennis Ritchie, but m4 is a clusterfuck that should have never survived this long. Once a language like Lisp existed, which could actually give you code and DSL transformations at a high level of abstraction, m4 became superfluous. It has survived, like so many awful tools of its generation, through what I can only assume is inertia.