Toothpicks and Bubblegum, Software Edition, Iteration 326

There’s nothing like working with an old *nix utility to remind you how brittle software is. Case in point: I’m trying to use flex and bison to design a very simple grammar for extracting some information from plaintext. Going by the book and everything, and it just doesn’t work. Keeps telling me it caught a PERIOD as its lookahead token when it expected a WORD and dies with a syntax error. I killed a whole day trying to track this down before I realized one simple thing: the order of token declarations in the parser (that’s your .y file) must match the order of token declaration in the lexer (your .l file). If it doesn’t, neither bison nor flex will tell you about this, of course (and how could they, when neither program processes files intended for the other?). It’s just that your program will stubbornly insist, against all indications to the contrary, that it has indeed caught a PERIOD when it expected a WORD and refuse to validate perfectly grammatical text.

OH. MY. GOD.

I was so angry when this was happening, and now I think I might be even angrier. Keep in mind that this fantastically pathological behavior is not documented anywhere, so I found myself completely baffled by what was happening. Where was PERIOD coming from? Why didn’t it just move on to the next valid token? Of course the correct thing is to include the generated tab.h file in the lexer, but I had written my token definitions out explicitly in the lexer file, so I didn’t think to do that.
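
For posterity, here’s a minimal sketch of the failure and the fix. The grammar, file names, and token numbers below are invented for illustration; the shape of the problem is the one described above.

    /* parser.y: bison assigns each %token its own number, in declaration order */
    %{
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "error: %s\n", s); }
    %}
    %token WORD
    %token PERIOD
    %%
    sentence : words PERIOD   { printf("ok\n"); } ;
    words    : WORD | words WORD ;
    %%
    int main(void) { return yyparse(); }

    /* lexer.l: the only line that really matters here is the #include */
    %option noyywrap
    %{
    /* What I had, in effect: token numbers written out by hand, in an order
     * that didn't agree with what bison had assigned, so the numbers flex
     * returned meant something else entirely to the parser:
     *     #define PERIOD 258
     *     #define WORD   259
     * The fix: pull in the numbers bison actually generated. */
    #include "parser.tab.h"
    %}
    %%
    [A-Za-z]+    { return WORD; }
    "."          { return PERIOD; }
    [ \t\n]+     { /* skip whitespace */ }
    %%

    # build: bison -d parser.y && flex lexer.l && cc -o demo parser.tab.c lex.yy.c

With the include in place, flex and bison finally agree on what the numbers mean, and the phantom PERIOD goes away.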

What’s ludicrous about this is that the flex/bison toolchain has to go through yet another auxiliary tool, m4, just to do its thing. m4, if you don’t know, is a macro language with a terrible, incomprehensible syntax that was invented for the purposes of text transformation, thereby proving years before its formulation Greenspun’s 10th rule, according to which any sufficiently advanced C project will end up reimplementing, badly, some subset of Common Lisp.

I have the utmost respect for Dennis Ritchie, but m4 is a clusterfuck that should have never survived this long. Once a language like Lisp existed, which could actually give you code and DSL transformations at a high level of abstraction, m4 became superfluous. It has survived, like so many awful tools of its generation, through what I can only assume is inertia.

Five Years in the Future

Oh gosh, it has been a long time, hasn’t it? My deepest apologies, sports fans. You know how life is, always getting in the way. Perhaps this will spur a renewed stream of verbal output, but it’s just as likely to be a once-per-year salvo. Don’t get used to anything nice, my mother always told me.

Anyway, this too-prolix production has been made possible by a friend soliciting my input on the following article. That shit is long, so take a good thirty minutes out of your day if you plan to read it all, and then take another thirty to read this response, which I know you’re going to do because you love me that much.

I’ll save you some of the trouble by putting my thesis front and center so you can decide whether or not you want to continue reading or leave an angry comment: I think the linked piece is premised on some really flimsy assumptions and glosses over some serious problems, both empirical and logical, in its desire to attain its final destination. This is, sadly, par for the course in popular writing about AI; even very clever people often write very stupid things on this topic. There’s a lot of magical thinking going on in this particular corner of the Internet; much of it, I think, can be accounted for by a desire to believe in a bright new future about to dawn, coupled with a complete lack of consequences for being wrong about your predictions. That said, let’s get to the meat.

There are three basic problems with Tim Urban’s piece, and I’ll try and tackle all three of them. The first is that it relies throughout on entirely speculative and unjustified projections generated by noted “futurist” (here I would say, rather, charlatan, or perhaps huckster) Ray Kurzweil; these projections are the purest fantasy premised on selective interpretations of sparsely available data and once their validity is undermined, the rest of the thesis collapses pretty quickly. The second problem is that Urban repeatedly makes wild leaps of logic and inference to arrive at his favored result. Third, Urban repeatedly mischaracterizes or misunderstands the state of the science, and at one point even proposes a known mathematical and physical impossibility. There’s a sequel to Urban’s piece too, but I’ve only got it in me to tackle this one.

Conjectures and Refutations

Let me start with what’s easily the most objectionable part of Urban’s piece: the charts. Now, I realize that the charts are meant to be illustrative rather than precise scientific depictions of reality, but for all that they are still misleading. Let’s set aside for the moment the inherent difficulty of defining what exactly constitutes “human progress” and note that we don’t really have a good way of determining where we stand on that little plot even granting that such a plot could be made. Urban hints at this problem with his second “chart” (I guess I should really refer to them as “graphics” since they are not really charts in any meaningful sense), but then the problem basically disappears in favor of a fairly absurd thought experiment involving a time traveler from the 1750s. My general stance is that in all but the most circumscribed of cases, thought experiments are thoroughly useless, and I’d say that holds here. We just don’t know how a hypothetical time traveler retrieved from 250 years ago would react to modern society, and any extrapolation based on that idea should be suspect from the get-go. Yes, the technological changes from 1750 to today are quite extreme, perhaps more extreme than the changes from 1500 to 1750, to use Urban’s timeline. But they’re actually not so extreme that they’d be incomprehensible to an educated person from that time. For example, to boil our communication technology down to the basics, the Internet, cell phones, etc. are just uses of electrical signals to communicate information. Once you explain the encoding process at a high level to someone familiar with the basics of electricity (say, Ben Franklin), you’re not that far off from explicating the principles on which the whole thing is based, the rest being details. Consider further that in 1750 we are a scant 75 years away from Michael Faraday, and 100 years away from James Clerk Maxwell, the latter of whom would understand immediately what you’re talking about.

We can play this game with other advances of modern science, all of which had some precursors in the 1750s (combustion engines, the germ theory of disease, etc.). Our hypothetical educated time traveler might not be terribly shocked to learn that we’ve done a good job of reducing mortality through immunizations, or that we’ve achieved heavier-than-air flight. However surprised they might be, I doubt it would be to the extent that they would die. The whole “Die Progress Unit” is, again, a tongue-in-cheek construct from Urban, meant to be illustrative, but rhetorically it functions to cloak all kinds of assumptions about how people would or would not react. It disguises a serious conceptual and empirical problem (just how do we define and measure things like “rates of progress”?) behind a very glib imaginary scenario that is simultaneously not meant to be taken seriously and yet expected to function as justification for the line of thinking that Urban pursues later in the piece.

The idea that ties this first part together is Kurzweil’s “Law of Accelerating Returns.” Those who know me won’t be surprised to learn that I don’t think much of Kurzweil or his laws. I think Kurzweil is one part competent engineer and nine parts charlatan, and that most of his ideas are garbage amplified by money. The “Law” of accelerating returns isn’t any such thing, certainly not in the simplistic way presented in Urban’s piece, and relying on it as if it were some sort of proven theorem is a terrible mistake. A full explanation of the problems with the Kurzweilian thesis will have to wait for another time, but I’ll sketch one of the biggest objections below. Arguendo I will grant an assumption that in my view is mostly unjustified, which is that the y-axis on those graphics can even be constructed in a meaningful way.

A very basic problem with accelerating returns is that it very much depends on what angle you look at it from. To give a concrete example, if you were a particle physicist in the 1950s, you could pretty much fall ass-backwards into a Nobel Prize if you managed to scrape together enough equipment to build yourself a modest accelerator capable of finding another meson. But then a funny thing happened, which is that every incremental advance beyond the low-hanging fruit consumed disproportionately more energy. Unsurprisingly, the marginal returns on increased energy diminished greatly; the current most powerful accelerator in the world (the LHC at CERN) has beam energies that I believe will max out at somewhere around 7 TeV, give or take a few GeV. That’s more than an order of magnitude more powerful than the next-most-powerful accelerator (the RHIC at Brookhaven), and it’s not unrealistic to believe that the discovery of any substantial new physics will require an accelerator another order of magnitude more powerful. In other words, the easy stuff is relatively easy and the hard stuff is disproportionately hard. Of course this doesn’t mean that all technologies necessarily follow this pattern, but note that what we’re running up against here is not a technological limit per se, but rather a fundamental physical limit: the increased energy scale just is where the good stuff lies. Likewise, there exist other real physical limits on the kind of stuff we can do. You can only make transistors so small until quantum effects kick in; you can only consume so much energy before thermodynamics dictates that you must cook yourself.

The astute reader will note that this pattern matches quite well (at least, phenomenologically speaking) the logistic S-curve that Urban draws in one of his graphics. But what’s really happening there? What Urban has done is simply to connect a bunch of S-curves, overlay them on an exponential, and declare (via Kurzweil) that this is how technology advances. But does technology really advance this way? I can’t find any concrete argument that it does, just a lot of hand-waving about plateaus and explosions. What’s more, the implicit assumption lurking in the construction of this plot is that when one technology plays itself out, we will somehow be able to jump ship to another method. There is historical precedent for this assumption, especially in the energy sector: we started off by burning wood, and now we’re generating energy (at least potentially) from nuclear reactions and sunlight. All very nice, until you realize that the methods of energy generation that are practical to achieve on Earth are likely completely played out. We have fission, fusion, and solar, and that’s about it for the new stuff. Not because we aren’t sufficiently “clever” but because underlying energy generation is a series of real physical processes that we don’t get to choose. There may not be another accessible S-curve that we can jump to.

Maybe other areas of science behave in this way and maybe they don’t; it’s hard to know for sure. But admitting ignorance in the face of incomplete data is a virtue, not a sin; we can’t be justified in assuming that we’ll be able to go on indefinitely appending S-curves to each other. At best, even if the S-curve is “real,” what we’re actually dealing with is an entire landscape of such curves, arranged in ways we don’t really understand. As such, predictions about the rate of technological increase are based on very little beyond extrapolating various conveniently-arranged plots; it’s just that instead of extrapolating linearly, Kurzweil (and Urban following after him) does so exponentially. Well, you can draw lines through any set of data that you like, but it doesn’t mean you actually understand anything about that data unless you understand the nature of the processes that give rise to it.

You can look at the just-so story of the S-curve and the exponential (also the title of a children’s book I’m working on) as a story about strategy and metastrategy. In other words, each S-curve technology is a strategy, and the metastrategy is that when one strategy fails we develop another to take its place. But of course this itself assumes that the metastrategy will remain valid indefinitely; what if it doesn’t? Hitting an upper or lower physical limit is an example of a real barrier that is likely not circumventable through “paradigm shifts” because there’s a real universe that dictates what is and isn’t possible. Kurzweil prefers to ignore things like this because they throw his very confident pronouncements into doubt, but if we’re actually trying to formulate at least a toy scientific theory of progress, we can’t discount these scenarios.

1. p → q
2. r
3. therefore, q

Since Kurzweil’s conjectures (I won’t dignify them with the word “theory”) don’t actually generate any useful predictions, it’s impossible to test them in any real sense of the word. I hope I’ve done enough work above to persuade the reader that these projections are nothing more than fantasy predicated on the fallacious notion that the metastrategy of moving to new technologies is going to work forever. As though it weren’t already bad enough to rely on these projections as if they were proven facts, Urban repeatedly mangles logic in his desire to get where he’s going. For example, at one point, he writes:

So while nahhhhh might feel right as you read this post, it’s probably actually [sic] wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect.

It’s hard to see why the skeptics are the ones who are “probably actually wrong” and not Urban and Kurzweil. If we’re being “truly logical” then, I’d argue, we aren’t making unjustified assumptions about what the future will look like based on extrapolating current non-linear trends, especially when we know that some of those extrapolations run up against basic thermodynamics.

That self-assured gem comes just after Urban commits an even grosser offense against reason. This:

And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

is not an argument. In the words of Wolfgang Pauli, it isn’t even wrong. This is a sequence of words that means literally nothing and no sensible conclusion can be drawn from it. To write this and to reason from such premises is to do violence to the very notion of logic that you’re trying to defend.

The entire series contains these kinds of logical gaps that are basically filled in by wishful thinking. Scales, trends, and entities are repeatedly postulated, then without any particular justification or reasoning various attributes are assigned to them. We don’t have the faintest idea of what an artificial general intelligence or super-intelligence might look like, but Urban (via Kurzweil) repeatedly gives it whatever form will make his article most sensational. If for some reason the argument requires an entity capable of things incomprehensible to human thought, that capability is magicked in wherever necessary.

The State of the Art

Urban’s taxonomy of “AI” is likewise flawed. There are not, actually, three kinds of AI; depending on how you define it, there may not even be one kind of AI. What we really have at the moment are a number of specialized algorithms that operate on relatively narrowly specified domains. Whether or not that represents any kind of “intelligence” is a debatable question; pace John McCarthy, it’s not clear that any system thus far realized in computational algorithms has any intelligence whatsoever. AGI is, of course, the ostensible goal of AI research generally speaking, but beyond general characteristics such as those outlined by Allen Newell, it’s hard to say what an AGI would actually look like. Personally, I suspect that it’s the sort of thing we’d recognize when we saw it, Turing-test-like, but pinning down any formal criteria for what AGI might be has so far been effectively impossible. Whether something like the ASI that Urban describes can even plausibly exist is of course the very thing in doubt; it will not surprise you, even if you have not read all the way through part 2, that having postulated ASI in part 1, Urban immediately goes on to attribute various characteristics to it as though he, or anyone else, could possibly know what those characteristics might be.

I want to jump ahead for a moment and highlight one spectacularly dumb thing that Urban says at the end of his piece that I think really puts the whole thing in perspective:

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us.

This scenario is impossible. Not only does it violate everything we know about uncertainty principles, but it also effectively implies a being with infinite computational power; this is because even if atoms were classical particles, controlling the position of every atom logically entails running forward in time a simulation of the trajectories of those atoms to infinite precision, a feat that is impossible in a finite universe. Not only that, but the slightest error in initial conditions will accumulate exponentially (here, the exponential stuff is actually mathematically valid), so that every additional stretch of forecast horizon demands another multiplicative improvement in the precision of your initial conditions; the required precision blows up exponentially with how far ahead you want to predict.
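
To put a slightly more formal face on that parenthetical, here is the standard back-of-the-envelope chaos argument, sketched in generic symbols (lambda stands in for the largest Lyapunov exponent of whatever system of atoms you imagine simulating; none of this notation comes from Urban’s piece):

    \delta(t) \approx \delta_0 \, e^{\lambda t}
    \qquad \Longrightarrow \qquad
    \delta_0 \lesssim \epsilon \, e^{-\lambda T} \quad \text{if the error is to stay below } \epsilon \text{ out to time } T.

Every additional unit of horizon costs another constant factor of precision in the initial error, so the number of digits you have to measure and carry grows linearly with the horizon, and the work of doing anything with them grows right along with it.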

This might seem like an awfully serious takedown of an exaggerated rhetorical point, but it’s important because it demonstrates how little Urban knows, or worries, about the actual science at stake. For example, he routinely conflates raw computational power with the capabilities of actual mammalian brains:

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level.

But of course this is nonsense. We are not “beating” the mouse brain in any substantive sense, we merely have machines that do a number of calculations per second that is comparable to a number that we imagine the mouse brain is also doing. About the best we’ve been able to do is to mock up a network of virtual point neurons that kind of resembles a slice of the mouse brain, maybe, if you squint from far away, and run it for a few seconds. Which is a pretty impressive technical achievement, but saying that we’ve “beaten the mouse brain” is wildly misleading. “Affordable, widespread AGI-caliber hardware in ten years,” is positively fantastical even under the most favorable Moore’s Law assumptions.

Of course, even with that kind of hardware, AGI is not guaranteed; it takes architecture as much as computational power to get to intelligence. Urban recognizes this, but his proposed “solutions” to this problem again betray a misunderstanding of both the state of the science and our capabilities. For example, his “emulate the brain” solution is basically bog-standard connectionism. Not that connectionism is bad or hasn’t produced some pretty interesting results, but neuroscientists have known for a long time now that the integrate-and-fire point neuron of connectionist models is a very, very, very loose abstraction that doesn’t come close to capturing all the complexities of what happens in the brain. As this paper on “the neuron doctrine” (PDF) makes clear, the actual biology of neural interaction is fiendishly complicated, and the simple “fire together, wire together” formalism is a grossly inadequate (if also usefully tractable) simplification. Likewise, the “whole brain simulation” story fails to take into account the real biological complexities of faithfully simulating neuronal interactions. Urban links to an article which claims that whole-brain emulation of C. elegans has been achieved, but while the work done by the OpenWorm folks is certainly impressive, it’s still a deeply simplified model. It’s hard from the video to gauge how closely the robot-worm’s behavior matches the real worm’s behavior; it’s likely that, at least, it exhibits some types of behaviors that the worm also exhibits, but I doubt that even its creators would claim ecological validity for their model. At the very best, it’s a proof of principle regarding how one might go about doing something like this in the future, and keep in mind that this is a roughly 300-neuron creature whose connectome is entirely known.
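
For a sense of just how loose, here is essentially the entire leaky integrate-and-fire point neuron written out; V is the membrane potential, and tau_m, V_rest, R, V_th, and V_reset are its whole parameter set. This is the textbook form of the model, not anything specific to the papers Urban cites:

    \tau_m \frac{dV}{dt} = -(V - V_{\mathrm{rest}}) + R \, I(t),
    \qquad V \ge V_{\mathrm{th}} \;\Rightarrow\; \text{emit a spike and reset } V \leftarrow V_{\mathrm{reset}}

That is one state variable per neuron. An actual neuron is a branching tree of active membrane with thousands of synapses, each with its own channel and receptor dynamics, which is more or less the point the neuron doctrine paper is making.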

Nor are genetic algorithms likely to do the trick. Overall, the track record of genetic algorithms in actually producing useful results is decidedly mixed. In a recent talk I went to, Christos Papadimitriou, a pretty smart guy, flat out claimed that “genetic algorithms don’t work” (PDF, page 18). I do not possess sufficient expertise to judge the truth of this statement, but I think the probability that genetic algorithms will provide the solution is low. It does not help that we only “know” what we’re aiming for in the loosest sense; in truth we have no idea what we’re optimizing for, and our end-goal is something of the “we know it when we see it” variety, which isn’t something that lends itself terribly well to a targeted search. Evolution, unlike humans, optimized for certain sorts of local fitness maxima (to put it very, very simply), and wound up producing something that couldn’t possibly have been targeted in such an explicit way.

All of this is to say that knowing the connectome and having some computing power at your disposal is a necessary but not sufficient condition for replicating even simple organismic functionality. Going from even a complete map of the human brain to a model of how that brain produces intelligence is not a simple mapping, nor is it just a matter of how many gigaflops you can execute. You have to have the right theory, or your computational power isn’t worth that much. A huge problem that one runs into when speaking with actual neuroscientists is that there’s really a dearth of theoretical machinery out there that even begins to accurately represent intelligence as a whole, and it isn’t for lack of trying.

The concluding discussion of what an AI might look like in relation to humans is hopelessly muddled. We barely have any coherent notion of how to quantify existing human intelligence, much less a possible artificial one. There’s no particular reason to think that intelligence follows some kind of linear scale, or that “170,000 times more intelligent than a human” is any sort of meaningful statement, rather than a number thrown out into the conversation without any context.

The problem with the entire conversation surrounding AI is that it’s almost entirely divorced from the realities of both neuroscience and computer science. The boosterism that emanates from places like the Singularity Institute and from people like Kurzweil and his epigones is hardly anything more than science fiction. Their projections are mostly obtained by drawing straight lines through some judiciously-selected data, and their conjectures about what may or may not be possible are mostly based on wishful thinking. It’s disappointing that Urban’s three weeks of research have produced a piece that reads like an SI press release, rather than any sort of sophisticated discussion of either the current state of the AI field or the tendentious and faulty logic driving the hype.

Conclusion

None of this is to say that we should be pessimists about the possibility of artificial intelligence. As a materialist, I don’t believe that humans are somehow imbued with any special metaphysical status that is barred to machines. I hold out hope that some day we will, through diligent research into the structure of existing brains, human and otherwise, unravel the mystery of intelligence. But holding out hope is one thing; selling it as a foregone conclusion is quite another. Concocting bizarre stories about superintelligent machines capable of manipulating individual atoms through, apparently, the sheer power of will, is just fabulism. Perhaps no more succinct and accurate summary of this attitude has ever been formulated than that written by John Campbell of Pictures for Sad Children fame:

it’s flying car bullshit: surely the world will conform to our speculative fiction, surely we’re the ones who will get to live in the future. it gives spiritual significance to technology developed primarily for entertainment or warfare, and gives nerds something to obsess over that isn’t the crushing vacuousness of their lives

Maybe that’s a bit ungenerous, but I find that it’s largely true. Obsession about AI futures is not even a first world problem as much as a problem for a world that has never existed and might never exist. It’s like worrying about how you’ll interact with the aliens that you’re going to find on the other side of the wormhole before you even know how to get out of the solar system without it taking decades.

Priorities

A story has been making the internet rounds these past two weeks about a guy who got beat up by a group of bikers following a confrontation in which the bikers encircled him, did a brake-check, and then tried to break into his car when he tapped the dude who brake-checked him. The guy gunned it, fearing quite rightly for the safety of his wife and baby, who were in the car with him, and ran over one of the bikers in the process. The rest of the group decided to exact revenge, and here we are two weeks later with one idiot possibly paralyzed for life, a few more idiots probably going to jail, and a family that is undoubtedly traumatized by the whole thing. Way to go, America.

Here’s what I found weird about this. Go back to that link above and read the bit where it says “Update.” What is completely astounding to me is that the police knew about the planned ride all along. Not only that, but there were actual undercover cops riding with the bike gang! In other words, the police almost certainly knew exactly what the bikers were going to do, and did nothing to prevent it, allowing a bunch of outlaws to essentially terrorize a major highway in a major American city.

How the fuck, exactly, is this possible? The NYPD has spent untold amounts of resources spying on Muslims and college students, busting heads at Occupy protests, and racially profiling blacks and Latinos. Huge honking sums of money devoted to pretty much completely speculative undertakings that only served to build anger at the department itself. And yet, here, given a situation in which the police know almost exactly what is going to happen, they cannot muster up the resources to crack down on this? And the chief of the police feels entirely comfortable coming out and openly saying so?

This is beyond absurd. It is willful neglect. The NYPD knew about this and chose to do absolutely nothing. And its failure to act on information that definitively pointed to lawbreaking and endangerment of citizenry demonstrates exactly where its priorities lie: in covering its collective ass by falsifying police data, in making life difficult for non-white, non-rich residents of NYC (and indeed, for people living beyond its jurisdiction), and in punching hippies. Oh, and in keeping the ride out of Times Square, something Ray Kelly seems to be quite proud of. Because if you’re not a tourist but just a regular driver on the West Side Highway, well, fuck you. This is what the NYPD cares about, this is what it’s for; certainly not for preventing entirely knowable illegal activities that pose real danger to the residents of the city which it is ostensibly tasked with protecting and serving.

This alone should be enough to get Kelly thrown out on his worthless ass. Here’s hoping that when de Blasio is elected he has the backbone to follow through with his promise to replace Kelly with someone who actually cares about protecting citizens, rather than turning the NYPD into a personal fiefdom.

Emboldening II: the Endumbening

So here I was, sitting in, how apropos, the George Bush Intercontinental Airport in Houston, and half-listening to John Kerry’s press conference where he’s outlining the case for bombing Assad. And there’s this one word that keeps coming up in these discussions, primarily from people who can’t get enough of bombing other countries, and that word is “embolden.” As in, “if we don’t bomb Assad, we’re going to embolden our enemies.” We’re going to look weak to them, you see.

Thesis: anyone who unironically uses the word “embolden” is either a credulous idiot or a lying piece of shit. Or both.

If you’re minimally aware of inter-generational trends in US military spending, you might know that the US spends more money on its military (excuse me, “defense”) than the next n countries combined, where n is a number that I’m too lazy to presently look up but which I’m quite positive is well into the double digits. If there’s anything that anyone knows at all about US foreign policy, it’s precisely that we’re not at all afraid of bombing anyone or anything we like. “We will wreck your shit,” has been longstanding US doctrine in one form or other for decades. I am willing to bet money that no one but an American would be stupid enough to think that the US needs to do anything to prove its willingness to use force.

The whole notion of “emboldening” is an absurd framework that falls apart at the slightest perturbation. The “emboldening thesis” holds that if a particular act (e.g. the use of chemical weapons) goes unpunished, then subsequent bad acts are likely to follow because everyone will assume the policing hegemon is too weak to respond. Italics because that’s the operative principle of the whole thing. But in reality, this has never been true; from the lowliest aspirant to Al Qaeda membership to the highest leadership of any other nation, everyone knows that the countries that end up suffering the wrath of the hegemon are those countries which are politically convenient to punish. The US will happily disregard international law in one instance (here, have some chemical weapons, Mr. Hussein!) where it suits it, while using it as a pretext elsewhere (bad, bad Bashar!), AND NOAH-WAHN DENIES THIS. The lesson any minimally competent observer will extract from this is not that the US is too “weak” to punish transgressors (because, you know, we just literally spent the last decade occupying two different countries, one of which we invaded for totally bullshit reasons), but that the US will just do whatever the fuck it wants when it wants, and it’s not going to explain anything to anyone.

There are lots of terrible reasons to launch a strike on Syria, including the fact that there’s no actual plan to do anything other than lob a few missiles, define whatever they hit as a strategic target, and declare victory. But I think the idea that “we have to do something because otherwise some bad people will be emboldened,” is probably the most idiotic reason for doing anything whatsoever; it flies in the face of all history and sense. The fact that this assertion is allowed to repeatedly go unchallenged in public discourse is just another testament to the sad state of our intellectual life.

Silly man says stupid thing, Silicon Valley edition

A fellow named Jerzy Gangi advances a bunch of hypotheses to answer a not-very-interesting question about why Silicon Valley funds some things (e.g. Instagram) and not others (e.g. Hyperloop). Along the way we get some speculation about the amount of cojones possessed by VCs (insufficient!) and how well the market rewards innovation (insufficiently!), but the question is boring because the answer is already well-known: infrastructure projects of the scope and scale of Hyperloop (provided they’re feasible to begin with) require massive up-front investments with uncertain returns, while an Instagram requires comparatively little investment with the promise of a big return. Mystery solved! You can PayPal me the $175 you would have given Gangi for the same information spread over an hour of time at grapesmoker@gmail.com.

Despite the fact that Gangi’s question is not very interesting on its own, his writeup of it actually contains an interesting kernel that I want to use as a touch-off point for exploring a rather different idea. You see, while criticism of techno-utopianism (and Silicon Valley, its material manifestation, which will be used metonymically with it from here on out) has been widespread, it usually doesn’t address a fundamental claim that Silicon Valley makes about itself; namely, that Silicon Valley is an innovative environment. Critics like Evgeny Morozov are likely to only be peripherally interested in the question; Morozov is far more concerned with asking whether the things Silicon Valley wants disrupted actually ought to be “disrupted.” Other critiques have focused on the increasing meaninglessness of that very concept and the deleterious effects that those disruptions have on the disrupted. But as a rule, discussion about Silicon Valley takes it for granted that Silicon Valley is the engine of innovation that it claims to be, even if that innovation comes at a price for some.

I think this is a fundamentally mistaken view. Silicon Valley is “innovative” only if your bar for innovation is impossibly low; (much) more often than not what Silicon Valley produces is merely a few well-known models repackaged in shinier wrapping. That this is so can be seen from looking at this list of recent Y Combinator startups. What, in all this, constitutes an “innovative” idea? The concept that one can use data to generate sales predictions? Or perhaps the idea of price comparison? The only thing on here that looks even remotely like something that’s developing new technology is the Thalmic whatsis, and even that is not likely to be anything particularly groundbreaking. These may or may not be good business ideas, but that’s not the question. The question is: where’s the innovation? And the answer is that there isn’t a whole lot of it, other than taking things that people used to do via other media (like buying health insurance) and making it possible to do them over the internet.

There’s nothing wrong with not being innovative, by the way. Most companies are not innovative; they just try and sell a product that the consumer might want to buy. The problem is not the lack of innovation, but the blatant self-misrepresentation in which Silicon Valley collectively engages. It’s hardly possible to listen to any one of Silicon Valley’s ubiquitous self-promoters without hearing paeans to how wonderfully innovative it is; if the PR is to be believed, Silicon Valley is the source and font of all that is new and good in the world. Needless to say, this is a completely self-serving fantasy which bears very little resemblance to any historically or socially accurate picture of how real innovation actually happens. To the extent that any innovation took place in Silicon Valley, it didn’t take place at Y Combinator funded start-ups, but rather at pretty large industrial-size concerns like HP and Fairchild Semiconductor. No one in the current iteration of Silicon Valley has produced anything remotely as innovative as Bell Labs. Maybe Tesla could yet live up to that lofty ideal, but it’s pretty unlikely that any internet company, no matter how successful, ever will.

Ha-Joon Chang has adroitly observed that the washing machine did more to change human life than the Internet has. But the washing machine is not shiny (anymore) or new (anymore) or sexy, so it’s easy to take it for granted. The Internet is not new (anymore) either, but unlike the washing machine, the capability exists to make it ever shinier, and then sell the resulting shiny objects as brand-new innovations when of course they aren’t really any such thing. As always, the actual product of Silicon Valley is, by and large, the myth of its own worth and merit; what’s being sold is not any actual innovation but a story about who is to be classed as properly innovative (and thereby preferably left untouched by regulation and untaxed by the very social arrangements which make their existence possible).

Toothpicks and Bubblegum

Oh my, it’s been a while, hasn’t it? So many important developments have taken place. For example, I got a new phone.

Let’s start off with something light, shall we? Let’s talk about how the web is a complete clusterfuck being held together by the digital equivalent of this post’s titular items.

The web of today would probably be, on the surface, not really recognizable to someone who was magically transported here from 20 years ago. Back then, websites were mostly static (pace the blink tag) and uniformly terrible-looking. Concepts like responsiveness and interactivity didn’t really exist to any serious extent on the web. You could click on stuff, that was about it; the application-in-a-browser concept that is Google Docs was hardly credible.

But on the other hand, that web would, under the surface, look quite familiar to our guest from 1993. The tags would have changed of course, but the underlying concept of a page structured by HTML and animated by JavaScript* would have been pretty unsurprising. Although application-in-a-browser did not exist, there was nothing in the makeup of Web 1.0 to preclude it; all the necessary programming ingredients were already in place to make it happen. What was missing, however, was the architecture which enabled applications like Google Docs to be realized. And that’s what brings us to today’s remarks.

You see, I am of the strong opinion that the client-side development architecture of the web is ten different kinds of fucked up. A lot of this is the result of the “architecture by committee” approach that seems to be the MO of the W3C, and a lot more seems to be just a plain lack of imagination. Most complaints about W3C focus on the fact that its processes move at a snail’s pace and that it doesn’t enforce its standards, but I think the much larger problem with the web today is that it’s running on 20-year-old technology and standards that were codified before the current iteration of the web was thought to be possible.

Let me explain that. As an aside, though, allow me to tell a relevant story: recently I went to a JavaScript programmers’ MeetUp where people were presenting on various web frameworks (e.g. Backbone, Ember, Angular, etc.). During the Backbone talk, the fellow who was giving it made a snarky comment about “not programming in C on the web.” This got a few laughs and was obviously supposed to be a dig of the “programming Fortran in any language” variety. What I found most revealing about this comment, though, was that it was made in reference not to any features that JavaScript has and C doesn’t (objects, garbage collection) but rather in reference to the notion that one’s program ought to be modularized and split over multiple files. This is apparently considered so ludicrous by web programmers that the mere suggestion that one might want to do so is worthy of mockery.

At the same time, web programmers are no longer creating static pages with minimal responsivity and some boilerplate JavaScript to do simple things like validation. In fact, the entirety of this presentation was dedicated to people talking about frameworks that do precisely the sort of thing that desktop developers have been doing since forever: writing large, complicated applications. The only difference is that those applications are going to be running in a web browser and not on a desktop, which means they have to access server-side resources in an asynchronous fashion. Other than that, Google Docs doesn’t look much different from Word. And you can’t do large-scale app development by writing all your code in three or four files. I mean, you can do that, but it would be a very bad idea. It’s a bad idea independent of the specific paradigm in which you’re developing, because the idea itself is sort of meta-architectural to begin with. Modularization is the sine qua non of productive development because it allows you to split up your functionality into coherent and manageable work units. It’s a principle that underlies any effective notion of software engineering as such; to deride it as “programming in C on the web” while wanting all the benefits that modularization delivers is to demand, with Agnes Skinner, that all the groceries go in one bag, but that the bag also not be heavy.

What’s the point of this story? It’s this: if large-scale modularization is a necessary feature of truly productive application programming, then how come we have had to wait close to two decades for it to finally reach the web? In particular, why has it taken this long for modularization to reach JavaScript, and why did it have to be bolted on after the fact by ideas such as Asynchronous Module Definition and Require.js?

Because the presence of these projects and efforts, and the fact that they have solved (to whatever extent) the problem of modularization on the web, is ipso facto evidence that the problem existed, demanded a solution, and a solution was possible. Moreover, that this would be a problem for large-scale development would (and should) have been just as obvious to people 20 years ago as it is today. After all, the people responsible for the design and standardization of efforts like JavaScript were programmers themselves, for the most part; it’s not credible to believe that even the original JavaScript compiler was just one or two really long files of code. And yet somehow, the whole concept of breaking up your project into multiple dependencies never became an architectural part of the language despite the fact that this facility is present in virtually all modern programming languages.

To me, this is the first, and perhaps greatest, original sin of the client-side web. A language intended for use in the browser, one that could be and now is being used to develop large-scale client-side web applications, originally shipped without, and still lacks, any architectural feature designed to support breaking up your program into discrete pieces of code. This isn’t meant to denigrate the awesome work done by the guy behind Require, for example, but the fact remains that Require shouldn’t even have been necessary. AMD, in some form, should have been a first-class architectural feature of the language right out of the box. I should be able to simply write import("foo.js") in my file and have it work; instead, we were reduced to loading scripts in the “right order” in the header. This architectural mistake delayed the advent of web applications by a long time, and still hampers the modularization of complex web applications. Laughing this off as “programming in C on the web” is terribly shortsighted, especially as recent developments in JavaScript framework land have demonstrated just exactly this need for breaking up your code. Google didn’t develop their Google Web Toolkit for shits and giggles; they did it because Java provides the kind of rigid structure for application development that JavaScript does not. Granted, they might have gone a bit too far in the other direction (I hate programming in Java, personally), but it’s obvious why they did it: because you can’t do large-scale development without an externally imposed architecture that dictates the flow of that development.
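
To make the script-ordering contrast concrete, here is roughly what the two worlds look like. The module and function names are invented for illustration, and the second half is simply the AMD pattern as Require.js implements it, not a claim about how the language should have been designed:

    <!-- the old way: everything is global, loaded in exactly the right order, or else -->
    <script src="util.js"></script>
    <script src="model.js"></script>
    <script src="app.js"></script>

    // app.js, AMD-style: dependencies are declared, fetched, and handed to you
    define(['model', 'util'], function (model, util) {
        return {
            start: function () {
                util.log('starting up');   // hypothetical helper module
                model.init();              // hypothetical data module
            }
        };
    });

    // at the entry point, Require.js kicks everything off
    require(['app'], function (app) {
        app.start();
    });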

This is already getting way, way longer than I originally anticipated it being, so I’ll stop here. My next installment in this series will be a discussion of all the other terrible things that make the web suck: namely, HTML and CSS. So you know, I’m really covering all the bases here.

*: Yes, I know JavaScript didn’t appear until 1995. Shut up.

In Which A Great Secret Is Revealed

Prepare to be enlightened:

Here’s my tale of woe. I decided to upgrade my Enthought because it’s nice to be able to build things for a 64-bit architecture. However, Enthought decided to change their system to something called “Canopy” which I can only assume is some sort of dumb marketing gimmick. The name is meaningless, but what *does* mean something is that it changed the install path.

Ok, fine, it did do that. However, it relied, for reasons unknown to me, on an old set of build flags which were apparently a carry-over from the previous version of Enthought. When I tried to pip install scikit-learn, I got a bunch of compilation errors related to a missing header. Turns out that I never got these errors before because the missing headers only apply to 64-bit builds, and due to an issue with the Qt libraries not working right in previous versions of Enthought, I had been using the 32-bit version (which itself was problematic because it sometimes caused various version shifts relative to other Python-related tools, but NEVER MIND THAT FOR NOW).

Total bummer, right? What I discovered from reading the logs was that when I ran setup.py install, the CFLAGS parameter being used to build the C components of scikit-learn pointed to /Developer/SDKs/MacOSX10.6.sdk. This is a problem because when you upgrade XCode (which I absolutely had to do to get a different build working), XCode *ALSO* changes the paths to the SDKs! For reasons which are, like the above path change, shrouded in mystery, the SDK for OS X 10.7 and up now lives in /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk.

At this dramatic juncture in the narrative, I realized that I needed to tell distutils, which is Python’s build system, where to find the new SDK. But how do I do that? Where does it get its information? Well, to find out *that* carefully concealed secret, you have to go to the distutils directory and look at a file called sysconfig.py. Specifically, take a look at a function called _init_posix(). Therein you will discover that the dictionary from which distutils gets its flags is seeded by reading the top-level Python makefile, which is located in the lib/python2.7/config/ subdirectory of the location where Enthought (excuse me, “Canopy”) installed itself.
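
If you would rather not spelunk through sysconfig.py yourself, a quick check using the stock distutils API (this is the Python 2.7 install described above) will show you both the Makefile being read and the flags it yields:

    # where distutils' _init_posix() gets its values, and what it actually got
    from distutils import sysconfig
    print sysconfig.get_makefile_filename()   # the top-level python2.7 Makefile
    print sysconfig.get_config_var('CFLAGS')  # look for the stale -isysroot in here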

Armed with this powerful information, you can now symlink /Developer/SDKs/MacOSX10.7.sdk to the location where the true SDK for 10.7 lives, and edit the Makefile’s CFLAGS options to change the isysroot parameter to the SDK symlink.
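
In concrete terms, the procedure looks something like this; the paths are the ones mentioned above, so adjust for your own Xcode and Canopy versions, and the Makefile change itself is something you make in an editor (shown here as a comment):

    # recreate the old-style SDK path as a symlink to where Xcode now keeps the SDK
    sudo mkdir -p /Developer/SDKs
    sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk \
        /Developer/SDKs/MacOSX10.7.sdk

    # then, in lib/python2.7/config/Makefile under the Canopy install, change the stale
    #     -isysroot /Developer/SDKs/MacOSX10.6.sdk
    # in CFLAGS to
    #     -isysroot /Developer/SDKs/MacOSX10.7.sdk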

And now you can sudo pip install scikit-learn, and it will build without errors. You’re welcome.

Beneath a Steel Sky

About a year and a half ago, I went to San Francisco for a conference, and while I was there I met up with an old high school friend who lived in the East Bay. We went walking around the city, talking, and in a totally unexpected twist in the narrative the topic of discussion veered toward political matters. One of the things we talked about was the difference in political attitudes between the former USSR and the present-day USA, and on that topic, I suggested that one of the biggest differences was the relative intangibility of crisis in America.

That needs explained, as they say here in the 412. What I mean by that is not that large sectors, or even the majority of the population, have been able to escape the consequences of the Great Recession and the corresponding catastrophe of governance that Republicans have managed to create at every level. What I mean is that despite all these things, the country still looks like it’s running. A great deal of that is obviously administrative inertia, but as we all learned in high school physics, momentum is mass times velocity. The bureaucratic machinery may not have a great deal of the latter, but it has enough of the former that it can grind on for quite some time (apparently Belgium did not cease to exist despite failing to have an official elected government for more than a year): you get up in the morning and you still go to your job (if you’re lucky enough to have one) via mostly-functioning highways; by and large, the electricity is still on in the vast majority of places (if you can afford to pay for it); the world physically looks more or less all right, since the rot underneath hasn’t quite surfaced as physical manifestation. In the USSR, on the other hand, that manifestation was ubiquitous; you can joke about bread lines all you want, but the truth is that bread lines absolutely existed. You couldn’t really go outside and not be confronted with your government’s failures.

In short, it’s easy to be oblivious in America, as long as you’re still reasonably well off. It’s even easier and better if you’re unreasonably well off. And since it’s the unreasonably well off who control the tempo and content of our political conversation, a great number of people continue to walk around with a vague sense of unease that something has gone terribly wrong, but with no adequate framework for explaining just what it is that has gone wrong and no overt physical manifestation of the wrongness[1]. Because the machine grinds on.

Except when it doesn’t. Which brings us to the debt ceiling debate.

I feel like I should somehow be angrier about this than I really am, but owing to my own relatively privileged position within the great chain of being, I find myself every bit as much under the sway of the cognitive biases outlined above. The infrastructure may be crumbling all around me, but it’s doing it pretty slowly, so every day physically appears a lot like the previous day. The same (sometimes boarded-up) buildings are still there; the buses (fare increases and all) are still running; the streets (Moon-worthy craters notwithstanding) have not yet deteriorated to the point of being undriveable (though many are unbikeable). Everything looks like it’s more or less the same.

But it’s obviously not, because on December 31st the federal government officially ran into the debt ceiling. Which means that the money required to keep the machine grinding is going to run out, and it’s going to run out very soon, because the Republican House refuses to do what the House has done year in and year out for about a century and raise it. Put simply, obligations already committed to by the government will go unpaid; the country might, quite literally, default.

This should be an unthinkable set of circumstances for the largest and wealthiest economy in the world, and yet here we are. Since Republicans have decided that economic terrorism was the way forward, they’ve quite sensibly taken a hostage that they’re willing to shoot; it just so happens that the hostage is the American economy. What “negotiation” is possible when a minority (and make no mistake about it, Republicans are nationally less popular than Democrats) literally threatens to destroy the economy if it doesn’t get its way? The term “nuclear option” is frequently overused, but that would seem appropriate here. The Republicans have strapped an economic nuclear weapon to themselves and are threatening to take down everyone, including their own constituents.

The whole thing seems absolutely surreal; it’s as though, while intellectually convinced that we’re on a plane headed towards a mountain, most of us (and most of our so-called elites) are sitting calmly in our seats knitting[2]. Maybe it’s the overwhelming sense that nothing can actually be achieved, or the vapidity of the ongoing conversation. Sensible people keep saying things like “JESUS CHRIST PULL UP ON THE STICK SO WE CAN CLEAR THIS FUCKING MOUNTAIN BEFORE YOU KILL US ALL,” but one of the ornery co-pilots has wedged the control stick so far up his asshole that the only way to get to it would be to cut the bastard open stern-to-stern. Too bad all we’ve got on this flight are plastic knives.

At times like these it feels tempting to ascribe the whole thing to some sort of collective insanity. But that’s not fair to actually insane people, nor is it accurate as an assessment of what’s actually going on. You could do some game-theoretical reasoning about the lack of incentives that Republicans have to cooperate due to gerrymandering and primary threats, and you would be right, but it takes a special kind of sociopath to look reality in the face and decide that we’re better off plunging into the mountain because it would avoid having to compromise one’s highly-principled stand that no airplane should fly above 10,000 feet. Which is all to say that all of this is deliberate and planned and explainable by simply reading what the principal participants have to say about it.

In the next few weeks, I suppose we’re going to find out whether we clear the mountain after all, or whether the price of America’s continued existence is going to be throwing a good third of the passengers off the plane mid-flight because otherwise we’re all going to die. Recent trends do not justify optimistic projections.


[1] Obviously, for many people the wrongness does manifest itself: in lost jobs, in rising health care premiums, in decreased funding for education, and in many other ways. The point isn’t that people aren’t suffering, it’s that the conversation is controlled by an upper stratum of the elite, who are decidedly not suffering at all; this prevents any kind of serious structural analysis from emerging to help people make sense of what’s going on.

[2] I don’t know why knitting except that it’s the kind of thing I imagine one might do if one wanted to calm oneself. Substitute your favorite calming activity here.

Movie Recommendation: Chasing Ice

Last night I went to see Chasing Ice at the local art-house theater, and I recommend the film to everyone without reservation.

Chasing Ice is a documentary film that focuses on the work that photographer James Balog did in setting up the Extreme Ice Survey. The EIS’s purpose is to chronicle the no-longer-gradual disappearance of the Arctic glaciers, and the result is perhaps the most visually stunning depiction of the consequences of global warming that I have ever seen. Despite some added schmaltz about Balog’s personal life, Chasing Ice is a fairly straightforward story about what is happening to Arctic ice year in and year out; if you have a friend or relative that likes to blather on about how “the science isn’t in yet,” I suggest taking them to see this film. Actually, you should go see it even if you’re up on the science, because it features some absolutely phenomenal photography by Balog. I won’t spoil it for you, but the final ten minutes contain some literally jaw-dropping footage (I kid you not, I watched with my mouth literally hanging open) that is damn worth seeing in theaters and justifies the price of admission by itself. I don’t hesitate to say completely sincerely that Chasing Ice, for all its somewhat dry tone, is as much a work of art as anything you could see in the theater; if it goes any appreciable distance towards convincing people of the immediacy of the climate change problem, it’ll be far more influential than most art, too.

A Completely True Though By No Means Exhaustive List of Items Discovered While Cleaning Out the Trunk of My Car

2 basketballs
2 sets of dress socks, apparently never worn
1 permit for parking on the Harvard campus, dated fall of 2007
1 car 12V-to-USB adapter
1 hardcover copy of The Indigestible Triton, by L. Ron Hubbard
1 paperback copy of Young Torless, by Robert Musil, cover missing
1 book of Erwin Panofsky’s essays, cover drenched in what appears to be laundry detergent
1 empty bottle of laundry detergent, Tide
1 automotive emergency kit, with gloves
1 tire iron
an indeterminate number of bungee cords, various sizes
an indeterminate number of objects ostensibly related to windsurfing, including mast base, sail ribs, and harness elements
1 empty cardboard box, apparently used to ship a keyboard
1 half-used roll of quarters
1 $1 bill
1 tube of toothpaste, still in original packaging, also drenched in laundry detergent
2 sets of ratchet ropes
1 piece of plastic apparently once removed from underside of car, function unknown
1 can of Raid Ant & Roach spray
2 Raid ant traps
2 null modem cables
1 BNC cable
1 binder full of astrophysics papers
1 binder containing the printed version of Dodelson’s Modern Cosmology
1 copy of A Documentary History of Art

Further reports as excavations progress.