Some Thoughts About Amazon

A recent New York Times article examining the alleged problems with Amazon’s work culture has been making waves all week. Depending on whom you want to believe, Amazon is either the province of the damned, chained to their cubicles and forced to work while being whipped by demons, or a glorious utopia of technological innovation where no one is ever unhappy. This unresolvable war of cross-firing anecdotes is impossible to adjudicate from the outside, for the simple reason that only Amazon could even collect the necessary data to do that, and it wouldn’t make them public in any case. So anyway, this prompted in me a few loosely-connected observations, presented in roughly ascending order of how interesting I find them:

  1. Large organizations are like the rainforests they’re sometimes named after: if you go looking for something, you’re likely to find either that thing or a reasonable facsimile thereof. If what you’re looking for is team dysfunction and people being drummed out of the company for having had the temerity to get cancer, you’ll find that; if you’re looking for a functional team of normal adults who treat each other well and all go home satisfied at the end of the day, I bet you could find that as well. Interviews with newspaper reporters aren’t nothing, but they’re not company-wide statistics, and neither are anecdotes from some guy who really loves it there. It wouldn’t be impossible to set up an experiment that attempted to describe at a macro level the effects of Amazon’s internal culture, but it would require a pretty serious resource investment from Amazon itself, which, despite their claims of being very data-driven, I doubt Amazon would actually undertake.

  2. One theme that sounds throughout the Amazonians’ replies to the NYT article is that the high-criticism stack-ranking culture just has to be the way it is in order for Amazon to be at its most awesomest. The natural question this raises is: how do they, or anyone, know that? Has Amazon ever experimented with any other system? What, put simply, is the control group for this comparison? Without this information, justifications of ostensibly bad culture practices are nothing more than post hoc rationalizations by the survivors. Clearly this hazing made me into a superlative soldier/frat brother/programmer, so suck it up! Also recognizable as the kind of justification offered by people who beat their children. You’d think that an organization as allegedly devoted to data gathering as Amazon would have done some controlled studies on these questions, but my guess is that Amazon gives precisely zero fucks about whether its culture is poisonous or not, except insofar as it affects their public image. There’s basically no incentive to care, since there’s always another fresh-out-of-college 23-year-old programmer to hire.

  3. Another common theme that Amazon’s defenders (and the tech world’s agitprop more generally) play again and again is that of SOLVING THE VERY CHALLENGINGEST OF PROBLEMS. Here’s a thing that a grown-up person actually wrote:

    Yes. Amazon is, without question, the most innovative technology company in the world. The hardest problems in technology, bar none, are solved at Amazon.

    This, of course, is totally fucking ludicrous, and yet no one seems to ever question these claims. Obviously Amazon has some fairly serious problems that need solving; that would be true of almost any organization of its scale and scope. But in the end, those problems are about how to make the delivery of widgets slightly more efficient, so you can get your shit in two days instead of three. This, of course, twins with the tech world’s savior complex: not only are we solving the most challenging problems but they also happen to be the most pressing ones and also the ones that will result in the greatest improvements to standards of living/gross national happiness/overall karmic state of the universe. It’s never enough to merely deliver a successful business product if that product doesn’t come with messianic pretensions. So it is with Amazon, which must sell itself as the innovatingest innovator that ever innovated if it hopes to keep attracting those 23-year-olds. These grandiose claims are hard to square with the reality that marginal improvements in supply chain management and customer experience, while good for the bottom line (or, I guess in Amazon’s case, investors) and certainly not technically trivial, ain’t the fucking cure for cancer or even a Mars rover. If your shit gets here in three days after all, you’ll survive. Or to put it another way, Bell Labs invented C and UNIX and also won eight Nobel Prizes in Physics. That’s what actual innovation looks like.

Toothpicks and Bubblegum, Software Edition, Iteration 326

There’s nothing like working with an old *nix utility to remind you how brittle software is. Case in point: I’m trying to use flex and bison to design a very simple grammar for extracting some information from plaintext. Going by the book and everything, and it just doesn’t work. Keeps telling me it caught a PERIOD as its lookahead token when it expected a WORD and dies with a syntax error. I killed a whole day trying to track this down before I realized one simple thing: the order of token declarations in the parser (that’s your .y file) must match the order of token declarations in the lexer (your .l file). If it doesn’t, neither bison nor flex will tell you about this, of course (and how could they, when neither program processes files intended for the other?). It’s just that your program will stubbornly insist, against all indications to the contrary, that it has indeed caught a PERIOD when it expected a WORD and refuse to validate perfectly grammatical text.

OH. MY. GOD.

I was so angry when this was happening and now I think I might be even angrier. Keep in mind that this fantastically pathological behavior is not documented anywhere, so I found myself completely baffled by what was happening. Where was PERIOD coming from? Why didn’t it just move on to the next valid token? Of course the correct thing is to include the bison-generated tab.h file in the lexer, but I had written my token definitions down explicitly in the lexer file, so I didn’t think to do that.
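To make the fix concrete, here’s a minimal sketch of the two files; WORD and PERIOD are the tokens from the story above, but the file names and grammar rules below are purely illustrative, not my actual code.

/* grammar.y -- the parser; bison owns the token numbering */
%{
#include <stdio.h>
int yylex(void);
void yyerror(const char *s) { fprintf(stderr, "syntax error: %s\n", s); }
%}

%token WORD PERIOD

%%
input:    /* empty */
        | input sentence
        ;
sentence: words PERIOD      { printf("parsed a sentence\n"); }
        ;
words:    WORD
        | words WORD
        ;
%%

int main(void) { return yyparse(); }

/* lexer.l -- the fix: include the bison-generated header instead of redeclaring tokens */
%{
#include "grammar.tab.h"    /* WORD and PERIOD get the exact values bison assigned */
%}
%option noyywrap
%%
[A-Za-z]+     { return WORD; }
\.            { return PERIOD; }
[ \t\r\n]+    { /* skip whitespace */ }
%%

Build with bison -d grammar.y && flex lexer.l && cc grammar.tab.c lex.yy.c -o parse; the -d flag is what emits grammar.tab.h, so the lexer’s token numbering can’t silently drift out of sync with the parser’s.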

What’s ludicrous about this is that the flex/bison toolchain has to go through yet another auxiliary tool, m4, just to do its thing. m4, if you don’t know, is a macro language with a terrible, incomprehensible syntax that was invented for the purposes of text transformation, thereby proving years before its formulation Greenspun’s 10th rule, according to which any sufficiently advanced C project will end up reimplementing, badly, some subset of Common Lisp.

I have the utmost respect for Dennis Ritchie, but m4 is a clusterfuck that should have never survived this long. Once a language like Lisp existed, which could actually give you code and DSL transformations at a high level of abstraction, m4 became superfluous. It has survived, like so many awful tools of its generation, through what I can only assume is inertia.

Five Years in the Future

Oh gosh, it has been a long time, hasn’t it? My deepest apologies, sports fans. You know how life is, always getting in the way. Perhaps this will spur a production in verbal output but it’s just as likely that it’ll be a once-per-year salvo. Don’t get used to anything nice, my mother always told me.

Anyway, this too-prolix production has been made possible by a friend soliciting my input on the following article. That shit is long, so take a good thirty minutes out of your day if you plan to read it all, and then take another thirty to read this response, which I know you’re going to do because you love me that much.

I’ll save you some of the trouble by putting my thesis front and center so you can decide whether or not you want to continue reading or leave an angry comment: I think the linked piece is premised on some really flimsy assumptions and glosses over some serious problems, both empirical and logical, in its desire to attain its final destination. This is, sadly, par for the course in popular writing about AI; even very clever people often write very stupid things on this topic. There’s a lot of magical thinking going on in this particular corner of the Internet; much of it, I think, can be accounted for by a desire to believe in a bright new future about to dawn, coupled with a complete lack of consequences for being wrong about your predictions. That said, let’s get to the meat.

There are three basic problems with Tim Urban’s piece, and I’ll try and tackle all three of them. The first is that it relies throughout on entirely speculative and unjustified projections generated by noted “futurist” (here I would say, rather, charlatan, or perhaps huckster) Ray Kurzweil; these projections are the purest fantasy premised on selective interpretations of sparsely available data and once their validity is undermined, the rest of the thesis collapses pretty quickly. The second problem is that Urban repeatedly makes wild leaps of logic and inference to arrive at his favored result. Third, Urban repeatedly mischaracterizes or misunderstands the state of the science, and at one point even proposes a known mathematical and physical impossibility. There’s a sequel to Urban’s piece too, but I’ve only got it in me to tackle this one.

Conjectures and Refutations

Let me start with what’s easily the most objectionable part of Urban’s piece: the charts. Now, I realize that the charts are meant to be illustrative rather than precise scientific depictions of reality, but for all that they are still misleading. Let’s set aside for the moment the inherent difficulty of defining what exactly constitutes “human progress” and note that we don’t really have a good way of determining where we stand on that little plot even granting that such a plot could be made. Urban hints at this problem with his second “chart” (I guess I should really refer to them as “graphics” since they are not really charts in any meaningful sense), but then the problem basically disappears in favor of a fairly absurd thought experiment involving a time traveler from the 1750s. My general stance is that in all but the most circumscribed of cases, thought experiments are thoroughly useless, and I’d say that holds here. We just don’t know how a hypothetical time traveler retrieved from 250 years ago would react to modern society, and any extrapolation based on that idea should be suspect from the get-go. Yes, the technological changes from 1750 to today are quite extreme, perhaps more extreme than the changes from 1500 to 1750, to use Urban’s timeline. But they’re actually not so extreme that they’d be incomprehensible to an educated person from that time. For example, to boil our communication technology down to the basics, the Internet, cell phones, etc. are just uses of electrical signals to communicate information. Once you explain the encoding process at a high level to someone familiar with the basics of electricity (say, Ben Franklin), you’re not that far off from explicating the principles on which the whole thing is based, the rest being details. Consider further that in 1750 we are a scant 75 years away from Michael Faraday, and 100 years away from James Clerk Maxwell, the latter of whom would understand immediately what you’re talking about.

We can play this game with other advances of modern science, all of which had some precursors in the 1750s (combustion engines, the germ theory of disease, etc.). Our hypothetical educated time traveler might not be terribly shocked to learn that we’ve done a good job of reducing mortality through immunizations, or that we’ve achieved heavier-than-air flight. However surprised they might be, I doubt it would be to the point of actually dying from it. The whole “Die Progress Unit” is, again, a tongue-in-cheek construct of Urban’s, meant to be illustrative, but rhetorically it functions to cloak all kinds of assumptions about how people would or would not react. It disguises a serious conceptual and empirical problem (just how do we define and measure things like “rates of progress”?) behind a very glib imaginary scenario that is simultaneously not meant to be taken seriously and yet made to function as justification for the line of thinking that Urban pursues later in the piece.

The idea that ties this first part together is Kurzweil’s “Law of Accelerating Returns.” Those who know me won’t be surprised to learn that I don’t think much of Kurzweil or his laws. I think Kurzweil is one part competent engineer and nine parts charlatan, and that most of his ideas are garbage amplified by money. The “Law” of accelerating returns isn’t any such thing, certainly not in the simplistic way presented in Urban’s piece, and relying on it as if it were some sort of proven theorem is a terrible mistake. A full explanation of the problems with the Kurzweilian thesis will have to wait for another time, but I’ll sketch one of the biggest objections below. Arguendo I will grant an assumption that in my view is mostly unjustified, which is that the y-axis on those graphics can even be constructed in a meaningful way.

A very basic problem with accelerating returns is that it very much depends on what angle you look at it from. To give a concrete example, if you were a particle physicist in the 1950s, you could pretty much fall ass-backwards into a Nobel Prize if you managed to scrape together enough equipment to build yourself a modest accelerator capable of finding another meson. But then a funny thing happened, which is that every incremental advance beyond the gathering of low-hanging fruit consumed disproportionately more energy. Unsurprisingly, the marginal returns on increased energy diminished greatly; the current most powerful accelerator in the world (the LHC at CERN) has beam energies that I believe will max out at somewhere around 7 TeV, give or take a few GeV. That’s one order of magnitude more powerful than the second-most powerful accelerator (the RHIC at Brookhaven), and it’s not unrealistic to believe that the discovery of any substantial new physics will require an accelerator another order of magnitude more powerful. In other words, the easy stuff is relatively easy and the hard stuff is disproportionately hard. Of course this doesn’t mean that all technologies necessarily follow this pattern, but note that what we’re running up against here is not a technological limit per se, but rather a fundamental physical limit: the increased energy scale just is where the good stuff lies. Likewise, there exist other real physical limits on the kind of stuff we can do. You can only make transistors so small until quantum effects kick in; you can only consume so much energy before thermodynamics dictates that you must cook yourself.

The astute reader will note that this pattern matches quite well (at least, phenomenologically speaking) the logistic S-curve that Urban draws in one of his graphics. But what’s really happening there? What Urban has done is to simply connect a bunch of S-curves and overlay them on an exponential, declaring (via Kurzweil) that this is how technology advances. But does technology really advance this way? I can’t find any concrete argument that it does, just a lot of hand-waving about plateaus and explosions. What’s more, the implicit assumption lurking in the construction of this plot is that when one technology plays itself out, we will somehow be able to jump ship to another method. There is historical precedent for this assumption, especially in the energy sector: we started off by burning wood, and now we’re generating energy (at least potentially) from nuclear reactions and sunlight. All very nice, until you realize that the methods of energy generation that are practical to achieve on Earth are likely completely played out. We have fission, fusion, and solar, and that’s about it for the new stuff. Not because we aren’t sufficiently “clever” but because underlying energy generation is a series of real physical processes that we don’t get to choose. There may not be another accessible S-curve that we can jump to.

Maybe other areas of science behave in this way and maybe they don’t; it’s hard to know for sure. But admitting ignorance in the face of incomplete data is a virtue, not a sin; we can’t be justified in assuming that we’ll be able to go on indefinitely appending S-curves to each other. At best, even if the S-curve is “real,” what we’re actually dealing with is an entire landscape of such curves, arranged in ways we don’t really understand. As such, predictions about the rate of technological increase are based on very little beyond extrapolating various conveniently-arranged plots; it’s just that instead of extrapolating linearly, Kurzweil (and Urban following after him) does so exponentially. Well, you can draw lines through any set of data that you like, but it doesn’t mean you actually understand anything about that data unless you understand the nature of the processes that give rise to it.

You can look at the just-so story of the S-curve and the exponential (also the title of a children’s book I’m working on) as a story about strategy and metastrategy. In other words, each S-curve technology is a strategy, and the metastrategy is that when one strategy fails we develop another to take its place. But of course this itself assumes that the metastrategy will remain valid indefinitely; what if it doesn’t? Hitting an upper or lower physical limit is an example of a real barrier that is likely not circumventable through “paradigm shifts” because there’s a real universe that dictates what is and isn’t possible. Kurzweil prefers to ignore things like this because they throw his very confident pronouncements into doubt, but if we’re actually trying to formulate at least a toy scientific theory of progress, we can’t discount these scenarios.

1. p → q
2. r
3. therefore, q

Since Kurzweil’s conjectures (I won’t dignify them with the word “theory”) don’t actually generate any useful predictions, it’s impossible to test them in any real sense of the word. I hope I’ve done enough work above to persuade the reader that these projections are nothing more than fantasy predicated on the fallacious notion that the metastrategy of moving to new technologies is going to work forever. As though it weren’t already bad enough to rely on these projections as if they were proven facts, Urban repeatedly mangles logic in his desire to get where he’s going. For example, at one point, he writes:

So while nahhhhh might feel right as you read this post, it’s probably actually [sic] wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect.

It’s hard to see why the skeptics are the ones who are “probably actually wrong” and not Urban and Kurzweil. If we’re being “truly logical” then, I’d argue, we aren’t making unjustified assumptions about what the future will look like based on extrapolating current non-linear trends, especially when we know that some of those extrapolations run up against basic thermodynamics.

That self-assured gem comes just after Urban commits an even grosser offense against reason. This:

And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

is not an argument. In the words of Wolfgang Pauli, it isn’t even wrong. This is a sequence of words that means literally nothing and no sensible conclusion can be drawn from it. To write this and to reason from such premises is to do violence to the very notion of logic that you’re trying to defend.

The entire series contains these kinds of logical gaps that are basically filled in by wishful thinking. Scales, trends, and entities are repeatedly postulated, then without any particular justification or reasoning various attributes are assigned to them. We don’t have the faintest idea of what an artificial general intelligence or super-intelligence might look like, but Urban (via Kurzweil) repeatedly gives it whatever form will make his article most sensational. If for some reason the argument requires an entity capable of things incomprehensible to human thought, that capability is magicked in wherever necessary.

The State of the Art

Urban’s taxonomy of “AI” is likewise flawed. There are not, actually, three kinds of AI; depending on how you define it, there may not even be one kind of AI. What we really have at the moment are a number of specialized algorithms that operate on relatively narrowly specified domains. Whether or not that represents any kind of “intelligence” is a debatable question; pace John McCarthy, it’s not clear that any system thus far realized in computational algorithms has any intelligence whatsoever. AGI is, of course, the ostensible goal of AI research generally speaking, but beyond general characteristics such as those outlined by Allen Newell, it’s hard to say what an AGI would actually look like. Personally, I suspect that it’s the sort of thing we’d recognize when we saw it, Turing-test-like, but pinning down any formal criteria for what AGI might be has so far been effectively impossible. Whether something like the ASI that Urban describes can even plausibly exist is of course the very thing in doubt; it will not surprise you, even if you have not read all the way through part 2, that having postulated ASI in part 1, Urban immediately goes on to attribute various characteristics to it as though he, or anyone else, could possibly know what those characteristics might be.

I want to jump ahead for a moment and highlight one spectacularly dumb thing that Urban says at the end of his piece that I think really puts the whole thing in perspective:

If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us.

This scenario is impossible. Not only does it violate everything we know about uncertainty principles, but it also effectively implies a being with infinite computational power; this is because even if atoms were classical particles, controlling the position of every atom logically entails simulating the trajectories of all of those atoms forward in time to effectively infinite precision, a feat that is impossible in a finite universe. Not only that, but the slightest error in the initial conditions accumulates exponentially (here, unlike in Kurzweil’s charts, the exponential growth is actually mathematically real), so that extending your forecast horizon by even a fixed amount demands a multiplicative improvement in the precision of your initial data, and hence exponentially more storage and computation.
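To put that last point in symbols (this is standard chaotic-dynamics bookkeeping, not anything specific to Urban’s scenario): an initial error \delta_0 in a system with largest Lyapunov exponent \lambda > 0 grows roughly as

\delta(t) \approx \delta_0 \, e^{\lambda t},
\qquad\text{which gives a forecast horizon}\qquad
T \approx \frac{1}{\lambda} \ln\frac{\Delta}{\delta_0},

where \Delta is the largest error you can tolerate. The horizon grows only logarithmically in the quality of your initial data: every additional stretch of predicted time costs another multiplicative factor of precision, plus the memory and computation to carry it, which is why “just simulate every atom” is not a plan.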

This might seem like an awfully serious takedown of an exaggerated rhetorical point, but it’s important because it demonstrates how little Urban knows, or worries, about the actual science at stake. For example, he routinely conflates raw computational power with the capabilities of actual mammalian brains:

So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level.

But of course this is nonsense. We are not “beating” the mouse brain in any substantive sense; we merely have machines that do a number of calculations per second comparable to the number we imagine the mouse brain is also doing. About the best we’ve been able to do is to mock up a network of virtual point neurons that kind of resembles a slice of the mouse brain, maybe, if you squint from far away, and run it for a few seconds. Which is a pretty impressive technical achievement, but saying that we’ve “beaten the mouse brain” is wildly misleading. “Affordable, widespread AGI-caliber hardware in ten years” is positively fantastical even under the most favorable Moore’s Law assumptions.

Of course, even with that kind of hardware, AGI is not guaranteed; it takes architecture as much as computational power to get to intelligence. Urban recognizes this, but his proposed “solutions” to this problem again betray a misunderstanding of both the state of the science and our capabilities. For example, his “emulate the brain” solution is basically bog-standard connectionism. Not that connectionism is bad or hasn’t produced some pretty interesting results, but neuroscientists have known for a long time now that the integrate-and-fire point neuron of connectionist models is a very, very, very loose abstraction that doesn’t come close to capturing all the complexities of what happens in the brain. As this paper on “the neuron doctrine” (PDF) makes clear, the actual biology of neural interaction is fiendishly complicated, and the simple “fire together, wire together” formalism is a grossly inadequate (if also usefully tractable) simplification. Likewise, the “whole brain simulation” story fails to take into account the real biological complexities of faithfully simulating neuronal interactions. Urban links to an article which claims that whole-brain emulation of C. elegans has been achieved, but while the work done by the OpenWorm folks is certainly impressive, it’s still a deeply simplified model. It’s hard from the video to gauge how closely the robot-worm’s behavior matches the real worm’s behavior; it’s likely that, at least, it exhibits some types of behaviors that the worm also exhibits, but I doubt that even its creators would claim ecological validity for their model. At the very best, it’s a proof of principle regarding how one might go about doing something like this in the future; and keep in mind that this is a roughly 300-neuron creature whose connectome is entirely known.
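For readers who haven’t seen one, here is what the point-neuron abstraction actually amounts to: a textbook leaky integrate-and-fire unit, written out as a standalone JavaScript sketch (the parameters are illustrative round numbers, not fits to any real cell).

// The connectionist "point neuron": the entire cell is a single membrane voltage.
// Dendritic geometry, ion-channel kinetics, neuromodulators, glia -- all abstracted away.
function simulateLIF({ tau = 20, vRest = -65, vThresh = -50, vReset = -70, R = 10, I = 2.0, dt = 0.1, steps = 2000 } = {}) {
  let v = vRest;                 // membrane potential in mV
  const spikeTimes = [];
  for (let i = 0; i < steps; i++) {
    // Euler step of dV/dt = (-(V - V_rest) + R*I) / tau
    v += dt * ((-(v - vRest) + R * I) / tau);
    if (v >= vThresh) {          // crossing threshold counts as a "spike"
      spikeTimes.push(i * dt);   // record spike time in ms
      v = vReset;                // instantaneous reset; real neurons are not this tidy
    }
  }
  return spikeTimes;
}
console.log(simulateLIF().length, "spikes in 200 ms of simulated time");

That dozen lines is the building block out of which the connectionist and “whole brain emulation” stories are largely assembled, which should give you some sense of how much biology is being waved away.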

Nor are genetic algorithms likely to do the trick. Overall, the track record of genetic algorithms in actually producing useful results is decidedly mixed. In a recent talk I went to, Christos Papadimitriou, a pretty smart guy, flat out claimed that “genetic algorithms don’t work” (PDF, page 18). I do not possess sufficient expertise to judge the truth of that blanket statement, but I think the probability that genetic algorithms will provide the solution is low. It does not help that we only “know” what we’re aiming for in a “we’ll know it when we see it” sense; in truth we have no explicit objective function to optimize for, and that isn’t something that lends itself terribly well to a targeted search. Evolution, unlike humans, optimized for certain sorts of local fitness maxima (to put it very, very simply), and wound up producing something that couldn’t possibly have been targeted for in such an explicit way.
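For those who haven’t met one, the canonical genetic algorithm is just the following loop; the sketch below (in JavaScript, with made-up parameters) maximizes a deliberately trivial fitness function, counting the ones in a bit-string, which is exactly the luxury we don’t have when the target is “intelligence.”

// Toy genetic algorithm ("OneMax"): selection, crossover, and mutation
// against an explicit, known fitness function.
const GENOME_LEN = 32, POP_SIZE = 50, GENERATIONS = 100, MUTATION_RATE = 0.01;
const randomGenome = () => Array.from({ length: GENOME_LEN }, () => (Math.random() < 0.5 ? 1 : 0));
const fitness = (g) => g.reduce((a, b) => a + b, 0);      // the explicit objective: count the ones
function tournamentSelect(pop) {
  const a = pop[Math.floor(Math.random() * pop.length)];
  const b = pop[Math.floor(Math.random() * pop.length)];
  return fitness(a) >= fitness(b) ? a : b;                // keep the fitter of two random picks
}
function crossover(p1, p2) {
  const cut = Math.floor(Math.random() * GENOME_LEN);     // single-point crossover
  return p1.slice(0, cut).concat(p2.slice(cut));
}
const mutate = (g) => g.map((bit) => (Math.random() < MUTATION_RATE ? 1 - bit : bit));
let population = Array.from({ length: POP_SIZE }, randomGenome);
for (let gen = 0; gen < GENERATIONS; gen++) {
  population = Array.from({ length: POP_SIZE }, () =>
    mutate(crossover(tournamentSelect(population), tournamentSelect(population))));
}
console.log("best fitness:", Math.max(...population.map(fitness)), "out of", GENOME_LEN);

Every step of that loop presupposes the fitness function defined up top; swap in “general intelligence” and there is nothing to write there, which is the point.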

All of this is to say that knowing the connectome and having some computing power at your disposal is a necessary but not sufficient condition for replicating even simple organismic functionality. Understanding how to go from even a complete map of the human brain to a model of how that brain produces intelligence is not a simple mapping, nor is it just a matter of how many gigaflops you can execute. You have to have the right theory or your computational power isn’t worth that much. A huge problem that one hits on when speaking with actual neuroscientists is that there’s really a dearth of theoretical machinery out there that even begins to accurately represent intelligence as a whole, and it isn’t for lack of trying.

The concluding discussion of what an AI might look like in relation to humans is hopelessly muddled. We barely have any coherent notion of how to quantify existing human intelligence, much less a possible artificial one. There’s no particular reason to think that intelligence follows some kind of linear scale, or that “170,000 times more intelligent than a human” is any sort of meaningful statement, rather than a number thrown out into the conversation without any context.

The problem with the entire conversation surrounding AI is that it’s almost entirely divorced from the realities of both neuroscience and computer science. The boosterism that emanates from places like the Singularity Institute and from people like Kurzweil and his epigones is hardly anything more than science fiction. Their projections are mostly obtained by drawing straight lines through some judiciously-selected data, and their conjectures about what may or may not be possible are mostly based on wishful thinking. It’s disappointing that Urban’s three weeks of research have produced a piece that reads like an SI press release, rather than any sort of sophisticated discussion of either the current state of the AI field or the tendentious and faulty logic driving the hype.

Conclusion

None of this is to say that we should be pessimists about the possibility of artificial intelligence. As a materialist, I don’t believe that humans are somehow imbued with any special metaphysical status that is barred to machines. I hold out hope that some day we will, through diligent research into the structure of existing brains, human and otherwise, unravel the mystery of intelligence. But holding out hope is one thing; selling it as a foregone conclusion is quite another. Concocting bizarre stories about superintelligent machines capable of manipulating individual atoms through, apparently, the sheer power of will, is just fabulism. Perhaps no more succinct and accurate summary of this attitude has ever been formulated than that written by John Campbell of Pictures for Sad Children fame:

it’s flying car bullshit: surely the world will conform to our speculative fiction, surely we’re the ones who will get to live in the future. it gives spiritual significance to technology developed primarily for entertainment or warfare, and gives nerds something to obsess over that isn’t the crushing vacuousness of their lives

Maybe that’s a bit ungenerous, but I find that it’s largely true. Obsession about AI futures is not even a first world problem as much as a problem for a world that has never existed and might never exist. It’s like worrying about how you’ll interact with the aliens that you’re going to find on the other side of the wormhole before you even know how to get out of the solar system without it taking decades.

Silly man says stupid thing, Silicon Valley edition

A fellow named Jerzy Gangi advances a bunch of hypotheses to answer a not-very-interesting question about why Silicon Valley funds some things (e.g. Instagram) and not others (e.g. Hyperloop). Along the way we get some speculation about the amount of cojones possessed by VCs (insufficient!) and how well the market rewards innovation (insufficiently!), but the question is boring because the answer is already well-known: infrastructure projects of the scope and scale of Hyperloop (provided they’re feasible to begin with) require massive up-front investments with uncertain returns, while an Instagram requires comparatively little investment with the promise of a big return. Mystery solved! You can PayPal me the $175 you would have given Gangi for the same information spread over an hour of time at grapesmoker@gmail.com.

Despite the fact that Gangi’s question is not very interesting on its own, his writeup of it actually contains an interesting kernel that I want to use as a touch-off point for exploring a rather different idea. You see, while criticism of techno-utopianism (and Silicon Valley, its material manifestation, which will be used metonymically with it from here on out) has been widespread, it usually doesn’t address a fundamental claim that Silicon Valley makes about itself; namely, that Silicon Valley is an innovative environment. Critics like Evgeny Morozov are likely to only be peripherally interested in the question; Morozov is far more concerned with asking whether the things Silicon Valley wants disrupted actually ought to be “disrupted.” Other critiques have focused on the increasing meaninglessness of that very concept and the deleterious effects that those disruptions have on the disrupted. But as a rule, discussion about Silicon Valley takes it for granted that Silicon Valley is the engine of innovation that it claims to be, even if that innovation comes at a price for some.

I think this is a fundamentally mistaken view. Silicon Valley is “innovative” only if your bar for innovation is impossibly low; (much) more often than not what Silicon Valley produces is merely a few well-known models repackaged in shinier wrapping. That this is so can be seen by looking at this list of recent Y Combinator startups. What, in all this, constitutes an “innovative” idea? The concept that one can use data to generate sales predictions? Or perhaps the idea of price comparison? The only thing on here that looks even remotely like something that’s developing new technology is the Thalmic whatsis, and even that is not likely to be anything particularly groundbreaking. These may or may not be good business ideas, but that’s not the question. The question is: where’s the innovation? And the answer is that there isn’t a whole lot of it, other than taking things that people used to do via other media (like buying health insurance) and making it possible to do them over the internet.

There’s nothing wrong with not being innovative, by the way. Most companies are not innovative; they just try and sell a product that the consumer might want to buy. The problem is not the lack of innovation, but the blatant self-misrepresentation in which Silicon Valley collectively engages. It’s hardly possible to listen to any one of Silicon Valley’s ubiquitous self-promoters without hearing paeans to how wonderfully innovative it is; if the PR is to be believed, Silicon Valley is the source and font of all that is new and good in the world. Needless to say, this is a completely self-serving fantasy which bears very little resemblance to any historically or socially accurate picture of how real innovation actually happens. To the extent that any innovation took place in Silicon Valley, it didn’t take place at Y Combinator funded start-ups, but rather at pretty large industrial-size concerns like HP and Fairchild Semiconductor. No one in the current iteration of Silicon Valley has produced anything remotely as innovative as Bell Labs. Maybe Tesla could yet live up to that lofty ideal, but it’s pretty unlikely that an internet company, no matter how successful, ever will.

Ha-Joon Chang has adroitly observed that the washing machine did more to change human life than the Internet has. But the washing machine is not shiny (anymore) or new (anymore) or sexy, so it’s easy to take it for granted. The Internet is not new (anymore) either, but unlike the washing machine, the capability exists to make it ever shinier, and then sell the resulting shiny objects as brand-new innovations when of course they aren’t really any such thing. As always, the actual product of Silicon Valley is, by and large, the myth of its own worth and merit; what’s being sold is not any actual innovation but a story about who is to be classed as properly innovative (and thereby preferably left untouched by regulation and untaxed by the very social arrangements which make their existence possible).

Toothpicks and Bubblegum

Oh my, it’s been a while, hasn’t it? So many important developments have taken place. For example, I got a new phone.

Let’s start off with something light, shall we? Let’s talk about how the web is a complete clusterfuck being held together by the digital equivalent of this post’s titular items.

The web of today would probably be, on the surface, not really recognizable to someone who was magically transported here from 20 years ago. Back then, websites were mostly static (pace the blink tag) and uniformly terrible-looking. Concepts like responsiveness and interactivity didn’t really exist to any serious extent on the web. You could click on stuff, that was about it; the application-in-a-browser concept that is Google Docs was hardly credible.

But on the other hand, that web would, under the surface, look quite familiar to our guest from 1993. The tags would have changed of course, but the underlying concept of a page structured by HTML and animated by JavaScript* would have been pretty unsurprising. Although application-in-a-browser did not exist, there was nothing in the makeup of Web 1.0 to preclude it; all the necessary programming ingredients were already in place to make it happen. What was missing, however, was the architecture which enabled applications like Google Docs to be realized. And that’s what brings us to today’s remarks.

You see, I am of the strong opinion that the client-side development architecture of the web is ten different kinds of fucked up. A lot of this is the result of the “architecture by committee” approach that seems to be the MO of the W3C, and a lot more seems to be just a plain lack of imagination. Most complaints about W3C focus on the fact that its processes move at a snail’s pace and that it doesn’t enforce its standards, but I think the much larger problem with the web today is that it’s running on 20-year-old technology and standards that were codified before the current iteration of the web was thought to be possible.

Let me explain that. As an aside, though, allow me to tell a relevant story: recently I went to a JavaScript programmers’ MeetUp where people were presenting on various web frameworks (e.g. Backbone, Ember, Angular, etc.). During the Backbone talk, the fellow who was giving it made a snarky comment about “not programming in C on the web.” This got a few laughs and was obviously supposed to be a dig of the “programming Fortran in any language” variety. What I found most revealing about this comment, though, was that it was made in reference not to any features that JavaScript has and C doesn’t (objects, garbage collection) but rather in reference to the notion that one’s program ought to be modularized and split over multiple files. This is apparently considered so ludicrous to a web programmer that the mere suggestion that one might want to do so is worthy of mockery.

At the same time, web programmers are no longer creating static pages with minimal responsivity and some boilerplate JavaScript to do simple things like validation. In fact, the entirety of this presentation was dedicated to people talking about frameworks that do precisely the sort of thing that desktop developers have been doing since forever: writing large, complicated applications. The only difference is that those applications are going to be running in a web browser and not on a desktop, which means they have to access server-side resources in an asynchronous fashion. Other than that, Google Docs doesn’t look much different from Word. And you can’t do large-scale app development by writing all your code in three or four files. I mean, you can do that, but it would be a very bad idea. It’s a bad idea independent of the specific paradigm in which you’re developing, because the idea itself is sort of meta-architectural to begin with. Modularization is the sine qua non of productive development because it allows you to split up your functionality into coherent and manageable work units. It’s a principle that underlies any effective notion of software engineering as such; to deride it as “programming in C on the web” while wanting all the benefits that modularization delivers is to demand, with Agnes Skinner, that all the groceries go in one bag, but that the bag also not be heavy.

What’s the point of this story? It’s this: if large-scale modularization is a necessary feature of truly productive application programming, then how come we have had to wait close to two decades for it to finally reach the web? In particular, why did it take this long for modularization to become part of JavaScript at all, and even then only bolted on after the fact by efforts such as the Asynchronous Module Definition (AMD) spec and Require.js?

I ask because the presence of these projects and efforts, and the fact that they have solved (to whatever extent) the problem of modularization on the web, is ipso facto evidence that the problem existed, demanded a solution, and that a solution was possible. Moreover, that this would be a problem for large-scale development would (and should) have been just as obvious to people 20 years ago as it is today. After all, the people responsible for the design and standardization of efforts like JavaScript were programmers themselves, for the most part; it’s not credible to believe that even the original JavaScript compiler was just one or two really long files of code. And yet somehow, the whole concept of breaking up your project into multiple dependencies never became an architectural part of the language, despite the fact that this facility is present in virtually all modern programming languages.

To me, this is the first, and perhaps greatest, original sin of the client-side web. A language intended for use in the browser, and which could be and is now being used to develop large-scale client-side web applications, originally came without, and still lacks, any architectural feature designed to support breaking up your program into discrete pieces of code. This isn’t meant to denigrate the awesome work done by the guy behind Require, for example, but the fact remains that Require shouldn’t even have been necessary. AMD, in some form, should have been a first-class architectural feature of the language right out of the box; I should be able to simply write import("foo.js") in my file and have it work. Instead, we were reduced to loading scripts in the “right order” in the header. This architectural mistake delayed the advent of web applications by a long time, and still hampers the modularization of complex web applications. Laughing this off as “programming C on the web” is terribly shortsighted, especially as recent developments in JavaScript framework land have demonstrated exactly this need for breaking up your code. Google didn’t develop their Google Web Toolkit for shits and giggles; they did it because Java provides the kind of rigid structure for application development that JavaScript does not. Granted, they might have gone a bit too far in the other direction (I hate programming in Java, personally), but it’s obvious why they did it: because you can’t do large-scale development without an externally imposed architecture that dictates the flow of that development.
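For contrast, here is roughly what the bolted-on fix looks like in AMD terms; the module names (util, model, foo) and their methods are made up for illustration, and this assumes Require.js is doing the loading.

// Before AMD: every page hand-orders its script tags, and nothing checks them.
//   <script src="util.js"></script>
//   <script src="model.js"></script>
//   <script src="app.js"></script>    <!-- breaks silently if the order is wrong -->

// foo.js -- with AMD, each module declares its own dependencies
define(["util", "model"], function (util, model) {
  // this factory runs only after util.js and model.js have loaded
  return {
    run: function () {
      model.init(util.config());       // util.config and model.init are hypothetical
    }
  };
});

// main.js -- the entry point just asks for what it needs
require(["foo"], function (foo) {
  foo.run();
});

Serviceable, but note that every bit of it lives in a third-party loader rather than in the language itself; that is exactly the bolting-on I’m complaining about.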

This is already getting way, way longer than I originally anticipated it being, so I’ll stop here. My next installment in this series will be a discussion of all the other terrible things that make the web suck: namely, HTML and CSS. So you know, I’m really covering all the bases here.

*: Yes, I know JavaScript didn’t appear until 1995. Shut up.

In Which I Solve More of Your Most Pressing Problems

Look at this bullshit. I’ve been trying to get plotting and Qt working under Lisp, on a Mac, because I fucking hate myself. Yinz are about to benefit from my experience.

Here are the steps if you are the kind of self-hating masochist who needs to get commonqt working under OS X:

1) use something like macports to get the smoke libraries, e.g.

sudo port install smokeqt

2) wait a long time, hopefully it finishes without exploding

3) if you’ve got quicklisp, do (ql:quickload "qt"). It will crash.

4) It crashes for the following reasons:

a) for reasons not at all clear to me, the compilation is linked against the debug libraries, i.e. libQtGui_debug. WHY?! It don’t matter; go to whatever directory quicklisp put your shit in and edit the commonqt.pro file to remove the debug option from the build spec.
b) While you’re there you’ll also want to change lsmokeqtcore to lsmokebase because that’s the correct lib to link against, else CRASHX0R.

5) ok, you can now (ql:quickload "qt") again. Psych! no you can’t. It won’t work. WHY?!

6) it’s because the cffi library that gets loaded is incorrectly configured for OS X. That’s fucking right, you’re gonna want to change that line in info.lisp that goes:

#-(or mswindows windows win32) "libsmoke~A.so"

to something sensible like

#+(and (or unix) (not darwin)) "libsmoke~A.so"
#+(or mswindows windows win32) "smoke~A.dll"
#+(or darwin) "libsmoke~A.dylib"

so that now it actually loads libsmokecore.dylib and all that other jazz correctly.

7) ok, now run the quickload again! YOU ARE ALL SET MOTHERFUCKER.

How to solve the problem of multiple Qt installations on a Mac

Yeah, I know; this is probably not a common problem you have. But shit, this is as much or more for my benefit as yours. Anyway, let’s say you have Qt installed because you downloaded the official dmg, but then you also went and installed the Enthought Python distribution which comes with its own Qt. And also, that Qt might be a different architecture, so WHAT CAN YOU DO?!

Just export DYLD_FRAMEWORK_PATH, like so:

export DYLD_FRAMEWORK_PATH="/Library/Frameworks/Python.framework/Versions/7.2/Frameworks/:$DYLD_FRAMEWORK_PATH"

This should ensure that any loading of the Qt libraries references the Enthought ones. Of course yours might be installed in a different location (and in any case instead of referencing version 7.2 you should just use Current, but whatevs).

You are welcome.

Everyone Is Doing It So Why Not Me?

Assuming all five of you are the kind of sophisticated readers that frequent this blog, I’m not going to tell you anything you didn’t already know about SOPA. Now that pretty much every useful site on the Internet has gone to blackout mode in protest, there’s no shortage of opportunity to learn as much as you’d care to about this legislation. I could go on about how SOPA essentially establishes a presumption of guilt, how it puts what ought to be government power in the hands of private actors, and how its technical requirements would render useless much of what we value about the Internet, but you can get this anywhere; I’ll dedicate my time to saying something else.

And that something else is this: the more I live on this earth, in this American society, and the more I read and learn, the more I become convinced that capitalism (that is to say, the actual existing capitalism of today, not the fantasy capitalism of libertopia) has pretty much nothing to do with markets. I think this is true generally, and there are ample other opportunities to observe this fact, but the SOPA fight really brings the contrast between these two ideas into sharp relief. It’s pretty obvious that what’s going on here is the sort of expansion of rent-seeking that’s been the hallmark of copyright legislation for decades now; indeed, it would be surprising if something like SOPA (or its proposed almost-as-shitty replacement PIPA) were not the next step in the entertainment industry’s tireless fight to avoid the fate of the dinosaurs. The contrast of note here is the industry’s reliance on market-based language for justification while doing their utmost to evade any responsibility they might have toward satisfying their actual target market. This goes beyond the basic, easily-understood idea that many ordinary pirates are really just dissatisfied customers who are tired of paying exorbitant amounts of money to be treated like criminals; it’s at the point where Monster Cable (yes, those fuckers) includes Costco among its list of “rogue” sites that would likely fall under the purview of SOPA (see link above). It’s pretty obvious here that what Monster is doing (what all those who are pushing for SOPA are doing) is attempting to buy legislation that would effectively make it illegal to compete with it.

This, and numerous other instances, put the lie to the idea that the backers of SOPA have any interest in responding to market pressures. What they’re really trying to do is to legislate their competitors out of existence, i.e. to leverage the power of the state for the purposes of delivering private industry profits. If that sounds like a retread of something you might have heard before, it should; it’s the m.o. of the financial industry following the crash. Not exclusively, of course; there are thousands, maybe tens of thousands of small-time examples of this sort of thing. An extra clause in a bill here, an additional regulation there, and oh look, you can’t park on the street overnight in front of your own home because that’s basically like cheating landlords out of extra money they can squeeze from you for parking spaces (thanks, city of Providence!). In short, everyone likes to talk about capitalism, but no one, not even the big boys, actually wants to live it.

SOPA by itself is tragedy enough, but what’s even sadder is that it’s just a symptom of something that’s been going on a long time: if you’ve got the money, you can buy yourself the legislation you need to screw your competitors, all the while paying lip service to some notion of markets that doesn’t exist anywhere outside of an Econ 101 textbook. Market-speak is just a useful tool to keep the proles outraged about paying taxes that might possibly benefit a poor and/or brown-skinned person someday, but when it comes right down to it, there ain’t nothing a big corporation hates as much as it hates competition. We want to keep the government out of our business, sure, but there’s nothing more we like than to get the government into someone else’s business, or better yet, someone else’s bedroom if we can. So remember that the next time you hear some immaculately-coifed suit expound on the virtues of free markets and competition: they don’t mean it, or to the extent that they mean it, it’s for thee (you) and not for me (them). These people are paid liars and if our press had half the integrity they like to think of themselves as having, they’d laugh these shills right out of the studio.

In conclusion, I’d like to announce that upon the formation of Glorious Socialist Utopia, all RIAA and MPAA execs will be sent down to the salt mines. You have been duly warned.

In Which A Considered Judgment Is Rendered

It is seemingly obligatory in any discussion of Skyrim, the fifth installment in Bethesda Softworks’ Elder Scrolls series, to mention the game’s scope. There’s a good reason for this: Skyrim is truly colossal in every sense in which a video game can be so. There are numbers out there suggesting that Skyrim is not, in terms of virtual hectares, the largest of the Elder Scrolls games, but it’s hard to deny that it feels larger than any of its predecessors (especially if you have the privilege of playing the game on a large-screen TV). When you step outside, the land stretches in every direction before you. Foreboding mountains loom on the horizon, and the sky changes with the weather, sometimes dark with rain and other times radiant with sunlight. The game’s dungeons are artistic masterworks; one almost gasps the first time one enters a gigantic underground cavern or sees the full majesty of a ruined Dwemer city revealed (in fact, your character’s companions will gasp in just this way). In its atmospheric qualities, Skyrim is unmatched by any other game, or probably, any other virtual production at all. It’s not really any exaggeration to say that no world of this scale that feels this real exists anywhere else.

In addition to its size and detail, the world of Skyrim improves on that of its predecessor, Oblivion, by harking back to its grandparent, Morrowind. Morrowind was not nearly as pretty or detailed as Skyrim is (for lack of technical capability, one assumes, rather than desire on the part of the design team), but its aesthetic was dark, threatening, and engrossing. In Morrowind, storms could kick up clouds of dust that reduced visibility, and the entire countryside appeared perpetually drab, lending background gravity to a plotline concerned with the resurrection of a dead god (or… something; my memory of Morrowind’s plot is somewhat hazy and all I recall is that you would end up being something called the Nerevarrine). By contrast, Oblivion, with its painstakingly detailed blades of grass, looked a little too happy a place, what with the possible end of the world on the horizon. Even the plane of Oblivion itself was a little too bright; only its brightness was of a red sort, which I suppose was intended to connote some sort of evil. In its visuals and aesthetics, Skyrim is closer to Morrowind’s spirit, coupled with superlative realization, and this is for the better.

The size and look of this world, remarkable as it is, nevertheless fades into the background relatively quickly as one progresses through the game. To be sure, staggeringly beautiful scenes are encountered throughout the game, but they cannot sustain a 50-hour (and that, I think, is on the low side of how much time people will, on average, sink into Skyrim) adventure. For that, you must rely on the narratives of the main and secondary quests, and on the gameplay. I suspect that, at least on the first front, few will be disappointed (notable exceptions include Grantland’s Tom Bissell, who found the game’s social world tedious). The social detail within Skyrim is at least a match for the physical detail. If you are so inclined, you can join an incipient native rebellion, or team up with the Imperial occupiers to suppress it; the rebels themselves display casual, open racism towards those who diverge from their cause or happen to have the wrong color skin, a detail I mention to highlight how much work has obviously gone into a realistic rendering of social interaction. You can clear out bandit camps for a bounty, hunt down dragons and harvest their souls (a key game mechanic), join societies dedicated to magic, combat, or theft, run errands for nobles, purchase houses, assist in piracy, and run any number of other random errands. What is remarkable is how natural all of this feels within the context of the game-world; true, many of the quests are of the “go there, fetch that” variety, but they come cloaked within a series of interactions with NPCs, so that they become miniature stories in themselves whose completion you play out. The Daedric quests are the best of all of these, in my view, all the more so because they usually end up yielding quite powerful artifacts.

All in all, there is no shortage of things to do in Skyrim. The main quest, as compared to Morrowind, turns out to be rather disappointingly thin, and the punchline (you are the Dragonborn, surprise!) is given away pretty early (you had to work for the punchline in Morrowind, and Oblivion didn’t really have one), but that’s ok because most of the time you’re going to be doing something other than following the main narrative’s path anyway. As you travel Skyrim, various ruined forts, caves, towers, villages, camps, and other habitations reveal themselves to you, and it’s usually great fun to take a detour into a nearby cave to look for goodies or level up, especially in the early stages of the game. Skyrim’s level system operates on the ingenious “getting better at what you do” principle, whereby advancement is secured by improving one’s skills; no formal class is selected. So, if you want to become a better fighter, you pick up a sword and go at it; if you want to hone your magic skills, grab a few spells and go nuts. In addition to the standard fighter/mage/thief skillsets, there are a few “minor” skills, such as smithing and alchemy (more on those later), and level advancement provides perks that unlock additional abilities within each skill tree. Overall, the system captures most of the complexity of the previous Elder Scrolls games without turning the player into a micromanager, and this strikes me as an excellent balance between complete simplicity and the level of detail involved in games based around the D&D system.

Thus far, it’s all been praise, but Skyrim has warts that don’t become obvious until well into the game. Perhaps the most serious complaint that I have has to do with the realism of the physical landscape, not just in appearance but in interaction. As I mentioned before, Skyrim’s social world is ridiculously well-developed (and despite the meme about taking an arrow to the knee currently going around the Internet, it’s also incredibly well-acted by the voice actors), but its physical world, though stunning in its beauty, often feels quite literally skin-deep. An example: Skyrim features several large rivers and other bodies of water, but upon close examination, virtually all of them are revealed to be merely waist-deep. That’s right: you can more or less walk through most of Skyrim’s waterways, a fact which feels genuinely weird considering that dungeons in Skyrim can often feel a mile deep. Practically the only place where deep water is encountered on a regular basis is in the north (though somehow frolicking in Arctic seas results in no negative effects to the character’s health).

Skyrim may be beautiful, but getting around it can be a real pain in the ass. The aforementioned rivers appear navigable (docks, for example, have ships moored at them), but there is no mechanic for sailing a boat down the river. And that’s a real shame, because oftentimes to get from point A to point B, Skyrim will force you to take a long and seriously inconvenient route; it’s almost as if the developers felt that you wouldn’t appreciate the world unless you were compelled to travel the scenic way. Once a place is discovered, you can always fast-travel there, which is great, but often you will find yourself needing to cover what appears to be a short distance on the map, only to learn that in order to do this you have to follow a serpentine path across some mountains. It’s hard to see why you shouldn’t be able to sail up and down the river if you like (although this would be hard to do if the river is three feet deep), and it would certainly facilitate exploration early on. You can speed up your locomotion somewhat by purchasing a horse, but despite years of advanced engineering (and the existence of such excellent examples as the Assassin’s Creed games), Bethesda has apparently yet to solve the complicated problem of horse-mounted combat. Seriously, how hard can this be? If you encounter enemies while mounted, prepare to dismount and fight; also prepare for your idiot horse to attack them randomly and get itself killed. Once my first horse bought it in an otherwise unremarkable encounter with some bandits, leaving me a thousand gold pieces poorer after scarcely a few hours (real time) of use, I decided I’d had it with pack animals. I can imagine they might be useful if you’re harvesting Dwemer metal (it tends to be pretty heavy), but other than that, horses are a useless extravagance, looking as if they were added as an afterthought rather than as an integral part of the game.

The mountains of Skyrim are equally frustrating. In more than one case, reaching some spot that you’re trying to get to will involve negotiating a complicated mountain path. Fortunately the Clairvoyance spell will point the way for you, but it’s irritating to have to run around zapping the spell every few seconds to see the next leg of your journey (and more on this: why is there no minimap on which Clairvoyance could draw your path and have it persist for, say, a minute? I realize that minimaps might break the realism a little, but that seems like a small price to pay for being able to tell where you’re going). When a mountain gets in your way, you can do nothing but walk around it; in most cases, jumping up the rocks just won’t work. I frequently found myself bemoaning the lack of a climbing mechanic in Skyrim. What would it have hurt to allow the player to scale mountains via some kind of mountaineering skill (let’s say, if your skill is too low, you could fall to your death in a storm or something)? As far as I can tell, the mountains never actually render any part of the map inaccessible; they only make access to it that much more irritating.

I also found Skyrim’s smithing system to be flawed, at best. For example, there are something like 10 different types of ore in Skyrim, which can be combined in various ways to produce various ingots, which only then can be used to upgrade weapons and armor. Furthermore, the ore itself can only be obtained from mines (or by finding it in dungeons), and in those mines you actually have to… mine it? I don’t get it; who thought Skyrim was supposed to be an ore mining simulator? Once I realized the level of complexity involved in upgrading even simple objects, I simply gave up trying to do it. This wouldn’t be such a big deal if the system didn’t present one of the best prospects for upgrading your equipment when playing a warrior character; for some reason, you can’t pay other smiths in the game to upgrade your stuff for you. Nor can you break down any of the stuff you find in the world into its base components (melting down a steel plate you don’t need into steel ingots, say). It’s hard to see what all this complexity adds to the game other than forcing you to roam the world, scavenging ore and ingots, if you want to upgrade anything. And the steep learning curve of the smithing skill tree makes the skill itself even harder to use, since you need to be at a very high proficiency level before you can do anything really interesting. You can, of course, get there by simply grinding out levels (one way is to scavenge all the scrap metal from Dwemer ruins, melt it down for ingots, and then forge stuff with it), but that’s a pretty boring thing to do; it would be much better if the process of gaining smithing knowledge were part of an organic development in the same way that the fighting and magic skills are.

Elder Scrolls aficionados will be unsurprised to find that Skyrim, like its predecessors, is full of clutter. Practically every object you can think of can be picked up, even if no good use can be made of most of them. It’s a weird sort of realism, in light of the aforementioned inability to cannibalize items for raw materials (a mechanic featured, by the way, in the underrated Two Worlds II), to find an infinity of weapons lying around everywhere you go. In one way, this adds to the atmosphere of the dungeons (of course a bandit hideout would be replete with weapons caches), but at times the abundance feels overwhelming. At the same time, good items seem to come along relatively infrequently (their appearance seems to correlate with level), and as a result, I finished the main storyline with armor and weapons acquired about halfway through. There’s enough weaponry lying about in Skyrim to arm a world ten times its size, but you can’t do much with any of it because it’s all crap.

And speaking of populations, this is the one way in which Skyrim genuinely felt small to me. The cities of Morrowind may not have been as visually imposing, but even a tiny backwater like Balmora seemed, well, populated, to say nothing of a capital city like Vivec. In Skyrim, even the relatively cosmopolitan centers of Solitude and Whiterun feel like they have about half the population they ought to have. The landscape is dotted with little farms and inns, but the farms are run by lonely individuals and the inns have only a few regulars in them. In fact, half the population of Skyrim appears to be made up of guards of one kind or another, patrolling the deserted streets of its cities. It is, again, strange for a game that puts so much emphasis on social realism to leave out so much of what makes the social real: the people.

That brings us, incidentally, to money, which is another weird aspect of Skyrim. I realize that replicating economic reality was probably not high on Bethesda’s list of things to do, but the end result is a world in which money just doesn’t seem to have much currency. What can you do with gold in Skyrim? Well, you could purchase equipment in the stores, but that turns out to be pointless because you will do much better just by scouring dungeons or fulfilling quests, especially Daedric ones. For horses, see above. You can buy property in the game, which is kind of cool, but unless you’d like to feel like you’re playing Landlord Mogul, there’s not much reason to buy anything beyond one, or maybe two, houses. The only real uses for money in Skyrim that I found were purchasing training and bribing people to do things you want them to do (unlike in Morrowind, where you would make a bribe to improve a character’s disposition towards you and then try talking to them, in Skyrim you just select the bribe option and it works every time). You can accrue stupid amounts of money from completing quests and looting bodies, but for whatever reason it seems damn near impossible to make any serious amount by selling to shopowners, as they will run out of cash well before you run out of stuff to sell. In an ironically realistic twist, their money supplies might not recover for days, by which time you’ll have rustled up even more stuff to get rid of. You can conceivably solve this problem by traveling to various cities and selling to multiple traders, but this is tedious and also unnecessary; I just ended up stashing all my treasures in a chest in my house.

Skyrim’s combat system is, in my view, weak. It’s been lauded as an improvement over Morrowind and Oblivion, but the improvement is largely in the feel of the thing, not anything substantial. True, time-based shield blocking has been introduced, but it’s quirky and often doesn’t work right; other than that, the basic elements were all present in Morrowind (the archery mechanic has been slightly altered, but the main pieces are all still there). Combat is usually best conducted in the first person, but even then it can be very cumbersome. There is no way to lock on to a single enemy, and it’s easy to mistarget and end up swinging at the wall while your opponents hack you from behind. Don’t even think about dodging; you can strafe to avoid projectiles, magical and otherwise (although opposing mages are unbelievably accurate), but try to get out of the way of a dragon’s breath attack and you’ll find you just can’t, especially if it’s a frost attack (which slows you down). Fighting has a pretty satisfying crunch in Skyrim (at higher levels, attacks can result in critical hits and pretty slick-looking fatality moves), and that gives it enough oomph to keep things fun, but the system as a whole is clearly inferior, requiring nothing more than button-mashing for success. Again, it’s a strange sort of realism that puts a multitude of weapons at the player’s disposal but makes it mostly boring to use any of them. As before, I want to point to the Assassin’s Creed games (especially ACII) as an example of a system that gets this right: in ACII, I never felt like the fights were boring or perfunctory, and I always had some tricks at my disposal, whereas in Skyrim, after a while every fight feels identical. The little-known-but-beloved-by-me Blade of Darkness (also called Severance: Blade of Darkness) also got this right way back in 2001 or so, with a combo-based combat and dodging system that allowed you to hack off your opponents’ limbs. It’s not clear why Skyrim couldn’t have borrowed, conceptually, from something like AC; true, it would have compromised the first-person experience a bit, but I think that would have been an acceptable tradeoff for a fighting system that actually feels real.

Throughout the hours (don’t ask how many) I spent playing Skyrim, the overwhelming impression that emerged was that of a world exquisitely designed, but poorly planned. Skyrim is gorgeous and breathtaking, but when it comes to interacting with its world, the options are surprisingly limited. What good is it to me that I can pick up any object in the game when I don’t want to do anything with any of them? What use are magic items harvested from dungeons that are too weak to equip (because I already have something better) but too expensive for any merchant to actually buy? Yes, upgrading my one-handed sword attacks certainly improves the chance of decapitating my enemies, but why can I not also dodge out of the way of their attacks? Why does my horse have tapioca for brains? It’s frustrating inconsistencies like these that disrupt the truly remarkable immersive experience provided by Skyrim’s landscape and people.

I compared Skyrim several times to the Assassin’s Creed series, and I think that comparison bears elaboration. The AC games are linear rather than sandbox, so their social world is substantially less detailed (the story is told in cutscenes anyway, and actual interactive dialogue is nonexistent), but the physical world of AC overflows with just the right kind of detail. The virtual Florence of ACII is not just a remarkable reconstruction of the real thing; it also feels like it. Its streets throng with townspeople, merchants, and guards. Sure, if you look at them closely, they’re just milling about, but in the end, so are the people of Skyrim. You’ll never look at any particular person twice anyway; the virtual Florentines are anonymous and are there for atmospheric purposes (that, and to get in your way when you’re trying to evade the guards). In any case, they give the impression of an inhabited town in whose affairs Ezio’s quest is a minor blip; by contrast, the cities of Skyrim feel half-abandoned, and no one looks like they have anything better to do than unload their problems on you.

Likewise, the physical interactions of AC are far more logical than those of Skyrim. The most obvious is the ability to climb buildings (which is, of course, pretty much the whole premise of the AC games), but in general the physical model of the AC world is far better developed than its Skyrim equivalent. Why doesn’t Skyrim have a climbing mechanic? Developing one was clearly not part of Bethesda’s plan, but it would have made for a much more satisfying experience, and it’s not clear that anything else Bethesda prides itself on (the social immersion, the role-playing aspects, etc.) would have been negatively impacted. Likewise, the AC combat mechanic (especially in ACII and its sequels) is well thought out, providing you with just enough tricks to make it fun while maintaining a decidedly visceral feel, especially on fatal strikes. From where I sit, such a mechanic would have only improved Skyrim by making the combat a physical reality instead of mostly a reflection of the character’s numerical stats.

It seems clear that Bethesda doesn’t much care about any of this, and it’s in some ways to their credit that they’ve created a game that is so much fun to play despite lacking what I think are key aspects of character-world physical interaction. Nevertheless, it’s hard to argue that Skyrim wouldn’t be improved if less time were spent on elaborate dungeon layouts and lore composition (in this I am in agreement with Bissell) and more time spent thinking about what affordances the world should provide to the player. All that notwithstanding, Skyrim is still a great game. You’ll still (if you’re any kind of RPG fan at all) sink countless hours into it, because it’s just that big and that fun. If I criticize, it is because I love, and because I would go absolutely bonkers over a game that combined the size and elaborate construction of Skyrim with the physical model of something like AC. Whether Bethesda or some other game maker will ever realize my dream remains to be seen, but I think the results of such a meld would be phenomenal. If anyone from Bethesda happens to read this (ha!) and wants my input for their next project, you know where to find me.

More Apple fuckery

Blah blah my shit does not work no one cares. Ok then.

Here’s the thing: I’m used to having to hack things to get them to work. Even so, I think that package managers like apt-get have come a long, long way. Nowadays I don’t even think about it: I apt-get and forget. 99% of the time that just works and I am a happy camper. Sometimes there’s some weird thing that doesn’t, but ever since probably Ubuntu 8.10 or so, the number of package issues I’ve had could be counted on one hand.
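To put some flesh on that, here’s roughly what “apt-get and forget” looks like for the emacs-plus-Lisp setup I’m about to whine about. The package names are just the obvious candidates (emacs, slime, and sbcl all live in the standard Ubuntu repositories, as far as I remember), and the whole ritual is two commands:

    # refresh the package index, then pull in emacs, SLIME, and a Common Lisp to run under it
    sudo apt-get update
    sudo apt-get install emacs slime sbcl

That’s it. No hunting for installers, no compiling X libraries, no praying.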

So now I am using this fancy-pants MacBook Pro for work, and it’s a pretty sweet machine, all things considered. That said, it’s a huge pain in the ass, because I want pretty emacs like what comes standard on pretty much every Linux distro, and I can’t get pretty emacs. Instead I have something called Aquamacs, which is ok too, I guess, but NOT THE SAME. Not the same because, unlike the emacs on Linux, I can’t figure out how to make this one run SLIME, which is a Lisp thing. That’s fine, though; what’s irritating is the inability to run X applications in general. Ok, you want me to use MacPorts, I’ll use MacPorts. What’s that, MacPorts crashed trying to install X?! FUUUUUUCK. The Python that comes with the OS is weird and won’t do anything right; you have to install the builds from python.org to get numpy and scipy and matplotlib to work nicely together. Also, for some reason this laptop refuses to read a perfectly cromulent disk that was burned on my home machine and reads just fine in my cheap-ass car stereo.
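For what it’s worth, here’s the little sanity check I end up running after every one of these Python adventures. It’s a minimal sketch, and it assumes the python.org build has put itself ahead of Apple’s /usr/bin/python on your PATH (which is what its installer normally tries to do):

    # which interpreter am I actually getting: Apple's or the python.org one?
    which python
    python --version
    # if the scientific stack is wired up against that interpreter, this prints a version instead of a traceback
    python -c "import numpy, scipy, matplotlib; print numpy.__version__"

If that last line blows up, you know you’re back in dependency hell before you’ve written a single line of actual code.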

WHAT I AM SAYING: yeah, some things are easy in OS X, that’s cool. Some things are not so easy. Also, this fucking Magic Mouse is a piece of shit and I want to punch whoever came up with the idea. I don’t even have colossal bear paws or anything, but hey, I’m an adult male, which means this tiny fucking mouse (which, by the way, is never pictured near an actual human hand to give you a sense of scale; all the pictures make it look really huge, like it’s the size of a fucking house or something) is way too small for my hand. Thanks for the carpal tunnel syndrome, Apple! I should have asked for the ergonomic Logitech, which for some reason was like $100 at the Apple Store even though I bought almost the same goddamn mouse for $35 on Newegg.

And then the worst part is that you’re like, ok, how do I use this thing? So you read reviews of it, and some dude is all like, “maybe this isn’t the greatest idea on the face of the earth,” and of course a bazillion Apple fanboys and fangirls and fangoats and fanjellyfish all jump into the thread and are like “NO YOU DO NOT UNDERSTAND,” even though this guy totally gets why this mouse sucks. Stop being so devoted to some stupid fucking company, you assholes. They’re not your fucking saviors, and they make shitty products sometimes, like this stupid fucking mouse, which is too small for my hands, which are apparently larger than any hand of any person at Apple development HQ.

I like that little dock in OS X though. That’s nice. Also when the Adium duck hops up and down to let you know someone IM’ed you. Adorable.