What is the social purpose of David Brooks?

David Brooks is trash. This is so obviously the case that I will spend no time whatsoever proving it, except to link to his latest piece, which draws a false equivalence between social democracy and the “alt-right,” or as we like to call it here in the literate world, fascism.

David Brooks being trash leads to an obvious question: what the fuck is he doing on the New York Times masthead? Who is so deeply invested in having him on board that they are willing to tolerate unceasing ridicule for it? It’s impossible to say for certain, of course, but I have a thesis.

Before I get to it, it’s important to point out that Brooks has virtually no qualifications to comment on anything; he owes his current position to nothing more than a moderate intelligence and a talent for social climbing. A child of two comfortable academics, he attended the University of Chicago, where he begged William Buckley for a job via a satirical sketch (no, really). He worked the crime beat for a local news service for a bit and then interned for the National Review; he also spent some time at the Hoover Institution. His only real job involved being editor of the book review for the Wall Street Journal, after which he promptly switched to the opinion page, where he has pretty much remained ever since. Brooks’ only qualifications for doing so involve being a “conservative commentator” of impeccable lineage, which is to say that he did time on the wingnut welfare circuit just like all the other washouts and losers who constitute the conservative “intellectual class.”

During his stint at the Times, ongoing since 2003, Brooks has repeatedly proven himself to be a howling void of thought, incapable of engaging on a serious and direct level with any idea he doesn’t already hold. His sociology is risible, and his lack of self-awareness is legendary; this is, after all, the man who unironically taught a class at Yale on the subject of humility and assigned his students his own columns. His one constant is the rhetorical trick of taking the most centrist positions of milquetoast liberalism and the most insane positions of the right wing, splitting the difference, and then planting himself firmly in the “center” of political discourse that he has just engineered out of nowhere.

Again, what purpose does this serve? Who is actually moved by this? The answer, I think, is twofold. One is that David Brooks has the job that he has not because of any special qualifications (as shown above, he has none) or any critical capacity for insight (ditto) but rather because he knows the right people. Brooks went to the same schools they went to, interned at the “correct” magazines (in the upper echelons of boomer liberalism the National Review is incorrectly considered to have intellectual value), and has held the “correct” positions at more mainstream publications. That he is a waste of space doesn’t matter to anyone; once you ascend sufficiently high in this world, you can never fail out of it. No number of terrible columns or terrible books or terrible classes taught will ever disqualify Brooks from his post. His defenders will always shrug and point to his publication record, as though bylines and one’s name on a book jacket were more important than the actual work. In that sense, Brooks’ social value consists entirely of demonstrating that chummy quasi-nepotism and speaking the language of the elites matter more than one’s actual intellectual contributions.

The second answer is that Brooks remains a kind of lodestar for a certain class of Times reader: the relatively affluent, centrist (maybe even center-left) boomer, who of course does not at all go in for the vulgarity of a Trump, but finds protests by black people over their arbitrary murder at the hands of police just a bit too gauche. Brooks is their safe space, ready at all times to validate their fears of the masses they imagine must be assembled with torches and pitchforks right outside their castle gates. This goes double for Brooks’ liberal readers: they get to paint themselves as “open-minded” and “willing to listen to the other side” by reading a man who will gently mock their bourgeois-liberal sensibilities while serving as a stalking-horse for your standard loot-the-country-and-fuck-the-poor Republicanism. Brooks provides an illusion of dissent that merely reinforces the narrowing of acceptable political opinion to a boundary that just barely includes the center-left but mostly caters to the right. This works out very well for the center-left leadership of the Times, which cements its claims to “legitimacy” by employing obvious cranks like Brooks as a sop to the right while at the same time punching hard to the left to avoid any serious criticism of its own role in our coming nightmare. The twin poles of managerial liberalism and center-right culture-shaming generate between them a kind of affective niche occupied by the Times that it then markets to its readers.

David Brooks is trash, but he wouldn’t have a job if some other people weren’t socially predisposed to hiring him for frivolous reasons. Which is also why he’s going to keep that job until they carry him out of his office (real or metaphorical) feet first, and why we’re going to be reading his therapy notes in the form of thinly veiled columns about his divorce throughout the Trumpocalypse.

Silly man says stupid thing, Silicon Valley edition

A fellow named Jerzy Gangi advances a bunch of hypotheses to answer a not-very-interesting question about why Silicon Valley funds some things (e.g. Instagram) and not others (e.g. Hyperloop). Along the way we get some speculation about the amount of cojones possessed by VCs (insufficient!) and how well the market rewards innovation (insufficiently!), but the question is boring because the answer is already well-known: infrastructure projects of the scope and scale of Hyperloop (provided they’re feasible to begin with) require massive up-front investments with uncertain returns, while an Instagram requires comparatively little investment with the promise of a big return. Mystery solved! You can PayPal the $175 you would have given Gangi for the same information, spread over an hour of his time, to me at grapesmoker@gmail.com.
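To make the asymmetry concrete, here is a back-of-the-envelope sketch of the bet a VC faces. Every number in it is invented purely for illustration and bears no relation to any actual Hyperloop or Instagram figures:

```python
# Toy expected-value comparison; all figures are made up for illustration.

def expected_multiple(cost, payoff, p_success):
    """Expected return per dollar invested on a simple win-or-bust bet."""
    return (p_success * payoff) / cost

# An app: small up-front cost, long-shot chance at a billion-dollar exit.
app = expected_multiple(cost=5e5, payoff=1e9, p_success=0.01)

# An infrastructure megaproject: enormous up-front cost, where even a
# generous success probability can't offset the capital at risk.
megaproject = expected_multiple(cost=6e9, payoff=2e10, p_success=0.05)

print(f"app: {app:.1f}x, megaproject: {megaproject:.2f}x")
# app: 20.0x, megaproject: 0.17x
```

On numbers anything like these, the “mystery” dissolves: the money goes where a small check can plausibly return twenty times over, not where billions must be sunk before anyone learns whether the thing works.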

Despite the fact that Gangi’s question is not very interesting on its own, his writeup of it actually contains an interesting kernel that I want to use as a jumping-off point for exploring a rather different idea. You see, while criticism of techno-utopianism (and of Silicon Valley, its material manifestation, which I’ll use metonymically with it from here on out) has been widespread, it usually doesn’t address a fundamental claim that Silicon Valley makes about itself; namely, that Silicon Valley is an innovative environment. Critics like Evgeny Morozov are likely to be only peripherally interested in the question; Morozov is far more concerned with asking whether the things Silicon Valley wants disrupted actually ought to be “disrupted.” Other critiques have focused on the increasing meaninglessness of that very concept and the deleterious effects that those disruptions have on the disrupted. But as a rule, discussion about Silicon Valley takes it for granted that Silicon Valley is the engine of innovation that it claims to be, even if that innovation comes at a price for some.

I think this is a fundamentally mistaken view. Silicon Valley is “innovative” only if your bar for innovation is impossibly low; (much) more often than not what Silicon Valley produces is merely a few well-known models repackaged in shinier wrapping. That this is so can be seen by looking at this list of recent Y Combinator startups. What, in all this, constitutes an “innovative” idea? The concept that one can use data to generate sales predictions? Or perhaps the idea of price comparison? The only thing on here that looks even remotely like something that’s developing new technology is the Thalmic whatsis, and even that is not likely to be anything particularly groundbreaking. These may or may not be good business ideas, but that’s not the question. The question is: where’s the innovation? And the answer is that there isn’t a whole lot of it, other than taking things that people used to do via other media (like buying health insurance) and making it possible to do them over the internet.

There’s nothing wrong with not being innovative, by the way. Most companies are not innovative; they just try to sell a product that the consumer might want to buy. The problem is not the lack of innovation, but the blatant self-misrepresentation in which Silicon Valley collectively engages. It’s hardly possible to listen to any one of Silicon Valley’s ubiquitous self-promoters without hearing paeans to how wonderfully innovative it is; if the PR is to be believed, Silicon Valley is the source and font of all that is new and good in the world. Needless to say, this is a completely self-serving fantasy which bears very little resemblance to any historically or socially accurate picture of how real innovation actually happens. To the extent that any innovation took place in Silicon Valley, it didn’t take place at Y Combinator-funded start-ups, but rather at pretty large industrial-size concerns like HP and Fairchild Semiconductor. No one in the current iteration of Silicon Valley has produced anything remotely as innovative as Bell Labs. Maybe Tesla could yet live up to that lofty ideal, but it’s pretty unlikely that an internet company, no matter how successful, ever will.

Ha-Joon Chang has adroitly observed that the washing machine did more to change human life than the Internet has. But the washing machine is not shiny (anymore) or new (anymore) or sexy, so it’s easy to take it for granted. The Internet is not new (anymore) either, but unlike the washing machine, the capability exists to make it ever shinier, and then sell the resulting shiny objects as brand-new innovations when of course they aren’t really any such thing. As always, the actual product of Silicon Valley is, by and large, the myth of its own worth and merit; what’s being sold is not any actual innovation but a story about who is to be classed as properly innovative (and thereby preferably left untouched by regulation and untaxed by the very social arrangements which make their existence possible).

In Which A Considered Judgment Is Rendered

It is seemingly obligatory in any discussion of Skyrim, the fifth installment in Bethesda Softworks’ Elder Scrolls series, to mention the game’s scope. There’s a good reason for this: Skyrim is truly colossal in every sense in which a video game can be so. There are numbers out there suggesting that Skyrim is not, in terms of virtual hectares, the largest of the Elder Scrolls games, but it’s hard to deny that it feels larger than any of its predecessors (especially if you have the privilege of playing the game on a large-screen TV). When you step outside, the land stretches in every direction before you. Foreboding mountains loom on the horizon, and the sky changes with the weather, sometimes dark with rain and other times radiant with sunlight. The game’s dungeons are artistic masterworks; one almost gasps the first time one enters a gigantic underground cavern or sees the full majesty of a ruined Dwemer city revealed (in fact, your character’s companions will gasp in just this way). In its atmospheric qualities, Skyrim is unmatched by any other game, or probably, any other virtual production at all. It’s not really any exaggeration to say that no world of this scale that feels this real exists anywhere else.

In addition to its size and detail, the world of Skyrim improves on that of its predecessor, Oblivion, by harking back to its grandparent, Morrowind. Morrowind was not nearly as pretty or detailed as Skyrim is (for lack of technical capability, one assumes, rather than desire on the part of the design team), but its aesthetic was dark, threatening, and engrossing. In Morrowind, storms could kick up clouds of dust that reduced visibility, and the entire countryside appeared perpetually drab, lending background gravity to a plotline concerned with the resurrection of a dead god (or… something; my memory of Morrowind’s plot is somewhat hazy and all I recall is that you would end up being something called the Nerevarine). By contrast, Oblivion, with its painstakingly detailed blades of grass, looked a little too happy a place, what with the possible end of the world on the horizon. Even the plane of Oblivion itself was a little too bright, though its brightness was of a red sort, which I suppose was intended to connote some sort of evil. In its visuals and aesthetics, Skyrim is closer to Morrowind’s spirit, coupled with superlative realization, and this is for the better.

The size and look of this world, remarkable as it is, nevertheless fades into the background relatively quickly as one progresses through the game. To be sure, staggeringly beautiful scenes are encountered throughout the game, but they cannot sustain a 50-hour (and that, I think, is on the low side of how much time people will, on average, sink into Skyrim) adventure. For that, you must rely on the narratives of the main and secondary quests, and on the gameplay. I suspect that, at least on the first front, few will be disappointed (notable exceptions include Grantland’s Tom Bissell, who found the game’s social world tedious). The social detail within Skyrim is at least a match for the physical detail. If you are so inclined, you can join an incipient native rebellion, or team up with the Imperial occupiers to suppress it; the rebels themselves display casual, open racism towards those who diverge from their cause or happen to have the wrong color skin, a detail I mention to highlight how much work has obviously gone into a realistic rendering of social interaction. You can clear out bandit camps for a bounty, hunt down dragons and harvest their souls (a key game mechanic), join societies dedicated to magic, combat, or theft, run errands for nobles, purchase houses, assist in piracy, and run any number of other random errands. What is remarkable is how natural all of this feels within the context of the game-world; true, many of the quests are of the “go there, fetch that” variety, but they are cloaked within a series of interactions with NPCs, so that they become miniature stories in themselves whose completion you play out. The Daedric quests are the best of all of these, in my view, all the more so because they usually end up yielding quite powerful artifacts.

All in all, there is no shortage of things to do in Skyrim. The main quest, as compared to Morrowind’s, turns out to be rather disappointingly thin, and the punchline (you are the Dragonborn, surprise!) is given away pretty early (you had to work for the punchline in Morrowind, and Oblivion didn’t really have one), but that’s ok because most of the time you’re going to be doing something other than following the main narrative’s path anyway. As you travel Skyrim, various ruined forts, caves, towers, villages, camps, and other habitations reveal themselves to you, and it’s usually great fun to take a detour into a nearby cave to look for goodies or level up, especially in the early stages of the game. Skyrim’s level system operates on the ingenious “getting better at what you do” principle, whereby advancement is secured by improving one’s skills; no formal class is selected. So, if you want to become a better fighter, you pick up a sword and go at it; if you want to hone your magic skills, grab a few spells and go nuts. In addition to the standard fighter/mage/thief skillsets, there are a few “minor” skills, such as smithing and alchemy (more on those later), and level advancement provides perks that unlock additional abilities within the skill tree. Overall, the system captures most of the complexity of the previous Elder Scrolls games without turning the player into a micromanager, and this strikes me as an excellent balance between complete simplicity and the level of detail involved in games based around the D&D system.
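For the programmatically inclined, the principle is easy to caricature in a few lines. The sketch below is a toy model with constants I invented; Bethesda’s actual formulas are certainly different:

```python
# A toy "getting better at what you do" leveling system. All constants
# are invented for illustration; this is not Bethesda's implementation.

class Character:
    def __init__(self):
        self.skills = {"one_handed": 15, "destruction": 15, "smithing": 15}
        self.level = 1
        self.progress = 0.0  # fraction of the way to the next character level

    def use_skill(self, skill):
        """Using a skill raises it; raising any skill feeds the character level."""
        self.skills[skill] += 1
        # A skill-up in a high skill counts for more, but each character
        # level requires proportionally more progress than the last.
        self.progress += self.skills[skill] / (100.0 * self.level)
        if self.progress >= 1.0:
            self.level += 1
            self.progress = 0.0  # a perk point would be awarded here

hero = Character()
for _ in range(50):
    hero.use_skill("one_handed")  # swing a sword, get better with swords
print(hero.skills["one_handed"], hero.level)  # no class selection anywhere
```

Even in the caricature, the point of the design is visible: the character sheet is an effect of how you play, not a choice you make up front.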

Thus far, it’s all been praise, but Skyrim has warts that don’t become obvious until well into the game. Perhaps the most serious complaint that I have has to do with the realism of the physical landscape, not just in appearance but in interaction. As I mentioned before, Skyrim’s social world is ridiculously well-developed (and despite the meme about taking an arrow to the knee currently going around the Internet, it’s also incredibly well-acted), but its physical world, though stunning in its beauty, often feels quite literally skin-deep. An example: Skyrim features several large rivers and other bodies of water, but upon close examination, virtually all of them are revealed to be merely waist-deep. That’s right: you can more or less walk through most of Skyrim’s waterways, a fact which feels genuinely weird considering that dungeons in Skyrim can often feel a mile deep. Practically the only place where deep water is encountered on a regular basis is in the north (though somehow frolicking in Arctic seas results in no negative effects on the character’s health).

Skyrim may be beautiful, but getting around it can be a real pain in the ass. The aforementioned rivers appear navigable (e.g. docks will have ships moored in them) but there is no mechanic to sail a boat down the river. And that’s a real shame, because oftentimes to get from point A to point B, Skyrim will force you to take a long and seriously inconvenient route; it’s almost as if the developers felt that you wouldn’t appreciate the world unless you were compelled to travel the scenic way. Once a place is discovered, you can always fast-travel there, which is great, but often you will find yourself needing to cover what appears to be a short distance on the map, only to learn that in order to do this you have to follow a serpentine path across some mountains. It’s hard to see why you shouldn’t be able to sail up and down the river if you like (although this would be hard to do if the river is three feet deep), and it would certainly facilitate exploration early on. You can speed up your locomotion somewhat by purchasing a horse, but despite years of advanced engineering (and the existence of such excellent examples as the Assassin’s Creed games) Bethesda has apparently yet to solve the complicated problem of horse-mounted combat. Seriously, how hard can this be? If you encounter enemies while mounted, prepare to dismount and fight; also prepare for your idiot horse to attack them randomly and get itself killed. Once my first horse bought it in an otherwise unremarkable encounter with some bandits, leaving me a thousand gold pieces poorer after scarcely a few hours (real time) of use, I decided I’d had it with pack animals. I can imagine they might be useful if you’re harvesting Dwemer metal (it tends to be pretty heavy) but other than that, horses are a useless extravagance, looking as if they were added as an afterthought rather than as integral parts of the game.

The mountains of Skyrim are equally frustrating. In more than one case, reaching some spot that you’re trying to get to will involve negotiating a complicated mountain path. Fortunately the Clairvoyance spell will point the way for you, but it’s irritating to have to run around zapping the spell every few seconds to see the next leg of your journey (and more on this: why is there no minimap on which Clairvoyance could draw a path that lasts for, say, a minute? I realize that minimaps might break the realism a little, but that seems like a small price to pay for being able to tell where you’re going). When a mountain gets in your way, you can do nothing but walk around it; in most cases, jumping up the rocks just won’t work. I frequently found myself bemoaning the lack of a climbing mechanic within Skyrim. What would it have hurt to allow the player to scale mountains via some kind of mountaineering skill (say, with the risk of falling to your death in a storm if your skill is too low)? As far as I can tell, the mountains never actually render any part of the map inaccessible; they only make access to it all that much more irritating.

I also found Skyrim’s smithing system to be flawed at best. For example, there are something like 10 different types of ore in Skyrim, which can be combined in various ways to produce various ingots, which only then can be used to upgrade weapons and armor. Furthermore, the ore itself can only be obtained from mines (or found in dungeons), and in those mines you actually have to… mine it? I don’t get it; who thought Skyrim was supposed to be an ore mining simulator? Once I realized the level of complexity involved in upgrading even simple objects, I simply gave up trying to do it. This wouldn’t be such a big deal if the system didn’t present one of the best prospects for upgrading your equipment when playing a warrior character; for some reason, you can’t pay other smiths in the game to upgrade your stuff for you. Nor can you break down any of the stuff you find in the world into its base components, i.e. melt down steel plate you don’t need into steel ingots. It’s hard to see what all this complexity adds to the game other than forcing you to roam the world, scavenging ore and ingots if you want to upgrade anything. And the steep learning curve of the smithing skill tree makes the skill itself even harder to use, since you need to be at a very high proficiency level before you can do anything really interesting. You can, of course, get there by simply grinding out levels (one way is to scavenge all scrap metal from Dwemer ruins, melt it down for ingots, and then forge stuff with it) but that’s a pretty boring thing to do; it would be much better if the process of gaining smithing knowledge were part of an organic development in the same way that the fighting and magic skills are.

Elder Scrolls aficionados will be unsurprised to find that Skyrim, like its predecessors, is full of clutter. Every imaginable thing can be picked up, even if no good use can be made of most of it. It’s a weird sort of realism, in light of the aforementioned inability to cannibalize items for raw materials (a mechanic featured, by the way, in the underrated Two Worlds II), to find an infinity of weapons lying around everywhere you go. In one way, this adds to the atmosphere of the dungeons (of course a bandit hideout would be replete with weapons caches) but at times this abundance feels overwhelming. At the same time, good items seem to come along relatively infrequently (it seems that their appearance correlates with level), and as a result, I finished the main storyline with armor and weapons acquired about halfway through. There’s enough weaponry lying about in Skyrim to arm a world ten times its size, but you can’t do much with any of it because it’s all crap.

And speaking of populations, this is the one way in which Skyrim genuinely felt small to me. The cities of Morrowind may not have been as visually imposing, but even a tiny backwater like Balmora seemed, well, populated, to say nothing of a capital city like Vivec. In Skyrim, even the relatively cosmopolitan centers of Solitude and Whiterun feel like they’ve got about half the population they ought to have. The landscape is dotted with little farms and inns, but the farms are run by lonely individuals and the inns have only a few regulars in them. In fact, half the population of Skyrim appears to be made up of guards of one kind or another who patrol the deserted streets of its cities. It is, again, strange for a game that put so much emphasis on social realism to leave out so much of what makes the social real: the people.

That incidentally brings us to money, which is another weird aspect of Skyrim. I realize that replicating economic reality was probably not high on Bethesda’s list of things to do, but the end result is a world in which money just doesn’t seem to have much currency. What can you do with gold in Skyrim? Well, you could purchase equipment in the stores, but that turns out to be pointless because you will do much better just by canvassing dungeons or fulfilling quests, especially Daedric ones. For horses, see above. You can buy property in the game, which is kind of cool, but unless you’d like to feel like you’re playing Landlord Mogul, there’s not much reason to buy anything beyond one, or maybe two, houses. The only real uses for money in Skyrim that I found were to purchase training and to bribe people to do things you want them to do (unlike in Morrowind, where you would make a bribe to affect a character’s disposition towards you and then try talking to them, in Skyrim you just select the bribe option and it works every time). You can accrue stupid amounts of money from completing quests and looting bodies, but for whatever reason it seems damn near impossible to get any serious amounts by selling to shopowners, as they will run out of cash well before you run out of stuff to sell. In an ironically realistic twist, their money supplies might not recover for days, by which time you’ll have rustled up even more stuff to get rid of. You can conceivably solve this problem by traveling to various cities and selling to multiple traders, but this is tedious and also unnecessary; I just ended up stashing all my treasures in a chest in my house.

Skyrim’s combat system is, in my view, weak. It’s been lauded as an improvement over Morrowind and Oblivion, but the improvement is largely in the feel of the thing, not anything substantial. True, time-based shield blocking has been introduced, but it’s quirky and often doesn’t work right; other than that, the basic elements were all present in Morrowind (the archery mechanic has been slightly altered but the main pieces are all still there). Combat is usually best conducted in the first person, but even then it can be very cumbersome. There is no way to lock on to a single enemy, and it’s easy to mistarget and end up swinging at the wall while your opponents hack you from behind. Don’t even think about dodging; you can strafe to avoid projectiles, magical and otherwise (although opposing mages are unbelievably accurate), but try to get out of the way of a dragon’s breath attack and you’ll find you just can’t, especially if it’s a frost attack (which slows you down). Fighting has a pretty satisfying crunch in Skyrim (at higher levels, attacks can result in critical hits and pretty slick-looking fatality moves) and that gives it enough oomph to keep things fun, but the system as a whole is clearly inferior, requiring nothing more than button-mashing for success. Again, it’s a strange sort of realism that puts a multitude of weapons at the player’s disposal but makes it mostly boring to use any of them. As before, I want to point to the Assassin’s Creed games (especially ACII) as an example of a system that gets this right: in ACII, I never felt like the fights were boring or perfunctory, and I always had some tricks at my disposal, whereas in Skyrim, after a while every fight feels identical. The little-known-but-beloved-by-me Blade of Darkness (also called Severance: Blade of Darkness) also got this right way back in 2001 or so, with a combo-based combat and dodging system that allowed you to hack off your opponents’ limbs. It’s not clear why Skyrim couldn’t have borrowed, conceptually, from something like AC; true, it would have compromised the first-person experience a bit, but I think that would have been an acceptable tradeoff for a fighting system that actually feels real.

Throughout the hours (don’t ask how many) I spent playing Skyrim, the overwhelming impression that emerged was that of a world exquisitely designed, but poorly planned. Skyrim is gorgeous and breathtaking, but when it comes to interacting with its world, the options are surprisingly limited. What good is it to me that I can pick up any object in the game when I don’t want to do anything with any of them? What use are magic items harvested from dungeons that are too weak to use (because I already have something better) but too expensive to sell? Yes, upgrading my one-handed sword attacks certainly improves the chance of decapitating my enemies, but why can I not also dodge out of the way of their attacks? Why does my horse have tapioca for brains? It’s frustrating inconsistencies like these that disrupt the truly remarkable immersive experience provided by Skyrim’s landscape and people.

I compared Skyrim several times to the Assassin’s Creed series, and I think that comparison bears elaborating. The AC games are linear rather than sandbox, so their social world is substantially less detailed (the story is told in cutscenes anyway and actual interactive dialogue is nonexistent), but the physical world of AC overflows with just the right kind of detail. The virtual Florence of ACII is not just a remarkable reconstruction of the real thing, but it also feels like it. Its streets throng with townspeople, merchants, and guards. Sure, they’re just milling about, if you look at them closely, but in the end, so are the people of Skyrim. You’ll never look at any particular person in the game twice anyway, because the virtual Florentines are anonymous and are there for atmospheric purposes (that and to get in your way when you’re trying to evade the guards). In any case, they give the impression of an inhabited town in whose affairs Ezio’s quest is a minor blip; by contrast, the cities of Skyrim feel half-abandoned and no one looks like they have anything better to do than unload their problems on you.

Likewise, the physical interactions of AC are far more logical than those of Skyrim. The most obvious is the ability to climb buildings (which is of course pretty much the whole premise of the AC games), but in general the whole physical model of the AC world is far better developed than its Skyrim equivalent. Why doesn’t Skyrim have a climbing mechanic? Developing such a thing was clearly not part of Bethesda’s plan, but it would have made for a much more satisfying experience, and it’s not clear that anything else that Bethesda prides itself on (the social immersivity, the role-playing aspects, etc.) would have been negatively impacted. Likewise, the AC combat mechanic (especially in ACII and its sequels) is well thought out, providing you with just enough tricks to make it fun while maintaining a decidedly visceral feel, especially on fatal strikes. From where I sit, such a mechanic would have only improved Skyrim by rendering the combat a physical reality instead of mostly a reflection of the character’s numerical stats.

It seems clear that Bethesda doesn’t much care about doing this, and it’s in some way to their credit that they’ve created a game that is so much fun to play despite lacking what I think are really key aspects of character-world physical interaction. Nevertheless, it’s hard to argue that Skyrim wouldn’t be improved if less time were spent on elaborate dungeon layouts and lore composition (in this I am in agreement with Bissell) and more time were spent thinking about what affordances the world should provide to the players. All these things notwithstanding, Skyrim is still a great game. You’ll still (if you’re any kind of RPG fan at all) sink countless hours into it because it’s just that big and that fun. If I criticize, it is because I love, because I would go absolutely bonkers over a game that combined the size and elaborate construction of Skyrim with the physical model of something like AC. Whether Bethesda or some other game maker will ever realize my dream remains to be seen, but I think the results of such a meld would be phenomenal. If anyone from Bethesda happens to read this (ha!) and wants to get my input for their next project, you know where to find me.

blah blah Kim Kardashian blahdy blah

Dear everyone:

Can we all shut up about Kim Kardashian’s divorce for a second? Here’s the deal: Kim Kardashian is a free woman in a free country. She’s free to get married and/or divorced because she feels like it, or because she thought it was a good idea, or as a publicity stunt, or FOR ANY REASON AT ALL WHATSOEVER. Yeah, ok, it’s a publicity stunt. But so what? Number one, the joke is on you because you paid attention. Number two, I am so fucking tired of hearing BLAH BLAH SANCTITY OF MARRIAGE SMIRK. Well, ok, conservatives have been riding that pony since forever and certainly won’t stop on my say-so. But liberals should know better, and yet inevitably otherwise well-meaning people will trot out this horrible HOW CAN YOU NOT LET GAYS MARRY WHEN KIM KARDASHIAN BLARGLE FLARGLE.

Let’s get something straight (no pun intended): marriage, for everyone, is a civil fucking right. The case for allowing everyone to marry the person they love does not depend on whether or not Kim Kardashian lives up to your stringent standards of what constitutes a valid marriage. Going on about marriage sanctity makes you sound like a self-righteous dick with no legitimate case for self-righteous dickery. Marriage is not sacred; it’s a civil institution and people are free to take advantage of it as they see fit. If Kim Kardashian wants to be married for 72 days, well, shit, that’s her prerogative, and I’ll defend her right to do so even as I implore everyone to stop paying attention to her silly antics.

POSTSCRIPT: Just in case it’s not clear (due to departure from my usually overly prolix style) this isn’t about LEAVE KIM ALONE, it’s about STOP BUYING INTO CONSERVATIVE FRAMES ON MARRIAGE.

Getting it mostly right and completely wrong

Lots of people seem to either like or hate Chuck Klosterman. As someone who never particularly formed any opinions regarding the guy, I’m happy to continue in my unwavering agnosticism towards his writing. But I am interested in a particular piece he wrote this week for Grantland, reviewing Lulu, the Metallica/Lou Reed collaboration based, apparently, on the Wedekind play of the same name.

Let that sink in for a minute. Lou Reed, who is old as dirt, and Metallica, who are only slightly younger and haven’t done anything of significance in a decade, have combined forces to put out an album which takes its theme from a play about the sexual mores of Wilhelmine Germany. WHAT COULD POSSIBLY GO WRONG WITH THIS SCENARIO?!

Unsurprisingly, the answer is “everything.” From Klosterman’s extremely funny review, I gather that the result is about as unpalatable as one could possibly expect. Klosterman spends a lot of time on the awfulness of Lulu, which seems totally appropriate, but it’s towards the end that his article goes off the rails and into some really problematic territory (how you like them mixed metaphors?).

See, the problem for Klosterman is that Lulu seems to be causing him to re-evaluate his stance towards the collapse of the record industry (or if it’s not the catalyst, it is at least a contributing factor). Klosterman’s allegation is that if we still lived in the 1992 world where the record labels ran the show, something like Lulu would never exist. Let’s leave aside for a moment the question of whether this is even true; I would argue that the music industry has made its share of terrible decisions throughout its existence, and the only reason Klosterman thinks this is that he’s suffering from a common sort of cognitive bias whereby he only remembers the good stuff from the 90s. In the penultimate paragraph, Klosterman praises the concept, writing that “I’m glad Metallica and Reed tried this, if only because I’m always a fan of bad ideas.” He concludes:

The reason Lulu is so terrible is because the people making this music clearly don’t care if anyone else enjoys it. Now, here again — if viewed in a vacuum — that sentiment is admirable and important. But we don’t live in a vacuum. We live on Earth. And that means we have to accept the real-life consequences of a culture in which recorded music no longer has monetary value, and one of those consequences is Lulu.

Klosterman doesn’t come out and say it outright, but the implication of his last two paragraphs is that he thinks that this is a problem; that the actual realization of Lulu, while based on an admirable concept, is a mistake. With this, I beg to differ.

It has been my fervent belief for many years now that the most interesting of our cultural debris is the weird stuff. And not just the weird stuff, but the stuff that’s so divorced from any plausible standard of aesthetic quality that one struggles to comprehend how it even came to be. If I asked you to imagine Lulu, you couldn’t do it; you would either wind up with some forced Pynchonesque pastiche, or with something far more mundane than what actually happened. The fact that Lulu exists at all, the fact that we live in an environment which makes it possible, is, to me anyway, extremely important. Not because I would actually listen to Lulu (because I’m lazy enough as it is, and I refuse to expend cognitive effort to merely enjoy something ironically) but because its existence means that even in the stodgiest, most regulated corners of the cultural space, there exists an opportunity to do something mind-bogglingly stupid. And mind-boggling stupidity, especially produced in this way, is hilarious.

Failure is as much of an art as success, although typically success is achieved by consciously creating something of value, whereas artistic failure is something generally lucked into: either by dint of overreaching on the basis of your previous achievements (e.g. Lulu) or by being hilariously awful (e.g. Plan 9 From Outer Space, although honestly I never found it to be nearly as cringe-inducing as its defenders claim). The 1995-96 Chicago Bulls were a hardwood masterpiece made flesh, a team that won an astounding 72 games; the 2009-2010 New Jersey Nets were a hilarious embarrassment to the league, winning a mere 12. Sure, you’d rather watch the Jordan Bulls play (assuming you’re a neutral), but as a narrative, isn’t the Nets’ despair infinitely more compelling? After all, you already know a team like the Bulls is destined for a ring, but the Nets, right up until the end of the season, had the potential for badness of historical proportions (that they fell just short of that is disappointing in its own right, although they did set the historical mark for worst start to a season with 18 straight losses). I cared nothing for the Nets, but couldn’t help checking their results every morning just to see if the lows they’d fallen to would go even lower (further bizarritude: of the three wins that kept New Jersey from tying for the worst-ever season, one was a win on the road against the eventual finalist Boston Celtics, and two more were double-overtime wins against the Bulls and the Miami Heat, both playoff teams). Were not the historically abominable Detroit Lions far more interesting than if they’d gone 4-12? Of course they were, and you know it.

The same thing holds for artistic endeavors as much as athletic ones (though in truth the lines between the two are blurry). Is not the existence of a film such as Howard the Duck irrefutable proof of the non-existence of God? In what kind of just world would it be possible for the profoundly schizophrenic Hudson Hawk to exist, a film which seems to begin as a relatively unremarkable action/heist movie, and yet goes on to contain a scene, with only the remotest contextual relevance to the plot, in which a little girl in a museum is told “You’re a disgrace to your country!”? Only in a world in which it was possible for someone to take Bruce Willis seriously as a screenwriter.

These various failures are like a sort of Ozymandias lining our cultural highways: look on my works, ye mighty, and despair. They are, by and large, fascinating examples of people trying out completely preposterous, downright stupid ideas, and winding up with colossal failures that demand appreciation on an aesthetic level. Studied competence is a quality we demand from doctors and civil servants; our artistic products, to be interesting, should be either transcendentally successful or implausibly horrid. Edward Bulwer-Lytton (he of the much-and-unjustly-maligned “It was a dark and stormy night” fame, an opening sentence about as unremarkable as they get), towards the end of his life, concocted an absurd proto-science-fiction story about a subterranean race that thrived on a mysterious form of energy. The novel apparently caused a minor mania, becoming an obsession of Theosophists and people who thought Atlantis was real, as it was taken as a veridical description of reality; can anything remotely similarly fascinating be said about one of Hardy’s ponderous chronicles of English peasant life or the interminably dull regional fiction of a Sarah Orne Jewett? Sure, Bulwer-Lytton is considered to have failed aesthetically, but he failed in such a spectacular fashion that you can’t help but admire the audacity.

The best of our aesthetic artifacts share this kind of demented energy with the worst; they contain sparks (or even full-blown fires) of something crazy, something you won’t see if all you shoot for is competence. Metallica producing a competent, or even relatively good (say, on par with the Black Album), record would not arouse the slightest curiosity, but Metallica teaming up with Lou Reed to adapt Wedekind is fascinating, not despite but especially because it’s so disastrous. Klosterman is wrong to be filled with (admittedly limited) nostalgia for the world of record-label control; the fact that the destruction of that world allowed something like Lulu to be created is direct evidence that we’re better off without it.

how do i awarded prize

I don’t know when I stopped paying attention to the National Book Awards because I’m not sure I ever paid attention to them in the first place. I suppose “National Book Award Winner” is some additional motivation for acquiring reading material, but it ranks pretty low on the list of criteria that I care to look at when making my decisions. Does anyone outside the publishing industry really care that much about them?

One person who does care about the awards is Laura Miller, Salon‘s book critic. And a few weeks ago, she published in Salon what I think is a really bizarre analysis of this year’s slate of nominees. Before I start in, I just want to say that I haven’t read any of the books up for the award this year nor do I have any opinion whatsoever regarding their quality or lack thereof. My reaction here is solely to Miller’s confusing and poorly-reasoned article. Take it away, Miller:

Over the next day or two, expect to see observers pointing out the absence of two widely praised fall novels — “The Art of Fielding” by Chad Harbach and “The Marriage Plot” by Jeffrey Eugenides — and the fact that four of the five shortlisted titles are by women. (Those with longer memories will hearken back to the much-discussed all-female short list of 2004.) However, two prominent new novels by women, Ann Patchett’s “State of Wonder” and Amy Waldman’s “The Submission,” were passed over, as well.

Again, maybe The Marriage Plot (book titles in italics, damn it!) belongs on the list of nominees. I have been hearing great things about it. But it’s not clear to me why the fact that four of the five novels are by women needs explaining. Miller seems to think it does, but if it does, she doesn’t explain it, and if it doesn’t, why bother mentioning it? It’s a weird “some people might say X” formulation without bothering to check if X is important or relevant in any way. It’s also not clear whether “prominent” is supposed to mean “good” in this paragraph. Are the novels by Patchett and Waldman any good? I have no idea, and it would seem to be at least part of a book critic’s responsibility to inform her readers regarding their quality.

Although the judges for the NBAs change every year, the sense that the fiction jury is locked in a frustrating impasse with the press and the public is eternal. (One notable recent exception: the selection of Colum McCann’s “Let the Great World Spin” as the winner two years ago.) The press, assuming that the amount of media coverage a novel gets is a reliable indicator of its merit, expresses bafflement. The judges, if they respond at all, defend their choices as simply the best books submitted.

There’s a “sense” here of something, but Miller is so thin on evidence that it’s impossible to tell whether that sense is actually supported by anything that transpires in the world. Who is “the press” in this context? Is it book critics like Miller, who, presumably, have the capacity to make independent evaluations of the various books out there? Or is it someone else? I read “the press” every day, and so far as I can tell, no one outside the actual circle of literary reviewers seems to devote any real time or energy to writing about books. The one article per year in the arts section about who won what prize might count as “press,” I suppose, but that’s almost always just straight reporting (see the Times article about the awards from last year). There’s no real “bafflement” in evidence. As one might expect, the judges don’t really feel obligated to justify themselves to “the press” (nor, in any case, should they), but with regard to their (supposed; Miller gives no evidence for this) defense of their choices as “simply the best books submitted,” Miller asserts,

Neither view is entirely persuasive.

Why is Miller unpersuaded of this? It remains a mystery because she doesn’t say. What she does say is,

While it’s certainly true that celebrated novels are not necessarily good, it’s also true that they aren’t necessarily bad, either.

Wait, what? What is the argument here, exactly? Celebrated novels aren’t necessarily bad, therefore… more celebrated novels should be picked? How is that in any way contradictory to the judges’ (imagined by Miller) defense of their choices? If I tell you that Book X is better than Book Y, that doesn’t mean Book Y is trash; it just means that I think Book X is better and should… I don’t know, win a prize or something? The bafflement here is all Miller-generated.

Whatever policy each panel of judges embraces, over the years, the impression has arisen that already-successful titles are automatically sidelined in favor of books that the judges feel deserve an extra boost of attention. The NBA for fiction often comes across as a Hail Mary pass on behalf of “writer’s writers,” authors respected within a small community of literary devotees but largely unknown outside.

“The impression has arisen that Saddam Hussein possessed weapons of mass destruction.” I mean, is there some evidence for this impression? Is this all a play being staged within the Cartesian theater of Miller’s mind? Who knows?! But even if this “impression” is accurate, what of it? Given several novels of putatively equal quality, there’s nothing wrong with giving the award to the less successful novel on the grounds that the bestseller doesn’t need the exposure. Why not promote a “writer’s writer” who might also become a reader’s writer? Miller seems to think that this is a problem, but it’s not clear what the problem actually is. Miller seems to think that “the reading public has… proven recalcitrant” to picking up these books, and offers this gem:

If you categorically rule out books that a lot of people like, you shouldn’t be surprised when a lot of people don’t like the books you end up with. This is especially common when the nominated books exhibit qualities — a poetic prose style, elliptical or fragmented storytelling — that either don’t matter much to nonprofessional readers, or even put them off.

I don’t know if Miller (unlike me) actually intends to insult the reading public, but this has got to be the most backhanded compliment ever. Hey reading public: Laura Miller thinks you’re too stupid for a poetic prose style!

And what the fuck is a “professional reader?” I mean, I know what the answer is: it’s a book critic. But that’s a really dumb and insulting way of phrasing things. Because here’s the thing: either the “reading public” actually consists of literate people capable of forming their own opinions about books and working their way through challenging literature (in which case they aren’t likely to complain about it in the first place, so why is Miller doing it for them?) or the reading public is a bunch of children who get put off by such oh-so-complicated literary innovations as… a poetic prose style! or elliptical and fragmented storytelling! in which case, fuck the reading public. I don’t ask a bunch of 15-year-olds for their literary opinions, because their literary opinions are shit; they probably think Ender’s Game is the apogee of the literary canon. If that’s the level of the American reading public’s opinions, then to hell with their opinions.

What’s really, really awful about this is that in the next paragraph Miller basically undermines her entire thesis:

If outsiders fail to sympathize with the judges’ perspective, the judges often have a distorted sense of the role literature plays in the lives of ordinary readers. People who can find time for only two or three new novels per year (if that) want to make sure that they’re reading something significant. Chances are they barely notice media coverage of books — certainly not enough to see some titles as “overexposed” — and instead rely on personal recommendations, bookstore browsing and Amazon rankings.

So let’s look back on the path of this argument: there’s a “sense” or possibly an “impression” that the awards are somehow hostile to ordinary readers, who actually don’t follow press coverage (so where does this “sense” come from?) and want to read “something significant” in their limited spare time. Ok, ordinary readers, well, we’ve convened this panel of critics to hand out an “award” to the best “book” published “nationally” this year. What’s that? You’d rather make your selection from Amazon rankings? Uh, go fuck yourselves.

Jesus Christ, I know that critics don’t always get it right and all that, but if you really care about reading the one or two most important books of the year, you could do a lot worse than consult a group of people who read books for a living. Which in fact Miller acknowledges, if somewhat obliquely and reluctantly:

Prizes are one part of this mix, if an influential one, and the public mostly wants the major awards to help them sort out the most important books of the year, not to point them toward overlooked gems with a specialized appeal.

Simple logic, people: an “overlooked gem” can in fact be one of the most important books of the year. The word “important” is doing a lot of work in this sentence; it’s being used as a sort of code for “popular.” The prize panel is telling you, “read this book, because we think it’s great,” not “read this book because it will flatter your own limited capacity for aesthetic appreciation.”

All this reminds me of a joke that was popular among Russian Jews. So two Jews meet and are talking about their lives and one of them says, “Hey, are you going to see the new production of Aida?” and the other one says, “Nah, my friend sang it to me over the phone, and it sounded terrible!”

It wouldn’t be any fun if Miller’s incoherent article didn’t conclude with a “Fuck you, Dad! I won’t do what you tell me!”

For these reasons, the National Book Award in fiction, more than any other American literary prize, illustrates the ever-broadening cultural gap between the literary community and the reading public. The former believes that everyone reads as much as they do and that they still have the authority to shape readers’ tastes, while the latter increasingly suspects that it’s being served the literary equivalent of spinach. Like the Newbery Medal for children’s literature, awarded by librarians, the NBA has come to indicate a book that somebody else thinks you ought to read, whether you like it or not.

Ok, number-fucking-one: the next asshole who insults vegetables should have their fingers broken so they can never type this bullshit again. Seriously, spinach, Brussels sprouts, and broccoli are:

1. Delicious
2. Good for you

This is entirely analogous to when conservative man-children throw fits about the promotion of nutrition in schools; if it ain’t meat and potatoes, it’s socialism. That’s some good company you’re keeping there, Laura Miller. I’m gonna go out on a limb here: if the most creative food you can imagine consuming can be obtained at Steak ‘n Shake, you probably have horrible taste in food. Analogously, if the most sophisticated literature you’re capable of consuming involves nothing more complex than droning monosyllables, you probably have horrible taste in literature and shouldn’t be listened to in discussions concerning the same.

Yes, the NBA (hehe) indicates a book that “somebody else thinks you ought to read.” That is the whole point of the fucking award! That’s how literary awards work! Someone reads a bunch of books, then picks the one that they think is best, and then says, “We think this book is the bees’ knees! You should read it!” Apparently in Miller-world, awards are supposed to just reinforce the pre-existing prejudices of the reader; their job would seem to be to say, “Hey, you’re doing good!” even if what you’re really doing is drooling all over yourself.

The coup de grace is the “proof by childhood reminiscence”:

As a kid, after several such medicinal reading experiences (“… And Now Miguel” by Joseph Krumgold was a particular chore to get through), I took to avoiding books with that gold Newbery badge stamped on their covers. If it weren’t for a desperate lack of alternatives one afternoon, I’d never have resorted to E. L. Konigsburg’s “From the Mixed-Up Files of Mrs. Basil E. Frankweiler,” which became one of my favorites. Today’s adult readers, with millions of titles a mere click away, are unlikely to find themselves in such straits.

That’s fucking right: Laura Miller got so tired of reading the shit that librarians love so much they stamp it with a Newbery Medal that she went out and read… the 1968 Newbery Medal-winning From the Mixed-Up Files of Mrs. Basil E. Frankweiler. You can’t make this shit up.

Inglourious Basterds

I have wanted to say something for a long time about Tarantino’s “Inglourious Basterds.” I have a complicated relationship with QT; most people of my generation regard him as an unquestioned genius, whereas my opinion of him usually inclines towards the critical. I didn’t really like the “Kill Bill” movies, wasn’t moved by “Pulp Fiction” or “Reservoir Dogs,” and in general thought that Tarantino’s best film was the film that was least “like” him, “Jackie Brown.” So I can’t say that I had any a priori disposition towards liking “Basterds.”

Nevertheless, fairness demands that I acknowledge the film’s general quality. There’s probably no other filmmaker of such renown who can film conversational scenes of the quality that Tarantino pulls off. Indeed, I think that the scene in the German tavern may well be some of the best 40 or so minutes ever shot on that theme. The way it’s written and played is simply beyond reproach; Tarantino keeps the entire situation balanced on the point of disaster which, when it inevitably comes, strikes with cruel efficiency and swiftness. In fact, one can probably say that about several of the longer scenes in “Basterds,” including Landa’s initial interrogation and Shoshanna’s preparations for the theater fire: they are exquisitely constructed set pieces that unfailingly hurtle towards a violent and (sometimes literally) explosive resolution. But insofar as this is Tarantino’s great strength and a demonstration of his best qualities, it seems also to be an exhibition of his greatest weakness; to wit, nothing about “Basterds” really feels like a complete movie. Rather, there is a feeling that the whole thing is stitched together from disparate, loosely related scenes which don’t add up to a coherent whole.

That’s a stylistic observation, but there’s a content-related observation that’s worth making too. Which is: why does this movie exist? What, exactly, is it for, anyway? That question is directly related to the entire fake-history conceit that drives the movie’s secondary (non-Shoshanna) plot. After all, if you’re going so far as to conceptualize a world in which Jewish death squads terrorize the German countryside during World War II, there must be some kind of idea behind it. I’ve engaged in arguments elsewhere on the Internet (specifically, on Pandagon, around the time “Basterds” came out) where one explanation offered was that this was meant to stand the traditional view of Jews as passive victims on its head. But having thought about it since then, that doesn’t seem at all right to me. The Basterds as characters are only barely fleshed out; they have minimal personalities just adequate to make them slightly more appealing than cardboard cutouts. Moreover, it seems pretty odd that this group of Jewish fighters isn’t actually being led by a Jew, but by a half-Apache, half-Italian from Tennessee (if I remember this right). One sure way of demonstrating Jewish agency might have been, you know, to actually put a Jew in charge of the whole operation (never mind acknowledging the Jews who actually contributed to the war effort; that would defeat the fake-history premise). Outside the general premise of the Basterds’ existence, they and their storyline are actually incredibly boring (with, again, the notable exception of the tavern scene). The Shoshanna storyline is, at least, generally comprehensible (and the movie is far, far more about her anyway than it is about the actual Basterds) and probably could have stood on its own as a complete film. But the Basterds’ half of the film (if it can even be called that; it’s really Aldo’s half of the film) seems tacked on and unnatural.

The violence in “Basterds” doesn’t make a whole lot of sense to me either. Unlike in, say, “No Country for Old Men,” much of the violence in “Basterds” doesn’t seem to be in service of any particular goal. The Bear Jew? The two Jewish suicide bombers? The glee with which frankly gruesome and cruel acts are committed? It’s rather hard to find intrinsic motivations for any of this. Is the point that Jews could be just as cruel in retaliating against their oppressors? That seems to be one possible reading, but certainly another, equally plausible one might be that once you descend to the level of a Nazi executioner, it’s hard to tell the two apart. Perhaps this moral ambiguity is intentional. Amanda Marcotte argued that the unapologetic violence is intended as a reaction to movies like “The Reader,” which tried to look at the personal lives of Nazis, but it’s hard to pick this out from the actual contents of the film. If that thesis is true, then the ambiguity I’m referencing is a weakness, not a strength. And, most puzzlingly, when Aldo has a chance to commit an act of violence that would have some real meaning and consequences for him, he chooses to forgo killing Landa, though of course he would have been eminently justified in doing so. The carving of the swastika into Landa’s forehead has the air of showmanship more than anything else.

The film’s ending leaves me deeply unsatisfied. I’m thinking in particular of the scene in which Shosanna is killed. It unfolds thus: as the German soldier (I forget his name) lies bleeding to death, Shosanna catches a glimpse of a film in which he starred on the screen. Momentarily enchanted by his handsome visage, she bends over him, whereupon he shoots her with a concealed gun just as Marcel sets fire to the theater. A warning about the seductive power of images and the hazards of taking them for truth? Yes, I would say so, but in the context of “Basterds,” it feels a bit cheap. After all, we’ve just sat through some two and a half hours of an alternative history that may have had a point beyond offering a fantastical version of WWII, and to have such a coda is almost an admission that it’s all been a joke. For surely, if we heed the scene’s warning and apply it retroactively to the movie itself, that’s a plausible conclusion to draw. And it brings us right back to the question of “Basterds’” purpose and whether it can sustain any idea beyond “beware the treachery of images.” Furthermore, it would call into question any interpretations of previous scenes that could lead to anything resembling moral engagement. If, in the end, everything is an ironic inversion, then how can we be justified in taking any individual part of it seriously?

Maybe it’s unfair to criticize Tarantino according to canons he doesn’t necessarily adhere to. But a lot of people do see a kind of seriousness in his work, so at least from that perspective, I think these criticisms are relevant. To me, it seems that Tarantino is perpetually unable to let even movies ostensibly addressing a serious theme take their course without, in the end, giving the game away with a typically ironic maneuver that makes it hard to take seriously anything that came before it. This is really disappointing, because I think that with his talent and his ear for conversation, Tarantino could be a truly great filmmaker if he could only let go of his constant need for ironic posturing. Instead, for me, he remains an unquestionable technical talent who resides below the upper tier of directors. Of course, for some people, that ironic posture is one of his greatest merits; de gustibus, etc. Perhaps there isn’t even a way to separate it from his other skills. I suppose we’ll see in the not-too-distant future whether a 60-year-old Tarantino moves beyond this.

Who needs facts?

Not John McWhorter. In his review of Amy Wax’s book, Race, Wrongs, and Remedies, McWhorter waxes (ho ho) poetic about the persuasiveness of the argument, but completely fails to relate just what it is that makes it persuasive. The review begins, as such things so often do, with a complete strawman:

There is a school of thought in America which argues that the government must be the main force that provides help to the black community.

Notice the unspoken assumptions smuggled into this sentence. First, it is simply assumed that such a “school of thought” exists, although none of its representatives are identified, much less given a voice. The second assumption is that this school (whatever it is, if it even exists) believes that government must be the “main force” in helping the black community; is there even a metric that allows one to determine who or what is a “main” force and what is an auxiliary one? I would suppose that if one actually spoke to people who study issues of this sort, one would discover a much more nuanced view of the role of government in bringing about racial equality.

The review, and, I must assume from the text, Wax’s book itself, contains one of those horrible appeals to analogy that are neither illuminating nor valid. McWhorter paraphrases it thus:

Wax appeals to a parable in which a pedestrian is run over by a truck and must learn to walk again. The truck driver pays the pedestrian’s medical bills, but the only way the pedestrian will walk again is through his own efforts. The pedestrian may insist that the driver do more, that justice has not occurred until the driver has himself made the pedestrian learn to walk again. But the sad fact is that justice, under this analysis, is impossible.

How this is supposed to teach us anything about the history of African-Americans is unclear. Justice is “impossible” under this analysis because the framework of the “parable” is structured to prevent it from being possible. Even internally, the example isn’t particularly coherent; we might well ask what happens if the truck driver has paralyzed the pedestrian, which seems a reasonable question given the analogy. Now the pedestrian can’t learn to walk, no matter how hard he tries! What kind of justice does the pedestrian, crippled for life, deserve in that case?

Of course, to even begin to make this counter-argument is already a problem, because it implies acceptance of the analogy, which is in no way legitimate. Collisions between truck drivers and pedestrians are individual events; the condition of blacks in America is not an individual process but a historical one. Truck drivers didn’t create structural conditions that continuously result in pedestrians being run over, whereas white America unquestionably did create (and continues to perpetuate) structural conditions that leave blacks at a disadvantage.

McWhorter goes on:

The legal theory about remedies, Wax points out, grapples with this inconvenience—and the history of the descendants of African slaves, no matter how horrific, cannot upend its implacable logic. As she puts it, “That blacks did not, in an important sense, cause their current predicament does not preclude charging them with alleviating it *if nothing else will work*.”

The italics in the quotation are mine. Let me first object to the use of the word “implacable” here as a cheap rhetorical trick designed to move the faulty analogy out of the realm of debate. In fact, as becomes clear after minimal reflection, nothing about this logic is implacable at all; it’s actually quite faulty and not at all applicable to the situation in question, which in any case ought to be treated on its own merits. But even granting the false analogy, I still have to wonder by what mechanism of elimination Wax has concluded that “nothing else will work.” Does Wax’s book contain a thorough examination of various social programs together with an analysis of their performance? I don’t have the book, but I suspect that’s not something you can do in 190 pages (and anyway, Wax is a lawyer, not a sociologist, so such an analysis would likely be beyond her expertise). In fact, one might suppose that there are lots of things we haven’t tried that could alleviate the difficulties blacks face in America; for example, we could end the ludicrous and patently racist “war on drugs,” which locks up young black men at unprecedented rates. I doubt that this would solve every problem, but it sure would help. In the next paragraph, McWhorter’s argument (really, Wax’s argument, but McWhorter seems to agree with it) gets downright weird:

Wax is well aware that past discrimination created black-white disparities in education, wealth, and employment. Still, she argues that discrimination today is no longer the “brick wall” obstacle it once was, and that the main problems for poor and working-class blacks today are cultural ones that they alone can fix. Not that they alone should fix—Wax is making no moral argument—but that they alone can fix.

Let’s grant for a moment Wax’s argument that discrimination today isn’t a “brick wall.” I don’t believe it’s true, but for the sake of argument I’ll allow it. It still remains true that people alive today are the victims of actual discrimination from decades past. Since no one would argue that racism just disappeared abruptly, even someone who believes it doesn’t really exist today must grant that blacks were, in fact, discriminated against in the past. What that means, for those of you adept at following causation, is that blacks today are living with the end product of that discrimination. Wax clearly acknowledges this, but wants to pretend that in this brave new world, it doesn’t really matter. I can’t see how this is a coherent position. The structural deficiencies created by explicitly and implicitly discriminatory policies still exist. I’ve already mentioned the war on drugs, but you can just as easily look at the difference in funding between urban and suburban school districts. When I was a high school student in California, I was lucky enough to attend a very rich school whose tax base was La Jolla, one of the wealthiest communities in the state. But I also had the chance to see numerous other campuses, which were decrepit by comparison. So long as such stark and undisputed inequalities persist, it’s hard to see how Wax’s apparent belief that we have done all we can could possibly stand up under scrutiny.

McWhorter acknowledges these difficulties at the end of the article, though in a rather oblique manner. Before he gets there, he throws out a few studies without much context: that completing high school and delaying having kids is conducive to success, that the IAT (Implicit Association Test) is not the best indicator of discriminatory behavior (this is asserted with nothing cited in support, but let’s roll with it), and that poor women don’t marry the fathers of their children not because the fathers are unemployed but because they are not dependable. The obvious question is how those factors are disentangled; wouldn’t someone who is undependable also be likely to be unemployed? Potshots are thrown at random “black radicals” (who, I’m guessing, are probably of little relevance to the day-to-day struggles of black communities anyway) for failing to address out-of-wedlock births, and Jeremiah Wright is trotted out to complete the parade of horribles.

What’s disappointing about all of this is that, in the end, it’s not as though McWhorter doesn’t understand that government has a role to play. Having thrown out some pretty categorical statements early on, he effectively backtracks to admit that government can in fact do things like improve educational equality, ease the transition of felons back into society, and prosecute civil rights violations. And that it should be doing those things. Still, he can’t help but sign on to this paragraph from Wax:

The government cannot make people watch less television, talk to their children, or read more books. It cannot ordain domestic order, harmony, tranquility, stability, or other conditions conducive to academic success and the development of sound character. Nor can it determine how families structure their interactions and routines or how family resources—including time and money—are expended. Large-scale programs are especially ineffective in changing attitudes and values toward learning, work, and marriage.

Government certainly cannot do any of those things by fiat (although the last sentence seems of dubious validity). But it can, and should, try to create conditions in which those kinds of attitudes will flourish. Poverty, as I suspect McWhorter would acknowledge, has a logic of its own that has little to do intrinsically with whether one is black or white. For historical reasons, we have a black underclass in this country, but being black doesn’t somehow cause you to adopt the “wrong culture.” On the other hand, there is a clear causal connection between being black and finding yourself the persistent victim of structural inequalities predicated, in the not-too-distant past, on racial discrimination. Once you find yourself a member of that underclass, with the corresponding limited horizons and substantially greater day-to-day travails, you can’t just will yourself out of it. Well, maybe if you’re really good, you can, but the average person, black or white or anything in between, is going to struggle, and understandably so. To think otherwise is just fantasy. It’s especially bizarre for Wax to ask:

Is it possible to pursue an arduous program of self-improvement while simultaneously thinking of oneself as a victim of grievous mistreatment and of one’s shortcomings as a product of external forces?

Well, is it? Wax seems to believe the answer is no, though this is never stated outright. But more importantly, what if one really is a victim of grievous mistreatment, and one’s shortcomings (a loaded term in itself) really are a product of external forces?

McWhorter concludes his review with the suggestion that saying government and personal choices both have a role to play is like having your cake and eating it too. I would counter that such a statement is simply a truism, and that Wax is playing a dishonest shell game. On the one hand, it’s impossible not to acknowledge the great injustices perpetrated against blacks over the course of this country’s history; on the other hand, such an acknowledgment leads naturally to the conclusion that this isn’t just a private problem but a social problem that can and should be addressed through policy. That’s not acceptable to Wax for whatever reason, so she quickly has to swap in the idea that we’ve already done all we can and the rest is the responsibility of the black community. Never mind that this isn’t supported by any real evidence, or that so much more can actually be done. And this is why discussions of culture never really get you anywhere: they simply serve to redirect the discourse from the actual, useful things we as a society can do toward blaming black people for not being committed enough to not being poor. McWhorter is right when he says that “the bulk of today’s discussion of black America is performance art,” but not in the way he thinks.

Dear film critics: kill yourselves

Film Salon – Salon.com

So apparently I found out via Salon that James Cameron won some kind of “Molten Glob” or some shit for Avatar. Over The Hurt Locker, which is apparently directed by his ex-wife Kathryn Bigelow. Ok, sure. I haven’t seen The Hurt Locker, which I am told is very good. I have, however, seen Avatar, and I have this to say to anyone who voted for that movie over… well, anything else:

Please, just off yourselves right now. Are you even trying here? Are there two functional neurons firing within your skull? Avatar is an overblown, ridiculously pretty movie with a plot and direction that could have been conceived by a 10-year-old, and probably better executed. It’s not a movie so much as it is a tech demo. If you voted for Cameron to win a best director award for this nonsense, just drink some bleach, put the gun in your mouth and pull the trigger, and make sure you do this on top of a diving board positioned off the edge of the Grand Canyon.

Or, alternatively, put your fucking thinking hats on for just a goddamn second and stop being so goddamn stupid. Avatar? You’ve got to be shitting me.

When I’m King Shit of Hollywood Mountain, I’m going to disband all the award shows. Few things are more annoying than watching a gaggle of morons pretend that shitty movies are masterpieces. This is why we can’t have nice things, America.

In relevant film news

I am pretty surprised, actually, that the number of critical essays on The Big Lebowski turned up by a cursory JSTOR search appears to be “one,” and maybe not all that surprised that the number of worthwhile critical essays on the same is “zero.” Here’s the link to the one essay I did find that discusses the film directly (it’s on JSTOR, so you need institutional access to view it). It’s laughably bad academese that says almost nothing interesting about the film itself but does feature lovely footnotes citing Derrida and Heidegger. My favorite part:

I read The Big Lebowski in order to think through the problem of narratival [1], or mythic violence, and how, ultimately, to interrupt myth in the exterior world of Bush, Hussein, and the Persian Gulf.

Man, there sure are some lovely trees around here, but where the heck did that forest go?!

In other news, by the end of the weekend I plan to have an essay up about A Serious Man, in which I will try to place it in the broader context of the Coens’ canon and also try to persuade people that it’s a good movie worth watching.

[1] Goddamn it, we already have a fine word for this kind of thing. That word is “narrative” which can be used as either a noun or an adjective. You don’t need to tack on an awkward ending to show everyone how smart you are.

Addendum: if you want to see what an actually insightful review sounds like, you can read the very next thing I found on JSTOR: a review of O Brother, Where Art Thou? written by none other than the inestimable Tim Kreider (in cooperation with two others), he of “The Pain” comics. Kreider, by the way, is a terrific film reviewer in general, and his write-up of Eyes Wide Shut is fantastic.