Archive for May, 2016

The question comes up with some regularity: “Where were you when …?” What goes in place of the ellipsis has changed with the generations. For my parents, it was the (first) Kennedy assassination (1963). For my siblings and chronological peers, it was first the Three Mile Island accident (1979) and then the Challenger disaster (1986). For almost everyone since, it’s been the September 11 attacks (2001), though a generation lacking memory of those events is now entering adulthood. These examples are admittedly taken from mainstream U.S. culture. If one lives elsewhere, it might be the Mexico City earthquake (1985), the Chernobyl disaster (1986), the Indonesian tsunami (2004), the Haitian earthquake (2010), the Fukushima disaster (2011), or any number of other candidates populating the calendar. Even within the U.S., other more local events might take on greater significance, such as Hurricane Katrina (2005) along the Gulf Coast or Hurricane Iniki (1992) in Hawaii, the latter of which gives September 11 a very different meaning for those who suffered through it.

What these events all have in common is partially a loss of innocence (particularly the man-made disasters) and a feeling of suspended animation while events are sorted out and news is processed. I remember with clarity how the TV news went into full disaster mode for the Challenger disaster, which proved to be the ridiculous template for later events, including 9/11. Most of the coverage was denial of the obvious and arrant speculation, but the event itself was captivating enough that journalists escaped the wholesale condemnation they plainly deserved. The “where were you?” question is usually answered with the moment one became aware of some signal event, such as “I was on the bus” or “I was eating dinner.” News vectors changed dramatically from 1986 to 2001, to take two relatively arbitrary points of comparison within my lifetime. Being jarred out of complacency and perceiving the world suddenly as a rather dangerous place hardly expresses the weird wait-and-see response most of us experience in the wake of disaster.

Hurricanes typically come with a narrow warning, but other events strike without clear expectation — except perhaps in terms of their inevitability. That inevitability informs expectations of further earthquakes (e.g., the San Andreas and New Madrid faults) and volcanic eruptions (e.g., the Yellowstone supervolcano), though the predictive margin of error can be hundreds or even thousands of years. My more immediate concern is with avoidable man-made disasters that are lined up to fall like a series of dominoes. As with natural disasters, we’re all basically sitting ducks, completely vulnerable to whatever armed mayhem may arise. But rather than enter into suspended animation in the wake of events, current political, financial, and ecological conditions find me metaphorically ducking for cover in expectation of the inevitable blow(s). Frankly, I’ve been expecting political and financial crack-ups for years, and current events demonstrate extremely heightened risk. (Everyone seems to be asking which will be worse: a Trump or Clinton presidency? No one believes either candidate can guide us successfully through the labyrinth.) A tandem event (highly likely, in my view) could easily trigger a crisis of significant magnitude, given the combination of violent hyperbole and thin operational tolerance for business as usual. I surmise that anyone who offers the line “may you live in interesting times” has a poor understanding of what’s truly in store for us. What happens with full-on industrial collapse is even harder to contemplate.

Over at Gin and Tacos, the blogger has an interesting take on perverse incentives that function to inflate grades (and undermine learning), partly by encouraging teachers to give higher grades than deserved at the first hint of pushback from consumers (students, parents, or administrators). The blog post is more specifically about why Johnny can’t write and references a churlish article in Salon. All well and good. The blog author provides consistently good analysis as a college professor intimate with the rigors of higher education and the often unprepared students deposited in his classroom. However, one comment got my attention in particular. The commentator is obviously a troll, and I generally don’t feed trolls, so I made only one modest reply in the comments section. Because almost no one reads The Spiral Staircase, certainly no one from the estimable Gin and Tacos crowd, I’ll indulge myself, not the troll, by examining briefly the main contention, which is that quality of writing, including correct grammar, doesn’t matter most of the time.

Here’s most of the comment (no link to the commentator’s blog, sorry):

1. Who gives a flying fuck about where the commas go? About 99.999999999999999% of the time, it makes NO DIFFERENCE WHATSOEVER in terms of understanding somebody’s point if they’ve used a comma splice. Is it a technical error? Sure. Just like my unclear pronoun reference in the last sentence. Did you understand what I meant? Unless you were actively trying not to, yes, you did.

2. There’s are hundreds of well-conducted peer-reviewed studies by those of us who actually specialize in writing pedagogy documenting the pointlessness of teaching grammar skills *unless students give a fuck about what they’re writing.* We’ve known this since the early 1980s. So when the guy from the high school English department in the article says they can’t teach grammar because students think it’s boring, he’s unwittingly almost making the right argument. It’s not that it’s boring–it’s that it’s irrelevant until the students have something they want to say. THEN we can talk about how to say it well.

Point one is that people manage to get their points across adequately without proper punctuation, and point two is that teaching grammar is accordingly a pedagogical dead end. Together, they assert that structure, rules, syntax, and accuracy make no difference so long as communication occurs. Whether one takes the hyperbole “99.999999999999999% of the time” as the equivalent of all the time, almost all the time, most of the time, etc. is not of much interest to me. Red herring served by a troll.

/rant on

As I’ve written before, communication divides neatly into receptive and expressive categories: what we can understand when communicated to us and what we can in turn communicate effectively to others. The first category is the larger of the two and is greatly enhanced by concerted study of the second. Thus, reading comprehension isn’t merely a matter of looking up words in the dictionary but of learning how it’s customary and correct to express oneself within the rules and guidelines of Standard American English (SAE). As others’ writing and communication become more complex, competent reception is more nearly an act of deciphering. Being able to parse sentences, grasp paragraph structure, and follow the logical thread (assuming those elements are handled well) is essential. That’s what being literate means, as opposed to being semi-literate — the fate of lots of adults who never bothered to study.

To state flatly that “good enough” is good enough is to accept two unnecessary limitations: (1) that only a portion of someone’s full, intended message is received and (2) that one’s own message is expressed with no better than adolescent sophistication, if that. Because humans are not mind readers, some loss of fidelity between communicated intent and receipt must be acknowledged. But to further limit oneself to lazy and unsophisticated usage is, by way of analogy, to reduce the full color spectrum to black and white. Further, the suggestion that students can learn to express themselves properly once they have something to say misses the whole point of education, which is to prepare them with adult skills in advance of need.

As I understand it, modern American schools have shifted their English curricula away from the structural, prescriptive approach toward licentious usage just to get something onto the page, or in a classroom discussion, just to choke something out of students between the hemming and hawing of ums, ers, uhs, ya knows, and I dunnos. I’d like to say that I’m astonished that researchers could provide cover for not bothering to learn, relieving both teachers and students of the arduous work needed to develop competence, if not mastery. That devaluation tracks directly from teachers and administrators to students and parents, the latter of whom would rather argue for their grades than earn them. Pity the fools who grub for grades without actually learning and are left holding worthless diplomas and degrees — certificates of nonachievement. In all likelihood, they simply won’t understand their own incompetence because they’ve been told all their lives what special flowers they are.

/rant off

While I’m revisiting old posts, the term digital exhaust came up again in a very interesting article by Shoshana Zuboff called “The Secrets of Surveillance Capitalism,” published in Frankfurter Allgemeine in March 2016. I no longer remember how I came upon it, but its publication in an obscure (to Americans) German newspaper (German and English versions available) was easily located online with a simple title search. The article has certainly not gone viral the way social media trends work, but someone obviously picked it up, promoted it, and raised it to the awareness of lots of folks, including me.

My earlier remarks about digital exhaust were that the sheer volume of information produced and exchanged across digital media quickly becomes unmanageable, with the result that much of it becomes pointless ephemera — the useless part of the signal-to-noise ratio. Further, I warned that by turning our attention to digital sources of information of dubious value, quality, and authority, we face an epistemological shift that could take considerable hindsight to describe accurately. That was 2007. Not enough time may yet have passed to understand the effect(s) fully, or to crystallize the moment (to reuse my pet phrase yet again), but the picture is already clarifying, somewhat terrifyingly.

I already updated my original post from 2009 once based on Tom Engelhardt’s analysis, adding a few of my own thoughts. I want to revisit the original, provide an addendum to my review of Oliver Stone’s Untold History, and draw attention to Andrew Bacevich’s alternative narrative titled “American Imperium.” This is about geopolitics and military history, which fall outside my usual areas of interest and blogging focus (excepting the disgrace of torture), but they’re nonetheless pretty central to what’s going on in the world.

Having now watched the remainder of Untold History, it’s clear that every administration since WWII has been neck deep in military adventurism. I had thought at least one or two would be unlike the others, and maybe Gerald Ford only waded in up to his knees, but the rest deployed the U.S. military regularly and forcefully enough to beggar the imagination: what on earth were they doing? The answer is both simple and complex, no doubt. I prefer the simple one: they were pursuing global American hegemony — frequently with overweening force against essentially medieval cultures. It’s a remarkably sad history, really, often undertaken with bland justifications such as “American interests” or “national security,” neither of which rings true. I’ve likened the U.S. before to the playground bully who torments others but can never be psychologically satisfied and so suffers his own private torments on the way to becoming a sociopath. Why does every American president resemble that profile (war criminals all), so afraid to look weak that he (thus far in U.S. history, always a he) must flex those muscles at the expense of ordinary people everywhere? Women in positions of authority (e.g., Secretary of State, National Security Advisor), by the way, exhibit the same behavior: advising strikes against weaklings to prove they can wear the pants, too.