I’m not paying close attention to the RNC in Cleveland. Actually, I’m ignoring it completely, still hoping that it doesn’t erupt in violence before the closing curtain. Yet I can’t help but hear some relevant news, and I have read a few commentaries. Ultimately, the RNC sounds like a sad, sad nonevent put on by amateurs, with many party members staying well away. What’s actually transpiring is worthy of out-loud laughter at the embarrassment and indignities suffered by participants. The particular gaffe that caught my attention is the cribbing from Michelle Obama in the speech delivered on Monday by Melania Trump. The speechwriter, Meredith McIver, has accepted blame for it and characterized it as an innocent mistake.

Maybe someone else has already said or written this, but I suspect the plagiarism really was innocent, precisely because that’s the standard in quite a lot of academe these days. Students get away with it all the time, just not on a national stage. Reworking another’s ideas is far easier than coming up with one’s own original ideas, and Melania Trump has no reputation (near as I can tell) as an original thinker. The article linked to above indicates she admires Michelle Obama, so the plagiarism is, from a twisted perspective, an encomium.

The failure of Trump campaign officials to review the speech (or if they did review it, then do so effectively) is another LOL gaffe. It doesn’t rise to the level of the McCain campaign’s failure to vet Sarah Palin properly and won’t have any enduring effects, but it does reflect upon the Trump campaign’s ineptitude. My secret fear is that ineptitude is precisely why a lot of folks intend to vote for Trump: so that he can accelerate America’s self-destruction. It’s a negative view, and somewhat devil-may-care, to say “let’s make it a quick crash and get things over with already.” Or maybe it’s darkly funny only until suffering ramps up.

I hate to sound a conspiratorial note, and you’re free to disregard what follows, but it seems worthwhile to take further notice of the rash of violence last week.

In a commentary by John Whitehead at The Rutherford Institute, blame for what Whitehead calls “America’s killing fields” is laid at the feet of a variety of entities, including numerous elected officials and taxpayer-funded institutions. The more important quote appearing right at the top is this:

We have long since passed the stage at which a government of wolves would give rise to a nation of sheep. As I point out in my book Battlefield America: The War on the American People, what we now have is a government of psychopaths that is actively breeding a nation of psychopathic killers. [links redacted]

While this may read as unsupported hyperbole to some, I rather suspect Whitehead tells a truth hidden in plain sight — one we refuse to acknowledge because it’s so unsavory. Seeing that Whitehead gave it book-length consideration, I’m inclined to grant his contention. One can certainly argue about intent, objectives, mechanisms, and techniques. Those are open to endless interpretation. I would rather concentrate on results, which speak for themselves. The fact is that in the U.S., Western Europe, and the Middle East, a growing number of people are in effect wind-up toys being radicalized and set loose. Significantly, recent perpetrators of violence are not only the disenfranchised but also police, current and former military, politicians, and pundits whose mindsets are not directed to diplomacy but instead establish “taking out the enemy” as the primary response to conflict. The enemy is also being redefined irrationally to include groups identified by race, religion, vocation, political persuasion, etc. (always has been, in fact, though the more virulent manifestations were driven underground for a time).

Childhood wind-up toys are my chosen metaphor because they’re mindless, pointless devices that are energized, typically by tightening a spring, and released for idle entertainment to move around and bump into things harmlessly until they sputter out. Maniacal mass killers “bump into” targets selected randomly via simple proximity to some venue associated with the killer’s pet peeve, so victims are typically in the wrong place at the wrong time. Uniformed police might be the exception. One might ask who or what is doing the winding of the spring. The pull quote above says it’s a government of psychopaths breeding yet more psychopaths. That is certainly true with respect to the ruling classes — what used to be the aristocracy in older cultures but now is more nearly a kleptocracy in the U.S. — and members of a monstrous security apparatus (military, civil police, intelligence services, etc.) now that the U.S. has effectively become a garrison state. Self-reinforcing structures have hardened over time, and their members perpetuate them. I’ve even heard suspicions that citizens are being “chipped,” that is, programmed in the sense of psyops to explode into mayhem with unpredictable certainty, though for what purpose I can only imagine.

The simpler explanation that makes more sense to me is that our culture is crazy-making. We no longer function well in a hypercomplex world — especially one so overloaded with information — without losing our grounding, our grip on truth, meaning, and value, and going mad. Contemporary demands on the nervous system have outstripped biological adaptation, so we respond to constant strain and stress with varying levels of dysfunction. No doubt some folks handle their difficulties better than others; it’s the ones who snap their springs who are of grave concern these days. Again, the mechanism isn’t all that important, as the example from Nice, France, demonstrates. Rather, it’s about loss of orientation that allows someone to rationalize killing a bunch of people all at once as somehow a good idea. Sadly, there is no solution so long as our collective attention is trained on the wrong things, perpetuating a network of negative feedback loops that makes us all loopy and a few of us highly dangerous. Welcome to the asylum.

Events of the past few days have been awful: two further shootings of black men by police under questionable circumstances (Louisiana and Minnesota), and in response, a sniper killing five police officers (Texas) and injuring more. Everything is tragic and inexcusable; I offer no refuge for armed men on both sides of the law using lethal force against others. But I will attempt to contextualize. Yes, issues of race, guns, and public safety are present. The first two are intractable debates I won’t wade into. However, the issue of public safety seems to me central to what’s going on, namely, the constant beat of threatening drums and related inflammatory speech that together have the effect of putting everyone on edge and turning some into hair-triggers.

I’ve read news reports and opinion columns that subject these events to the usual journalistic scrutiny: factual information strung together with calm, measured assurance that what occurred was the result of intemperate individuals not representative of the public at large. So go ahead and worry, but not too much: those guys are all outliers — a few bad apples. As I take the temperature of the room (the country, actually), however, my sense is that we are approaching our boiling point and are frankly likely to boil over soon, perhaps in concert with party nominating conventions expected to break with convention and further reveal the already disastrous operations of the federal government. The day-to-day, smooth surface of American life — what we prefer in times of relative peace and prosperity — has also been punctuated for decades now with pops and crackles in the form of mass shootings (schools, theaters, churches, clubs, etc.) and a broad pattern of civil authorities surveilling and bringing force to bear against the public they’re meant to protect and serve. How long before it all becomes a roiling, uncontrollable mess, with mobs and riots being put down (or worse) by National Guardsmen, just as in the 1960s? Crowd control and management techniques have been refined considerably since that last period of civil unrest (I wrote about it briefly here), which is to say, they’re a lot creepier than water cannons, tear gas, and pepper spray (nothing to laugh about if one has been on the receiving end of any of those).

My question, to anyone with the equanimity to think twice about it, is this: aren’t these outcomes a rather predictable result of the bunker mentality we’ve adopted since being instructed by the media and politicians alike that everyone the world over is coming to take away our guns, er, freedom? Further, aren’t the vague, unfocused calls to action spouted constantly by arch-conservative demagogues precisely the thing that leads some unhinged folks to actually take action because, well, no one else is? Donald Trump has raised diffuse threats and calls to action to an art form at his rallies, with supporters obliging by taking pot shots at others at the mere whiff of dissent from his out-of-tune-with-reality message. (Don’t even think about being nonwhite in one of those crowds.) He’s only one of many stirring the pot into a froth. Moreover, weak minds, responding in their lizard brains to perceived threat, have accepted with gusto the unfounded contention that ISIS in particular, and terrorism in general, represents an existential threat to the U.S., and thus, generalizing the threat, are now calling for curtailing the practice of Islam (one of three Abrahamic religions arising in the ancient world, with well over a billion adherents worldwide) in the U.S. Apparently, the absolutism of freedom of religion (which can also be interpreted as freedom from establishment of a state religion) enshrined in the 1st Amendment to the U.S. Constitution is lost on those whose xenophobia erases all reasoned thought.

The mood is turning quite ugly. A quick survey of history probably reveals that it’s always been that way. Many of us (by no means all of us) understand calls to “make America great again” as coded speech advocating a return to a culture dominated by white Christian males. So much for our vaunted freedom.

The holiday weekend (July 4th) is one of several spots on the calendar given to unusual crowd formation, as events ranging from barbecues and concerts to parades, festivals, and fireworks displays invite multitudes to assemble. The combo concert/fireworks display is perhaps the best attended, as the fetish for watching shit blow up never flags. The Taste of Chicago is about to begin and is a notable example of severe overcrowding; the pics on the website do not show sun-baked, sweaty, overfed attendees elbowing each other for room to move or just to stand still and eat, but that’s been my experience. Outside Magazine also devoted a recent issue to the National Parks, in honor of their 100-year anniversary; the parks are setting attendance records. I’ve written before about the self-defeating result of drawing unmanageable crowds together. Consider this frightening image of a crowded beach in China; such scenes are a frequent occurrence around our overpopulated globe:

[image: crowded beach in China]


Something I wrote ten years ago at Creative Destruction. Probably worth an update.


We’ve all seen the reports. U.S. high schoolers rank at or near the bottom in math and science. Admittedly, that link is to a story eight years old, but I doubt rankings have changed significantly. A new study and report are due out next year. See this link.

What interests me is that we live in an era of unprecedented technological advancement. While the U.S. may still be in the vanguard, I wonder how long that can last when the source of inspiration and creativity — human knowledge and understanding — is dying at the roots in American schools. It’s a sad joke, really, that follow-the-directions instructions for setting the clock on a VCR (remember those?) proved so formidable for most end users that a time-setting function is built into more recent recording systems such as TIVO. Technical workarounds may actually enable ever-increasing levels of disability working with our…


An enduring trope of science fiction is the naming of newly imagined gadgets and technologies (often called technobabble, with a mixture of humor and derision), as well as the naming of segments of human and alien societies. In practice, that means renaming already familiar things to add a quasi-futuristic gleam, and it’s a challenge faced by every story that adopts an alternative or futuristic setting: describing the operating rules of the fictional world but with reference to recognizable human characteristics and institutions. A variety of recent Young Adult (YA) fiction has indulged in this naming and renaming, some of it made into movies, mostly dystopic in tone, e.g., the Hunger Games tetralogy, the Twilight saga, the Harry Potter series, the Maze Runner, and the Divergent trilogy. (I cite these because, as multipart series, they are stronger cultural touchstones, like Star Wars, than similar once-and-done adult cinematic dystopias such as Interstellar and Elysium. Star Trek is a separate case, considering how it has devolved after being rebooted from its utopian though militaristic origins into a pointless series of action thrillers set in space.) Some exposition rises to the level of lore but is mostly mere scene-setting removed slightly from our own reality. Similar naming schemes are used in cinematic universes born out of comic books, especially character names, powers, and origins. Because comic book source material is extensive, almost all of it becomes lore, which is enjoyed by those initiated as children into the alternate universes created by the writers and illustrators but mildly irritating to adult moviegoers like me.

History also has names for eras and events sufficiently far back in time for hindsight to provide a clear vantage point. In the U.S., we had the Colonial Era, the Revolutionary Period, the Frontier Era and Wild West, the Industrial/Mechanical Age, Modernism, and Postmodernism, to name a few but by no means all. Postmodernism is already roughly 40 years old, yet we have not named the era in which we now live. Indeed, because we’re the proverbial fish inside the fishbowl, unable to recognize the water in which we swim, the contemporary moment may have no need of naming, now or at any given time. That task awaits those who follow. We have, however, given names to the succession of generations following the Baby Boom. How well their signature characteristics fit their members is the subject of considerable debate.

As regular readers of this blog already know, I sense that we’re on the cusp of something quite remarkable, most likely a hard, discontinuous break from our recent past. Being one of the fish in the bowl, I probably possess no better understanding of our current phase of history than the next person. Still, if I had to choose one word to describe the moment, it would be dissolution. My 4-part blog post about dissolving reality is one attempt to provide an outline. A much older post called aged institutions considers the time-limited effectiveness of products of human social organization. The grand question of our time might be whether we are on the verge of breaking apart or simply transitioning into something new — will it be catastrophe or progress?

News this past week of Britain’s exit from the European Union may be only one example of break-up vs. unity, but the drive toward secession and separatism (tribal and ideological, typically based on bogus and xenophobic identity groups constantly thrown in our faces) has been gaining momentum even in the face of economic globalization (collectivism). Scotland very nearly seceded from the United Kingdom in its 2014 referendum; Quebec has had multiple referenda about seceding from Canada, none yet successful; and Vermont, Texas, and California have all flirted with secession from the United States. No doubt some would argue that such examples of dissolution, actual or prospective, are actually transitional, meaning progressive. And perhaps they do in fact fulfill the need for smaller, finer, localized levels of social organization that many have argued are precisely what an era of anticipated resource scarcity demands. Whether what actually manifests will be catastrophe (as I expect) is, of course, for history and future historians to name.

The first Gilded Age in the U.S. and the Roaring Twenties were periods that produced an overabundance of wealth for a handful of people. Some of them became synonymous with the term robber baron precisely for their ability to extract and accumulate wealth, often using tactics that, to say the least, lacked scruples when they weren’t downright criminal. The names include Rockefeller, Carnegie, Astor, Mellon, Stanford, Vanderbilt, Duke, Morgan, and Schwab. All have their names associated in posterity with famous institutions. Some are colleges and universities, others are banking and investment behemoths, yet others are place names and commercial establishments. Perhaps the philanthropy they practiced was not entirely generous, as captains of industry (then and today) seem to enjoy burnishing their legacies with a high level of name permanence. Still, one can observe that most of the institutions bearing their names are infrastructure useful to the general public, making them public goods. This may be partly because the early 20th century was still a time of nation building, whereas today is arguably a time of decaying empire.

The second Gilded Age in the U.S. commenced in the 1980s and is still going strong as measured by wealth inequality. However, the fortunes of today’s tycoons appear to be directed less toward public enrichment than toward self-aggrandizement. The very nature of philanthropy has shifted. Two modern philanthropists appear to be transitional: Bill Gates and Ted Turner. The Gates Foundation has a range of missions, including healthcare, education, and access to information technology. Ted Turner’s well-publicized $1 billion gift to the United Nations Foundation in 1997 was an open dare to encourage similar philanthropy among the ultrarich. The Turner Foundation website’s byline is “protecting & restoring the natural world.” Not to be ungrateful or uncharitable, but both figureheads are renowned for highhandedness in the fashion in which they gathered up their considerable fortunes and are redirecting some portion of their wealth toward pet projects that can be interpreted as a little self-serving. Philanthropic efforts by Warren Buffett appear to be less about giving away his own fortune to charities or establishing institutions bearing his name than about using his renown to raise charitable funds from other sources and thus stimulate charitable giving. The old saying applies especially to Buffett: “no one gets rich by giving it away.” More galling, perhaps, is another group of philanthropists, who seem to be more interested in building shrines to themselves. Two entries stand out: The Lucas Museum (currently seeking a controversial site in Chicago) and The Walmart Museum. Neither resembles a public good, though their press packages may try to convince otherwise.

Charity has also shifted toward celebrity giving, with this website providing philanthropic news and profiles of celebrities complete with their causes and beneficiaries. With such a wide range of people under consideration, it’s impossible to make any sweeping statements about the use or misuse of celebrity, the way entertainers are overcompensated for their talents, or even how individuals such as Richard Branson and Elon Musk have been elevated to celebrity status primarily for being rich. (They undoubtedly have other legitimate claims to fame, but they’re overshadowed in a culture that celebrates wealth before any other attribute.) And then there are the wealthy contributors to political campaigns, such as the Koch brothers, George Soros, and Sheldon Adelson, just to name a few. It’s fair to say that every contributor wants some bang for their buck, but I daresay that political contributors (not strictly charity givers) expect a higher quotient of influence, or in terms more consistent with their thinking, a greater return on investment.

None of this takes into account the charitable work and political contributions stemming from corporations and unions, or indeed the umbrella corporations that exist solely to raise funds from the general public, taking a sizeable share in administrative fees before passing some portion on to the eventual beneficiary. Topical charities and scams also spring up in response to whatever is the latest natural disaster or atrocity. What’s the average citizen to do when the pittance they can donate pales in comparison to that offered by the 1% (which would be over 3 million people in the U.S. alone)? Or indeed how does one guard against being manipulated by scammers (including the burgeoning number of street panhandlers) and political candidates into throwing money at fundamentally insoluble problems? Are monetary gifts really the best way of demonstrating charity toward the needy? Answers to these questions are not forthcoming.

Update: Closure has been achieved on the question of the Lucas Museum coming to Chicago. After two years of litigation blocking any building on his proposed lakefront site, George Lucas has decided to seek a site in California instead. Both sides had to put their idiotic PR spin on the result, but most people I know are relieved not to have George Lucas making inroads into Chicago architecture. Now if only we could turn back time and stop Donald Trump.

The question comes up with some regularity: “Where were you when …?” What goes in place of the ellipsis has changed with the generations. For my parents, it was the (first) Kennedy assassination (1963). For my siblings and chronological peers, it was first the Three Mile Island accident (1979) and then the Challenger disaster (1986). For almost everyone since, it’s been the September 11 attacks (2001), though a generation lacking memory of those events is now entering adulthood. These examples are admittedly taken from mainstream U.S. culture. If one lives elsewhere, it might be the Mexico City earthquake (1985), the Chernobyl disaster (1986), the Indonesian tsunami (2004), the Haitian earthquake (2010), the Fukushima disaster (2011), or any number of other candidates populating the calendar. Even within the U.S., other more local events might take on greater significance, such as Hurricane Katrina (2005) along the Gulf Coast or Hurricane Iniki (1992) in Hawaii, the latter of which gives September 11 a very different meaning for those who suffered through it.

What these events all have in common is partially a loss of innocence (particularly the man-made disasters) and a feeling of suspended animation while events are sorted out and news is processed. I remember with clarity how the TV news went into full disaster mode for the Challenger disaster, which proved to be the ridiculous template for later events, including 9/11. Most of the coverage was denial of the obvious and arrant speculation, but the event itself was captivating enough that journalists escaped the wholesale condemnation they plainly deserved. The “where were you?” question is usually answered with the moment one became aware of some signal event, such as “I was on the bus” or “I was eating dinner.” News vectors changed dramatically from 1986 to 2001, to take two relatively arbitrary points of comparison within my lifetime. Being jarred out of complacency and perceiving the world suddenly as a rather dangerous place hardly expresses the weird wait-and-see response most of us experience in the wake of disaster.

Hurricanes typically come with a narrow window of warning, but other events strike without clear expectation — except perhaps in terms of their inevitability. That inevitability informs expectations of further earthquakes (e.g., the San Andreas and New Madrid faults) and volcanic eruptions (e.g., the Yellowstone supervolcano), though the predictive margin of error can be hundreds or even thousands of years. My more immediate concern is with avoidable man-made disasters that are lined up to fall like a series of dominoes. As with natural disasters, we’re all basically sitting ducks, completely vulnerable to whatever armed mayhem may arise. But rather than enter into suspended animation in the wake of events, current political, financial, and ecological conditions find me metaphorically ducking for cover in expectation of the inevitable blow(s). Frankly, I’ve been expecting political and financial crack-ups for years, and current events demonstrate extremely heightened risk. (Everyone seems to be asking which will be worse: a Trump or Clinton presidency? No one believes either candidate can guide us successfully through the labyrinth.) A tandem event (highly likely, in my view) could easily trigger a crisis of significant magnitude, given the combination of violent hyperbole and thin operational tolerance for business as usual. I surmise that anyone who offers the line “may you live in interesting times” has a poor understanding of what’s truly in store for us. What happens with full-on industrial collapse is even harder to contemplate.

Over at Gin and Tacos, the blogger has an interesting take on perverse incentives that function to inflate grades (and undermine learning), partly by encouraging teachers to give higher grades than deserved at the first hint of pushback from consumers, that is, students, parents, or administration. The blog post is more specifically about Why Johnny Can’t Write and references a churlish article in Salon. All well and good. The blog author provides consistently good analysis as a college professor intimate with the rigors of higher education and the often unprepared students deposited in his classroom. However, this comment got my attention in particular. The commentator is obviously a troll, and I generally don’t feed trolls, so I made only one modest reply in the comments section. Because almost no one reads The Spiral Staircase, certainly no one from the estimable Gin and Tacos crowd, I’ll indulge myself, not the troll, by examining briefly the main contention, which is that quality of writing, including correct grammar, doesn’t matter most of the time.

Here’s most of the comment (no link to the commentator’s blog, sorry):

1. Who gives a flying fuck about where the commas go? About 99.999999999999999% of the time, it makes NO DIFFERENCE WHATSOEVER in terms of understanding somebody’s point if they’ve used a comma splice. Is it a technical error? Sure. Just like my unclear pronoun reference in the last sentence. Did you understand what I meant? Unless you were actively trying not to, yes, you did.

2. There’s are hundreds of well-conducted peer-reviewed studies by those of us who actually specialize in writing pedagogy documenting the pointlessness of teaching grammar skills *unless students give a fuck about what they’re writing.* We’ve known this since the early 1980s. So when the guy from the high school English department in the article says they can’t teach grammar because students think it’s boring, he’s unwittingly almost making the right argument. It’s not that it’s boring–it’s that it’s irrelevant until the students have something they want to say. THEN we can talk about how to say it well.

Point one is that people manage to get their points across adequately without proper punctuation, and point two is that teaching grammar is accordingly a pedagogical dead end. Together, they assert that structure, rules, syntax, and accuracy make no difference so long as communication occurs. Whether one takes the hyperbole “99.999999999999999% of the time” as the equivalent of all the time, almost all the time, most of the time, etc. is not of much interest to me. Red herring served by a troll.

/rant on

As I’ve written before, communication divides neatly into receptive and expressive categories: what we can understand when communicated to us and what we can in turn communicate effectively to others. The first category is the larger of the two and is greatly enhanced by concerted study of the second. Thus, reading comprehension isn’t merely a matter of looking up words in the dictionary but learning how it’s customary and correct to express oneself within the rules and guidelines of Standard American English (SAE). As others’ writing and communication becomes more complex, competent reception is more nearly an act of deciphering. Being able to parse sentences, grasp paragraph structure, and follow the logical thread (assuming those elements are handled well) is essential. That’s what being literate means, as opposed to being semi-literate — the fate of lots of adults who never bothered to study.

To state flatly that “good enough” is good enough is to accept two unnecessary limitations: (1) that only a portion of someone’s full, intended message is received and (2) that one’s own message is expressed with no better than adolescent sophistication, if that. Because humans are not mind readers, some loss of fidelity between communicated intent and receipt is acknowledged. To limit oneself further to lazy and unsophisticated usage is, by way of analogy, to reduce the full color spectrum to black and white. Further, the suggestion that students can learn to express themselves properly once they have something to say misses the whole point of education, which is to prepare them with adult skills in advance of need.

As I understand it, modern American schools have shifted their English curricula away from the structural, prescriptive approach toward licentious usage just to get something onto the page, or in a classroom discussion, just to choke something out of students between the hemming ums, ers, uhs, ya knows, and I dunnos. I’d like to say that I’m astonished that researchers could provide cover for not bothering to learn, relieving both teachers and students of the arduous work needed to develop competence, if not mastery. That devaluation tracks directly from teachers and administrators to students and parents, the latter of whom would rather argue for their grades than earn them. Pity the fools who grub for grades without actually learning and are left holding worthless diplomas and degrees — certificates of nonachievement. In all likelihood, they simply won’t understand their own incompetence because they’ve been told all their lives what special flowers they are.

/rant off

While I’m revisiting old posts, the term digital exhaust came up again in a very interesting article by Shoshana Zuboff called “The Secrets of Surveillance Capitalism,” published in Frankfurter Allgemeine in March 2016. I no longer remember how I came upon it, but its publication in an obscure (to Americans) German newspaper (German and English versions available) was easily located online with a simple title search. The article has certainly not gone viral the way social media trends do, but someone obviously picked it up, promoted it, and raised it to the level of awareness of lots of folks, including me.

My earlier remarks about digital exhaust were that the sheer volume of information produced and exchanged across digital media quickly becomes unmanageable, with the result that much of it becomes pointless ephemera — the useless part of the signal-to-noise ratio. Further, I warned that by turning our attention to digital sources of information of dubious value, quality, and authority, we face an epistemological shift that could take considerable hindsight to describe accurately. That was 2007. It may not yet be long enough to understand the effect(s) very well, or to crystallize the moment (to reuse my pet phrase yet again), but the picture is already clarifying, somewhat terrifyingly.
