
Continuing my book-blogging project on Orality and Literacy, Ong provides context for the oral tradition that surrounded the two great Homeric classics: The Iliad and The Odyssey. According to Ong, it took decades for literary critics and sociologists to overcome their bias, born of literacy, and recognize how formulaic the two epics are. They are essentially pastiches of commonplace plots, phrases, and sayings of the time, which was a notable strength when oral delivery based on memorization was how epic poetry was transmitted. In a literate era, such clichés are to be avoided (like the plague).

Aside: my review of David Sirota’s Back to Our Future mentions the dialect he and his brother developed, filled with one-liners and catchphrases from entertainment media, especially TV and movies. The three-word (also three-syllable) form seems to be optimal: “Beam me up” (Star Trek), “Use the Force” (Star Wars), “Make my day” (Dirty Harry), “I’ll be back” (The Terminator), etc. This construction is short, punchy, and memorable. The first holder of high office in the U.S. to attempt to govern by catchphrase was probably Ronald Reagan, followed (of course) by Arnold Schwarzenegger and then Donald Trump. Mustn’t overlook that all three (and others) came to prominence via the entertainment industry rather than through earnest (Kennedyesque) public service. Trump’s numerous three-word phrases (shtick, really) lend themselves especially well to being chanted by adoring crowds at his pep rallies, swept up in groupthink, with a recognizable beat-beat-beat-(silence) structure. The rock band Queen stumbled upon this same elemental rhythm with its famous stomp-stomp-clap-(wait) from the anthem “We Will Rock You,” consciously intended for audience participation (as I understand it).

Further aside: “We Will Rock You” combines its iconic rhythm with a recitation tone sourced in antiquity. Make of that what you will.

Ong goes on to provide a discussion of the psychodynamics of orality, which I list here without substantive discussion (read for yourself):

  • orality is additive rather than subordinative
  • orality is aggregative rather than analytic
  • orality is redundant or copious
  • orality is conservative or traditionalist
  • orality is close to the human lifeworld
  • orality is agonistically toned
  • orality is empathetic and participatory rather than objectively distanced
  • orality is homeostatic
  • orality is situational rather than abstract

Of particular interest is Ong’s description of how language functions within oral cultures differently than it does within literate cultures, which is the source of the bias mentioned above. To wit:

Fully literate persons can only with great difficulty imagine what a primary oral culture is like, that is, a culture with no knowledge whatsoever of writing or even the possibility of writing … In a primary oral culture, the expression ‘to look up something’ is an empty phrase … [w]ithout writing, words as such have no visual presence, even when the objects they represent are visual … [for] ‘primitive’ (oral) people … language is a mode of action and not simply a countersign of thought — oral people commonly, and probably universally, consider words to have great power. [pp. 31–32]

If this sounds conspicuously reminiscent of this previous post, well, congratulations on connecting the dots. The whole point, according to a certain perspective, is that words are capable of violence, an idea that is (re)gaining adherents as our mental frameworks undergo continuous revision. It’s no small thing that slurs, insults, and fighting words (again) provoke offense and violent response and that mere verbal offense equates to violence. Not long ago, nasty words were reclaimed, nullified, and thus made impotent (subject to varying and sometimes irrational rules of usage). Well, now they sting again and are used as ammo to cancel (a form of administrative violence, often undertaken anonymously, bureaucratically, and with the assistance of the digital mob) anyone lacking the proper credentials to deploy them.

Let me draw another connection. Here’s a curious quote by Walter Pater, a writer no longer well known:

All art constantly aspires towards the condition of music. For while in all other kinds of art it is possible to distinguish the matter from the form, and the understanding can always make this distinction, yet it is the constant effort of art to obliterate it.

Put another way, the separation of signifier from signified, an abstraction conditioned by literacy and rationalism (among other things), is removed (“obliterated”) by music, which connects to emotion more directly than representational art. Similarly, speech within primary oral cultures exists purely as sound and possesses an ephemeral, even evanescent (Ong’s term), quality only experienced in the flow of time. (Arguably, all of human experience takes place within the flow of time.) Music and “primitive” speech are accordingly dynamic and cannot be reduced to static snapshots, that is, fixed on a page as text or committed to a canvas or photograph as a still image (hence, the strange term still life). That’s why a three-word, three-syllable chant, or better yet, the Queen rhythm or the Wave in sports arenas (a gesture requiring nearly everyone’s participation), can possess inherent power, especially as individuals are entrained in groupthink. Music and words-as-violence get inside us and are nearly wholly subjective, not objective — something we all experience organically in early childhood before being taught to read and write (if in fact those skills are learned beyond functional literacy). Does that mean culture is reverting to an earlier stage of development, more primitive, childlike, and irrational?

While working, I half listen to a variety of podcasts via YouTube, usually minimizing the window so that I don’t see the video. Some report that long-haul truckers are also avid podcast listeners (presumably discarding AM radio); who knows? At any rate, I find it dispiriting that nearly every podcast has attracted sponsors and now features unavoidable, in-your-face advertising on top of ubiquitous exhortations to like, subscribe, ring the bell, and buy merch. Ads are sometimes read live, no longer being prerecorded bits during regular commercial breaks. Segues into ad reads are often tortured, with tastelessness being an inverted badge of honor somehow.

I get that for those who have made podcasting their primary income, opining on anything and everything ad nauseam (sorta like me, actually), sponsorship is what keeps them stocked with peanut butter. Why do I still tune in? Well, some are actually entertaining, while others are exceptional clearinghouses for information I wouldn’t otherwise gather — at least when not pedantic and irritating. Good thing I’m only half listening. Case in point: a few weeks back, the DarkHorse Podcast (no link) announced it would begin doing ads, but to make the bitter pill easier to swallow, free endorsements (unpaid ads) would also be presented. Right … more of what I don’t want. In characteristic fashion, the two hosts beat that damn horse well into the afterlife, softening none of the irksome content (at least for me). Although legacy media (e.g., radio, TV, magazines, newsprint) has always required forfeiting some part of one’s time and attention to ignoring or filtering out ads, streaming services and online blockers have done away with much of the unwanted marketing. Perhaps that’s why I’m exasperated at it now being unavoidable again.

With this in mind, here’s my promise to you, dear reader: I will never monetize this blog or put it behind a paywall. I won’t even put up a tip jar or coffee mug to entice micropayments. The blog will also never connect to Facebook or Twitter or any other platform. This blog is totally free and unencumbered (except the ads WordPress puts in, which are relatively easy to dismiss and/or circumvent). Maybe I’m fortunate that I earn my living elsewhere and disavow any desire to be a pundit, influencer, or media figure. Those folks are uniformly unenviable, especially when distorted by their own celebrity so that they forget who they are. Instead, this blog will remain what it’s always been: a venue for me to work out my ideas and secondarily share them.

For more than a decade, I’ve had in the back of my mind a blog post called “The Power of Naming” to remark that bestowing a name gives something power, substance, and in a sense, reality. That post never really came together, but its inverse did. Anyway, here’s a renewed attempt.

The period of language acquisition in early childhood is suffused with learning the names of things, most of which is passive. Names of animals (associated closely with the sounds they make) often receive special focus via picture books. The kitty, doggie, and horsie eventually become the cat, dog, and horse. Similarly, the moo-cow and the tweety-bird shorten to cow and bird (though songbird may be an acceptable holdover). Words in the abstract are signifiers of the actual things, aided by the text symbols learned in literate cultures to reinforce mere categories instead of examples grounded in reality. Multiply the names of things several hundred thousand times into adulthood and indeed throughout life and one can develop a formidable vocabulary supporting expressive and nuanced thought and speech. Do you know the differences between acute, right, obtuse, straight, and reflex angles? Does it matter? Does your knowledge of barware inform when to use a flute, coupe, snifter, shot (or shooter or caballito), nosing glass (or Glencairn), tumbler, tankard, goblet, sling, and stein? I’d say you’ve missed something by never having drunk dark beer (Ger.: Schwarzbier) from a frosted schooner. All these varieties developed for reasons that remain invisible to someone content to drink everything from the venerable red Solo cup. Funnily enough, the red Solo cup now comes in different versions, fooling precisely no one.
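As a brief, purely illustrative aside (mine, not Ong’s), the angle names above are just labels for ranges of degrees, which a few lines of Python make plain. The function name and the exact cutoffs below are my own choices for this sketch, not anything from the post:

    # Toy sketch: angle names as labels for numeric ranges (degrees).
    def angle_name(degrees: float) -> str:
        d = degrees % 360
        if d == 0:
            return "zero (or full) angle"
        if d < 90:
            return "acute"
        if d == 90:
            return "right"
        if d < 180:
            return "obtuse"
        if d == 180:
            return "straight"
        return "reflex"  # greater than 180 and less than 360 degrees

    print(angle_name(45))   # acute
    print(angle_name(135))  # obtuse
    print(angle_name(270))  # reflex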

Returning to book blogging, Walter Ong (in Orality and Literacy) offers curious comparisons between primarily oral cultures and literate cultures. For example:

Oral people commonly think of names (one kind of words) as conveying power over things. Explanations of Adam’s naming of the animals in Genesis 2:20 usually call condescending attention to this presumably quaint archaic belief. Such a belief is in fact far less quaint than it seems to unreflective chirographic and typographic folk. First of all, names do give human beings power over what they name: without learning a vast store of names, one is simply powerless to understand, for example, chemistry and to practice chemical engineering. And so with all other intellectual knowledge. Secondly, chirographic and typographic folk tend to think of names as labels, written or printed tags imaginatively affixed to an object named. Oral folk have no sense of a name as a tag, for they have no idea of a name as something that can be seen. Written or printed representations of words can be labels; real, spoken words cannot be. [p. 33]

This gets at something that has been developing over the past few decades, namely, that as otherwise literate (or functionally literate) people gather more and more information through electronic media (screens that serve broadcast and cable TV, YouTube videos, prerecorded news for streaming, podcasts, and most importantly, audiobooks — all of which speak content to listeners), the spoken word (re)gains primacy and the printed word fades into disuse. Electronic media may produce a hybrid of orality/literacy, but words are no longer silent, internal, and abstract. Indeed, words — all by themselves — are understood as being capable of violence. Gone are the days when “sticks and stones ….” Now, fighting words incite and insults sting again.

Not so long ago, it was possible to provoke a duel with an insult or gesture, such as a glove across the face. Among some people, defense of honor never really disappeared (though dueling did). History has taken a strange turn, however. Proposed legislation to criminalize deadnaming (presumably to protect a small but growing number of transgender and nonbinary people who have redefined their gender identity and accordingly adopted different names) recognizes the violence of words but then tries to transmute the offense into an abstract criminal law. It’s deeply mixed up, and I don’t have the patience to sort it out.

More to say in later blog posts, but I’ll raise the Counter-Enlightenment once more to say that the nature of modern consciousness is shifting somewhat radically in response to stimuli and pressures that grew out of an information environment, roughly 70 years old now but transformed even more fundamentally in the last 25 years, that is substantially discontinuous from centuries-old traditions. Those traditions displaced even older traditions inherited from antiquity. Such is the way of the world, I suppose, and with the benefit of Walter Ong’s insights, my appreciation of the outlines is taking better shape.

Wanted to provide an update to the previous post in my book-blogging project on Walter Ong’s Orality and Literacy to correct something that wasn’t clear to me at first. The term chirographic refers to writing, but I conflated writing more generally with literacy. Ong actually distinguishes chirographic (writing) from typographic (type or print) and includes another category: electronic media.

Jack Goody … has convincingly shown how shifts hitherto labeled as shifts from magic to science, or from the so-called ‘prelogical’ to the more and more ‘rational’ state of consciousness, or from Lévi-Strauss’s ‘savage’ mind to domesticated thought, can be more economically and cogently explained as shifts from orality to various stages of literacy … Marshall McLuhan’s … cardinal gnomic saying, ‘The medium is the message’, registered his acute awareness of the importance of the shift from orality through literacy and print to electronic media. [pp. 28–29]

So the book’s primary contrast is between orality and literacy, but literacy has a sequence of historical developments: chirographic, typographic, and electronic media. These stages are not used interchangeably by Ong. Indeed, they exist simultaneously in the modern world and all contribute to overall literacy while each possesses unique characteristics. For instance, reading from handwriting (printing or cursive, the latter far less widely used now except for signatures) is different from reading from print on paper or on the screen. Further, writing by hand, typing on a typewriter, typing into a word-processor, and composing text on a smartphone each has its effects on mental processes and outputs. Ong also mentions remnants of orality that have not yet been fully extinguished. So the exact mindset or style of consciousness derived from orality vs. literacy is neither fixed nor established universally but contains aspects from each category and subcategory.

Ong also takes a swing at Julian Jaynes. Considering that Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (1977) (see this overview) was published only five years prior to Orality and Literacy (1982), the impact of Jaynes’ thesis must have still been felt quite strongly (as it is now among some thinkers). Yet Ong disposes of Jaynes rather parsimoniously, stating

… if attention to sophisticated orality-literacy contrasts is growing in some circles, it is still relatively rare in many fields where it could be helpful. For example, the early and late stages of consciousness which Julian Jaynes (1977) describes and relates to neuro-physiological changes to the bicameral mind would also appear to lend themselves largely to much simpler and more verifiable descriptions in terms of a shift from orality to literacy. [p. 29]

In light of the details above, it’s probably not accurate to say (as I did before) that we are returning to orality from literacy. Rather, the synthesis of characteristics is shifting, as it always has, in relation to new stimuli and media. Since the advent of cinema and TV — the first screens, now supplemented by the computer and smartphone — the way humans consume information is undergoing yet another shift. Or perhaps it’s better to conclude that it’s always been shifting, not unlike how we have always been and are still evolving, though the timescales are usually too slow to observe without specialized training and analysis. Shifts in consciousness arguably occur far more quickly than biological evolution, and the rate at which new superstimuli are introduced into the information environment suggests radical discontinuity with even the recent past — something that used to be called the generation gap.

I’ve always wondered what media theorists such as McLuhan (d. 1980), Neil Postman (d. 2003), and now Ong (d. 2003) would make of the 21st century had they lived long enough to witness what has been happening, with 2014–2015 being the significant inflection point according to Jonathan Haidt. (No doubt there are other media theorists working on this issue who have not risen to my attention.) Numerous other analyses point instead to the early 20th century as the era when industrial civilization harnessed fossil fuels and turned the mechanisms and technologies of innovators decidedly against humanity. Pick your branching point.

Consider the acceleration of practically everything in the late-modern world (postmodern refers to something quite different), which makes planning one’s higher education somewhat fraught if the subject matter studied is rendered flatly out-of-date or moribund by the time of graduation or entry into the workforce. Accordingly, I’ve heard it recommended that expertise in any particular subject area may be less important than developing expertise in at least one subject that takes a systems approach. That system might be language and communications, mathematics (or any other hard science), history, economics and finance, business administration, computer coding, law and governance, etc. So long as a rigorous understanding of procedures and rules is developed, a structuralist mindset can be repeated and transferred into other subject areas. Be careful, however, not to conflate this approach with a liberal arts education, which is sometimes described as learning how to learn and is widely applicable across disciplines. The liberal arts have fallen distinctly out of favor in the highly technological and technocratic world, which cares little for human values resistant to quantification. Problem is, Western societies in particular are based on liberal democratic institutions now straining due to their sclerotic old age. And because a liberal arts education is scarcely undertaken anymore, civics and citizenship are no longer taught. Even the study of English has now been corrupted (postmodern does apply here) to the point that the basic liberal arts skill of critical thinking is being lost through attrition. Nowhere is that more abundantly clear than in bristling debate over free speech and censorship.

Aside. Although society tinkers and refines itself (sometimes declines) over time, a great body of cultural inheritance informs how things are done properly within an ideology or system. When tinkering and refinement become outright intransigence and defiance of an established order, it’s commonplace to hear the objection “but that’s not how _______ works.” For instance, debate over climate science or the utility of vaccines often has one party proclaiming “trust [or believe] the science.” However, that’s not how science works (i.e., through unquestioning trust or belief). The scientific method properly understood includes verification, falsification, and revision when results and assertions fail to establish reasonable certainty (not the same as consensus). Similarly, critical thinking includes a robust falsification check before “facts” can be accepted at face value. So-called “critical studies” (a/k/a grievance studies), like religious faith, typically positions bald assertions beyond the reach of falsification. Well, sorry, that’s not how critical thinking works.

Being older and educated before critical studies were fully legitimized (or gave rise to things as risible as feminist glaciology), my understanding has always been that free speech and other rights are absolutes that cannot be sliced and diced into bits. That way lies casuistry, where law founders frequently. Thus, if one wishes, say, to trample or burn the U.S. flag in protest, no law can be passed or constitutional amendment enacted to carve out an exception disallowing that instance of dissenting free speech. A lesser example is kneeling silently rather than participating in singing the national anthem before a sporting event. Though offensive to certain individuals’ sensibilities, silencing speech is far worse according to liberal democratic values. Whatever our ideological or political differences are, we cannot work them out when one party has the power to place topics out of bounds or remove others from discussion entirely. The point at which spirited debate crosses over into inciting violence or fomenting insurrection is a large gray area, which is the subject of the second impeachment of 45. Civil law covers such contingencies, so abridging free speech, deplatforming, and adopting the formulation “language is violence” are highly improper responses under the liberal form of government codified in the U.S. Constitution, which includes the Bill of Rights (originally omitted but quickly added to articulate those rights fully).

Liberal democratic ideology arose in mercantile, substantially agrarian Western societies before scientific, industrial, and capitalist revolutions built a full head of steam, so to speak. Considering just how much America has developed since the Colonial Period, it’s no surprise society has outgrown its own founding documents. More pointedly, the intellectual commons was a much smaller environment, often restricted to a soapbox in the town square and the availability of books, periodicals, and broadsides. Today, the public square has moved online to a bewildering array of social media platforms that enable publication of one’s ideas well beyond the sound of one’s voice over a crowd or the bottleneck of a publisher’s printing press. It’s an entirely new development, and civil law has not kept pace. Whether Internet communications are regulated like the airwaves or nationalized like the U.S. military, it’s clear that the Wild West uber-democratic approach (where anyone can basically say anything) has failed. Demands for regulation (restrictions on free speech) are being taken seriously and acted upon by the private corporations that run social media platforms. During this interim phase, it’s easy for me, as a subscriber to liberal democratic values, to insist reflexively on free speech absolutism. The apparent mood of the public lies elsewhere.

Evil exists in the world. History and current events both bear this out amply. Pseudo-philosophers might argue that, like emotions and other immaterial sensations, good and evil are merely reified concepts, meaning they are human constructs with no palpable external reality. Go tell that to victims of evildoers. Human suffering can’t be anonymized, rationalized, or philosophized away quite so handily.

It was sort of refreshing, back in the day, when Google’s motto and/or corporate code of conduct was simple: “Don’t Be Evil.” It acknowledged the potential for being or becoming evil (like any of the Bigs: Big Tobacco, Big Soda, Big Oil, Big Pharma, Big Media, Big Agriculture, etc.) and presumably aspired to resist obvious temptations. That was then (from 2000 to 2018), this is now (2021 until death take us — soon enough, I fear). But like all entities possessed of absurd levels of wealth and power, Google (now reorganized as a subsidiary of Alphabet, but who actually refers to it that way?) and its Silicon Valley brethren have succumbed to temptation and become straight-up evil.

One might charitably assess this development as something unbidden, unanticipated, and unexpected, but that’s no excuse, really. I certainly don’t envy celebrity executives experiencing difficulty resulting from having created unmanageable behemoths loosed on a public and polity unable to recognize the beast’s fangs until they’re already clamped on their necks. As often occurs, dystopian extrapolations are explored in fiction, sometimes satirically. The dénouement of the HBO show Silicon Valley depicts tech mogul wannabes succeeding in creating an AI (or merely a sophisticated algorithm? doesn’t matter …) that would in time become far too powerful in blind execution of its inner imperative. In the show, characters recognize what they had done and kill their own project rather than allow it to destroy the world. In reality, multiple developers of computer tech platforms (and their embedded dynamics, including the wildly unhelpful albeit accurate term algorithm) lacked the foresight to anticipate awful downstream effects of their brainchildren. Yet now that those effects are manifesting recognizably, these corporations continue to operate and wreak havoc.

Silicon Valley shows an extended software development period of bungling ineptitude punctuated by brilliant though momentary breakthroughs. Characters are smart, flawed people laughably unable to get out of the way of their own success. The pièce de résistance was yoking one so-called “learning machine” to another and initiating what would become a runaway doomsday process (either like ecological collapse, building slowly then making the biosphere uninhabitable all at once, or like the gray goo problem, progressively “processing” biomass at the molecular level until all that remains is lifeless goo). It was a final act of bumbling that demanded the characters’ principled, ethical response before the window of opportunity closed. Real Silicon Valley tech platforms are in the (ongoing) process of rending the social fabric, which is no laughing matter. The issue du jour surrounds free speech and its inverse, censorship. More broadly, real Silicon Valley succeeded in gaming human psychology for profit in at least two aspects (could be more as yet unrecognized): (1) mining behavioral data as an exploitable resource, and (2) delivering inexhaustible streams of extremely divisive content (not its own) to drive persistent engagement with its platforms. Yoked together, they operate to drive society mad, and yet, mounting evidence of this development has not produced even an inkling that maybe the damned doomsday devices ought to be shut off. As with the environment, we operate with freedom enough to destroy ourselves. Instead, politicians issue stunningly ineffectual calls for regulation or break-up of monopolies. In the meantime, ever more absurd wealth and power are concentrated in the hands of a few executives who have clearly punted and decided “let’s be evil.” No restraints on their behavioral experimentation across whole societies exist.

Much more to say on this topic in additional parts to come.

Already widely reported but only just having come to my awareness is an initiative by Rolling Stone to establish a Culture Council: “an Invitation-Only Community of Influencers, Innovators, and Creatives.” The flattering terms tastemakers and thought leaders are also used. One must presume that submissions will be promotional and propaganda pieces masquerading as news articles. Selling advertising disguised as news is an old practice, but the ad usually has the notation “advertisement” somewhere on the page. Who knows whether submissions will be subject to editorial review?

To be considered for membership, candidates must sit in a senior-level position at a company generating at least $500K in annual revenue or have obtained at least $1M in total institutional funding.

Rolling Stone’s website doesn’t say it anywhere I can locate, but third-party reports indicate that members pay either a $1,500 annual fee and $500 submission fee (one-time? repeat?) or a flat $2,000 submission fee. Not certain which. Just to be abundantly clear, fees would be paid by the submitter to the magazine, reversing how published content is normally acquired (i.e., by paying staff writers and freelancers). I’d say this move by Rolling Stone is unprecedented, but of course, it’s not. However, it is a more brazen pay-to-play scheme than most and may be a harbinger of even worse developments to come.

Without describing fully how creative content (arts and news) was supported in the past, I will at least observe that prior to the rise of full-time creative professions in the 18th and 19th centuries (those able to scratch out a living on commissions and royalties), creative work was either a labor of love/dedication, typically remunerated very poorly if at all, or was undertaken through the patronage of wealthy European monarchs, aristocrats, and religious institutions (at least in the developing West). Unless I’m mistaken, self-sustaining news organizations and magazines came later. More recent developments include video news releases and crowdsourcing, the latter of which is sometimes accomplished under the pretense of running contests. The creative commons is how many now operate (including me — I’ve refused to monetize my blog), and it is exploited ruthlessly by HuffPost (an infotainment source I ignore entirely), which (correct me if wrong) doesn’t pay for content but offers exposure as an inducement to journalists trying to develop a byline and/or audience. Podcasts, YouTube channels, and news sites also offer a variety of subscription, membership, and voluntary patronage (tipping) schemes to pay the bills (or hit it big if an outlier). Thus, business models have changed considerably over time and are in the midst of another major transformation, especially for news-gathering organizations and the music recording industry in marked retreat from their former positions.

Rolling Stone had always been a niche publication specializing in content that falls outside my usual scope of interest. I read Matt Taibbi’s reporting that appeared in Rolling Stone, but the magazine’s imprint (read: reputation) was not the draw. Now that Rolling Stone is openly soliciting content through paid membership in the Culture Council, well, the magazine sinks past irrelevance into active avoidance.

It’s always been difficult to separate advertising and propaganda from reliable news, and some don’t find it important to keep these categories discrete, but this new initiative is begging to be gamed by motivated PR hacks and self-promoters with sufficient cash to burn. It’s essentially Rolling Stone whoring itself out. Perhaps more worrying is that others will inevitably follow Rolling Stone’s example and sell their journalistic integrity with similar programs, effectively putting the final nails in their own coffins (via brand self-destruction). The models in this respect are cozy, incestuous relationships between PACs, lobbying groups, think tanks, and political campaigns. One might assume that legacy publications such as Rolling Stone would have the good sense to retain as much of their valuable brand identity as possible, but the relentless force of corporate/capitalist dynamics is corrupting even the incorruptible.

I simply can’t keep up with all the reading, viewing, and listening in my queue. Waking hours are too few, and concentration dissipates long before sleep overtakes. Accordingly, it’s much easier to settle into couch-potato mode and watch some mindless drivel, such as the Netflix hit Bridgerton, binged in two sittings. (Unlike cinema critics, I’m not bothered especially by continuity errors, plot holes, clunky dialogue, weak character motivation, gaps of logic, or glossy decadence of the fictional worlds. I am bothered by the Kafka trap sprung on anyone who notices casting decisions that defy time and place — an ill-advised but now commonplace historical revisionism like editing Mark Twain.) As a result, blog posts are less frequent than they might otherwise be as I pronounce upon American (or more broadly, Western) culture, trying vainly to absorb it as a continuously moving target. Calls to mind the phrase Après moi, le déluge, except that there is no need to wait. A deluge of entertainment, news, analysis, punditry, and trolling has buried everyone already. So rather than the more careful consideration I prefer to post, here are some hot takes.

The Irregular Aphorist. Caitlin Johnstone offers many trenchant observations in the form of aphorisms (some of which I’ve quoted before), all gathered under the subtitle Notes From The Edge Of The Narrative Matrix. The modifier irregular only means that aphorisms are a regular but not constant feature. Her site doesn’t have a tag to that effect but probably ought to. Here’s one in particular that caught my attention:

Everything our species has tried has led us to a dying world and a society that is stark raving mad, so nobody is in any position to tell you that you are wrong.

Twin truths here are (1) the dying world and (2) societal madness, both of which I’ve been describing for some time. Glad when others recognize them, too.

Piling on. Though few still are willing to admit it, nonpharmaceutical interventions (NPIs, e.g., distancing, masks, and lockdowns) to stall or reduce the spread of the virus failed to achieve their objectives according to this study. Instead, NPIs piled on suffering no one could forestall. I read somewhere (no link) that the world is approaching half of the total, cumulative deaths/infections predicted had nothing been done to impede the pandemic running its course. Adding in deaths of despair (numbers not entirely up to date), it’s clear we’re using the wrong tools to fight the wrong battle. Of course, interventions opened up giant opportunities for power grabs and vulture capitalism, so the cynic in me shrugs and wonders half aloud “what did you expect, really?”

Growth of the Managerial Bureaucracy. A blog called Easily Distracted by Timothy Burke (never on my blogroll) publishes only a few times per year, but his analysis is terrific — at least when it doesn’t wind up being overlong and inconclusive. Since a student debt jubilee is back in the news (plenty of arguments pro and con), unintended consequences are anticipated in this quote:

When you set out to create elaborate tiers that segregate the deserving poor from the comfortable middle-class and the truly wealthy, you create a system that requires a massive bureaucracy to administer and a process that forces people into petitionary humiliation in order to verify their eligibility. You create byzantine cutoff points that become business opportunities for predatory rentiers.

Something similar may well be occurring with stimulus checks being issued pro rata (has anyone actually gotten one?), but at least we’re spared any petitionary humiliations. We get whatever the algorithms (byzantine cutoff points) dictate. How those funds will be gamed and attached is not yet clear. Stay alert.

No Defense of Free Speech. Alan Jacobs often recommends deleting, unsubscribing, and/or ignoring social media accounts (after his own long love-hate relationship with them) considering how they have become wholly toxic to a balanced psyche as well as principal enablers of surveillance capitalism and narrative control. However, in an article about the manorial elite, he’s completely lost the plot, namely, that absolutism is required in defense of free speech. It’s not sufficient to be blasé or even relieved when 45 is kicked off Twitter permanently or when multiple parties conspire to kill Parler. Establishing your own turf beyond the reach of Silicon Valley censors is a nice idea but frankly impractical. Isn’t that what whoever ran Parler (or posted there) must have thought? And besides, fencing off the digital commons these very entities created has catapulted them into the unenviable position of undemocratic, unelected wielders of monopolistic power and co-conspirators to boot. That’s what needs to be curtailed, not free speech.

The Taxonomic Apocalypse. Although drawn from fiction and thus largely hypothetical, a new book (coming late 2021) by Adam Roberts called It’s the End of the World: But What Are We Really Afraid Of? surveys doomsday stories and categorizes different versions of how it all ends. Alan Jacobs (yeah, him again — must have an advance copy of the manuscript) recommends it as “a delightful and provocative little book” but fails to grok two things: (1) these stories are rehearsals-cum-preparations for the real thing, and (2) the real thing really is bearing down on us implacably and so is no longer a mere hypothetical to contemplate and categorize for shits and grins. Despite acceptance of the eventualities that await all of us, reading Roberts’ taxonomy is not something I would expect to find delightful. Skip.

Narrative Collapse. Ran Prieur (no link) sometimes makes statements revealing an unexpected god’s-eye view:

[45] is a mean rich kid who figured out that if he does a good Archie Bunker impression, every lost soul with an authoritarian father will think he’s the messiah. We’re lucky that he cares only about himself, instead of having some crazy utopian agenda. But the power, and the agency, is with the disaffected citizens of a declining empire, tasting barbarism.

This is all about people wanting to be part of a group that’s part of a story. Lately, some of the big group-stories have been dying: sky father religion, American supremacy, the conquest of nature, the virtue of wealth-seeking. In their place, young and clumsy group-stories struggle and rise.

Collapse of certain fundamental stories that animate our thinking is at the core of The Spiral Staircase (see About Brutus at top), though it’s often couched in terms of consciousness in transition. Getting through the transition (only temporarily, see previous item in list) probably means completion of the Counter-Enlightenment historical arc, which necessarily includes further descent into barbarism.

Hail Mary for Individualism. I always take special notice when someone cites Allan Bloom. Alan Jacobs (um, yeah, he’s prolific and I’m using his ideas again — sue me) cites Bloom to argue that individualism or the sovereign self, a product of the Enlightenment, is already dead. No doubt, the thought-world described so ably by Bloom no longer exists, but individualism has not yet died out by attrition or been fully dissolved in nonduality. Many of us born before the advent of the Internet retain selfhood and authenticity not yet coopted by or incorporated into mass mind. Moreover, ongoing struggles over identity (e.g., gender, sexual orientation, and race, which are often used improperly to define the self) result from an inchoate sense that individualism is eroding precipitously, not that it’s already passé. Defiant attempts to (re)establish an authentic self (contravening all logic and becoming critical theory of one sort or another) in the face of this loss may well be a last-ditch effort to save the self, but it’s failing.

David Sirota, author of Back to Our Future: How the 1980s Explain the World We Live in Now — Our Culture, Our Politics, Our Everything (2011), came to my attention (how else?) through a podcast. He riffed pretty entertainingly on his book, now roughly one decade old, like a rock ‘n’ roller stuck (re)playing his or her greatest hits into dotage. However, his thesis was strong and appealing enough that I picked up a copy (read: borrowed from the library) to investigate despite the datedness of the book (and my tardiness). It promised to be an easy read.

Sirota’s basic thesis is that memes and meme complexes (a/k/a memeplexes, though Sirota never uses the term meme) developed in the 80s and deployed through a combination of information and entertainment media (thus, infotainment) form the narrative background we take for granted in the early part of the 21st century. Children fed a steady diet of clichés, catchphrases, one-liners, archetypes, and story plots have now grown to adulthood and are scarcely able to peer behind the curtain to question the legitimacy or subtext of the narrative shapes and distortions imbibed during childhood like mother’s milk. The table of contents lists four parts (boldface section titles are Sirota’s; descriptive text is mine):

  • Liking Ike, Hating Woodstock. How the 50s and 60s decades were (the first?) assigned reductive demographic signifiers, handily ignoring the true diversity of experience during those decades. More specifically, the boom-boom 50s (economics, births) were recalled nostalgically in 80s TV and films while the 60s were recast as being all about those dirty, hairy hippies and their music, drugs, and sexual licentiousness, all of which had to be invalidated somehow to regain lost wholesomeness. The one-man promotional vehicle for this pleasing self-deception was Michael J. Fox, whose screen personae (TV and film) during the 80s (glorifying the 50s but openly shitting on the 60s) were instrumental in reforming attitudes about our mixed history.
  • The Jump Man Chronicles. How the Great Man Theory of History was developed through glorification of heroes, rogues, mavericks, and iconoclasts who came into their own during the 80s. That one-man vehicle was Michael Jordan, whose talents and personal magnetism were so outsized that everyone aspired to be “like Mike,” which is to say, a superhero elevated beyond mere mortal rules and thus immortalized. The effect was duplicated many times over in popular culture, with various entertainment icons and political operatives subverting thoughtful consideration of real-world problems in favor of jingoistic portrayals.
  • Why We (Continue to) Fight. How the U.S. military was rehabilitated after losing the Vietnam War, gifting us with today’s hypermilitarism and permanent wars. Two principal tropes were deployed to shape public opinion: the Legend of the Spat upon Veteran and the Hands Tied Behind Their Backs Myth. Each was trotted out reliably whenever we needed to misremember our past as fictionalized in the 80s.
  • The Huxtable Effect. How “America’s dad” helped accommodate race relations to white anxiety, primarily to sell a TV show. In contrast with various “ghetto TV” shows of the 70s that depicted urban working poor (various ethnicities), The Cosby Show presented an upscale black family who transcended race by simply ignoring the issue — a privilege of wealth and celebrity. The Obama campaign and subsequent administration copied this approach, pretending American society had become postracial despite his never truly being able to escape the modifier black because the default (no modifier needed) in America is always white. This is the most fraught part of the book, demonstrating that despite whatever instructions we get from entertainment media and pundits, we remain stuck in an unresolved, unhealed, inescapable trap.


/rant on

Remember all those folks in the weeks and days preceding election day on November 3, 2020, who were buying guns, ammo, and other provisions in preparation for civil breakdown? (No one known personally, of course, and gawd no not actually any of us, either; just them other others who don’t read blogs or anything else.) Well, maybe they were correct in adopting the precautionary principle (notably absent from a host of other perils besetting us). But as of this writing, nothing remotely resembling widespread disruption — feared by some, hotly anticipated by others — has developed. But wait! There’s still time. Considering Americans were set up by both political parties to distrust the outcome of the presidential race no matter which candidate claimed to have prevailed, we now face weeks or months of legal challenges and impatient agitators (again, both sides) demanding their candidate be declared the winner (now, dammit!) by the courts instead of either official ballot-counters or the liberal-biased MSM. To say our institutions have failed us, and further, that political operatives all the way up to the sitting president have been openly fomenting violence in the streets, is a statement of the obvious.

Among my concerns more pressing than who gets to sit in the big chair, however, is the whipsawing stock market. Although no longer an accurate proxy of overall economic health or asset valuation, the stock market reacts thoroughly irrationally each day to every rumor of, say, a vaccine for the raging coronavirus, or resumption of full economic activity and profitability despite widespread joblessness, renewed lockdowns, and a massive wave of homelessness in the offing due to bankruptcies, evictions, and foreclosures. None of this bodes well for the short-term future and maintenance of, oh, I dunno, supply lines to grocery stores. Indeed, I suspect we are rapidly approaching our very own Minsky Moment, which Wikipedia describes as “a sudden, major collapse of asset values which marks the end of the growth phase of a cycle in credit markets or business activity” [underlying links omitted]. This is another prospective event (overdue, actually) for which the set-up has been long prepared. Conspiratorial types call it “the great reset” — something quite different from a debt jubilee.

Lazy thinkers reaching for rhyming comparisons with the past frequently resort to calling someone a Nazi (or the new Hitler) or reminding everyone of U.S. chattel slavery. At the risk of being accused of similar stupidity, I suggest that we’re not on the eve of a 1929-style market crash and ensuing second great depression (though those could well happen, too, bread lines having already formed in 2020) but are instead poised at the precipice of hyperinflation and intense humiliation akin to the Weimar Republic in 1923 or so. American humiliation will result from recognition that the U.S. is now a failed state and doesn’t even pretend anymore to look after its citizens or the commonweal. Look no further than the two preposterous presidential candidates, neither of whom made any campaign promises to improve the lives of average Americans. Rather, the state has been captured by kleptocrats. Accordingly, no more American exceptionalism and no more lying to ourselves about how we’re the model for the rest of the world to admire and emulate.

Like Germany in the 1930s, the U.S. has also suffered military defeats and stagnation (perhaps by design) and currently demonstrates a marked inability to manage itself economically, politically, or culturally. Indeed, the American people may well be ungovernable at this point, nourished on a thin gruel of rugged individualism that forestalls our coming together to address adversity effectively. The possibility of another faux-populist savior arising out of necessity only to lead us over the edge (see the Great Man Theory of history) seems eerily likely, though the specific form that descent into madness would take is unclear. Recent history already indicates a deeply divided American citizenry having lost its collective mind but not yet having gone fully apeshit, flinging feces and destroying what remains of economically ravaged communities for the sheer sport of it. (I’ve never understood vandalism.) That’s what everyone was preparing for with emergency guns, ammo, and provisions. How narrowly we escaped catastrophe (or merely delayed it) should be clear in the fullness of time.

/rant off