Posts Tagged ‘Memes’

“Language is dynamic” is a phrase invoked in praise or derision of shifts in usage. Corollaries include “the only constant is change” and “time’s arrow points in only one direction” — both signalling that stasis is an invalid and ultimately futile conservative value. The flip side might well be the myth of progress, understood in reference not to technological advancement but to human nature’s failure to rise above its base (animal) origins. This failure is especially grotesque considering that humans currently, albeit temporarily, live in an age of material abundance that would provide amply for everyone if that largesse were justly and equitably produced and distributed. However, resources (including labor) are being systematically exploited, diverted, and hoarded by a small, unethical elite (what some call “alpha chimps”) who often use state power to subjugate vulnerable populations to funnel further tribute to the already obscenely wealthy top of the socioeconomic hierarchy. But that’s a different diatribe.

Although I’m sensitive to the dynamism of language — especially terms for broad ideas in need of short, snappy neologisms — I’m resistant to adopting most new coin. For instance, multiple colors of pill (red, blue, white, and black to my knowledge) refer to certain narrative complexes that people, in effect, swallow. Similarly, the “blue church” is used to refer to legacy media struggling desperately (and failing) to retain its last shreds of legitimacy and authority. (Dignity is long gone.) Does language really need these terms or are hipsters just being clever? That question probably lacks a definitive answer.

My real interest with this blog post, however, is how the modern digital mediascape has given rise to a curious phenomenon associated with cancel culture: deletion of tweets and social media posts to scrub one’s past of impropriety as though the tweet or post never happened. (I’ve never deleted a post nor have any plans to.) Silicon Valley hegemons can’t resist getting their piece of the action, too, by applying deeply flawed algorithms to everyone’s content to demonetize, restrict, and/or remove (i.e., censor) offensive opinion that runs counter to (shifting) consensus narratives decided upon in their sole discretion as water carriers for officialdom. Algorithmic dragnets are not effective kludges precisely because thoughts are not synonymous with their online expression; one merely points to the other. Used to be said that the Internet is forever, so wait a reasonable duration before posting or tweeting so that irresponsible behavior (opinion and trolling, mostly) can be tempered. Who knows who possesses technical expertise and access to tweet and video archives other than, say, the Wayback Machine? When a public figure says or does something dumb, a search-and-destroy mission is often launched to resurrect offending and damning past utterance. Of course, scrub-a-dub erasure or deletion is merely another attempt to manage narrative and isn’t a plea for forgiveness, which doesn’t exist in the public sphere anyway except for rehabilitated monsters such as past U.S. presidents a/k/a war criminals. And the Internet isn’t in fact forever; ask an archivist.

Shifting language, shifting records, shifting sentiment, shifting intellectual history are all aspects of culture that develop naturally and inevitably over time. We no longer believe, for instance, in the four elements or geocentrism (a/k/a the Aristotelian-Ptolemaic system; never mind the intransigent Flat Earthers who need not be silenced). Darker aspects of these shifts, however, include the remarkable Orwellian insight that “Who controls the past controls the future: who controls the present controls the past” from the 1949 novel Nineteen Eighty-Four. Here’s the passage for context:

Who controls the past, controls the future: who controls the present, controls the past … The mutability of the past is the central tenet of Ingsoc. Past events, it is argued, have no objective existence, but survive only in written records and in human memories. The past is whatever the records and the memories agree upon. And since the Party is in full control of all records, and in equally full control of the minds of its members, it follows that the past is whatever the Party chooses to make it.

In 2021, the awful lesson is taken to heart by multiple parties (not the Party in the novel but wannabes) who have latched maniacally onto Orwellian mechanisms of thought control specifically through the manipulation of records, history, and language. But as mentioned above, policing mere expression is not the same as policing thought itself, at least among those who retain critical thinking skills and independence of mind. I withhold judgment on how effective attempted brainwashing is with the masses but will at least mention that Yeonmi Park, who escaped from North Korea in 2007 before settling in the U.S. in 2014, describes the chilling totalitarian thought control exercised by the North Korean government — the stuff of nightmare dystopianism. The template is by now well established and despots everywhere are only too happy to implement it repeatedly, following an evil trajectory that should be resisted at every turn while still possible.

In my neighborhood of Chicago, it’s commonplace to see vehicles driving on the road with a giant Puerto Rican flag flying from a pole wedged in each of the rear windows. Often, one of the two flags has its traditional colors (red, white, and blue) changed to black and white — a symbol of resistance. Puerto Rican politics is a complicated nest of issues I don’t know enough about to say more. However, the zeal of my neighbors is notable. Indeed, as I visited a local farmer’s market last weekend, I couldn’t help but notice quite a welcome diversity on display and folks entirely untroubled by the presence of others who didn’t look just like them (tattoos, unnatural hair colors, long beards and shaved heads, nonstandard attire and accoutrements, etc.). I’m actually pleased to see a level of comfort and freedom to present oneself in such manner as one wishes, and not just because of the buzz phrase “diversity and inclusion.” So go ahead: fly your freak flag high! (This same value applies to viewpoint diversity.)

In contrast, when I venture to some far-flung suburb for sundry activities now that lockdowns and restrictions have been lifted, I encounter mostly white, middle-aged, middle-class suburbanites who admittedly look just like me. It’s unclear whether folks in those locales are xenophobic in any way, having withdrawn from city life in all its messiness for a cozy, upscale, crime-free subdivision indistinguishable from the next one over. Maybe that’s an artifact of mid-20th-century white flight, where uniformity of presentation and opinion is the norm. Still, it feels a little weird. (Since the 1980s, some rather well-put-together people have returned to the city center, but that usually requires a king-sized income to purchase a luxury condo in some 50-plus-storey tower. After last summer’s BLM riots, that influx turned again to outflux.) One might guess that, as a visible minority within city confines, I would be more comfortable among my own cohort elsewhere, but that’s not the case. I rather like rubbing elbows with others of diverse backgrounds and plurality of perspectives.

I’ve also grown especially weary of critical race theory being shoved in my face at every turn, as though race is (or should be) the primary lens through which all human relations must be filtered. Such slavish categorization, dropping everyone into giant, ill-fitted voting blocs, is the hallmark of ideologues unable to break out of the pseudo-intellectual silos they created for themselves and seek to impose on others. Yet I haven’t joined the growing backlash and instead feel increasingly ill at ease in social situations that appear (on the surface at least) to be too white bread. Shows, perhaps, how notions of race that were irrelevant for most of my life have now crept in and invaded my conscience. Rather than solving or resolving longstanding issues, relentless focus on race instead spreads resentment and discomfort. The melting pot isn’t boiling, but summer is not yet over.

I have observed various instances of magical thinking in mainstream culture, especially here, which I find problematical. Although it’s not my ambition to disabuse anyone of magical thinking, which extends far beyond, say, religious thought, I was somewhat taken aback at the suggestion found in the comic at this link (not embedded). For those not familiar with Questionable Content (one of two online comics I read regularly), the comic presents an extended cast of characters, mostly in their early 20s, living in a contemporary New England college town. Those characters are supplemented by a few older parents and lots of AIs (in robot bodies). The AIs are not particularly futuristic but are simply accepted as a normal (if curious) part of the world of the comic. Major story arcs involve characters and AIs (the AIs are characters, I suppose) in the process of discovering and establishing themselves as they (the humans, anyway) transition into early adulthood. There are no great political themes or intrusions into life in a college town. Rather, the comic is largely about acceptance of difference. Often, that means washing away meaningful difference in the name of banal tolerance. Real existential struggle is almost entirely absent.

In the linked comic, a new character comes along and offers advice to an established character struggling with sexual attractions and orientation. The dialogue includes this exchange:

Character A: If tarot or astrology or religion halps you make sense of the world and your place in it, then why not use them?
Character B: But they’re not real. [emphasis in original]
Character A: It doesn’t matter, if you use them constructively!

There it is in a nutshell: believe whatever you want if it, um, halps. I’ve always felt that being wrong (i.e., using unreal or make-believe things) was a sufficient injunction against anchoring oneself to notions widely known to be false. Besides, isn’t it often remarked that the biggest fool is one who fools himself? (Fiction as a combination of entertainment and building a worldview is quite normative, but it’s understood as fiction, or to a lesser degree, as life imitating art and its inverse. Exceptions abound, which are regarded as psychopathy.) The instruction in that dialogue (part object lesson, part lesson in cognition) is not that it’s OK to make mistakes but that knowingly believing something false has worthwhile advantages.

Surveying examples where promulgating false beliefs has constructive and destructive effects is too large a project. Well short of that, nasty categories include fraud, gaslighting, and propaganda, which are criminal in many cases and ought to be in most others (looking at you, MSM! — or not, since I neither trust nor watch). One familiar benevolent category is expressed in the phrase fake it til you make it, often recommended to overcome a lack of confidence. Of course, a swindle is also known as a confidence game (or by its diminutive, a con), so beware overconfidence when asked by another to pay for something (e.g., tarot or astrology readings), take risks, or accept an ideology without question.

As philosophy, willful adoption of falsity for its supposed benefits is half-baked. Though impossible to quantify, my suspicion is that instances of positive outcomes are overbalanced by negative ones. Maybe living in a constructed reality or self-reinforcing fantasy is what people want. The comic discussed is certainly in line with that approach. However, while we dither and delude ourselves with happy, aspirational stories based on silliness, the actual world around us, including all the human institutions that used to serve us but no longer do, falls to tatters. Is it better going through life and eventually to one’s grave refusing to see that reality? Should childlike wonder and innocence be retained in spite of what is easily observable just by poking one’s head up and dismissing comforting lies? Decide for yourself.

Evil exists in the world. History and current events both bear this out amply. Pseudo-philosophers might argue that, like emotions and other immaterial sensations, good and evil are merely reified concepts, meaning they are human constructs with no palpable external reality. Go tell that to victims of evildoers. Human suffering can’t be anonymized, rationalized, or philosophized away quite so handily.

It was sort of refreshing, back in the day, when Google’s motto and/or corporate code of conduct was simple: “Don’t Be Evil.” It acknowledged the potential for being or becoming evil (like any of the Bigs: Big Tobacco, Big Soda, Big Oil, Big Pharma, Big Media, Big Agriculture, etc.) and presumably aspired to resist obvious temptations. That was then (from 2000 to 2018), this is now (2021 until death take us — soon enough, I fear). But like all entities possessed of absurd levels of wealth and power, Google (now reorganized as a subsidiary of Alphabet, but who actually refers to it that way?) and its Silicon Valley brethren have succumbed to temptation and become straight-up evil.

One might charitably assess this development as something unbidden, unanticipated, and unexpected, but that’s no excuse, really. I certainly don’t envy celebrity executives the difficulty resulting from having created unmanageable behemoths loosed on a public and polity unable to recognize the beastly fangs until already clamped on their necks. As often occurs, dystopian extrapolations are explored in fiction, sometimes satirically. The dénouement of the HBO show Silicon Valley depicts tech mogul wannabes succeeding in creating an AI (or merely a sophisticated algorithm? doesn’t matter …) that would in time become far too powerful in blind execution of its inner imperative. In the show, characters recognize what they had done and kill their own project rather than allow it to destroy the world. In reality, multiple developers of computer tech platforms (and their embedded dynamic, including the wildly unhelpful albeit accurate term algorithm) lacked the foresight to anticipate awful downstream effects of their brainchildren. Yet now that those effects are manifesting recognizably, these corporations continue to operate and wreak havoc.

Silicon Valley shows an extended software development period of bungling ineptitude punctuated by brilliant though momentary breakthroughs. Characters are smart, flawed people laughably unable to get out of the way of their own success. The pièce de résistance was yoking one so-called “learning machine” to another and initiating what would become a runaway doomsday process (either like ecological collapse, building slowly then making the biosphere uninhabitable all at once, or like the gray goo problem, progressively “processing” biomass at the molecular level until all that remains is lifeless goo). It was a final act of bumbling that demanded the characters’ principled, ethical response before the window of opportunity closed. Real Silicon Valley tech platforms are in the (ongoing) process of rending the social fabric, which is no laughing matter. The issue du jour surrounds free speech and its inverse, censorship. More broadly, real Silicon Valley succeeded in gaming human psychology for profit in at least two aspects (could be more as yet unrecognized): (1) mining behavioral data as an exploitable resource, and (2) delivering inexhaustible streams of extremely divisive content (not its own) to drive persistent engagement with its platforms. Yoked together, they operate to drive society mad, and yet, mounting evidence of this development has not produced even an inkling that maybe the damned doomsday devices ought to be shut off. As with the environment, we operate with freedom enough to destroy ourselves. Instead, politicians issue stunningly ineffectual calls for regulation or break-up of monopolies. In the meantime, ever more absurd wealth and power are concentrated in the hands of a few executives who have clearly punted and decided “let’s be evil.” No restraints on their behavioral experimentation across whole societies exist.

Much more to say on this topic in additional parts to come.

I simply can’t keep up with all the reading, viewing, and listening in my queue. Waking hours are too few, and concentration dissipates long before sleep overtakes. Accordingly, it’s much easier to settle into couch-potato mode and watch some mindless drivel, such as the Netflix hit Bridgerton binged in two sittings. (Unlike cinema critics, I’m not bothered especially by continuity errors, plot holes, clunky dialogue, weak character motivation, gaps of logic, or glossy decadence of the fictional worlds. I am bothered by the Kafka trap sprung on anyone who notices casting decisions that defy time and place — an ill-advised but now commonplace historical revisionism like editing Mark Twain.) As a result, blog posts are less frequent than they might perhaps be as I pronounce upon American (or more broadly, Western) culture, trying vainly to absorb it as a continuously moving target. Calls to mind the phrase Après moi, le déluge, except that there is no need to wait. A deluge of entertainment, news, analysis, punditry, and trolling has buried everyone already. So rather than the more careful consideration I prefer to post, here are some hot takes.

The Irregular Aphorist. Caitlin Johnstone offers many trenchant observations in the form of aphorisms (some of which I’ve quoted before), all gathered under the subtitle Notes From The Edge Of The Narrative Matrix. The modifier irregular only means that aphorisms are a regular but not constant feature. Her site doesn’t have a tag to that effect but probably ought to. Here’s one in particular that caught my attention:

Everything our species has tried has led us to a dying world and a society that is stark raving mad, so nobody is in any position to tell you that you are wrong.

Twin truths here are (1) the dying world and (2) societal madness, both of which I’ve been describing for some time. Glad when others recognize them, too.

Piling on. Though few still are willing to admit it, nonpharmaceutical interventions (NPIs, e.g., distancing, masks, and lockdowns) to stall or reduce the spread of the virus failed to achieve their objectives according to this study. Instead, NPIs piled on suffering no one could forestall. I read somewhere (no link) that the world is approaching half of total, cumulative deaths/infections predicted had nothing been done to impede the pandemic running its course. Adding in deaths of despair (numbers not entirely up to date), we’re using the wrong tools to fight the wrong battle. Of course, interventions opened up giant opportunities for power grabs and vulture capitalism, so the cynic in me shrugs and wonders half aloud “what did you expect, really?”

Growth of the Managerial Bureaucracy. A blog called Easily Distracted by Timothy Burke (never on my blogroll) publishes only a few times per year, but his analysis is terrific — at least when it doesn’t wind up being overlong and inconclusive. Since a student debt jubilee is back in the news (plenty of arguments pro and con), unintended consequences are anticipated in this quote:

When you set out to create elaborate tiers that segregate the deserving poor from the comfortable middle-class and the truly wealthy, you create a system that requires a massive bureaucracy to administer and a process that forces people into petitionary humiliation in order to verify their eligibility. You create byzantine cutoff points that become business opportunities for predatory rentiers.

Something similar may well be occurring with stimulus checks being issued pro rata (has anyone actually gotten one?), but at least we’re spared any petitionary humiliations. We get whatever the algorithms (byzantine cutoff points) dictate. How those funds will be gamed and attached is not yet clear. Stay alert.

No Defense of Free Speech. Alan Jacobs often recommends deleting, unsubscribing, and/or ignoring social media accounts (after his own long love-hate relationship with them) considering how they have become wholly toxic to a balanced psyche as well as principal enablers of surveillance capitalism and narrative control. However, in an article about the manorial elite, he’s completely lost the plot: absolutism is required in defense of free speech. It’s not sufficient to be blasé or even relieved when 45 is kicked off Twitter permanently or when multiple parties conspire to kill Parler. Establishing your own turf beyond the reach of Silicon Valley censors is a nice idea but frankly impractical. Isn’t that what whoever ran Parler (or posted there) must have thought? And besides, fencing off the digital commons these very entities created has catapulted them into the unenviable position of undemocratic, unelected wielders of monopolistic power and co-conspirators to boot. That’s what needs to be curtailed, not free speech.

The Taxonomic Apocalypse. Although drawn from fiction and thus largely hypothetical, a new book (coming late 2021) by Adam Roberts called It’s the End of the World: But What Are We Really Afraid Of? surveys doomsday stories and categorizes different versions of how it all ends. Alan Jacobs (yeah, him again — must have an advance copy of the manuscript) recommends it as “a delightful and provocative little book” but fails to grok two things: (1) these stories are rehearsals-cum-preparations for the real thing, and (2) the real thing really is bearing down on us implacably and so is no longer a mere hypothetical to contemplate and categorize for shits and grins. Despite acceptance of the eventualities that await all of us, reading Roberts’ taxonomy is not something I would expect to find delightful. Skip.

Narrative Collapse. Ran Prieur (no link) sometimes makes statements revealing an unexpected god’s-eye view:

[45] is a mean rich kid who figured out that if he does a good Archie Bunker impression, every lost soul with an authoritarian father will think he’s the messiah. We’re lucky that he cares only about himself, instead of having some crazy utopian agenda. But the power, and the agency, is with the disaffected citizens of a declining empire, tasting barbarism.

This is all about people wanting to be part of a group that’s part of a story. Lately, some of the big group-stories have been dying: sky father religion, American supremacy, the conquest of nature, the virtue of wealth-seeking. In their place, young and clumsy group-stories struggle and rise.

Collapse of certain fundamental stories that animate our thinking is at the core of The Spiral Staircase (see About Brutus at top), though it’s often couched in terms of consciousness in transition. Getting through the transition (only temporarily, see previous item in list) probably means completion of the Counter-Enlightenment historical arc, which necessarily includes further descent into barbarism.

Hail Mary for Individualism. I always take special notice when someone cites Allan Bloom. Alan Jacobs (um, yeah, he’s prolific and I’m using his ideas again — sue me) cites Bloom to argue that individualism or the sovereign self, a product of the Enlightenment, is already dead. No doubt, the thought-world described so ably by Bloom no longer exists, but individualism has not yet died out by attrition or been fully dissolved in nonduality. Many of us born before the advent of the Internet retain selfhood and authenticity not yet coopted by or incorporated into mass mind. Moreover, ongoing struggles over identity (e.g., gender, sexual orientation, and race that are often used improperly to define the self) result from an inchoate sense that individualism is eroding precipitously, not that it’s already passé. Defiant attempts to (re)establish an authentic self (contravening all logic and becoming critical theory of one sort or another) in the face of this loss may well be a last-ditch effort to save the self, but it’s failing.

I’ve never before gone straight back with a redux treatment of a blog post. More typically, it takes more than a year before revisiting a given topic, sometimes several years. This time, supplemental information came immediately, though I’ve delayed writing about it. To wit, a Danish study published November 18, 2020, in the Annals of Internal Medicine indicates our face mask precautions against the Coronavirus may be ineffective:

Our results suggest that the recommendation to wear a surgical mask when outside the home among others did not reduce, at conventional levels of statistical significance, the incidence of SARS-CoV-2 infection in mask wearers in a setting where social distancing and other public health measures were in effect, mask recommendations were not among those measures, and community use of masks was uncommon. Yet, the findings were inconclusive and cannot definitively exclude a 46% reduction to a 23% increase in infection of mask wearers in such a setting. It is important to emphasize that this trial did not address the effects of masks as source control or as protection in settings where social distancing and other public health measures are not in effect.

The important phrase there is “did not reduce, at conventional levels of statistical significance,” which is followed by the caveat that the study was partial and so is inconclusive. To say a result is statistically insignificant means the observed difference is small enough to be plausibly attributed to chance; the confidence interval around the estimate still includes the possibility of no effect at all. A fair bit of commentary follows the published study, which I have not reviewed.
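To make that concrete, here is a minimal sketch (my own illustration, nothing taken from the study itself) of why the quoted interval reads as insignificant. It treats the abstract’s bounds — a 46% reduction to a 23% increase — as an approximate confidence interval for relative risk running from roughly 0.54 to 1.23; because that range straddles 1.0 (no effect at all), the result fails the conventional significance test.

```python
# Illustrative only: approximate relative-risk bounds inferred from the
# abstract's "46% reduction to a 23% increase" phrasing, not figures
# reported in this exact form by the study.
ci_lower = 0.54  # best case: masks cut infection risk by 46%
ci_upper = 1.23  # worst case: masks raise infection risk by 23%
no_effect = 1.0  # relative risk of 1.0 means masks made no difference

if ci_lower <= no_effect <= ci_upper:
    print("Interval includes 1.0: not significant at conventional levels")
else:
    print("Interval excludes 1.0: significant at conventional levels")
```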

We’re largely resorting to conventional wisdom with respect to mask wearing. Most businesses and public venues (if open at all) have adopted the mask mandate out of conformity and despite wildly conflicting reports of their utility. Compared to locking down all nonessential social and economic activity, however, I remain resigned to their adoption even though I’m suspicious (as any cynic or skeptic should be) that they don’t work — at least not after the virus is running loose. There is, however, another component worth considering, namely, the need to be seen doing something, not nothing, to address the pandemic. Some rather bluntly call that virtue signalling, such as the pathologist at this link.

In the week since publication of the Danish study and the pathologist’s opinion (note the entirely misleading title), there has been a deluge of additional information, editorials, and protests (no more links, sorry) calling into question recommendations from health organizations and responses by politicians. Principled and unprincipled dissent has been underway since May 2020 and grows with each month hardship persists. Of particular note is the Supreme Court’s 5-4 decision against New York Gov. Andrew Cuomo’s mandate that religious services be restricted to no more than 10 people in red zones and no more than 25 in orange zones. Score one for the Bill of Rights being upheld even in a time of crisis.

I’ve mentioned the precautionary principle several times, most notably here. Little of our approach to precautions has changed in the two years since that blog post. At the same time, climate change and Mother Nature batter us aggressively. Eventualities remain predictable. Different precautions are being undertaken with respect to the pandemic currently gripping the planet. Arguably, the pandemic is either a subset of Mother Nature’s fury or, if the virus was created in a lab, a self-inflicted wound. Proper pandemic precautions have been confounded by undermining of authority, misinformation, lack of coordination, and politically biased narratives. I’m as confused as the next poor sap. However, low-cost precautions such as wearing masks are entirely acceptable, notwithstanding refusals of many Americans to cooperate after authorities muddied the question of their effectiveness so completely. More significant precautions such as lockdowns and business shutdowns have morphed into received wisdom among government bodies yet are questioned widely as being a cure worse than the disease, not to mention administrative overreach (conspiratorial conjecture withheld).

Now comes evidence published in the New England Journal of Medicine on November 11, 2020, that costly isolation is flatly ineffective at stemming infection rates. Here are the results and conclusions from the abstract of the published study:

Results
A total of 1848 recruits volunteered to participate in the study; within 2 days after arrival on campus, 16 (0.9%) tested positive for SARS-CoV-2, 15 of whom were asymptomatic. An additional 35 participants (1.9%) tested positive on day 7 or on day 14. Five of the 51 participants (9.8%) who tested positive at any time had symptoms in the week before a positive qPCR test. Of the recruits who declined to participate in the study, 26 (1.7%) of the 1554 recruits with available qPCR results tested positive on day 14. No SARS-CoV-2 infections were identified through clinical qPCR testing performed as a result of daily symptom monitoring. Analysis of 36 SARS-CoV-2 genomes obtained from 32 participants revealed six transmission clusters among 18 participants. Epidemiologic analysis supported multiple local transmission events, including transmission between roommates and among recruits within the same platoon.
Conclusions
Among Marine Corps recruits, approximately 2% who had previously had negative results for SARS-CoV-2 at the beginning of supervised quarantine, and less than 2% of recruits with unknown previous status, tested positive by day 14. Most recruits who tested positive were asymptomatic, and no infections were detected through daily symptom monitoring. Transmission clusters occurred within platoons.

So an initial 0.9% tested positive, then an additional 1.9%. This total 2.8% compares to 1.7% in the control group (tested but not isolated as part of the study). Perhaps the experimental and control groups are a bit small (1848 and 1554, respectively), and it’s not clear why the experimental group infection rate is higher than that of the control group, but the evidence points to the uselessness of trying to limit the spread of the virus by quarantining and/or isolation. Once the virus is present in a population, it spreads despite precautions.
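For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope computation using the counts quoted in the abstract above. A sketch only; the variable names are mine, not the study’s.

```python
# Counts taken from the quoted abstract; names are illustrative.
participants = 1848            # recruits in the supervised study group
positive_on_arrival = 16       # positive within 2 days of arrival (~0.9%)
positive_day_7_or_14 = 35      # additional positives on day 7 or 14 (~1.9%)

nonparticipants = 1554         # recruits who declined, with qPCR results
nonparticipant_positives = 26  # positive on day 14 (~1.7%)

study_rate = (positive_on_arrival + positive_day_7_or_14) / participants
comparison_rate = nonparticipant_positives / nonparticipants

print(f"Study group:      {study_rate:.1%}")       # ~2.8%
print(f"Comparison group: {comparison_rate:.1%}")  # ~1.7%
```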

A mantra is circulating that we should “trust the science.” Are these results to be trusted? Can we call off all the lockdowns and closures? It’s been at least eight months that the virus has been raging throughout the U.S. Although there might be some instances of isolated populations with no infection, the wider population has by now been exposed. Moreover, some individuals who self-isolated effectively may not have been exposed, but in all likelihood, most of us have been. Accordingly, renewed lockdowns, school and business closures, and destruction of entire industries are a pretense of control we never really had. Their costs are enormous and ongoing. A stay-at-home order (advisory, if you prefer) just went into effect for the City of Chicago on November 16, 2020. My anecdotal observation is that most Chicagoans are ignoring it and going about their business similar to summer and fall months. It’s nothing like the ghost town effect of March and April 2020. I daresay they may well be correct to reject the received wisdom of our civic leaders.

I might have thought that the phrase divide and conquer originated in the writings of Sun Tzu or perhaps during the Colonial Period when so many Western European powers mobilized to claim their share of the New World. Not so. This link indicates that, beyond its more immediate association with Julius Caesar (Latin: divide et impera), the basic strategy is observed throughout antiquity. The article goes on to discuss Narcissism, Politics, and Psychopathy found in the employ of divide-and-conquer strategies, often in business competition. Knowing that our information environment is polluted with mis- and disinformation, especially online, I hesitate to award too much authority to some dude with a website, but that dude at least provides 24 footnotes (some of which are other Internet resources). This blanket suspicion applies to this dude (me), as well.

I also read (can’t remember where, otherwise I would provide a hyperlink — the online equivalent of a footnote) that Americans’ rather unique, ongoing, dysfunctional relationship with racism is an effective divide-and-conquer strategy deployed to keep the races (a sociological category, not a biological one) constantly preoccupied with each other rather than uniting against the true scourge: the owners and rulers (plus the military, technocrats, and managerial class that enable them). The historical illustration below shows how that hierarchy breaks down:

If the proportions were more statistically accurate, that bottom layer would be much, much broader, more like the 99% vs. the infamous 1% brought to acute awareness by the Occupy Movement. The specific distributions are probably impossible to determine, but it’s fair to say that the downtrodden masses are increasing in number as wealth inequality skews continuously and disproportionately to the benefit of the top quintile and higher. Is it really any question that those occupying the upper layers seek to stay balanced on top of the confection like an ill-fated Jenga wedding cake? Or that the bottom layer is foundational?

If class warfare is the underlying structural conflict truly at work in socioeconomic struggles plaguing the United States, race warfare is the bait to displace attention and blame for whatever befalls the masses. It’s divide and conquer, baby, and we’re falling for it like brawlers in a bar fight who don’t know why they’re fighting. (Meanwhile, someone just emptied the till.) On top, add the pandemic keeping people apart and largely unable to communicate meaningfully (read: face-to-face). As the U.S. election draws to a close, the major division among the American people is misunderstood primarily as red/blue (with associated Democratic and Republican memes, since neither has bothered to present a coherent political platform). Other false dichotomies are at work, no doubt. So when election results are contested next week, expect to see lines drawn incorrectly between groups that are suffering equally at the hands of a different, hidden-in-plain-sight group only too happy to set off bar fights while keeping the focus off themselves. It’s a proven strategy.

Most of us are familiar with a grandpa, uncle, or father who eventually turns into a cranky old man during late middle age or in his dotage. (Why is it a mostly male phenomenon?) In the last three decades, Clint Eastwood typecast himself as a cranky old man, building on lone-wolf characters (mostly cops, criminals, and cowboys) established earlier in his career. In real life, these guys spout talking points absorbed from mainstream media and narrative managers, or if they are truly lazy and/or can’t articulate anything coherently on their own, merely forward agitprop via e-mail like chain mail of yore. They also demonstrate remarkably forgivable racism, sexism, and bigotry, such as Eastwood’s rather enjoyable and ultimately redeemed character in the film Gran Torino. If interaction with such a fellow is limited to Thanksgiving gatherings once per year, crankiness can be tolerated fairly easily. If interactions are ongoing, then a typical reaction is simply to delete e-mail messages unread, or in the case of unavoidable face-to-face interaction, to chalk it up: Well, that’s just Grandpa Joe or Uncle Bill or Dad. Let him rant; he’s basically harmless now that he’s so old he creaks.

Except that not all of them are so harmless. Only a handful of the so-called Greatest Generation (I tire of the term but it’s solidly established) remain in positions of influence. However, lots of Boomers still wield considerable power despite their advancing age, looming retirement (and death), and basic out-of-touchness with a culture that has left them behind. Nor are their rants and bluster necessarily wrong. See, for instance, this rant by Tom Engelhardt, which begins with these two paragraphs:

Let me rant for a moment. I don’t do it often, maybe ever. I’m not Donald Trump. Though I’m only two years older than him, I don’t even know how to tweet and that tells you everything you really need to know about Tom Engelhardt in a world clearly passing me by. Still, after years in which America’s streets were essentially empty, they’ve suddenly filled, day after day, with youthful protesters, bringing back a version of a moment I remember from my youth and that’s a hopeful (if also, given Covid-19, a scary) thing, even if I’m an old man in isolation in this never-ending pandemic moment of ours.

In such isolation, no wonder I have the urge to rant. Our present American world, after all, was both deeply unimaginable — before 2016, no one could have conjured up President Donald Trump as anything but a joke — and yet in some sense, all too imaginable …

If my own father (who doesn’t read this blog) could articulate ideas as well as Engelhardt, maybe I would stop deleting unread the idiocy he forwards via e-mail. Admittedly, I could well be following in my father’s footsteps, as the tag rants on this blog indicates, but at least I write my own screed. I’m far less accomplished at it than, say, Engelhardt, Andy Rooney (in his day), Ralph Nader, or Dave Barry, but then, I’m only a curmudgeon-in-training, not having fully aged (or elevated?) yet to cranky old manhood.

As the fall presidential election draws near (assuming that it goes forward), the choice in the limited U.S. two-party system is between two cranky old men, neither of whom is remotely capable of guiding the country through this rough patch at the doomer-anticipated end of human history. Oh, and BTW, echoing Engelhardt’s remark above, 45 has been a joke all of my life — a dark parody of success — and remains so despite occupying the Oval Office. Their primary opponent up to only a couple months ago was Bernie Sanders, himself a cranky old man but far more endearing at it. This is what passes for the best leadership on offer?

Many Americans are ready to move on to someone younger and more vibrant, able to articulate a vision for something, well, different from the past. Let’s skip right on past candidates (names withheld) who parrot the same worn-out ideas as our fathers and grandfathers. Indeed, a meme emerged recently to the effect that the Greatest Generation saved us from various early 20th-century scourges (e.g., Nazis and Reds) only for the Boomers to proceed in their turn to mess up the planet so badly nothing will survive new scourges already appearing. It may not be fair to hang such labels uniformly around the necks of either generation (or subsequent ones); each possesses unique characteristics and opportunities (some achieved, others squandered) borne out of their particular moment in history. But this much is clear: whatever happens with the election and whichever generational cohort assumes power, the future is gonna be remarkably different.

Purpose behind consumption of different genres of fiction varies. For most of us, it’s about responding to stimuli and experiencing emotions vicariously, which is to say, safely. For instance, tragedy and horror can be enjoyed, if that’s the right word, in a fictional context to tweak one’s sensibilities without significant effect outside the story frame. Similarly, fighting crime, prosecuting war, or repelling an alien invasion in a video game can be fun but is far removed from actually doing those things in real life (not fun). For less explicit narrative forms, such as music, feelings evoked are aesthetic and artistic in nature, which makes a sad song or tragic symphony enjoyable on its own merits without bleeding far into real sadness or tragedy. Cinema (now blurred with broadcast TV and streaming services) is the preeminent storytelling medium that provokes all manner of emotional response. After reaching a certain age (middle to late teens), emotional detachment from depiction of sexuality and violent mayhem makes possible digestion of such stimulation for the purpose of entertainment — except in cases where prior personal trauma is triggered. Before that age, nightmare-prone children are prohibited.

Dramatic conflict is central to driving plot and story forward, and naturally, folks are drawn to some stories while avoiding others. Although I’m detached enough not to be upset by, say, zombie films where people and zombies alike are dispatched horrifically, I wouldn’t say I enjoy gore or splatter. Similarly, realistic portrayals of war (e.g., Saving Private Ryan) are not especially enjoyable for me despite the larger story, whether based on true events or entirely made up. The primary reason I leave behind a movie or TV show partway through is because I simply don’t enjoy watching suffering.

Another category bugs me even more: when fiction intrudes on reality to remind me too clearly of actual horrors (or is it the reverse: reality intruding on fiction?). It doesn’t happen often. One of the first instances I recall was in Star Trek: The Next Generation when the story observed that (fictional) warp travel produced some sort of residue akin to pollution. The reminder that we humans are destroying the actual environment registered heavily on me and ruined my enjoyment of the fictional story. (I also much prefer the exploration and discovery aspects of Star Trek that hew closer to Gene Roddenberry’s original vision than the militaristic approach now central to Star Trek.) A much more recent intrusion occurs in the rather adolescent TV show The 100, where a global nuclear exchange launched by an artificial intelligence has the follow-on effect a century later of remaining nuclear sites going critical, melting down, and irradiating the Earth, making it uninhabitable. This bothers me because that’s my expectation of what happens in reality, probably not too long (decades) after industrial civilization collapses and most or all of us are dead. This prospect served up as fiction is simply too close to reality for me to enjoy vicariously.

Another example of fiction intruding too heavily on my doomer appreciation of reality occurred retroactively. As high-concept science fiction, I especially enjoyed the first Matrix movie. Like Star Trek, the sequels degraded into run-of-the-mill war stories. But what was provocative about the original was the matrix itself: a computer-generated fiction situated within a larger reality. Inside the matrix was pleasant enough (though not without conflict), but reality outside the matrix was truly awful. It was a supremely interesting narrative and thought experiment when it came out in 1999. Now twenty-one years later, it’s increasingly clear that we are living in a matrix-like, narrative-driven hyperreality intent on deluding ourselves with a pleasant equilibrium that simply isn’t in evidence. In fact, as societies and as a civilization, we’re careening out of control, no brakes, no steering. Caitlin Johnstone explores this startling after-the-fact realization in an article at Medium.com, which I found only a couple days ago. Reality is in fact far worse than the constructed hyperreality. No wonder no one wants to look at it.