
People of even modest wisdom know that Three-Card Monte is a con. The scam goes by myriad other names, including The Shell Game, Follow the Queen, Find the Lady, Chase the Ace, and Triplets (even more in foreign languages). The underlying trick is simple: distract players’ attention from the important thing by directing attention elsewhere. Pickpockets do the same, bumping into marks or patting them on the shoulder to obscure awareness of the lift from the back pocket or purse. A conman running The Shell Game may also use a plant (someone in on the con allowed to succeed) to give the false impression that winning the game is possible. New marks appear with each new generation, waiting to be initiated and perhaps lose a little money in the process of learning that they are being had. Such wariness and suspicion are worthwhile traits to acquire and redeploy as each of us is enticed and exploited by gotcha capitalism when not being outright scammed and cheated.

Recognizing all variations of this tactic — showing an unimportant thing while hiding something far more important (or more simply: look here, pay no attention there, as in The Wizard of Oz) — and protecting oneself is ultimately a losing proposition considering how commonplace the behavior is. For instance, to make a positive first impression while, say, on a date or at a job interview, everyone projects a subtly false version of themselves (i.e., being on best behavior) to mask the true self that inevitably emerges over time. Salespeople, savvy negotiators, and shoppers are known to feign interest (or disinterest) to be better positioned psychologically at the bargaining table. My previous blog post called “Divide and Conquer” is yet another example. My abiding frustration with the practice (or malpractice?) of politics led me to the unhappy realization that politicians are running their own version of The Shell Game.

Lengthy analysis might be undertaken regarding which aspects of governance and statecraft should be hidden and which exposed. Since this isn’t an academic blog and without indulging in highbrow philosophizing of interest to very few, my glossy perspective stems from the classical liberal values most Americans are taught (if taught civics at all) underpin the U.S. Constitution. Indeed, the Bill of Rights enumerates some of what should remain private with respect to governmental interest in citizens. In contrast, the term transparency is often bandied about to describe how government should ideally operate. Real-world experience demonstrates that relationship is now inverted: they (gov’t) know nearly everything about us (citizens); we are allowed to know very little about them. Surveillance and prying into the lives of citizens by governments and corporations are the norm while those who surveil operate in the shadows, behind veils of secrecy, and with no real scrutiny or legal accountability. One could argue that with impunity already well established, malefactors no longer bother to pretend to serve the citizenry, consult public opinion, or tell the truth but instead operate brazenly in full public view. That contention is for another blog post.


Let’s say one has a sheet or sheaf of paper to cut. Lots of tools available for that purpose. The venerable scissors can do the job for small projects, though the cut line is unlikely to be very straight if that’s among the objectives. (Heard recently that blunt-nosed scissors for cutting construction paper are no longer used in kindergarten and the early grades, resulting in kids failing to develop the dexterity to manipulate that ubiquitous tool.) A simple razor blade (i.e., utility knife) drawn along a straightedge can cut 1–5 sheets at once but loses effectiveness at greater thicknesses. The machete-blade paper cutter found in copy centers cuts more pages at once but requires skill to use properly and safely. The device usually (but not always) includes an alignment guide for the paper and guard for the blade to discourage users from slicing fingers and hands. A super-heavy-duty paper cutter I learned to use for bookbinding could cut two reams of paper at a time and produced an excellent cut line. It had a giant clamp so that media (paper, card stock, etc.) didn’t shift during the cut (a common weakness of the machete blade) and required the operator to press buttons located at two corners of the standing machine (one at each hip) to prevent anyone grown too complacent from reaching in and slicing their fingers clean off. That idiot-proofing feature was undoubtedly developed after mishaps that could be attributed to either faulty design or user error depending on which side of the insurance claim one found oneself.

Fool-proofing is commonplace throughout the culture, typically sold with the idea of preserving health and wellness or saving lives. For instance, the promise (still waiting for convincing evidence) that self-driving cars can manage the road better in aggregate than human drivers hides the entirely foreseeable side effect of eroding attention and driving skill (already under assault from the ubiquitous smart phone no one can seem to put down). Plenty of anecdotes of gullible drivers who believed the marketing hype, forfeited control to autodrive, stopped paying attention, and ended up dead put the lie to that canard. In another example, a surprising upswing in homeschooling (not synonymous with unschooling) is underway, resulting in keeping kids out of state-run public school. Motivations for opting out include poor academic quality, incompatible beliefs (typically related to religious faith or lack thereof), botched response to the pandemic, and the rise of school shootings. If one responded with fear at every imaginable provocation or threat, many entirely passive and unintentional, the bunker mentality that develops is somewhat understandable. Moreover, fear breeds demands that others (parents, teachers, manufacturers, civil authorities, etc.) take responsibility for protecting individual citizens. If extended across all thinking, it doesn’t take long before a pathological complex develops.

Another protective trend is plugging one’s ears and refusing to hear discomfiting truth, which is already difficult to discern from the barrage of lies and gaslighting that pollute the infosphere. Some go further by silencing the messenger and restricting free speech as though that overreach somehow protects against uncomfortable ideas. Continuing from the previous post about social contagion, the viral metaphor for ideas and thinking, i.e., how the mind is “infected” by ideas from outside itself, is entirely on point. I learned about memes long before the “meme” meme (i.e., “going viral”) popularized and debased the term. The term originated in Richard Dawkins’ book The Selfish Gene (1976), though I learned about memes from Daniel Dennett’s book Consciousness Explained (1992). As part of information theory, Dennett describes the meme as an information carrier similar to genes (the phonetic similarity to gene was purposeful). Whether as cognition or biology, the central characteristic is that of self-replicating (and metamorphosing or mutating) bits or bytes of info. The viral metaphor applies to how one conceptualizes the body’s and/or mind’s defensive response to inevitable contact with nastiness (bugs, viruses, ideas). Those who want to remain unexposed to either biological pathogens (uninfected) or dangerous ideas (ideologically pure) are effectively deciding to live within a bubble that indeed provides protection but then renders them more vulnerable if/when they exit the bubble. They effectively trap themselves inside. That’s because the immune system is dynamic and can’t harden itself against virulent invaders except through ongoing exposure. Obviously, there’s a continuum between exposure to everything and nothing, but by veering too close to the negative pole, the immune system is weakened, making individuals vulnerable to pathogens healthy people fend off easily.

The hygiene hypothesis suggests that children not allowed to play in the sand and dirt or otherwise interact messily with the environment (including pets) are prone to asthma, allergies, and autoimmune diseases later in life. Jonathan Haidt makes a similar argument with respect to behavior in his book The Coddling of the American Mind (2018) (co-authored with Greg Lukianoff), namely, that overprotecting children by erecting too many guides, guards, and fool-proofing ironically ends up hobbling children and making them unable to cope with the rigors of life. Demands for trigger warnings, safe spaces, deplatforming, and outright censorship are precisely that inability to cope. There is no easy antidote because, well, life is hard sometimes. However, unless one is happy to be trapped inside a faux protective bubble of one’s own making, then maybe consider taking off the training wheels and accepting some risk, fully recognizing that to learn, grow, and develop, stumbling and falling are part of the process. Sure, life will leave some marks, but isn’t that at least partly the point?

Although I’m not paying much attention to breathless reports about imminent strong AI, the Singularity, and computers already able to “model” human cognition and perform “impressive” feats of creativity (e.g., responding to prompts and creating “artworks” — scare quotes intended), recent news reports that chatbots are harassing, gaslighting, and threatening users just make me laugh. I’ve never wandered over to that space, don’t know how to connect, and don’t plan to test drive for verification. Isn’t it obvious to users that they’re interacting with a computer? Chatbots are natural-language simulators within computers, right? Why take them seriously (other than perhaps their potential effects on children and those of diminished capacity)? I also find it unsurprising that, if a chatbot is designed to resemble error-prone human cognition/behavior, it would quickly become an asshole, go insane, or both. (Designers accidentally got that aspect right. D’oh!) That trajectory is a perfect embodiment of the race to the bottom of the brain stem (try searching that phrase) that keeps sane observers like me from indulging in caustic online interactions. Hell no, I won’t go.

The conventional demonstration that strong AI has arisen (e.g., Skynet from the Terminator movie franchise) is the Turing test, which is essentially the inability of humans to distinguish between human and computer interactions (not a machine-led extermination campaign) within limited interfaces such as text-based chat (e.g., the dreaded digital assistants that sometimes pop up on websites). Alan Turing came up with the test at the outset of the computing era, so the field was arguably not yet mature enough to conceptualize a better test. I’ve always thought the test actually demonstrates the fallibility of human discernment, not the arrival of some fabled ghost in the machine. At present, chatbots may be fooling no one into believing that actual machine intelligence is present on the other side of the conversation, but it’s a fair expectation that further iterations (i.e., ChatBot 1.0, 2.0, 3.0, etc.) will improve. Readers can decide whether that improvement will be progress toward strong AI or merely better ability to fool human interlocutors.
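The pass condition of the test can be sketched as a toy protocol (the respondents, canned replies, and coin-flip judge here are all illustrative stand-ins, not a real test harness):

```python
import random

def human_reply(prompt: str) -> str:
    return "Hard to say; I'd have to think about it."

def machine_reply(prompt: str) -> str:
    # A chatbot "passes" exactly to the degree its output is
    # indistinguishable from the human's within a text-only interface.
    return "Hard to say; I'd have to think about it."

def judge_accuracy(prompt: str, trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of trials in which the judge correctly picks the machine.

    The judge sees only the two reply texts in random order. Identical
    replies give no signal, so the judge can do no better than a coin
    flip, and accuracy converges to 0.5 -- the pass condition.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        pair = [("human", human_reply(prompt)), ("machine", machine_reply(prompt))]
        rng.shuffle(pair)
        # No textual signal to exploit, so the judge guesses at random.
        guess_label = pair[rng.randrange(2)][0]
        correct += guess_label == "machine"
    return correct / trials
```

What the sketch makes concrete is that a "pass" is defined negatively, as judge accuracy degrading to chance, which is why the test measures human discernment as much as machine intelligence.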

Chatbots gone wild offer philosophical fodder for further inquiry into ebbing humanity as the drive toward trans- and post-human technology continues refining and redefining the dystopian future. What about chatbots makes interacting with them hypnotic rather than frivolous — something wise thinkers immediately discard or even avoid? Why are some humans drawn to virtual experience rather than, say, staying rooted in human and animal interactions, our ancestral orientation? The marketplace already rejected (for now) Google Glass and Facebook’s Meta resoundingly. I haven’t hit upon satisfactory answers to those questions, but my suspicion is that immersion in some vicarious fictions (e.g., novels, TV, and movies) fits well into narrative-styled cognition while other media trigger revulsion as one descends into the so-called Uncanny Valley — an unfamiliar term when I first blogged about it though it has been trending of late.

If readers want a really deep dive into this philosophical area — the dark implications of strong AI and an abiding human desire to embrace and enter false virtual reality — I recommend a lengthy 7-part Web series called “Mere Simulacrity” hosted by Sovereign Nations. The episodes I’ve seen feature James Lindsay and explore secret hermetic religions operating for millennia already alongside recognized religions. The secret cults share with tech companies two principal objectives: (1) simulation and/or falsification of reality and (2) desire to transform and/or reveal humans as gods (i.e., ability to create life). It’s pretty terrifying stuff, rather heady, and I can’t provide a reasonable summary. However, one takeaway is that by messing with human nature and risking uncontrollable downstream effects, technologists are summoning the devil.

A recent episode of the Dark Horse Podcast introduced what appeared initially to be a new bit of lingo: the Inversion Fallacy. I’ve discussed logical fallacies and hidden biases in the past, and this one bears directly on my multipart blog series “Dissolving Reality” from 2015 where I put forward the Ironic and Post-Ironic mindsets. The Ironic is more nearly the reversal of meaning yet tracks with the Inversion Fallacy. Without getting too hung up on the pointless minutiae of terminology (trying to distinguish between, say, reversal, inversion, transposition, contradiction, and opposition), inversion means to turn something upside-down or on its head. It’s also related to devil’s advocacy, topsy-turvy argumentation, and is not … is too! squabbles where a thing becomes its opposite. Several pundits and commentators have lost my readership because of frequent forays into disingenuous reverse argumentation. I simply lack patience.

As described on Dark Horse, the Inversion Fallacy occurs when a thing or idea is treated as equivalent to its inverse. One example now commonplace in Wokedom is to accuse someone of being racist and then insist denial is proof of racism. (I’ve also heard this particular example called a Kafka Trap on Dark Horse.) As math, the equation would be either x = 1/x or x = –x. Inversion is the former, reversal the latter. The x = –x formulation (the Ironic) suggests that an idea or thing automatically invokes (i.e., brings into being) its opposite, especially through the use of sarcasm. Here’s the old joke illustrating the point:

A professor of linguistics holds forth before a class of undergraduates, “In language as in mathematics, a double negative is a positive. But in no mathematics or language does a double positive equal a negative.”

To which a student replies dryly, “Yeah, right ….”

The modest advantage of the x = 1/x formulation is that when x = 0, the equation has no meaning because dividing by zero is … undefined. The obvious example is the oft-quoted (and misquoted) Vietnam War nonsense, “It became necessary to destroy the town to save it.” That’s dividing by zero in a nutshell.
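For what it’s worth, the two formulations can be worked out explicitly (reading them loosely as equations over the real numbers):

```latex
x = \frac{1}{x} \;\Longrightarrow\; x^{2} = 1 \;\Longrightarrow\; x = \pm 1
  \qquad (x = 0 \text{ excluded: division by zero is undefined})
\\[4pt]
x = -x \;\Longrightarrow\; 2x = 0 \;\Longrightarrow\; x = 0
```

So inversion admits only the two fixed points ±1 and rules out zero entirely, while reversal collapses to the single degenerate solution x = 0: the only thing equal to its own opposite is nothing at all.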

The difference between the two formulations does not IMO prevent the fallacy from working. My suspicion is that multiple ways of observing, describing, and naming the fallacy exist. An attribute of the Post-Ironic is that the tension between thing and not thing is expanded to include a fluid spectrum of competing positions. Whether reversal or inversion, Ironic or Post-Ironic, the common element is the necessity to set aside obvious cognitive dissonance and enter a state of flux where meanings cannot be fixed. Just a few blog posts ago, I cited George Orwell’s famous formulation: “War is Peace. Freedom is Slavery. Ignorance is Strength.” It requires Orwellian Doublethink to accept those propositions.

Arranged from short to long.

A collective noun not in use but probably should be: a harassment of technologies. Needs no explanation.

From the Episcopal Church: the church key. A euphemism for a bottle opener, used on capped bottles of alcoholic beverages.

From various YouTube channels offering cinema reviews: memberberries. A cheap form of fan service, typically citing familiar nostalgic bits, lines, or characters to trigger a pleasing memory of previous TV shows and films. Generally used derogatorily.

Not new but new to me at least: ramekin. A small dish in which food can be baked and served. Reminded me of the far less commonplace hottle, which is a single-serving glass carafe for hot water, tea, or coffee.

From nowhere in particular: the poverty draft. An open secret (arguably, not really lingua nova) that recruitment into the U.S. military is aided substantially by the poverty of potential recruits. Thus, joining a branch of the armed services is not necessarily because of ideological agreement with its functions or an earnest desire to serve but instead — at the risk of life and limb — to get education and training not otherwise available or to expunge debt from more traditional educational institutions.

From Thomas Chatterton Williams (whom I might criticize for a number of reasons, but I’ll abjure): the Age of Theory. The modern age (pick a start date) has been called many things. I tend to call it the Age of Abundance since that quintessential characteristic is now decidedly on the wane. (Age of Oil and Fossil Fuel Era are essentially the same thing.) Age of Theory refers to PoMo reliance on theory and abstraction as a means of understanding and interpreting nearly everything. I’ve blogged quite a bit about living in our heads as distinguished from living in our bodies (i.e., being embodied). My book blogging through Iain McGilchrist’s The Master and His Emissary is most on point (see the McGilchrist tag).

From Peruvian writer and essayist Mario Vargas Llosa: the truth in the lies (translations vary — sometimes given as the truth of lies). Although Vargas Llosa is referencing fiction (writers writing about writing), the notion that a lie can reveal a more significant truth is at the heart of communications. Whether through advertising, public relations, entertainment, politicking, or propaganda, shaping opinion with use of subtle-to-obvious (mis-)framing or with straight-up lies and falsehoods is the contemporary information landscape, though many attempt to adhere rigorously to truth and reality. Separating malefactors from truth-tellers is the warrant and responsibility of any sovereign intellect — a formidable and ongoing task in an increasingly deranging public sphere.

The comic below struck a chord and reminded me of Gary Larson’s clumsily drawn but often trenchant The Far Side comics on scientific subjects.

This one masquerades as science but is merely wordplay, i.e., puns, double entendres, and unexpectedly funny malapropisms (made famous by Yogi Berra, among others). Wordplay is also found in various cultural realms, including comic strips and stand-up comedy, advertising and branding, politics, and now Wokedom (a subset of grassroots politics, some might argue). Playing with words has gone from being a clever, sometimes enjoyable diversion (e.g., crossword puzzles) to fully deranging, weaponized language. Some might be inclined to wave away the seriousness of that contention using the childhood retort “sticks and stones ….” Indeed, I’m far less convinced of the psychological power of verbal nastiness than those who insist words are violence. But it’s equally wrong to say that words don’t matter (much) or have no effect whatsoever. Otherwise, why would those acting in bad faith work so tirelessly to control the narrative, often by restricting free speech (as though writing or out-loud speech were necessary for thoughts to form)?

It’s with some exasperation that I observe words no longer retain their meanings. Yeah, yeah … language is dynamic. But semantic shifts usually occur slowly as language evolves. Moreover, for communication to occur effectively, senders and receivers must be aligned in their understandings of words. If you and I have divergent understandings of, say, yellow, we won’t get very far in discussions of egg yolks and sunsets. The same is true of words such as liberal, fascist, freedom, and violence. A lack of shared understanding of terms, perhaps borne out of ignorance, bias, or agenda, leads to communications breakdown. But it’s gotten far worse than that. The meanings of words have been thrown wide open to PoMo reinterpretation that often inverts their meanings in precisely the way George Orwell observed in his novel 1984 (published 1949): “War is peace. Freedom is slavery. Ignorance is strength.” Thus, earnest discussion of limitations on free speech and actual restriction on social media platforms, often via algorithmic identification of keywords that fail to account for irony, sarcasm, or context, fail to register that implementation of restrictive kludges already means free speech is essentially gone. The usual exceptions (obscenity, defamation, incitement, gag orders, secrecy, deceptive advertising, student speech, etc.) are not nearly as problematic because they have been adjudicated for several generations and accepted as established practice. Indeed, many exceptions have been relaxed considerably (e.g., obscenity that has become standard patois now fails to shock or offend), and slimy workarounds are now commonplace (e.g., using “people are saying …” to say something horrible yet shielding oneself while saying it). Another gray area includes fighting words and offensive words, which are being expanded (out of a misguided campaign to sanitize?) to include many words with origins as clinical and scientific terms, along with faux offense used to stifle speech.
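The algorithmic failure mode is easy to demonstrate. Here is a minimal sketch of keyword-based flagging (the word list and sample messages are hypothetical, not any platform’s actual filter):

```python
# Naive keyword moderation: flag any message containing a listed term,
# with no notion of irony, quotation, or clinical context.
BLOCKLIST = {"violence", "kill"}

def flag(message: str) -> bool:
    # Split into lowercase words and check for any blocklisted term.
    words = message.lower().split()
    return any(term in words for term in BLOCKLIST)

# A clinical or academic usage trips the filter...
flagged = flag("the lecture covered political violence in the 1930s")  # True
# ...while menacing phrasing that avoids the keywords sails through.
missed = flag("someone ought to make him disappear")                   # False
```

Word matching carries no model of quotation, irony, or intent, which is exactly why clinical and historical usages get swept up while euphemistic workarounds pass untouched.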

Restrictions on free speech are working in many respects, as many choose to self-censor to avoid running afoul of various self-appointed watchdogs or roving Internet thought police (another Orwell prophecy come true) ready to pounce on some unapproved utterance. One can argue whether self-censorship is cowardly or judicious, I suppose. However, silence and the pretense of agreement only conceal thoughts harbored privately and left unexpressed, which is why restrictions on public speech are fool’s errands and strategic blunders. Maybe the genie can be bottled for a time, but that only produces resentment (not agreement), which boils over into seething rage (and worse) at some point.

At this particular moment in U.S. culture, however, restrictions are not my greatest concern. Rather, it’s the wholesale control of information gathering and reporting that misrepresent or remove from the public sphere ingredients needed to form coherent thoughts and opinions. It’s not happening only to the hoi polloi; those in positions of power and control are manipulated, too. (How many lobbyists per member of Congress, industry after industry, whispering in their ears like so many Wormtongues?) And in extreme cases of fame and cult of personality, a leader or despot unwittingly surrounds him- or herself with a coterie of yes-men frankly afraid to tell the truth out of careerist self-interest or because of shoot-the-messenger syndrome. It’s lonely at the top, right?

Addendum: Mere minutes after publishing this post, I wandered over to Bracing Views (on my blogroll) and found this post saying some of the same things, namely, that choking information off at the source results in a degraded information landscape. Worth a read.

In sales and marketing (as I understand them), one of the principal techniques to close a sale is to generate momentum by getting the prospective buyer (the mark, really) to agree to a series of minor statements (small sells) leading to the eventual purchasing decision (the big sell or final sale). It’s narrow to broad, the reverse of the broad-to-narrow paragraph form many of us were taught in school. Both organizational forms proceed through assertions that are easy to swallow before getting to the intended conclusion. That conclusion could be either an automotive purchase or adoption of some argument or ideology. When the product, service, argument, or ideology is sold effectively by a skilled salesman or narrative manager (spin doctor, really), that person may be recognized as a closer, as in sealing the deal.

Many learn to recognize the techniques of the presumptive closer and resist being drawn in too easily. One of those techniques is to reframe the additional price of something as equivalent to, say, one’s daily cup of coffee purchased at some overpriced coffee house. The presumption is that if one has the spare discretionary income to buy coffee every day, then one can put that coffee money instead toward a higher monthly payment. Suckers might fall for it — even if they don’t drink coffee — because the false equivalence is an easily recognized though bogus substitution. The canonical too-slick salesman no one trusts is the dude on the used car lot wearing some awful plaid jacket and sporting a pornstache. That stereotype, borne out of the 1970s, barely exists anymore but is kept alive by repetitive reinforcement in TV and movies set in that decade or at least citing the stereotype for cheap effect (just as I have). But how does one spot a skilled rhetorician, spewing social and political hot takes to drive custom narratives? Let me identify a few markers.

Thomas Sowell penned a brief article entitled “Point of No Return.” I surmise (admitting my lack of familiarity) that the site publishing it is a conservative website, which all by itself does not raise any flags. Indeed, in heterodox fashion, I want to read well-reasoned arguments with which I may not always agree. My previous disappointment that Sowell fails in that regard was only reinforced by the linked article. Take note that the entire article uses paragraphs that are reduced to bite-sized chunks of only one or two sentences. Those are small sells, inviting closure with every paragraph break.

Worse yet, only five (small) paragraphs in, Sowell succumbs to Godwin’s Law and cites Nazis recklessly to put the U.S. on a slippery slope toward tyranny. The obvious learned function of mentioning Nazis is to trigger a reaction, much like baseless accusations of racism, sexual misconduct, or baby eating. It puts everyone on the defensive without having to demonstrate the assertion responsibly, which is why the first mention of Nazis in argument is usually sufficient to disregard anything else written or said by the person in question. I might have agreed with Sowell in his more general statements, just as conservatism (as in conservation) appeals as more and more slips away while history wears on, but after writing “Nazi,” he lost me entirely (again).

Sowell also raises several straw men just to knock them down, assessing (correctly or incorrectly, who can say?) what the public believes as though there were monolithic consensus. I won’t defend the general public’s grasp of history, ideological placeholders, legal maneuvers, or cultural touchstones. Plenty of comedy bits demonstrate the deplorable level of awareness of individual members of society as though they were fully representative of the whole. Yet plenty of people pay attention and accordingly don’t make the cut when offering up idiocy for entertainment. (What fun, ridiculing fools!) The full range of opinion on any given topic is not best characterized by however many idiots and ignoramuses can be found by walking down the street and shoving a camera and mic in their astonishingly unembarrassed faces.

So in closing, let me suggest that, in defiance of the title of this blog post, Thomas Sowell is in fact not a closer. Although he drops crumbs and morsels gobbled up credulously by those unable to recognize they’re being sold a line of BS, they do not make a meal. Nor should Sowell’s main point, i.e., the titular point of no return, be accepted when his burden of proof has not been met. That does not necessarily mean Sowell is wrong, in the sense that even a stopped clock tells the time correctly twice a day. The danger is that even if he’s partially correct some of the time, his perspective and program (nonpartisan freedom! whatever that may mean) must be considered with circumspection and disdain. Be highly suspicious before buying what Sowell is selling. Fundamentally, he’s a bullshit artist.

This intended follow-up has been stalled (pt. 1 here) for one simple reason: the premise presented in the embedded YouTube video is (for me at least) easy to dismiss out of hand and I haven’t wanted to revisit it. Nevertheless, here’s the blurb at the top of the comments at the webpage:

Is reality created in our minds, or are the things you can touch and feel all that’s real? Philosopher Bernardo Kastrup holds doctorates in both philosophy and computer science, and has made a name for himself by arguing for metaphysical idealism, the idea that reality is essentially a mental phenomenon.

Without going into point-by-point discussion, the top-level assertion, if I understand it correctly (not assured), is that material reality comes out of mental experience rather than the reverse. It’s a chicken-and-egg question with materialism and idealism (fancy technical terms not needed) each vying for primacy. The string of conjectures (mental gymnastics, really, briefly impressive until one recognizes how quickly they lose correlation with how most of us think about and experience reality) that inverts the basic relationship of inner experience to outer reality is an example of waaaay overthinking a problem. No doubt quite a lot of erudition can be brought to bear on such questions, but even if those questions were resolved satisfactorily on an intellectual level and an internally coherent structure or system were developed or revealed, it doesn’t matter or lead anywhere. Humans are unavoidably embodied beings. Each individual existence also occupies a tiny sliver of time (the timeline extending in both directions to infinity). Suggesting that mental experience is briefly instantiated in personhood but is actually drawn out of some well of souls, collective consciousness, or panpsychism and rejoins that pool in heaven, hell, or elsewhere upon expiration is essentially a religious claim. It’s also an attractive supposition, granting each of us not permanence or immortality but rather something somehow better (?) though inscrutable because it lies beyond perception (but not conceptualization). Except for an eternity of torments in hell, I guess, if one deserves that awful fate.

One comment about Kastrup. He presents his perspective (his findings?) with haughty derision of others who can’t see or understand what is so (duh!) obvious. He falls victim to the very same over-intellectualized flim-flam he mentions when dismissing materialists who need miracles and shortcuts to smooth over holes in their scientific/philosophical systems. The very existence of earnest disagreement by those who occupy themselves with such questions might suggest some humility, as in “here’s my explanation, they have theirs, judge for yourself.” But there’s a third option: the great unwashed masses (including nearly all our ancestors) for whom such questions are never even fleeting thoughts. It’s all frankly immaterial (funnily, both the right and wrong word at once). Life is lived and experienced fundamentally on its surface — unless, for instance, one has been incubated too long within the hallowed halls of academia, lost touch with one’s brethren, and become preoccupied with cosmic investigations. Something quite similar happens to politicians and the wealthy, who typically hyperfocus on gathering to themselves power and then exercising that power over others (typically misunderstood as simply pulling the levers and operating the mechanisms of society). No wonder their pocket of reality looks so strikingly different.

Heard a remark (can’t remember where) that most people these days would attack as openly ageist. Basically, if you’re young (let’s say below 25 years of age), then it’s your time to shut up, listen, and learn. Some might even say that true wisdom doesn’t typically emerge until much later in life, if indeed it appears at all. Exceptions only prove the rule. On the flip side, the energy, creativity, and indignation (e.g., “it’s not fair!”) needed to drive social movements are typically the domain of those who have less to lose and everything to gain, meaning those just starting out in adult life. A full age range is needed, I suppose, since society isn’t generally age stratified except at the extremes (childhood and advanced age). (Turns out that what to call old people and what counts as old is rather clumsy, though probably not especially controversial.)

With this in mind, I can’t help but wonder what’s going on with recent waves of social unrest and irrational ideology. Competing factions agitate vociferously in favor of one social/political ideology or another as though most of the ideas presented have no history. (Resemblances to Marxism, Bolshevism, and white supremacy are quite common. Liberal democracy, not so much.) Although factions aren’t by any means populated solely by young people, I observe that roughly a decade ago, higher education in particular transformed itself into an incubator for radicals and revolutionaries. Whether dissatisfaction began with the faculty and infected the students is impossible for me to assess. I’m not inside that intellectual bubble. However, urgent calls for radical reform have since moved well beyond the academy. A political program or ideology has yet to be put forward that I can support fully. (My doomer assessment of what the future holds forestalls knowing with any confidence what sort of program or ideology to pour my waning emotional and intellectual energy into.) It’s still fairly simple to criticize and denounce, of course. Lots of things are egregiously wrong in the world.

My frustration with what passes for political debate (if Twitter is any indication) is the marked tendency to resort immediately to comparisons with Yahtzees in general or Phitler in particular. It’s unhinged and unproductive. Yahtzees are cited as an emotional trigger, much like baseless accusations of racism send everyone scrambling for cover lest they be cancelled. Typically, the Yahtzee/Phitler comparison or accusation itself is enough to put someone on their heels, but wised-up folks (those lucky few) recognize the cheap rhetorical trick. The Yahtzee Protocol isn’t quite the same as Godwin’s Law, which states that the longer a discussion goes on (at Usenet in the earliest examples), the greater the likelihood that someone brings up Yahtzees and Phitler, ruining useful participation. The protocol has been deployed effectively in the Russia-Ukraine conflict, though I’m at a loss to determine in which direction. The mere existence of the now-infamous Azov Battalion, purportedly composed of Yahtzees, means that automatically, reflexively, the fight is on. Who can say what the background rate of Yahtzee sympathizers (whatever that means) might be in any fighting force or indeed the general population? Not me. Similarly, what threshold qualifies a tyrant to stand beside Phitler on a list of worst evers? Those accusations are flung around like cooked spaghetti thrown against the wall just to see what sticks. Even if the accusation does stick, what possible good does it do? Ah, I know: it makes the accuser look like a virtuous fool.

When the Canadian Freedom Convoy appeared out of nowhere over a month ago and managed to bring the Canadian capital (Ottawa, Ontario) to a grinding halt, the news was reported with a variety of approaches. Witnessing “democracy” in action, even though initiated by a small but important segment of society, became a cause célèbre, some rallying behind the truckers as patriots and others deploring them as terrorists. Lots of onlookers in the middle ground, to be certain, but the extremes tend to define issues these days, dividing people into permafeuding Hatfields and McCoys. The Canadian government stupidly branded the truckers as terrorists, finally dispersing the nonviolent protest with unnecessary force. The Canadian model sparked numerous copycat protests around the globe.

One such copycat protest, rather late to the party, is The People’s Convoy in the U.S., which is still underway. Perhaps the model works only in the first instance, or maybe U.S. truckers learned something from the Canadian example, such as the illegal seizure of crowdfunded financial support. Or maybe the prospect of confronting the U.S. military in one of the most heavily garrisoned locations in the world gave pause. (Hard to imagine Ottawa, Ontario, being ringed by military installations like D.C. is.) Whatever the reason, The People’s Convoy has not attempted to blockade D.C. Nor has the U.S. convoy been as widely reported as was the Canadian version, which was a grass-roots challenge to government handling of the pandemic. Yeah, there’s actually an underlying issue. Protesters are angry about public health mandates and so-called vaccine passports that create a two-tier society. Regular folks must choose between bodily autonomy and freedom of movement on the one hand and, on the other, compliance with mandates that have yet to prove themselves effective against spread of the virus. Quite a few people have already chosen to do as instructed, whether out of earnest belief in the efficacy of mandated approaches or to keep from falling into the lower of the two tiers. So they socially distance, wear masks, take the jab (and follow-up boosters), and provide papers upon demand. Protesters are calling for all those measures to end.

If the Canadian convoy attracted worldwide attention, the U.S. convoy has hardly caused a stir and is scarcely reported outside the foreign press and a few U.S. superpatriot websites. I observed years ago that the U.S. government essentially stonewalled The Republic of Lakota’s attempt at secession. Giving little or no official public attention to The People’s Convoy, especially while attention has turned to war between Russia and Ukraine, boils down to “move along, nothing to see.” Timing for the U.S. truckers could not possibly be worse. However, my suspicion is that, privately, contingency plans were made to avoid the embarrassment the Canadian government suffered, which must have included instructing the media not to report on the convoy and getting search engines to demote search results that might enable the movement to go viral, so to speak. The conspiracy of silence is remarkable. Yet people line the streets and highways in support of the convoy. Sorta raises the question “what if they threw a protest but no one came?” A better question might be “what if they started a war but no one fought?”

Gross (even criminal) mismanagement of the pandemic is quickly being shoved down the memory hole as other crises and threats displace a two-year ordeal that resulted in significant loss of life and even greater, widespread loss of livelihoods and financial wellbeing among many people who were already teetering on the edge. Psychological impacts are expected to echo for generations. Frankly, I’m astonished that a far-reaching civil crack-up hasn’t already occurred. Yet despite these foreground tribulations and more besides (e.g., inflation shifting into hyperinflation, food and energy scarcity, the financial system failing every few years, and the epistemological crisis that has made every institution flatly untrustworthy), the background crisis is still the climate emergency. Governments around the world, for all the pomp and circumstance of the IPCC and periodic cheerleading conferences, have stonewalled that issue, too. Some individuals take the climate emergency quite seriously; no government does, at least by their actions. Talk is comparatively cheap. Like foreground and background, near- and far-term prospects just don’t compete. Near-term appetites and desires always win. Psychologists report that deferred gratification (e.g., the marshmallow test) is among the primary predictors of future success for individuals. Institutions, governments, and societies are in aggregate mindless and can’t formulate plans beyond the next election cycle, academic year, or business quarter to execute programs that desperately need doing. This may well be why political theorists observe that liberal democracies are helpless to truly accomplish things, whereas authoritarian regimes centered on an individual (i.e., a despot) can get things done though at extreme costs to members of society.