Posts Tagged ‘Education’

Renewed twin memes Universal Basic Income (UBI) and Debt Jubilees (DJ) have been in the news recently. I write renewed because the two ideas are quite literally ancient, unlearnt lessons that are enjoying revitalized interest in the 21st century. Both are capable of sophisticated support from historical and contemporary study, which I admit I haven’t undertaken. However, others have done the work and make their recommendations with considerable authority. For instance, Andrew Yang, interviewed repeatedly as a 2020 U.S. presidential candidate, has made UBI the centerpiece of his policy proposals, whereas Michael Hudson has a new book out called … and forgive them their debts: Lending, Foreclosure and Redemption — From Bronze Age Finance to the Jubilee Year that offers a forgotten history of DJ.

Whenever UBI or DJ comes up in conversation, the most obvious, predictable response I hear (containing a kernel of truth) is that either proposal would reward the losers in today’s capitalist regime: those who earn too little or those who carry too much debt (often a combination of both). Never mind that quality education and economic opportunities have been steadily withdrawn over the past half century. UBI and DJ would thus be giveaways, and I daresay nothing offends a sense of fairness more than others getting something for nothing. Typical resentment goes, “I worked hard, played by the rules, and met my responsibilities; why should others who slacked, failed, or cheated get the benefit of my hard work?” It’s a commonplace “othering” response, failing to recognize that as societies we are completely interconnected and interdependent. Granting the winners in the capitalist contest a pass on fair play is also a major assumption. The most iconic supreme winners are all characterized by shark-like business practices: taking advantage of tax loopholes, devouring everything, and shrewdly understanding their predatory behavior not in terms of producing value but rather as gobbling or destroying competition to gain market share. More than a few companies these days are content to operate for years on venture capital, reporting one quarterly loss after another until rivals are vanquished. One such company is the test case, though how many times its success can be repeated is unknown.

With my relative lack of economic study and sophistication, I take my lessons instead from the children’s game Monopoly. Even as an oversimplification of the dynamics of capital formation and ownership, Monopoly reaches its logical conclusion well before its actual end, where one person “wins” everything. The balancing point when the game is no longer worth playing is debatable, but some have found through experience the answer is “before it starts.” It’s just no fun bankrupting other players utterly through rent seeking. The no-longer-fun point is analogous to late-stage capitalism, where the conclusion has not yet been fully reached but is nonetheless clear. The endgame is, in a word, monopoly — the significant element being “mono,” as in there can be only one winner. (Be careful what you wish for: it’s lonely and resentful at the top.) Others take a different, aspirational lesson from Monopoly, which is to figure out game dynamics, or game the game, so that the world can be taken by force. One’s growing stranglehold on others disallows fair negotiation and cooperation (social rather than capitalist values) precisely because one party holds all the advantages, leading to exploitation of the many for the benefit of a few (or one).

Another unlearnt ancient lesson is that nothing corrupts so easily or so much as success, power, fame, wealth. Many accept that corruption willingly; few take the lesson to heart. (Disclosure: I’ve sometimes embarked on the easy path to wealth by buying lottery tickets. Haven’t won, so I’m not corrupted yet. Another case of something for nearly nothing, or for those gambling away their rent and grocery money, nothing for something.) Considering that money makes the world go around, especially in the modern age, the dynamics of capitalism are inescapable and the internal contradictions of capitalism are well acknowledged. The ancient idea of DJ is essentially a reset button depressed before the endgame leads to rebellion and destruction of the ownership class. Franklin D. Roosevelt is credited in some accounts of history as having saved capitalism from that near endgame by transferring wealth back to the people through the New Deal and the war economy. Thus, progressives are calling for a Green New Deal, though it’s not clear they are aware that propping up capitalism only delays its eventual collapse through another couple of cycles (reversals) of capital flow. Availability of cheap, plentiful energy that allowed economies (and populations) to balloon over the past two and a half centuries cannot continue for much longer, so even if we get UBI or DJ, the endgame remains unchanged.


Everyone is familiar with the convention in entertainment media where characters speak without the use of recognizable language. (Not really related to the convention of talking animals.) The first instance I can recall (someone correct me if earlier examples are to be found) is the happy-go-lucky bird Woodstock from the old Peanuts cartoons (do kids still recognize that cast of characters?), whose dialogue was shown graphically as a series of vertical lines.

When the cartoon made its way onto TV for holiday specials, its creator Charles Schulz used the same convention to depict adults, never shown onscreen but with dialogue voiced by a Harmon-muted trombone. Roughly a decade later, two characters from the Star Wars franchise “spoke” in languages only other Star Wars characters could understand, namely, Chewbacca (Chewie) and R2-D2. More recently, the character Groot from Guardians of the Galaxy (known to me only through the Marvel movie franchise, not through comic books) speaks only one line of dialogue, “I am Groot,” which is understood as full speech by other Guardians characters. When behemoths larger than a school bus (King Kong, Godzilla, Jurassic dinosaurs, Cloverfield, Kaiju, etc.) appear, the characters are typically denied the power of speech beyond the equivalent of a lion’s roar. (True villains talk little or not at all as they go about their machinations — no monologuing! unless it’s a James Bond film. An exception notable for its failure to charm audiences is Ultron, who wouldn’t STFU. You can decide for yourself which is the worse kind of villainy.)

This convention works well enough for storytelling and has the advantage of allowing the reader/viewer to project onto otherwise blank speech. However, when imported into the real world, especially in politics, the convention founders. There is no Babel fish universal translator inserted in the ear to transform nonsense into coherence. The obvious example of babblespeech is 45, whose speech when off the teleprompter is a series of rambling non sequiturs, free associations, slogans, and sales pitches. Transcripts of anyone’s extemporaneous speech reveal lots of restarts and blind alleys; we all interrupt ourselves to redirect. However, the word salad that substitutes for meaningful content in 45’s case is tragicomic: alternately entirely frustrating or comically entertaining depending on one’s objective. Satirical news shows fall into the second category.

45 is certainly not the first. Sarah Palin in her time as a media darling (driver of ratings and butt of jokes — sound familiar?) had a knack for crazy speech combinations that were utter horseshit yet oddly effective for some credulous voters. She was even a hero to some (nearly a heartbeat away from being the very first PILF). We’ve also now been treated to a series of public interrogations where a candidate for a cabinet post or an accused criminal offers testimony before a congressional panel. Secretary of Education Betsy DeVos famously evaded simple yes/no questions during her confirmation hearing, and Supreme Court Justice Brett Kavanaugh similarly refused to provide direct answers to direct questions. Unexpectedly, sacrificial lamb Michael Cohen gave direct answers to many questions, but his interlocutors then didn’t quite know how to respond, considering their experience and expectation that no one answers appropriately.

What all this demonstrates is that there is often a wide gulf between what is said and what is heard. In the absence of what might be understood as effective communication (honest, truthful, and forthright), audiences and voters fill in the blanks. Ironically, we also can’t handle too much truth when confronted by its awfulness. None of this is a problem in storytelling, but when found in political narratives, it’s emblematic of how dysfunctional our communications have become, and with them, the clear thought and principled activity of governance.

As I reread what I wrote 2.5 years ago in my first blog on this topic, I surmise that the only update needed to my initial assessment is a growing pile of events that demonstrate my thesis: our corrupted information environment is too taxing on human cognition, with the result that a small but growing segment of society gets radicalized (wound up like a spring) and relatively random individuals inevitably pop, typically in a self-annihilating gush of violence. News reports bear this out periodically, as one lone-wolf kook after another takes it upon himself (are there any examples of females doing this?) to shoot or blow up some target, typically chosen irrationally or randomly though for symbolic effect. More journalists and bloggers are taking note of this activity and evolving or resurrecting nomenclature to describe it.

The earliest example I’ve found offering nomenclature for this phenomenon is a blog with a single post from 2011 (oddly, no follow-up) describing so-called stochastic terrorism. Other terms include syntactic violence, semantic violence, and epistemic violence, but they all revolve around the same point. Whether on the sending or receiving end of communications, some individuals are particularly adept at or sensitive to dog whistles that over time activate and exacerbate tendencies toward radical ideology and violence. Wired has a brief article from a few days ago discussing stochastic terrorism as jargon, which is basically what I’m doing here. Admittedly, the last of these terms, epistemic violence (alternative: epistemological violence), ranges farther afield from the end effect I’m calling wind-up toys. For instance, this article discussing structural violence is much more academic in character than when I blogged on the same term (one of a handful of “greatest hits” for this blog that return search-engine hits with some regularity). Indeed, just about any of my themes and topics can be given a dry, academic treatment. That’s not my approach (I gather opinions differ on this account, but I insist that real academic work is fundamentally different from my armchair cultural criticism), but it’s entirely valid despite being a bit remote for most readers. One can easily get lost down the rabbit hole of analysis.

If indeed it’s mere words and rhetoric that transform otherwise normal people into criminals and mass murderers, then I suppose I can understand the distorted logic of the far Left that equates words and rhetoric themselves with violence, followed by the demand that they be provided with warnings and safe spaces lest they be triggered by what they hear, read, or learn. As I understand it, the fear is not so much that vulnerable, credulous folks will be magically turned into automatons wound up and set loose in public to enact violent agendas but instead that virulent ideas and knowledge (including many awful truths of history) might cause discomfort and psychological collapse akin to what happens when targets of hate speech and death threats are reduced, say, to quivering agoraphobia. Desire for protection from harm is thus understandable. The problem with such logic, though, is that protections immediately run afoul of free speech, a hallowed but misunderstood American institution that preempts quite a few restrictions many would have placed on the public sphere. Protections also stall learning and truth-seeking straight out of the gate. And besides, preemption of preemption doesn’t work.

In information theory, the notion of a caustic idea taking hold of an unwilling person and having its wicked way with him or her is what’s called a mind virus or meme. The viral metaphor accounts for the infectious nature of ideas as they propagate through the culture. For instance, every once in a while, a charismatic cult emerges and inducts new members, a suicide cluster appears, or suburban housewives develop wildly disproportionate phobias about Muslims or immigrants (or worse, Muslim immigrants!) poised at their doorsteps with intentions of rape and murder. Inflaming these phobias, often done by pundits and politicians, is precisely the point of semantic violence. Everyone is targeted but only a few are affected to the extreme of acting out violently. Milder but still invalid responses include the usual bigotries: nationalism, racism, sexism, and all forms of tribalism, “othering,” or xenophobia that seek to insulate oneself safely among like folks.

Extending the viral metaphor, to protect oneself from infectious ideas requires exposure, not insulation. Think of it as a healthy immune system built up gradually, typically early in life, through slow, steady exposure to harm. The alternative is hiding oneself away from germs and disease, which has the ironic result of weakening the immune system. For instance, I learned recently that peanut allergies can be overcome by gradual exposure — a desensitization process — but are exacerbated by removal of peanuts from one’s environment and/or diet. This is what folks mean when they say the answer to hate speech is yet more (free) speech. The nasty stuff can’t be dealt with properly when it’s quarantined, hidden away, suppressed, or criminalized. Maybe there are exceptions. Science fiction entertains those dangers with some regularity, where minds are shunted aside to become hosts for invaders of some sort. That might be overstating the danger somewhat, but violent eruptions may lend it some credence.

I’ve written a different form of this blog post at least once before, maybe more. Here’s the basic thesis: the bizarro unreality of the world in which we now live is egregious enough to make me wonder if we haven’t veered wildly off the path at some point and now exist within reality prime. I suppose one can choose any number of historical inflections to represent the branching point. For me, it was the reelection of George W. Bush in 2004. (The 9/11 attacks and “wars” in Afghanistan and Iraq had already occurred or commenced by then, and it had already been revealed as well that the lies — Saddam had WMDs — that sold the American public on the Iraq “war” were effective and remain so today.) Lots of other events changed the course of history, but none other felt as much to me like a gut punch precisely because, in the case of the 2004 presidential election, we chose our path. I fantasized waking up from my reality-prime nightmare but eventually had to grudgingly accept that if multiverses exist, mine had become one where we chose (collectively, and just barely) to keep in office an executive who behaved like a farce of stupidity. Well, joke’s on us. Twelve years later, we chose someone even more stupid, though with a “certain serpentine cunning,” and with arguably the worst character of any U.S. executive in living history.

So what to do in the face of this dysfunctional state of affairs? Bret Weinstein below has ideas. (As usual, I’m quite late, embedding a video that by Internet standards is already ancient. I also admit this is equivalent to a smash cut because I don’t have a particularly good transition or justification for turning so suddenly to Weinstein.) Weinstein is an evolutionary biologist, so no surprise that the approach he recommends is borne out of evolutionary thinking. In fairness, a politician would logically recommend political solutions, a financier would recommend economic solutions, and other professionals would seek solutions from within their areas of expertise.

The title of the interview is “Harnessing Evolution,” meaning Weinstein suggests we use evolutionary models to better understand our own needs and distortions to guide or plot proper path(s) forward and get back on track. Never mind that a healthy minority of the U.S. public rejects evolution outright while an additional percentage takes a hybrid stance. While I’m impressed that Weinstein has an answer for everything (pedagogue or demagogue or both?) and has clearly thought through sociopolitical issues, I daresay he’s living in reality double-prime if he thinks science education can be a panacea for what ails us. My pessimism is showing.

Among the many complaints that cross my path in the ongoing shitshow that American culture has become is an article titled “The Tragic Decline of Music Literacy (and Quality),” authored by Jon Henschen. His authorship is a rather unexpected circumstance since he is described as a financial advisor rather than an authority on music, technology, or culture. Henschen’s article reports on (without linking to it as I do) an analysis by Joan Serrà, a postdoctoral scholar at the Artificial Intelligence Research Institute, and colleagues. Curiously, the analysis has been reported on and repackaged by quite a few news sites and blogs since its publication in 2012. For example, the YouTube video embedded below makes many of the same arguments and cites the so-called Millennial Whoop, a hook or gesture now ubiquitous in pop music that’s kinda sorta effective until one recognizes it too manifestly and it begins to sound trite, then irritating.

I won’t recount or summarize arguments except to say that neither the Henschen article nor the video discusses the underlying musical issues quite the way a trained musician would. Both are primarily quantitative rather than qualitative, equating an observed decrease in variety of timbre, loudness, and pitch/harmony with worse music (less is worse). Lyrical (or poetical) complexity has also retreated. It’s worth noting, too, that the musical subject is narrowed to recorded pop music from 1955 to 2010. There’s obviously a lot to know about pop music, but it’s not generally the subject of serious study among academic musicians. AFAIK, no accredited music school offers degrees in pop music. Berklee College of Music probably comes the closest. (How exactly does songwriting as a major differ from composition?) That standard may be relaxing.

Do quantitative arguments demonstrate degradation of pop music? Do reduced variety, range, and experimentation make pop music the equivalent of a paint-by-the-numbers image with the self-imposed limitation that allows only unmixed primary colors? Hard to say, especially if one (like me) has a traditional education in art music and already regards pop music as a rather severe degradation of better music traditions. Reduction of the artistic palette from the richness and variety of, say, 19th-century art music proceeded through the 20th century (i.e., musical composition is now understood by the lay public to mean songs, which is just one musical genre among many) to a highly refined hit-making formula that has been proven to work remarkably well. Musical refinements also make use of new technological tools (e.g., rhythm machines, autotune, digital soundfield processing), which is another whole discussion.

Musical quality isn’t mere quantity (where more is clearly better), however, and some manage pretty well with limited resources. Still, a sameness or blandness is evident and growing within a genre that is already rather narrowly restricted to using drums, guitars, keyboards, and vocals. The antidote Henschen suggests (incentivizing musical literacy and participation, especially in schools) might prove salutary, but such recommendations are ubiquitous throughout modern history. The magical combination of factors that actually catalyzes creativity, as opposed to degradation, is rather quixotic. Despite impassioned pleas not to allow quality to disappear, nothing could be more obvious than that culture drifts according to its own whims (to anthropomorphize) rather than being steered by well-meaning designs.

More to say in part 2 to follow.

Heard a curious phrase used with some regularity lately, namely, that “we’ve Nerfed the world.” Nerf refers to the soft, foam toys popular in the 70s and beyond that made balls and projectiles essentially harmless. The implication of the phrase is that we’ve become soft and vulnerable as a result of removing the routine hazards (physical and psychological) of existence. For instance, in the early days of cell phones, I recall padded street poles (like endzone goalposts) to prevent folks with their attention fixated too intently on their phones from harming themselves when stumbling blindly down the sidewalk.

Similarly, anti-bullying sentiment has reached fever pitch such that no level of discomfort (e.g., simple name calling) can be tolerated lest the victim be scarred for life. The balancing point between preparing children for competitive realities of the world and protecting their innocence and fragility has accordingly moved heavily in favor of the latter. Folks who never develop the resilience to suffer even modest hardships are snowflakes, and they agitate these days on college campuses (and increasingly in workplaces) to withdraw into safe spaces where their beliefs are never challenged and experiences are never challenging. The other extreme is a hostile, cruel, or at least indifferent world where no one is offered support or opportunity unless he or she falls within some special category, typically connected through family to wealth and influence. Those are the entitled.

A thermostatic response (see Neil Postman for more on this metaphor) is called for here. When things veer too far toward one extreme or the other, a correction is inevitable. Neither extreme is healthy for a functioning society, though the motivations are understandable. Either it’s toughen people up by providing challenge, which risks brutalizing people unnecessarily, or protect people from the rigors of life or consequences of their own choices to such a degree that they become dependent or dysfunctional. Where the proper balance lies is a question for the ages, but I daresay most would agree it’s somewhere squarely in the middle.

Jonathan Haidt and Greg Lukianoff have a new book out called The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure (2018), which is an expansion of an earlier article in The Atlantic of the same title. (Both are callbacks to Allan Bloom’s notorious The Closing of the American Mind (1987), which I’ve read twice. Similar reuse of a famous title references Robert Bork’s Slouching Toward Gomorrah (1996).) I haven’t yet read Haidt’s book and doubt I will bother, but I read the source article when it came out. I also don’t work on a college campus and can’t judge contemporary mood compared to when I was an undergraduate, but I’m familiar with the buzzwords and intellectual fashions reported by academics and journalists. My alma mater is embroiled in these battles, largely in connection with identity politics. I’m also aware of detractors who believe claims of Haidt and Lukianoff (and others) are essentially hysterics limited to a narrow group of progressive colleges and universities.

As with other cultural developments that lie outside my expertise, I punt when it comes to offering (too) strong opinions. However, with this particular issue, I can’t help but think that the two extremes coexist. A noisy group of students attending highly competitive institutions of higher education lead relatively privileged lives compared to those outside the academy, whereas high school grads and dropouts not on that track (and indeed grads of less elite schools) frequently struggle getting their lives going in early adulthood. Most of us face that struggle early on, but success, despite nonsensical crowing about the “best economy ever” from the Oval Office, is difficult to achieve now as the broad socioeconomic middle is pushed toward the upper and lower margins (mostly lower). Stock market notwithstanding, economic reality is frankly indifferent to ideology.

One of the very best lessons I took from higher education was recognizing and avoiding the intentional fallacy — in my own thinking no less than in that of others. Although the term arguably has more to do with critical theory dealing specifically with texts, I learned about it in relation to abstract fine arts, namely, painting and music. For example, the enigmatic expression of the Mona Lisa by Leonardo da Vinci continues to spark inquiry and debate. What exactly does that smile mean? Even when words or programs are included in musical works, it’s seductively easy to conclude that the composer intends this or the work itself means that. Any given work purportedly allows audiences to peer into the mind of its creator(s) to interrogate intent. Conclusions thus drawn, however, are notoriously unreliable though commonplace.

It’s inevitable, I suppose, to read intent into artistic expression, especially when purpose feels so obvious or inevitable. Similar excavations of meaning and purpose are undertaken within other domains of activity, resulting in no end of interpretation as to surface and deep strategies. Original intent (also originalism) is a whole field of endeavor with respect to interpretation of the U.S. Constitution and imagining the framers’ intent. Geopolitics is another domain where hindsight analysis results in some wildly creative but ultimately conjectural interpretations of events. Even where authorial (and political) intent is explicitly recorded, such as with private diaries or journals, the possibility of deceptive intent by authors keeps everyone wondering. Indeed, although “fake news” is modern coin, a long history of deceptive publishing practice well beyond the adoption of a nom de plume attests to hidden or unknowable intent, making “true intent” a meta property.

The multi-ring circus that the modern information environment has become, especially in the wake of electronic media (e.g., YouTube channels) produced by anyone with a camera and an Internet connection, is fertile ground for those easily ensnared by the intentional fallacy. Several categories of intent projected onto content creators come up repeatedly: profit motive, control of the narrative (no small advantage if one believes this blog post), setting the record straight, correcting error, grandstanding, and trolling for negative attention. These categories are not mutually exclusive. Long ago, I pointed to the phenomenon of arguing on-line and how it typically accomplishes very little, especially as comment threads lengthen and civility breaks down. These days, comments are an Internet legacy and/or anachronism that many content creators persist in offering to give the illusion of a wider discussion but in fact roundly ignore. Most blogs and channels are actually closed conversations. Maybe a Q&A follows the main presentation when held before an audience, but video channels are more often one-way broadcasts addressing an audience but not really listening. Public square discussion is pretty rare.

Some celebrate this new era of broadcasting, noting with relish how the mainstream media is losing its former stranglehold on attention. Such enthusiasm may be transparently self-serving but nonetheless rings true. A while back, I pointed to New Media Rockstars, which traffics in nerd culture entertainment media, but the term could easily be expanded to include satirical news, comedy, and conversational webcasts (also podcasts). Although some folks are rather surprised to learn that an appetite for substantive discussion and analysis exists among the public, I surmise that the shifting media landscape and disintegrated cultural narrative have bewildered a large segment of the public. The young in particular are struggling to make sense of the world, figure out what to be in life and how to function, and working out an applied philosophy that eschews more purely academic philosophy.

By way of example of new media, let me point to a trio of YouTube channels I only recently discovered. Some More News parodies traditional news broadcasts by sardonically (not quite the same as satirically) calling bullshit on how news is presented. Frequent musical cues between segments make me laugh. Unlike the mainstream media, which are difficult not to regard as propaganda arms of the government, Some More News is unapologetically liberal and indulges in black humor, which doesn’t make me laugh. Its raw anger and exasperation are actually a little terrifying. The second YouTube channel is Three Arrows, a sober, thorough debunking of news and argumentation found elsewhere in the public sphere. The speaker, who doesn’t appear onscreen, springs into action especially when accusations of current-day Nazism come up. (The current level of debate has devolved to recklessly calling nearly everyone a Nazi at some stage. Zero points scored.) Historical research often puts things into proper context, such as the magnitude of the actual Holocaust compared to some garden-variety racist running his or her mouth comparatively harmlessly. The third YouTube channel is ContraPoints, which is rather fanciful and profane but remarkably erudite considering the overall tone. Labels and categories are explained for those who may not have working definitions at the ready for every phrase or ideology. Accordingly, there is plenty of jargon. The creator also appears as a variety of different characters to embody various archetypes and play devil’s advocate.

While these channels may provide abundant information, correcting error and contextualizing better than most traditional media, it would be difficult to conclude they’re really moving the conversation forward. Indeed, one might wonder why bother preparing these videos considering how time consuming it has to be to do research, write scripts, assemble pictorial elements, etc. I won’t succumb to the intentional fallacy and suggest I know why they bother holding these nondebates. Further, unless they’re straight-up comedy, I wouldn’t say they’re exactly entertaining, either. Highly informative, perhaps, if one pays close attention to the frenetic online pace and/or mines for content (e.g., studying transcripts or following links). Interestingly, within a fairly short period of time, these channels are establishing their own rhetoric, sometimes useful, other times too loose to make strong impressions. It’s not unlike the development of new stylistic gestures in music or painting. What, if anything, worthwhile will emerge from the scrum remains to be seen.

If the previous blog in this series was about how some ideas and beliefs become lodged or stuck in place (fixity bias), this one is about how other ideas are notoriously mutable (flexibility bias), especially the latest, loudest thing to turn one’s head and divert attention. What makes any particular idea (or is it the person?) prone to one bias or another (see this list) is mysterious to me, but my suspicion is that a character disposition toward openness and adherence to authoritative evidence figure prominently in the case of shifting opinion. In fact, this is one of the primary problems with reason: if evidence can be deployed in favor of an idea, those who consider themselves “reasonable” and thus rely on accumulation of evidence and argumentation to sharpen their thinking are vulnerable to the latest “finding” or study demonstrating sumpinorutha. It’s the intellectual’s version of “New! Improved!”

Sam Harris exploits rationalism to argue against the existence of free will, saying that if sufficient evidence can be brought to bear, a disciplined thinker is compelled to subscribe to the conclusions of reasoned argument. Choice and personal agency (free will) are removed. I find that an odd way to frame the issue. Limitless examples of lack of choice are nonequivalent to the destruction of free will. For example, one can’t decide not to believe in gravity and fly up into the air more than a few inches. One can’t decide that time is an illusion (as theoretical physicists now instruct) and decide not to age. One can’t decide that pooping is too disgusting and just hold it all in (as some children attempt). Counter-evidence doesn’t even need to be argued because almost no one pretends to believe such nonsense. (Twisting one’s mind around to believe in the nonexistence of time, free will, or the self seems to be the special province of hyper-analytical thinkers.) Yet other types of belief/denial — many of them conspiracy theories — are indeed choices: religion, flat Earth, evolution, the Holocaust, the moon landings, 9/11 truth, who really killed JFK, etc. Lots of evidence has been mustered on different sides (multiple facets, actually) of each of these issues, and while rationalists may be compelled by a preponderance of evidence in favor of one view, others are free to fly in the face of that evidence for reasons of their own or adopt by default the dominant narrative and not worry or bother so much.

The public struggles in its grasp of truthful information, as reported in a Pew Research Center study called “Distinguishing Between Factual and Opinion Statements in the News.” Here’s the snapshot:

The main portion of the study, which measured the public’s ability to distinguish between five factual statements and five opinion statements, found that a majority of Americans correctly identified at least three of the five statements in each set. But this result is only a little better than random guesses. Far fewer Americans got all five correct, and roughly a quarter got most or all wrong.

Indiscriminate adoption by many Americans of a faulty viewpoint, or more pointedly, the propaganda and “fake news” on offer throughout the information environment, carries the implication that disciplined thinkers are less confused about truth or facts, taking instead a rational approach as the basis for belief. However, I suggest that reason suffers its own frailties not easily recognized or acknowledged. In short, we’re all confused, though perhaps not hopelessly so. For several years now, I’ve sensed the outline of a much larger epistemological crisis where quintessential Enlightenment values have come under open attack. The irony is that the wicked stepchild of science and reason — runaway technology — is at least partially responsible for this epochal conflict. It’s too big an idea to grok fully or describe in a paragraph or two, so I’ll simply point to it and move on.

My own vulnerability to flexibility bias manifests specifically in response to appeals to authority. Although well educated, a lifelong autodidact, and an independent thinker, I’m careful not to succumb to the hubris of believing I’ve got it all figgered. Indeed, it’s often said that as one gains expertise and experience in the world, the certainty of youth yields to caution precisely because the mountain of knowledge and understanding one lacks looms larger even as one accumulates wisdom. Bodies of thought become multifaceted and all arguments must be entertained. When an expert, researcher, or academic proposes something outside my wheelhouse, I’m a sitting duck: I latch onto the latest, greatest utterance as the best truth yet available. I don’t fall for it nearly so readily with journalists, but I do recognize that some put in the effort and gain specialized knowledge and context well outside the bounds of normal life, such as war reporters. Various perverse incentives deeply embedded in the institutional model of journalism, especially those related to funding, make it nearly impossible to maintain one’s integrity without becoming a pariah, so only a handful have kept my attention. John Pilger, Chris Hedges, and Matt Taibbi figure prominently.

By way of example, one of the topics that has been of supreme interest to me, though its historical remove renders it rather toothless now, is the cataclysm(s) that occurred at the conclusion of the last ice age roughly 12,000 years ago. At least three hypotheses (of which I’m aware) have been proposed to explain why glacial ice disappeared suddenly over the course of a few weeks, unleashing the Biblical Flood: Earth crust displacement, asteroidal impact(s), and coronal mass ejection(s). As with most hypotheses, the evidence is both physical and conjectural, but a sizable body of evidence and argumentation for each is available. As I became familiar with each, my head turned and I became a believer, sorta. Rather than “last one is the rotten egg,” however, the latest, most recent one typically displaces the previous one. No doubt another hypothesis will appear to turn my head and disorient me further. With some topics, especially politics, new information piling on top of old is truly dizzying. And as with many topics I’ve written about, I simply lack the expertise to referee competing claims, so whatever beliefs I eventually adopt are permanently provisional.

Finally, my vulnerability to authoritative appeal also reacts to the calm, unflappable tones and complexity of construction of speakers such as Sam Harris, Steven Pinker, and Charles Murray. Their manner of speaking is sometimes described pejoratively as “academese,” though only Pinker has a teaching position. Murray in particular relies heavily on psychometrics, which may not be outright lying with statistics but allows him to rationalize (literally) extraordinarily taboo subjects. In contrast, it’s easy to disregard pundits and press agents foaming and fulminating over their pet narratives. Yet I also recognize that with academese, I’m being soothed more by style than by substance, a triumph of form over function. In truth, this communication style is an appeal to emotion masquerading as an appeal to authority. I still prefer it, just as I prefer a steady, explanatory style of journalism over the snarky, reinterpretive style of disquisition practiced by many popular media figures. What communicates most effectively to me and (ironically) pushes my emotional buttons also weakens my ability to discriminate and think properly.

Yet still more to come in part 5.

Language acquisition in early childhood is aided by heavy doses of repetition and the memorable structure of nursery rhymes, songs, and stories that are repeated ad nauseam to eager children. Please, again! Again, again … Early in life, everything is novel, so repetition and fixity are positive attributes rather than causes for boredom. The music of one’s adolescence is also the subject of endless repetition, typically through recordings (radio and Internet play, mp3s played over headphones or earbuds, dances and dance clubs, etc.). Indeed, most of us have mental archives of songs heard over and over to the point that the standard version becomes canonical: that’s just the way the song goes. When someone covers a Beatles song, it’s recognizably the same song, yet it’s not the same and may even sound wrong somehow. (Is there any acceptable version of Love Shack besides that of the B-52s?) Variations of familiar folk tales and folk songs, or different phrasing in The Lord’s Prayer, imprinted in memory through sheer repetition, also possess discomfiting differences, sometimes being offensive enough to cause real conflict. (Not your Abrahamic deity, mine!)

Performing musicians traverse warhorses many times in rehearsal and public performance so that, after an undetermined point, how one performs a piece just becomes how it goes, admitting few alternatives. Casual joke-tellers may improvise over an outline, but as I understand it, the pros hone and craft material over time until very little is left to chance. Anyone who has listened to old comedy recordings of Bill Cosby, Steve Martin, Richard Pryor, and others has probably learned the jokes (and timing and intonation) by heart — again through repetition. It’s strangely comforting to be able to go back to the very same performance again and again. Personally, I have a rather large catalogue of classical music recordings in my head. I continue to seek out new renditions, but often the first version I learned becomes the default version, the way something goes. Dislodging that version from its definitive status is nearly impossible, especially when it’s the very first recording of a work (like a Beatles song). This is also why live performance often pales in comparison with the studio recording.

So it goes with a wide variety of phenomena: what is first established as how something goes easily becomes canonical, dogmatic, and unquestioned. For instance, the origin of the universe in the big bang is one story of creation to which many still hold, while various religious creation myths hold sway with others. News that the big bang has been dislodged from its privileged position goes over just about as well as dismissing someone’s religion. Talking someone out of a fixed belief is hardly worth the effort because some portion of one’s identity is anchored to such beliefs. Thus, to question a cherished belief is to impeach a person’s very self.

Political correctness is the doctrine that certain ideas and positions have been worked out effectively and need (or allow) no further consideration. Just subscribe and get with the program. Don’t bother doing the mental work or examining the issue oneself; things have already been decided. In science, steady evidentiary work to break down a fixed understanding is often thankless, or thanks arrives posthumously. This is the main takeaway of Thomas Kuhn’s The Structure of Scientific Revolutions: paradigms are changed as much through attrition as through rational inquiry and accumulation of evidence.

One of the unanticipated effects of the Information and Communications Age is the tsunami of information to which people have ready access. Shaping that information into a cultural narrative (not unlike a creation myth) is either passive (one accepts the frequently shifting dominant paradigm without compunction) or active (one investigates for oneself as an attribute of the examined life, which for wise folks never really arrives at a destination, since it’s the journey that’s the point). What’s a principled rationalist to do in the face of a surfeit of alternatives available for or even demanding consideration? Indeed, with so many self-appointed authorities vying for control over cultural narratives like the editing wars on Wikipedia, how can one avoid the dizzying disorientation of gaslighting and mendacity so characteristic of the modern information environment?

Still more to come in part 4.

Fully a decade ago, I analyzed, at greater length than I usually allow myself, an article from The New Yorker that examined how media trends were pushing away from literacy (the typographic mind) toward listening and viewing (orality) as primary modes of information gathering and entertainment. The trend was already underway with the advent of radio, cinema, and television, which moved the relatively private experience of silent reading to a public or communal realm as people shared experiences around emerging media. The article took particular aim at TV. In the intervening decade, media continue to contrive new paths of distribution, moving activity back to private information environments via the smart phone and earbuds. The rise of the webcast (still called podcast by some, though that’s an anachronism), which may include a video feed or display a static image over discussion and/or lecture, and streaming services are good examples. Neither has fully displaced traditional media just yet, but the ongoing shift in financial models is a definite harbinger of relentless change.

This comes up again because, interestingly, The New Yorker included with an article I popped open on the Web an audio file of the very same article read by someone not the author. The audio was 40 minutes, whereas the article may have taken me 15 to 20 minutes had I read it. For undisclosed reasons, I listened to the audio. Not at all surprisingly, I found it odd and troublesome. Firstly, though the content was nominally investigative journalism (buttressed by commentary), hearing it read to me made it feel like, well, storytime, meaning it was fiction. Secondly, since my eyes weren’t occupied with reading, they sought other things to do and thus fragmented my attention.

No doubt The New Yorker is pandering to folks who would probably not be readers but might well become listeners. In doing so, it’s essentially conceding the fight, admitting that the effort to read is easily eclipsed by the effortlessness of listening. As alternative and unequal modes of transmitting the content of the article, however, it strikes me as an initiative hatched not by writers and editors capable of critical thought and addressing a similarly enabled readership but by a combination of sales and marketing personnel attempting to capture a widening demographic of listeners (read: nonreaders). Navigating to the article might be a modest extra complication, but if a link to the audio file can be tweeted out (I don’t actually know if that’s possible), then I guess the text isn’t truly necessary.

Here’s part of what I wrote a decade ago:

If the waning of the typographic mind proceeds, I anticipate that the abstract reasoning and critical thinking skills that are the legacy of Enlightenment Man will be lost except to a few initiates who protect the flame. And with so many other threats cropping up before us, the prospect of a roiling mass of all-but-in-name barbarians ruled by a narrow class of oligarchs does indeed spell the total loss of democracy.

Are we getting perilously close to this dystopia? Maybe not, since it appears that many of those in high office and leadership positions labor under their own failures/inabilities to read at all critically and so execute their responsibilities with about the same credibility as hearsay. Even The New Yorker is no longer protecting the flame.

I recall Nathaniel Hawthorne’s short story The Celestial Railroad railing against the steam engine, an infernal machine that disrupts society (agrarian at that time). It’s a metaphor for industrialization. The newest infernal machine (many candidates have appeared since Hawthorne’s time only to be supplanted by the next) is undoubtedly the smart phone. Its disruption of healthy formation of identity among teenagers has already been well researched and documented. Is it ironic that as an object of our own creation, it’s coming after our minds?