Archive for the ‘Education’ Category

For readers coming to this blog post lacking context, I’m currently reading and book-blogging Pankaj Mishra’s Age of Anger. It explores Western intellectual history that gives rise to feelings of radical discontent over injustices that have not been addressed or remedied successfully for the entirety of the modern era despite centuries of supposed progress.

Continuing from part 1, the case of Voltaire is a curious one. A true child of the Enlightenment, Voltaire came along too late (by my inference) to participate in the formulation of foundational Enlightenment ideals but later became one of their chief proponents as they diffused throughout Europe and into Russia and elsewhere. He joined many, many others in a belief (against a preponderance of evidence) in human progress, if not perfectibility. (Technical progress is an entirely different matter.) One of the significant aspects of his ideology and writings was his sustained attack on Christianity, or more particularly, Catholicism. More than three centuries later, the secularization of Europe and the diminished influence of medieval church dogma stand out as part of the same intellectual tradition.

Enlightenment canon includes aspirational faith in the ability of reason, mechanisms, systems, and administrative prowess to order the affairs of men properly. (How one defines properly, as distinct from equitably or justly, is a gaping hole primed for debate.) In the course of the last few centuries, history has demonstrated that instrumental logic spawned by this ideology has given rise to numerous totalitarian regimes that have subjugated entire populations, often quite cruelly, in modernizing and Westernizing projects. Voltaire found himself in the thick of such projects by willingly aligning himself with despots and rulers who victimized their own peoples in pursuit of industrialization and imitation of urbane French and British models. Russians Peter the Great (reigned May 7, 1682 to February 8, 1725) and Catherine the Great (reigned July 9, 1762 to November 17, 1796) were among those for whom Voltaire acted as apologist and intellectual co-conspirator. Here’s what Mishra has to say:

Voltaire was an unequivocal top-down modernizer, like most of the Enlightenment philosophes, and an enraptured chronicler in particular of Peter the Great. Russian peasants had paid a steep price for Russia’s Westernization, exposed as they were to more oppression and exploitation as Peter tried in the seventeenth century to build a strong military and bureaucratic state. Serfdom, near extinct in most of Western Europe by the thirteenth century, was actually strengthened by Peter in Russia. Coercing his nobles into lifetime service to the state, [effectively] postponing the emergence of a civil society, Peter the Great waged war endlessly. But among educated Europeans, who until 1789 saw civilization as something passed down from the enlightened few to the ignorant many, Russia was an admirably progressive model. [pp. 98–99]

and slightly later

… it was Voltaire who brought a truly religious ardour to the cult of Catherine. As the Empress entered into war with Poland and Turkey in 1768, Voltaire became her cheerleader. Catherine claimed to be protecting the rights of religious minorities residing in the territories of her opponents. The tactic, repeatedly deployed by later European imperialists in Asia and Africa, had the expected effect on Voltaire, who promptly declared Catherine’s imperialistic venture to be a crusade for the Enlightenment. [p. 102]

No doubt plenty of rulers throughout history understood in the proverbial sense that to make an omelette, a few eggs must be broken, and that by extension, their unpopular decisions must be reshaped and propagandized to the masses to forestall open revolt. Whose eggs are ultimately broken is entirely at issue. That basic script is easily recognizable as being at work even today. Justifications for administrative violence ought to fail to convince those on the bottom rungs of society who make most of the real sacrifices — except that propaganda works. Thus, the United States’ multiple, preemptive wars of aggression and regime change (never fully declared or even admitted as such) have continued to be supported or at least passively accepted by a majority of Americans until quite recently. Mishra makes this very same point using an example different from mine:

… cossetted writers and artists would in the twentieth century transfer their fantasies of an ideal society to Soviet leaders, who seemed to be bringing a superhuman energy and progressive rhetoric to Peter the Great’s rational schemes of social engineering. Stalin’s Russia, as it ruthlessly eradicated its religious and evidently backward enemies in the 1930s, came to ‘constitute … a quintessential Enlightenment utopia’. But the Enlightenment philosophes had already shown, in their blind adherence to Catherine, how reason could degenerate into dogma and new, more extensive forms of domination, authoritarian state structures, violent top-down manipulation of human affairs (often couched in terms of humanitarian concern) and indifference to suffering. [pp. 104–105]

As I reread the chapter in preparation for this blog post, I was surprised to find somewhat less characterization of Voltaire than of Rousseau. Indeed, it is more through Rousseau’s criticism of the dominant European paradigm that the schism between competing intellectual traditions is explored. Mishra circles back to Rousseau repeatedly but does not hesitate to show where his ideas, too, are insufficient. For instance, whereas pro-Enlightenment thinkers are often characterized as being lost in abstraction and idealization (i.e., ideologically possessed), thus estranged from practical reality or history, Rousseau’s empathy and identification with commoners do not provide enough structure for him to construct a viable alternative to the historical thrust of the day. Mishra quotes a contemporary critic (Joseph de Maistre) who charged Rousseau with irresponsible radicalism:

… he often discovers remarkable truths and expresses them better than anyone else, but these truths are sterile to his hands … No one shapes their materials better than he, and no one builds more poorly. Everything is good except his systems. [p. 110]

The notion that leaders (monarchs, emperors, presidents, prime ministers, social critics, and more recently, billionaires) ought to be in the business of engineering society rather than merely managing it is tacitly assumed. Indeed, there is a parallel hubris present in Rousseau as a thought leader having questionable moral superiority through his vehement criticism of the Enlightenment:

His confidence and self-righteousness derived from his belief that he had at least escaped the vices of modern life: deceit and flattery. In his solitude, he was convinced, like many converts to ideological causes and religious beliefs, that he was immune to corruption. A conviction of his incorruptibility was what gave his liberation from social pieties a heroic aura and moved him from a feeling of powerlessness to omnipotence. In the movement from victimhood to moral supremacy, Rousseau enacted the dialectic of ressentiment that has become commonplace in our time. [pp. 111–112]

This is a recapitulation of the main thesis of the book, which Mishra amplifies only a couple paragraphs later:

Rousseau actually went beyond the conventional political categories and intellectual vocabularies of left and right to outline the basic psychological outlook of those who perceive themselves as abandoned or pushed behind. He provided the basic vocabulary for their characteristic new expressions of discontent, and then articulated their longing for a world cleansed of the social sources of dissatisfaction. Against today’s backdrop of near-universal political rage, history’s greatest militant lowbrow seems to have grasped, and embodied, better than anyone the incendiary appeal of victimhood in societies built around the pursuit of wealth and power. [p. 112]

Does “the incendiary appeal of victimhood” sound like a potent component of today’s Zeitgeist? Or for that matter “militant lowbrow” (names withheld)? In the latter half of the 18th century, Voltaire and Rousseau were among the primary men of letters, the intelligentsia, the cognoscenti, articulating competing social views and values, with major sociopolitical revolutions following shortly thereafter. The oft-observed rhyming (not repetition) of history suggests another such period may well be at hand.

Renewed twin memes Universal Basic Income (UBI) and Debt Jubilees (DJ) have been in the news recently. I write renewed because the two ideas are quite literally ancient, unlearnt lessons that are enjoying revitalized interest in the 21st century. Both are capable of sophisticated support from historical and contemporary study, which I admit I haven’t undertaken. However, others have done the work and make their recommendations with considerable authority. For instance, Andrew Yang, interviewed repeatedly as a 2020 U.S. presidential candidate, has made UBI the centerpiece of his policy proposals, whereas Michael Hudson has a new book out called … and forgive them their debts: Lending, Foreclosure and Redemption — From Bronze Age Finance to the Jubilee Year that offers a forgotten history of DJ.

Whenever UBI or DJ comes up in conversation, the most obvious, predictable response I hear (containing a kernel of truth) is that either proposal would reward the losers in today’s capitalist regime: those who earn too little or those who carry too much debt (often a combination of both). Never mind that quality education and economic opportunities have been steadily withdrawn over the past half century. UBI and DJ would thus be giveaways, and I daresay nothing offends a sense of fairness more than others getting something for nothing. Typical resentment goes, “I worked hard, played by the rules, and met my responsibilities; why should others who slacked, failed, or cheated get the benefit of my hard work?” It’s a commonplace “othering” response, failing to recognize that as societies we are completely interconnected and interdependent. Granting the winners in the capitalist contest a pass on fair play is also a major assumption. The most iconic supreme winners are all characterized by shark-like business practices: taking advantage of tax loopholes, devouring everything, and shrewdly understanding their predatory behavior not in terms of producing value but rather as gobbling or destroying competition to gain market share. More than a few companies these days are content to operate for years on venture capital, reporting one quarterly loss after another until rivals are vanquished. Amazon.com is the test case, though how many times its success can be repeated is unknown.

With my relative lack of economic study and sophistication, I take my lessons instead from the children’s game Monopoly. As an oversimplification of the dynamics of capital formation and ownership, Monopoly even for children reaches its logical conclusion well before its actual end, where one person “wins” everything. The balancing point when the game is no longer worth playing is debatable, but some have found through experience the answer is “before it starts.” It’s just no fun bankrupting other players utterly through rent seeking. The no-longer-fun point is analogous to late-stage capitalism, where the conclusion has not yet been fully reached but is nonetheless clear. The endgame is, in a word, monopoly — the significant element being “mono,” as in there can be only one winner. (Be careful what you wish for: it’s lonely and resentful at the top.) Others take a different, aspirational lesson from Monopoly, which is to figure out game dynamics, or game the game, so that the world can be taken by force. One’s growing stranglehold on others disallows fair negotiation and cooperation (social rather than capitalist values) precisely because one party holds all the advantages, leading to exploitation of the many for the benefit of a few (or one).

Another unlearnt ancient lesson is that nothing corrupts so easily or so much as success, power, fame, and wealth. Many accept that corruption willingly; few take the lesson to heart. (Disclosure: I’ve sometimes embarked on the easy path to wealth by buying lottery tickets. Haven’t won, so I’m corruptible yet not corrupted. Another case of something for nearly nothing, or for those gambling away their rent and grocery money, nothing for something.) Considering that money makes the world go around, especially in the modern age, the dynamics of capitalism are inescapable and the internal contradictions of capitalism are well acknowledged. The ancient idea of DJ is essentially a reset button depressed before the endgame leads to rebellion and destruction of the ownership class. Franklin D. Roosevelt is credited in some accounts of history as having saved capitalism from that near endgame by transferring wealth back to the people through the New Deal and the war economy. Thus, progressives are calling for a Green New Deal, though it’s not clear they are aware that propping up capitalism only delays its eventual collapse through another couple cycles (reversals) of capital flow. Availability of cheap, plentiful energy that allowed economies (and populations) to balloon over the past two and one-half centuries cannot continue for much longer, so even if we get UBI or DJ, the endgame remains unchanged.

Everyone is familiar with the convention in entertainment media where characters speak without the use of recognizable language. (Not really related to the convention of talking animals.) The first instance I can recall (someone correct me if earlier examples are to be found) is the happy-go-lucky bird Woodstock from the old Peanuts cartoons (do kids still recognize that cast of characters?), whose dialogue was shown graphically as a series of vertical lines.

When the cartoon made its way onto TV for holiday specials, its creator Charles Schulz used the same convention to depict adults, never shown onscreen but with dialogue voiced by a Harmon-muted trombone. Roughly a decade later, two characters from the Star Wars franchise “spoke” in languages only other Star Wars characters could understand, namely, Chewbacca (Chewie) and R2-D2. More recently, the character Groot from Guardians of the Galaxy (known to me only through the Marvel movie franchise, not through comic books) speaks only one line of dialogue, “I am Groot,” which is understood as full speech by other Guardians characters. When behemoths larger than a school bus (King Kong, Godzilla, Jurassic dinosaurs, Cloverfield, Kaiju, etc.) appear, the characters are typically denied the power of speech beyond the equivalent of a lion’s roar. (True villains talk little or not at all as they go about their machinations — no monologuing! unless it’s a James Bond film. An exception notable for its failure to charm audiences is Ultron, who wouldn’t STFU. You can decide for yourself which is the worse kind of villainy.)

This convention works well enough for storytelling and has the advantage of allowing the reader/viewer to project onto otherwise blank speech. However, when imported into the real world, especially in politics, the convention founders. There is no Babelfish universal translator inserted in the ear to transform nonsense into coherence. The obvious example of babblespeech is 45, whose speech when off the teleprompter is a series of rambling non sequiturs, free associations, slogans, and sales pitches. Transcripts of anyone’s extemporaneous speech reveal lots of restarts and blind alleys; we all interrupt ourselves to redirect. However, word salad that substitutes for meaningful content in 45’s case is tragicomic: alternately entirely frustrating or comically entertaining depending on one’s objective. Satirical news shows fall into the second category.

45 is certainly not the first. Sarah Palin in her time as a media darling (driver of ratings and butt of jokes — sound familiar?) had a knack for crazy speech combinations that were utter horseshit yet oddly effective for some credulous voters. She was even a hero to some (nearly a heartbeat away from being the very first PILF). We’ve also now been treated to a series of public interrogations where a candidate for a cabinet post or an accused criminal offers testimony before a congressional panel. Secretary of Education Betsy DeVos famously evaded simple yes/no questions during her confirmation hearing, and Supreme Court Justice Brett Kavanaugh similarly refused to provide direct answers to direct questions. Unexpectedly, sacrificial lamb Michael Cohen does give direct answers to many questions, but his interlocutors then don’t quite know how to respond considering their experience and expectation that no one answers appropriately.

What all this demonstrates is that there is often a wide gulf between what is said and what is heard. In the absence of what might be understood as effective communication (honest, truthful, and forthright), audiences and voters fill in the blanks. Ironically, we also can’t handle too much truth when confronted by its awfulness. None of this is a problem in storytelling, but when found in political narratives, it’s emblematic of how dysfunctional our communications have become, and with them, the clear thought and principled activity of governance.

Have to admit, when I first saw this brief article about middle school kids being enrolled in mandatory firearms safety classes, my gut response was something sarcastic to the effect “No, this won’t end badly at all ….” Second thought (upon reading the headline alone) was that it had to be Texas. Now that I’ve calmed down some, both responses are no longer primary in my thinking.

I’ve written before about perception and function of guns of differing types. I daresay that little clarity has been achieved on the issue, especially because a gun is no longer understood as a tool (with all the manifold purposes that might entail) but is instead always a weapon (both offensive and defensive). The general assumption is that anyone brandishing a weapon (as in open carry) is preparing to use it imminently (so shoot first!). A corollary is that anyone merely owning a gun is similarly prepared but only in the early, hypothetical, or contingent stages. These are not entirely fair assumptions but demonstrate how our perception of the tool has frequently shifted over toward emotionalism.

My father’s generation may be among the last, apart from those with specialized training (e.g., hunters and those who served in the military, both of which groups still account for quite a lot of people), to retain the sense of a gun being a tool. Periodic chain e-mails sometimes point out that students (especially at rural and collar-county schools) used to bring guns to school to stow in their lockers for after-school use with Gun Club. I’d say “imagine doing that now” except that Iowa is doing just that, though my guess is that the guns are stored more securely than in a student locker. Thus, exposure to gun safety/handling and target practice may remove some of the stigma assigned to the tool as well as teach students respect for it.

Personally, I’ve had limited exposure to guns and tend to default (unthinkingly, reflexively) to what I regard as a liberal/progressive left opinion that I don’t want to own a gun and that guns should be better regulated to stem gun violence. However, only a little circumspection is needed to puncture that one-size-fits-all bubble. And as with so many complicated issues of the day, it’s a little hard to know what to wish for or to presume that I have the wisdom to know better than others. Maybe Iowa has it right and this may not end so badly.

As I reread what I wrote 2.5 years ago in my first blog on this topic, I surmise that the only update needed to my initial assessment is a growing pile of events that demonstrate my thesis: our corrupted information environment is too taxing on human cognition, with the result that a small but growing segment of society gets radicalized (wound up like a spring) and relatively random individuals inevitably pop, typically in a self-annihilating gush of violence. News reports bear this out periodically, as one lone-wolf kook after another takes it upon himself (are there any examples of females doing this?) to shoot or blow up some target, typically chosen irrationally or randomly though for symbolic effect. More journalists and bloggers are taking note of this activity and evolving or resurrecting nomenclature to describe it.

The earliest example I’ve found offering nomenclature for this phenomenon is a blog with a single post from 2011 (oddly, no follow-up) describing so-called stochastic terrorism. Other terms include syntactic violence, semantic violence, and epistemic violence, but they all revolve around the same point. Whether on the sending or receiving end of communications, some individuals are particularly adept at or sensitive to dog whistles that over time activate and exacerbate tendencies toward radical ideology and violence. Wired has a brief article from a few days ago discussing stochastic terrorism as jargon, which is basically what I’m doing here. Admittedly, the last of these terms, epistemic violence (alternative: epistemological violence), ranges farther afield from the end effect I’m calling wind-up toys. For instance, this article discussing structural violence is much more academic in character than when I blogged on the same term (one of a handful of “greatest hits” for this blog that return search-engine hits with some regularity). Indeed, just about any of my themes and topics can be given a dry, academic treatment. That’s not my approach (I gather opinions differ on this account, but I insist that real academic work is fundamentally different from my armchair cultural criticism), but it’s entirely valid despite being a bit remote for most readers. One can easily get lost down the rabbit hole of analysis.

If indeed it’s mere words and rhetoric that transform otherwise normal people into criminals and mass murderers, then I suppose I can understand the distorted logic of the far Left that equates words and rhetoric themselves with violence, followed by the demand that they be provided with warnings and safe spaces lest they be triggered by what they hear, read, or learn. As I understand it, the fear is not so much that vulnerable, credulous folks will be magically turned into automatons wound up and set loose in public to enact violent agendas but instead that virulent ideas and knowledge (including many awful truths of history) might cause discomfort and psychological collapse akin to what happens when targets of hate speech and death threats are reduced, say, to quivering agoraphobia. Desire for protection from harm is thus understandable. The problem with such logic, though, is that protections immediately run afoul of free speech, a hallowed but misunderstood American institution that preempts quite a few restrictions many would have placed on the public sphere. Protections also stall learning and truth-seeking straight out of the gate. And besides, preemption of preemption doesn’t work.

In memetics, the notion of a caustic idea taking hold of an unwilling person and having its wicked way with him or her is what’s called a mind virus or meme. The viral metaphor accounts for the infectious nature of ideas as they propagate through the culture. For instance, every once in a while, a charismatic cult emerges and inducts new members, a suicide cluster appears, or suburban housewives develop wildly disproportionate phobias about Muslims or immigrants (or worse, Muslim immigrants!) poised at their doorsteps with intentions of rape and murder. Inflaming these phobias, often done by pundits and politicians, is precisely the point of semantic violence. Everyone is targeted but only a few are affected to the extreme of acting out violently. Milder but still invalid responses include the usual bigotries: nationalism, racism, sexism, and all forms of tribalism, “othering,” or xenophobia that seek to insulate oneself safely among like folks.

Extending the viral metaphor, to protect oneself from infectious ideas requires exposure, not insulation. Think of it as a healthy immune system built up gradually, typically early in life, through slow, steady exposure to harm. The alternative is hiding oneself away from germs and disease, which has the ironic result of weakening the immune system. For instance, I learned recently that peanut allergies can be overcome by gradual exposure — a desensitization process — but are exacerbated by removal of peanuts from one’s environment and/or diet. This is what folks mean when they say the answer to hate speech is yet more (free) speech. The nasty stuff can’t be dealt with properly when it’s quarantined, hidden away, suppressed, or criminalized. Maybe there are exceptions. Science fiction entertains those dangers with some regularity, where minds are shunted aside to become hosts for invaders of some sort. That might be overstating the danger somewhat, but violent eruptions may provide some credence.

I’ve written a different form of this blog post at least once before, maybe more. Here’s the basic thesis: the bizarro unreality of the world in which we now live is egregious enough to make me wonder if we haven’t veered wildly off the path at some point and now exist within reality prime. I suppose one can choose any number of historical inflections to represent the branching point. For me, it was the reelection of George W. Bush in 2004. (The 9/11 attacks and “wars” in Afghanistan and Iraq had already occurred or commenced by then, and it had already become evident that the lies — Saddam had WMDs — that sold the American public on the Iraq “war” were effective and remain so today.) Lots of other events changed the course of history, but none other felt as much to me like a gut punch precisely because, in the case of the 2004 presidential election, we chose our path. I fantasized waking up from my reality-prime nightmare but eventually had to grudgingly accept that if multiverses exist, ours had become one where we chose (collectively, and just barely) to keep in office an executive who behaved like a farce of stupidity. Well, joke’s on us. Twelve years later, we chose someone even more stupid, though with a “certain serpentine cunning,” and with arguably the worst character of any U.S. executive in living history.

So what to do in the face of this dysfunctional state of affairs? Bret Weinstein below has ideas. (As usual, I’m quite late, embedding a video that by Internet standards is already ancient. I also admit this is equivalent to a smash cut because I don’t have a particularly good transition or justification for turning so suddenly to Weinstein.) Weinstein is an evolutionary biologist, so no surprise that the approach he recommends is borne out of evolutionary thinking. In fairness, a politician would logically recommend political solutions, a financier would recommend economic solutions, and other professionals would seek solutions from within their areas of expertise.

The title of the interview is “Harnessing Evolution,” meaning Weinstein suggests we use evolutionary models to better understand our own needs and distortions to guide or plot proper path(s) forward and get back on track. Never mind that a healthy minority of the U.S. public rejects evolution outright while an additional percentage takes a hybrid stance. While I’m impressed that Weinstein has an answer for everything (pedagogue or demagogue or both?) and has clearly thought through sociopolitical issues, I daresay he’s living in reality double-prime if he thinks science education can be a panacea for what ails us. My pessimism is showing.

From time to time, I admit that I’m in no position to referee disputes, usually out of my lack of technical expertise in the hard sciences. I also avoid the impossible task of policing the Internet by assiduously pointing out error wherever it occurs. Others concern themselves with correcting the record and/or reinterpreting argument with improved context and accuracy. However, once in a while, something crosses my desk that gets under my skin. An article by James Ostrowski entitled “What America Has Done To its Young People is Appalling,” published at LewRockwell.com, is such a case. It’s undoubtedly a coincidence that the most famous Rockwell is arguably Norman Rockwell, whose celebrated illustrations for the Saturday Evening Post in particular helped reinforce a charming midcentury American mythology. Lew Rockwell, OTOH, is described briefly in the website’s About blurb:

The daily news and opinion site LewRockwell.com was founded in 1999 by anarcho-capitalists Lew Rockwell … and Burt Blumert to help carry on the anti-war, anti-state, pro-market work of Murray N. Rothbard.

Those political buzzwords probably deserve some unpacking. However, that project falls outside my scope. In short, they handily foist blame for what ails us in American culture on government planning, as distinguished from the comparative freedom of libertarianism. Government earns its share of blame, no doubt, especially with its enthusiastic prosecution of war (now a forever war); but as snapshots of competing political philosophies, these buzzwords are reductive almost to the point of meaninglessness. Ostrowski lays blame more specifically on feminism and progressive big government and harkens back to an idyllic 1950s nuclear family fully consonant with Norman Rockwell’s illustrations, thus invoking the nostalgic frame.

… the idyllic norm of the 1950’s, where the mother typically stayed home to take care of the kids until they reached school age and perhaps even long afterwards, has been destroyed.  These days, in the typical American family, both parents work fulltime which means that a very large percentage of children are consigned to daycare … in the critical first five years of life, the vast majority of Americans are deprived of the obvious benefits of growing up in an intact family with the mother at home in the pre-school years. We baby boomers took this for granted. That world is gone with the wind. Why? Two main reasons: feminism and progressive big government. Feminism encouraged women to get out of the home and out from under the alleged control of husbands who allegedly controlled the family finances.

Problem is, 1950s social configurations in the U.S. were the product of a convergence of historical forces, not least of which were the end of WWII and newfound American geopolitical and economic prominence. More pointedly, an entire generation of young men and women who had deferred family life during perilous wartime were then able to marry, start families, and provide for them on a single income — typically that of the husband/father. That was the baby boom. Yet to enjoy the benefits of the era fully, one probably needed to be a WASPy middle-class male or the child of one. Women and people of color fared … differently. After all, the 1950s yielded to the sexual revolution and civil rights era one decade later, both of which aimed specifically to improve the lived experience of, well, women and people of color.

Since the 1950s were only roughly 60 years ago, it might be instructive to consider how life was another 60 years before then, or in the 1890s. If one lived in an eastern American city, life was often a Dickensian dystopia, complete with child labor, poorhouses, orphanages, asylums, and unhygienic conditions. If one lived in an agrarian setting, which was far more prevalent before the great 20th-century migration to cities, then life was frequently dirt-poor subsistence and/or pioneer homesteading requiring dawn-to-dusk labor. Neither mode yet enjoyed social planning and progressive support including, for example, sewers and other modern infrastructure, public education, and economic protections such as unionism and trust busting. Thus, 19th-century America might be characterized fairly as being closer to anarcho-capitalism than at any time since. One of its principal legacies, one must be reminded, was pretty brutal exploitation of (and violence against) labor, which can be understood by the emergence of political parties that sought to redress its worst scourges. Hindsight informs us now that reforms were slow, partial, and impermanent, leading to the observation that among all tried forms of self-governance, democratic capitalism can be characterized as perhaps the least awful.

So yeah, the U.S. came a long way from 1890 to 1950, especially in terms of standard of living, but may well be backsliding as the 21st-century middle class is hollowed out (a typical income — now termed household income — being rather challenging for a family), aspirations to rise economically above one’s parents’ level no longer function, and the culture disintegrates into tribal resentments and unrealistic fantasies about nearly everything. Ostrowski marshals a variety of demographic facts and figures to support his argument (with which I agree in large measure), but he fails to make a satisfactory causal connection with feminism and progressivism. Instead, he sounds like 45 selling his slogan Make America Great Again (MAGA), meaning let’s turn back the clock to those nostalgic 1950s happy days. Interpretations of that sentiment run in all directions from innocent to virulent (but coded). By placing blame on feminism and progressivism, it’s not difficult to hear anyone citing those putative causes as an accusation that, if only those feminists and progressives (and others) had stayed in their assigned lanes, we wouldn’t be dealing now with cultural crises that threaten to undo us. What Ostrowski fails to acknowledge is that despite all sorts of government activity over the decades, no one in the U.S. is steering the culture nearly as actively as in centrally planned economies and cultures, current and historical, which in their worst instances are fascist and/or totalitarian. One point I’ll agree on, however, just to be charitable, is that the mess we’ve made and will leave to youngsters is truly appalling.

Among the many complaints that cross my path in the ongoing shitshow that American culture has become is an article titled “The Tragic Decline of Music Literacy (and Quality),” authored by Jon Henschen. His authorship is a rather unexpected circumstance since he is described as a financial advisor rather than an authority on music, technology, or culture. Henschen’s article reports on (without linking to it as I do) an analysis by Joan Serrà, a postdoctoral scholar at the Artificial Intelligence Research Institute, and colleagues. Curiously, the analysis has been reported on and repackaged by quite a few news sites and blogs since its publication in 2012. For example, the YouTube video embedded below makes many of the same arguments and cites the so-called Millennial Whoop, a hook or gesture now ubiquitous in pop music that’s kinda sorta effective until one starts noticing it everywhere, at which point it begins to sound trite, then irritating.

I won’t recount or summarize arguments except to say that neither the Henschen article nor the video discusses the underlying musical issues quite the way a trained musician would. Both are primarily quantitative rather than qualitative, equating an observed decrease in variety of timbre, loudness, and pitch/harmony with worse music (less variety, worse music). Lyrical (or poetical) complexity has also retreated. It’s worth noting, too, that the musical subject is narrowed to recorded pop music from 1955 to 2010. There’s obviously a lot to know about pop music, but it’s not generally the subject of serious study among academic musicians. AFAIK, no accredited music school offers degrees in pop music. Berklee College of Music probably comes the closest. (How exactly does songwriting as a major differ from composition?) That standard may be relaxing.
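
To make the quantitative framing concrete, here is a toy sketch of my own devising, not the method actually used by Serrà and colleagues or by Henschen: one crude proxy for “variety of pitch” is the Shannon entropy of a song’s pitch-class distribution, which shrinks as a melody leans on fewer distinct notes. The function name and the two note lists below are hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter
from math import log2

def pitch_class_entropy(midi_notes):
    """Shannon entropy (in bits) of a melody's pitch-class distribution.
    Lower values mean the melody leans on fewer distinct notes."""
    classes = [n % 12 for n in midi_notes]        # fold octaves into 12 pitch classes
    counts = Counter(classes)
    total = len(classes)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical note lists: a three-note hook vs. a wider-ranging melodic line
hook = [60, 62, 64] * 3                           # C, D, E repeated
melody = [60, 62, 64, 65, 67, 69, 71, 72, 74]     # scale-wise line covering more pitch classes
print(pitch_class_entropy(hook))                  # ~1.58 bits
print(pitch_class_entropy(melody))                # ~2.73 bits -- more pitch variety
```

The actual analysis covers timbre and loudness as well as pitch, but the sketch conveys the flavor of the approach: "variety" becomes a number, and the claim is that the number has been falling.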

Do quantitative arguments demonstrate degradation of pop music? Do reduced variety, range, and experimentation make pop music the equivalent of a paint-by-the-numbers image with the self-imposed limitation that allows only unmixed primary colors? Hard to say, especially if one (like me) has a traditional education in art music and already regards pop music as a rather severe degradation of better music traditions. Reduction of the artistic palette from the richness and variety of, say, 19th-century art music proceeded through the 20th century (i.e., musical composition is now understood by the lay public to mean songs, which is just one musical genre among many) to a highly refined hit-making formula that has been proven to work remarkably well. Musical refinements also make use of new technological tools (e.g., rhythm machines, autotune, digital soundfield processing), which is another whole discussion.

Musical quality isn’t mere quantity (where more is clearly better), however, and some manage pretty well with limited resources. Still, a sameness or blandness is evident and growing within a genre that is already rather narrowly restricted to using drums, guitars, keyboards, and vocals. The antidote Henschen suggests (incentivizing musical literacy and participation, especially in schools) might prove salutary, but such recommendations are ubiquitous throughout modern history. The magical combination of factors that actually catalyzes creativity, as opposed to degradation, remains elusive. Despite impassioned pleas not to allow quality to disappear, nothing could be more obvious than that culture drifts according to its own whims (to anthropomorphize) rather than being steered by well-meaning designs.

More to say in part 2 to follow.

One of the very best lessons I took from higher education was recognizing and avoiding the intentional fallacy — in my own thinking no less than in that of others. Although the term arguably has more to do with critical theory dealing specifically with texts, I learned about it in relation to abstract fine arts, namely, painting and music. For example, the enigmatic expression of the Mona Lisa by Leonardo Da Vinci continues to spark inquiry and debate. What exactly does that smile mean? Even when words or programs are included in musical works, it’s seductively easy to conclude that the composer intends this or the work itself means that. Any given work purportedly allows audiences to peer into the mind of its creator(s) to interrogate intent. Conclusions thus drawn, however, are notoriously unreliable though commonplace.

It’s inevitable, I suppose, to read intent into artistic expression, especially when purpose feels so obvious or inevitable. Similar excavations of meaning and purpose are undertaken within other domains of activity, resulting in no end of interpretation as to surface and deep strategies. Original intent (also originalism) is a whole field of endeavor with respect to interpretation of the U.S. Constitution and imagining the framers’ intent. Geopolitics is another domain where hindsight analysis results in some wildly creative but ultimately conjectural interpretations of events. Even where authorial (and political) intent is explicitly recorded, such as with private diaries or journals, the possibility of deceptive intent by authors keeps everyone wondering. Indeed, although “fake news” is a modern coinage, a long history of deceptive publishing practice well beyond the adoption of a nom de plume attests to hidden or unknowable intent, making “true intent” a meta property.

The multi-ring circus that the modern information environment has become, especially in the wake of electronic media (e.g., YouTube channels) produced by anyone with a camera and an Internet connection, is fertile ground for those easily ensnared by the intentional fallacy. Several categories of intent projected onto content creators come up repeatedly: profit motive, control of the narrative (no small advantage if one believes this blog post), setting the record straight, correcting error, grandstanding, and trolling for negative attention. These categories are not mutually exclusive. Long ago, I pointed to the phenomenon of arguing on-line and how it typically accomplishes very little, especially as comment threads lengthen and civility breaks down. These days, comments are an Internet legacy and/or anachronism that many content creators persist in offering to give the illusion of a wider discussion but in fact roundly ignore. Most blogs and channels are actually closed conversations. Maybe a Q&A follows the main presentation when held before an audience, but video channels are more often one-way broadcasts addressing an audience but not really listening. Public square discussion is pretty rare.

Some celebrate this new era of broadcasting, noting with relish how the mainstream media is losing its former stranglehold on attention. Such enthusiasm may be transparently self-serving but nonetheless rings true. A while back, I pointed to New Media Rockstars, which traffics in nerd culture entertainment media, but the term could easily be expanded to include satirical news, comedy, and conversational webcasts (also podcasts). Although some folks are rather surprised to learn that an appetite for substantive discussion and analysis exists among the public, I surmise that the shifting media landscape and disintegrated cultural narrative have bewildered a large segment of the public. The young in particular are struggling to make sense of the world, to figure out what to be in life and how to function, and to work out an applied philosophy that eschews more purely academic philosophy.

By way of example of new media, let me point to a trio of YouTube channels I only recently discovered. Some More News parodies traditional news broadcasts by sardonically (not quite the same as satirically) calling bullshit on how news is presented. Frequent musical cues between segments make me laugh. Unlike the mainstream media, which are difficult not to regard as propaganda arms of the government, Some More News is unapologetically liberal and indulges in black humor, which doesn’t make me laugh. Its raw anger and exasperation are actually a little terrifying. The second YouTube channel is Three Arrows, a sober, thorough debunking of news and argumentation found elsewhere in the public sphere. The speaker, who doesn’t appear onscreen, springs into action especially when accusations of current-day Nazism come up. (The current level of debate has devolved to recklessly calling nearly everyone a Nazi at some stage. Zero points scored.) Historical research often puts things into proper context, such as the magnitude of the actual Holocaust compared to some garden-variety racist running his or her mouth comparatively harmlessly. The third YouTube channel is ContraPoints, which is rather fanciful and profane but remarkably erudite considering the overall tone. Labels and categories are explained for those who may not have working definitions at the ready for every phrase or ideology. Accordingly, there is plenty of jargon. The creator also appears as a variety of different characters to embody various archetypes and play devil’s advocate.

While these channels may provide abundant information, correcting error and contextualizing better than most traditional media, it would be difficult to conclude they’re really moving the conversation forward. Indeed, one might wonder why bother preparing these videos considering how time consuming it has to be to do research, write scripts, assemble pictorial elements, etc. I won’t succumb to the intentional fallacy and suggest I know why they bother holding these nondebates. Further, unless they’re straight-up comedy, I wouldn’t say they’re entertaining exactly, either. Highly informative, perhaps, if one pays close attention to the frenetic online pace and/or mines for content (e.g., studying transcripts or following links). Interestingly, within a fairly short period of time, these channels are establishing their own rhetoric, sometimes useful, other times too loose to make strong impressions. It’s not unlike the development of new stylistic gestures in music or painting. It will be interesting to see what, if anything, worthwhile emerges from the scrum.

If the previous blog in this series was about how some ideas and beliefs become lodged or stuck in place (fixity bias), this one is about how other ideas are notoriously mutable (flexibility bias), especially the latest, loudest thing to turn one’s head and divert attention. What makes any particular idea (or is it the person?) prone to one bias or another (see this list) is mysterious to me, but my suspicion is that a character disposition toward openness and adherence to authoritative evidence figure prominently in the case of shifting opinion. In fact, this is one of the primary problems with reason: if evidence can be deployed in favor of an idea, those who consider themselves “reasonable” and thus rely on accumulation of evidence and argumentation to sharpen their thinking are vulnerable to the latest “finding” or study demonstrating sumpinorutha. It’s the intellectual’s version of “New! Improved!”

Sam Harris exploits rationalism to argue against the existence of free will, saying that if sufficient evidence can be brought to bear, a disciplined thinker is compelled to subscribe to the conclusions of reasoned argument. Choice and personal agency (free will) are removed. I find that an odd way to frame the issue. Limitless examples of lack of choice are nonequivalent to the destruction of free will. For example, one can’t decide not to believe in gravity and fly up into the air more than a few inches. One can’t decide that time is an illusion (as theoretical physicists now instruct) and decide not to age. One can’t decide that pooping is too disgusting and just hold it all in (as some children attempt). Counter-evidence doesn’t even need to be argued because almost no one pretends to believe such nonsense. (Twisting one’s mind around to believe in the nonexistence of time, free will, or the self seems to be the special province of hyper-analytical thinkers.) Yet other types of belief/denial — many of them conspiracy theories — are indeed choices: religion, flat Earth, evolution, the Holocaust, the moon landings, 9/11 truth, who really killed JFK, etc. Lots of evidence has been mustered on different sides (multiple facets, actually) of each of these issues, and while rationalists may be compelled by a preponderance of evidence in favor of one view, others are free to fly in the face of that evidence for reasons of their own or adopt by default the dominant narrative and not worry or bother so much.

The public struggles to grasp truthful information, as reported in a Pew Research Center study called “Distinguishing Between Factual and Opinion Statements in the News.” Here’s the snapshot:

The main portion of the study, which measured the public’s ability to distinguish between five factual statements and five opinion statements, found that a majority of Americans correctly identified at least three of the five statements in each set. But this result is only a little better than random guesses. Far fewer Americans got all five correct, and roughly a quarter got most or all wrong.
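
To see why “at least three of five” is such a low bar, consider what blind guessing would yield. A minimal sketch of the arithmetic (my own, not Pew’s), assuming each classification is an independent 50/50 choice between factual and opinion:

```python
from math import comb

# Chance of labeling at least 3 of 5 statements correctly by pure guessing,
# assuming each guess is an independent coin flip between "factual" and "opinion".
p_at_least_3 = sum(comb(5, k) for k in range(3, 6)) / 2**5
print(p_at_least_3)  # 0.5 -- so "a majority got at least three right" barely beats chance
```

In other words, coin-flipping respondents would clear that threshold half the time, which is what makes the headline finding so underwhelming.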

Indiscriminate adoption by many Americans of a faulty viewpoint, or more pointedly, the propaganda and “fake news” on offer throughout the information environment, carries the implication that disciplined thinkers are less confused about truth or facts, taking instead a rational approach as the basis for belief. However, I suggest that reason suffers its own frailties not easily recognized or acknowledged. In short, we’re all confused, though perhaps not hopelessly so. For several years now, I’ve sensed the outline of a much larger epistemological crisis where quintessential Enlightenment values have come under open attack. The irony is that the wicked stepchild of science and reason — runaway technology — is at least partially responsible for this epochal conflict. It’s too big an idea to grok fully or describe in a paragraph or two, so I’ll simply point to it and move on.

My own vulnerability to flexibility bias manifests specifically in response to appeals to authority. Although well educated, a lifelong autodidact, and an independent thinker, I’m careful not to succumb to the hubris of believing I’ve got it all figgered. Indeed, it’s often said that as one gains expertise and experience in the world, the certainty of youth yields to caution precisely because the mountain of knowledge and understanding one lacks looms larger even as one accumulates wisdom. Bodies of thought become multifaceted and all arguments must be entertained. When an expert, researcher, or academic proposes something outside my wheelhouse, I’m a sitting duck: I latch onto the latest, greatest utterance as the best truth yet available. I don’t fall for it nearly so readily with journalists, but I do recognize that some put in the effort and gain specialized knowledge and context well outside the bounds of normal life, such as war reporters. Various perverse incentives deeply embedded in the institutional model of journalism, especially those related to funding, make it nearly impossible to maintain one’s integrity without becoming a pariah, so only a handful have kept my attention. John Pilger, Chris Hedges, and Matt Taibbi figure prominently.

By way of example, one of the topics that has been of supreme interest to me, though its historical remove renders it rather toothless now, is the cataclysm(s) that occurred at the conclusion of the last ice age roughly 12,000 years ago. At least three hypotheses (of which I’m aware) have been proposed to explain why glacial ice disappeared suddenly over the course of a few weeks, unleashing the Biblical Flood: Earth crust displacement, asteroidal impact(s), and coronal mass ejection(s). As with most hypotheses, the evidence is both physical and conjectural, but a sizable body of evidence and argumentation for each is available. As I became familiar with each, my head turned and I became a believer, sorta. Rather than “last one is the rotten egg,” however, the most recent one typically displaces the previous one. No doubt another hypothesis will appear to turn my head and disorient me further. With some topics, especially politics, new information piling on top of old is truly dizzying. And as I’ve written about many topics, I simply lack the expertise to referee competing claims, so whatever beliefs I eventually adopt are permanently provisional.

Finally, my vulnerability to appeals to authority is also triggered by the calm, unflappable tones and complexity of construction of speakers such as Sam Harris, Steven Pinker, and Charles Murray. Their manner of speaking is sometimes described pejoratively as “academese,” though only Pinker has a teaching position. Murray in particular relies heavily on psychometrics, which may not be outright lying with statistics but allows him to rationalize (literally) extraordinarily taboo subjects. In contrast, it’s easy to disregard pundits and press agents foaming and fulminating over their pet narratives. Yet I also recognize that with academese, I’m being soothed more by style than by substance, a triumph of form over function. In truth, this communication style is an appeal to emotion masquerading as an appeal to authority. I still prefer it, just as I prefer a steady, explanatory style of journalism over the snarky, reinterpretive style of disquisition practiced by many popular media figures. What communicates most effectively to me and (ironically) pushes my emotional buttons also weakens my ability to discriminate and think properly.

Yet still more to come in part 5.