In the lost decades of my youth (actually, early adulthood, but to an aging fellow like me, that era now seems like youth), I began to acquire audio equipment and recordings (LPs, actually) to explore classical music as an alternative to frequent concert attendance. My budget allowed only consumer-grade equipment, but I did my best to choose wisely rather than guess and end up with flashy front-plates that distract from inferior sound (still a thing, as a visit to Best Buy demonstrates). In the decades since, I’ve indulged a modest fetish for high-end electronics that fits neither my budget nor my lifestyle but nonetheless results in my simple two-channel stereo (not the surround-sound set-ups many favor), built from individual components, providing fairly astounding sonics. When a piece exhibits problems or a connection gets interrupted, I often resort to older, inferior, back-up equipment before troubleshooting and identifying the problem. Once the correction is made, the return to premium sound is an unmistakable improvement. When forced to resort to less-than-stellar components, I’m sometimes reminded of a remark a friend once made, namely, that when listening, he tries to hear the quality in the performance despite degraded reproduced sound (e.g., surface noise on the LP).

Though others may argue, I insist that popular music does not require high fidelity to be enjoyed. The truth in that statement is evidenced by how most people listen to music on multifunction devices such as phones and computers. Many influencers laugh and scoff at the idea that anyone would buy physical media or quality equipment anymore; everything now is streamed to their devices using services such as Spotify, Apple Music, or Amazon Prime. From my perspective, they’re fundamentally insensitive to subtle gradations of sound. Thumping volume (a good beat) is all that’s needed or understood.

However, multifunction devices do not aim at high fidelity. Moreover, clubs and outdoor festivals typically use equipment designed for sheer volume rather than quality. Loud jazz clubs might be the worst offenders, especially because intimate, acoustic performance (now mostly abandoned) set an admirable artistic standard only a few decades ago. High volume creates the illusion of high energy, but diminishing returns set in quickly as the human auditory system reacts to extreme volume by blocking as much sound as possible to protect itself from damage, or more simply, by going deaf slowly or quickly. Reports of performers whose hearing is wrecked by short- or long-term overexposure to high volume are legion. Profound hearing loss is already appearing throughout the general public, much the way enthusiastic sunbathers develop melanoma.

As a result of technological change, notions of how music is meant to sound are shifting. Furthermore, the expectation that musical experiences are to be shared by audiences of more than, say, a few people at a time is giving way to the singular, private listening environment enabled by headphones and earbuds. (The same thing happened with reading.) Differences between music heard communally in a purposed performance space (whether live or reproduced) and music reproduced in the ear canal (earbuds) or over the ear (headphones) — now portable and ubiquitous — have led audio engineers to shift musical perspective yet again (just as they did at the onset of the radio and television eras) to accommodate listeners with distorted expectations of how music should sound.

No doubt, legitimate musical experiences can be had through reproduced sound, though degraded means produce lesser approximations of natural sound and authenticity as equipment descends in price and quality or when the main purpose is simply volume. Additionally, most mainstream popular musics require amplification, as opposed to traditional acoustic forms of music-making. Can audiences/listeners actually get beyond degradation and experience artistry and beauty? Or must we be content with facsimiles that no longer convey the intent of the performers or a robust aesthetic experience? These may well be questions for the ages, for which no solid answers obtain.


In an uncharacteristic gesture of journalistic integrity (i.e., covering news of real importance rather than celebrity nonsense, lottery jackpots, or horse-race politics), the mainstream media has been blaring each new development as a caravan of Honduran refugees makes its way through Mexico toward the U.S. border. Ten days ago, CNN published a map of the caravan’s location and projected that at its current rate, arrival at the border would occur in Feb. 2019. Already the caravan has shrunk from 10,000 to 4,000 people. Hard to fathom it won’t shrink further. I’ve read reports that decent Mexican locals are handing out sandwiches and water.

The refugee crisis has been stewing and growing since at least 2016 when 45 introduced rhetoric about building a wall and making Mexico pay for it. Instead, it appears U.S. taxpayers are footing the bill. Frankly, I don’t know that there are any particularly good answers to the problem of illegal immigration. However, I daresay First World countries owe a humanitarian duty to refugees in what will prove to be an increasingly desperate diaspora from political, economic, and ecological disaster. It appears that the Mexican government gets that and has rendered aid, but intransigent members of the caravan are only interested in getting to the U.S., where they will most likely be met by razor wire and troops. Predictably, armed U.S. citizens are jumping at the opportunity to protect border integrity and prevent illegals from entering. That should end well. The U.S. looks pretty heartless in comparison with Mexico.

As industrial collapse gets worse and conditions deteriorate, the already unmanageable flow of populations away from locations where life is intolerable or impossible will only increase. Although the impulse to refuse admission is understandable, other countries have stepped up and taken in sizeable populations flowing out of the Middle East and North Africa in particular — regions that have been actively destabilized and undermined but were well into overshoot anyway. The U.S. government has often pretended to exercise its humanitarian duty, especially where armed intervention aligns with strategic interests. In the case of the caravan, risibly mischaracterized as an invasion, the powers that be reveal themselves as unusually cruel. I anticipate this unfolding drama is only the start of something big, but probably not what most people want or envision.

Update (Nov. 9)

I only just saw this short video, which predates my blog post slightly:

Guy McPherson is saying almost the same thing I’m saying: it’s only gonna get worse.

Update (Nov. 21)

According to the Military Times,

The White House late Tuesday signed a memo allowing troops stationed at the border to engage in some law enforcement roles and use lethal force, if necessary — a move that legal experts have cautioned may run afoul of the Posse Comitatus Act. [links redacted]

This is no surprise, of course. I can’t read into the minds of our chief executive and his staff, but one suspects they view the border like a scene from World War Z, with asylum seekers the equivalent of zombies, so just open fire — they’re already the undead.

The largest lottery jackpot ever (roughly $1.6 billion) was won last week by some lucky or unlucky soul, depending. The mainstream media promoted this possible windfall relentlessly, instructing everyone, as potential winners, on the first steps to take with the winning ticket. It prompts the question, What Would a (sudden, new) Billionaire Do? with all that money, and many of us toyed with the prospect actively. The ruinous appeal is far too seductive to put out of mind entirely. Lottery winners, however, are not in the same class as the world’s billionaires, whose fortunes are closely associated with capitalist activity. Topping the list is Jeff Bezos of Amazon. The Walmart fortune deposits four Walton family members on the list, whose combined wealth exceeds even that of Bezos. Beyond conjecture about what billionaires should or might do besides the billionaire challenge or purchasing land in New Zealand for boltholes to leave the rest of us behind, it’s worth pointing out how such extraordinary wealth was amassed in the first place, because it surely doesn’t happen passively.

Before Amazon and Walmart but well after the robber barons of the early 20th century, McDonald’s was the ubiquitous employer offering dead-end, entry-level jobs that churned through people (labor) before discarding them carelessly, all the while racking up the profits proclaimed on its placards (“millions [then billions] served!”). Its lasting linguistic legacy (still in use) is the McJob. After McDonald’s, Walmart was widely understood as the worst employer in the world in terms of transferring obscene wealth to the top while rank-and-file workers struggled below the poverty line. Many Walmart employees are still so poorly compensated that they qualify for government assistance, which effectively functions as a government subsidy to Walmart. Walmart’s awful labor practices, disruption of local mom-and-pop economies, and notorious squeezing of suppliers by virtue of its sheer market volume established the template for others. For instance, employers emboldened by insecure or hostage labor adopt hard-line policies such as firing employees who fail to appear at work in the midst of a hurricane or closing franchise locations solely to disallow labor organizing. What Walmart pioneered, Amazon has refined. Its fulfillment-center employees have been dubbed the CamperForce, composed primarily of older people living in vans and campers and deprived of meaningful alternatives. Jessica Bruder’s new book Nomadland (2017), rather ironically though shamelessly and predictably sold by Amazon, provides a sorry description, among other things, of how the plight of the disenfranchised is repackaged and sold back to them. As a result of severe criticism (not stemming directly from the book), Amazon made news earlier this month by raising its minimum wage to $15 per hour, but it remains to be seen whether offsetting cuts to benefits wipe out apparent labor gains.

These business practices are by no means limited to a few notoriously bad corporations or their billionaire owners. As reported by the Economic Policy Institute and elsewhere, income inequality has been rising for decades. The graph below shows that wage increases have been entirely disproportionate, rewarding the top 10 percent, top 1 percent, and top 0.1 percent at increasingly absurd levels compared to the remaining 90 percent.

[graph: cumulative wage growth for the top 10 percent, 1 percent, and 0.1 percent far outpacing the bottom 90 percent]

It’s a reverse Robin Hood situation: the rich taking from not just the poor but everyone and giving to themselves. Notably, trickle-down economics has been widely unmasked as a myth but nonetheless remains a firmly entrenched idea among those who see nothing wrong with, say, ridiculous CEO pay precisely because they hope to eventually be counted among those overcompensated CEOs (or lottery winners) and so preserve their illusory future wealth. Never mind that the entire economic system is tilted egregiously in favor of a narrow class of predatory plutocrats. Actual economic results (minus all the rhetoric) demonstrate that as a function of late-stage capitalism, the ultrarich, having already harvested all the low-hanging fruit, have even gone after middle-class wealth as perhaps the last resource to plunder (besides the U.S. Treasury itself, which was looted with the last series of bailouts).

So what would a billionaire do in the face of this dynamic? Bezos is the new poster boy, a canonical example, and he shows no inclination to call into question the capitalist system that has rewarded him so handsomely. Even as he gives wage hikes, he takes away other compensation, keeping low-level employees in a perpetual state of doubt as to when they’ll finally lose what’s left to them before dying quietly in a van down by the river or out in the desert somewhere. Indeed, despite the admirable philanthropy of some billionaires (typically following many years of cutthroat activity to add that tenth and eleventh digit), the structural change necessary to restore the middle class, secure the lower class with a living wage, and care for the long-term unemployed, permanently unemployable, and disabled (estimated to be at least 10% of the population) is nowhere on the horizon. Those in the best position to undertake such change just keep on building their wealth faster than everyone else, forsaking the society that enables them and withdrawing into armed compounds insulated from the rabble. Hardly a life most of us would desire if we knew in advance what a corrupting prison it turns out to be.

Caveat: Rather uncharacteristically long for me. Kudos if you have the patience for all of this.

Caught the first season of HBO’s series Westworld on DVD. I have a boyhood memory of the original film (1973) with Yul Brynner and a dim memory of its sequel Futureworld (1976). The sheer charisma of Yul Brynner in the role of the gunslinger casts a long shadow over the new production, not that most of today’s audiences have seen the original. No doubt, 45 years of technological development in film production lends the new version some distinct advantages. Visual effects are quite stunning and Utah landscapes have never been used more appealingly in terms of cinematography. Moreover, storytelling styles have changed, though it’s difficult to argue convincingly that they’re necessarily better now than then. Competing styles only appear dated. For instance, the new series has immensely more time to develop its themes; but the ancient parables of hubris and loss of control over our own creations run amok (e.g., Shelley’s Frankenstein, or more contemporaneously, the surprisingly good new movie Upgrade) have compact, appealing narrative arcs quite different from constant teasing and foreshadowing of plot developments while actual plotting proceeds glacially. Viewers wait an awful lot longer in the HBO series for resolution of tensions and emotional payoffs, by which time investment in the story lines has been dispelled. There is also no terrifying crescendo of violence and chaos demanding rescue or resolution. HBO’s Westworld often simply plods on. To wit, a not insignificant portion of the story (um, side story) is devoted to boardroom politics (yawn) regarding who actually controls the Westworld theme park. Plot twists and reveals, while mildly interesting (typically guessed by today’s cynical audiences), do not tie the narrative together successfully.

Still, Westworld provokes considerable interest from me due to my fascination with human consciousness. The initial episode builds out the fictional future world with characters speaking exposition clearly owing its inspiration to Julian Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (another reference audiences are quite unlikely to know or recognize). I’ve had the Julian Jaynes Society’s website bookmarked for years and read the book some while back; never imagined it would be captured in modern fiction. Jaynes’ thesis (if I may be so bold as to summarize radically) is that modern consciousness coalesced around the collapse of multiple voices in the head — ideas, impulses, choices, decisions — into a single stream of consciousness perhaps better understood (probably not) as the narrative self. (Aside: the multiple voices of antiquity correspond to polytheism, whereas the modern singular voice corresponds to monotheism.) Thus, modern human consciousness arose over several millennia as the bicameral mind (the divided brain having two camerae, i.e., chambers or halves) functionally collapsed. The underlying story of the new Westworld is the emergence of machine consciousness, a/k/a strong AI, a/k/a The Singularity, while the old Westworld was about a mere software glitch. Exploration of machine consciousness modeling (e.g., improvisation builds on memory to create awareness) as a proxy for better understanding human consciousness might not be the purpose of the show, but it’s clearly implied. And although conjectural, the gradual emergence of human consciousness contrasts sharply with the abrupt ON switch of theorized machine consciousness. Westworld treats them as roughly equivalent, though in fairness, 35 years or so in Westworld is in fact abrupt compared to several millennia. (Indeed, the story asserts that machine consciousness sparked alive repeatedly (which I suggested here) over those 35 years but was dialed back each time. Never mind all the unexplored implications.) Additionally, the fashion in which Westworld uses the term bicameral ranges from sloppy to meaningless, like the infamous technobabble of Star Trek.


Political discussion usually falls out of scope on this blog, though I use the politics category and tag often enough. Instead, I write about collapse, consciousness, and culture (and to a lesser extent, music). However, politics is up front and center with most media, everyone taking whacks at everyone else. Indeed, the various political identifiers are characterized these days by their most extreme adherents. The radicalized elements of any political persuasion are the noisiest and thus the most emblematic of a worldview if one judges solely by the most attention-grabbing factions, which is regrettably the case for a lot of us. (Squeaky wheel syndrome.) Similarly, in the U.S. at least, the spectrum is typically expressed as a continuum from left to right (or right to left) with camps divided nearly in half based on voting. Opinion polls reveal a more lopsided division (toward Leftism/Progressivism as I understand it) but still reinforce the false binary.

More nuanced political thinkers allow for at least two axes of political thought and opinion, usually plotted on an x-y coordinate plane (again, left to right and down to up). Some look more like the one below (a quick image search will reveal dozens of variations), with outlooks divided into regions of a Venn diagram suspiciously devoid of overlap. The x-y coordinate plane still underlies the divisions.

[image: multiaxis political spectrum chart dividing outlooks into Venn-diagram regions]

If you don’t know where your political compass points, you can take this test, though I’m not especially convinced that the result is useful. Does it merely apply more labels? If I had to plot myself according to the traditional divisions above, I’d probably be a centrist, which is to say, nothing. My positions on political issues are not driven by party affiliation, motivated by fear or grievance, subject to a cult of personality, or informed by ideological possession. Perhaps I’m unusual in that I can hold competing ideas in my head (e.g., individualism vs. collectivism) and make pragmatic decisions. Maybe not.

If worthwhile discussion is sought among principled opponents (a big assumption, that), it is necessary to diminish or ignore the more radical voices screaming insults at others. However, multiple perverse incentives reward the most heinous adherents with the greatest attention and control of the narrative(s). In light of the news out just this week, call it Body Slam Politics. It’s a theatrical style borne out of fake drama from the professional wrestling ring (not an original observation on my part), and we know who the king of that style is. Watching it unfold too closely is a guaranteed way to destroy one’s political sensibility, to say nothing of wrecked brain cells. The spectacle depicted in Idiocracy has arrived early.

I’m on the sidelines with the issue of free speech, an observer with some skin in the game but not really much at risk. I’m not the sort to beat my breast and seek attention over what seems to me a fairly straightforward value, though one with lots of competing interpretations. It helps that I have no particularly radical or extreme views to express (e.g., you won’t find me burning the flag), though I am an iconoclast in many respects. The basic value is that folks get to say (and by extension think) whatever they want short of inciting violence. The gambit of the radicalized left has been to equate speech with violence. With hate speech, that may actually be the case. What is recognized as hate speech may be changing, but liberal inclusion strays too far into mere hurt feelings or discomfort, thus the risible demand for safe spaces and trigger warnings suitable for children. If that standard were applied rigorously, free speech as we know it in the U.S. would come to an abrupt end. Whatever SJWs may say they want, I doubt they really want that and suggest they haven’t thought it through well enough yet.

An obvious functional limitation is that one doesn’t get to say whatever one wishes whenever and wherever one wants. I can’t simply breach security and go onto The Tonight Show, a political rally, or a corporate boardroom to tell my jokes, voice my dissent, or vent my dissatisfaction. In that sense, deplatforming may not be an infringement of free speech but a pragmatic decision regarding whom it may be worthwhile to host and promote. Protest speech is a complicated area, as free speech areas designated blocks away from an event are clearly set up to nullify dissent. No attempt is made here to sort out all the dynamics and establish rules of conduct for dissent or the handling of dissent by civil authorities. Someone else can attempt that.

My point with this blog post is to observe that for almost all of us in the U.S., free speech is widely available and practiced openly. That speech has conceptual and functional limitations, such as the ability to attract attention (“move the needle”) or convince (“win hearts and minds”), but short of gag orders, we get to say/think what we want and then deal with the consequences (often irrelevance), if any. Adding terms to the taboo list is a waste of time and does no more to guide people away from thinking or expressing awful things than does the adoption of euphemism or generics. (The terms moron, idiot, and imbecile used to be acceptable psychological classifications, but usage shifted. So many euphemisms and alternatives to calling someone stupid exist that avoiding the now-taboo word retard accomplishes nothing. Relates to my earlier post about epithets.)

Those who complain their free speech has been infringed and those who support free speech vociferously as the primary means of resolving conflict seem not to realize that their objections are less to free speech being imperiled than to its unpredictable results. For instance, the Black Lives Matter movement successfully drew attention to a real problem with police using unnecessary lethal force against black people with alarming regularity. Good so far. The response was Blue Lives Matter, then All Lives Matter, then accusations of separatism and hate speech. That’s the discussion happening — free speech in action. Similarly, when Colin Kaepernick famously took a knee rather than stand and sing the national anthem (hand over heart, uncovered head), a rather modest protest as protests go, he drew attention to racial injustice that then morphed into further, ongoing discussion of who, when, how, and why anyone gets to protest — a metaprotest. Nike’s commercial featuring Kaepernick and the decline of attendance at NFL games are part of that discussion, with the public participating or refusing to participate as the case may be. Discomforts and sacrifices are experienced all around. This is not Pollyannaish assurance that all is well and good in free speech land. Whistleblowers and Me Too accusers know only too well that reprisals ruin lives. Rather, it’s an ongoing battle for control of the narrative(s). Fighting that battle inevitably means casualties. Some engage from positions of considerable power and influence, others as underdogs. The discussion is ongoing.

Among the many complaints that cross my path in the ongoing shitshow that American culture has become is an article titled “The Tragic Decline of Music Literacy (and Quality),” authored by Jon Henschen. His authorship is a rather unexpected circumstance since he is described as a financial advisor rather than an authority on music, technology, or culture. Henschen’s article reports on (without linking to it as I do) an analysis by Joan Serrà, a postdoctoral scholar at the Artificial Intelligence Research Institute, and colleagues. Curiously, the analysis has been reported on and repackaged by quite a few news sites and blogs since its publication in 2012. For example, the YouTube video embedded below makes many of the same arguments and cites the so-called Millennial Whoop, a hook or gesture now ubiquitous in pop music that’s kinda sorta effective until one recognizes it too manifestly and it begins to sound trite, then irritating.

I won’t recount or summarize arguments except to say that neither the Henschen article nor the video discusses the underlying musical issues quite the way a trained musician would. Both are primarily quantitative rather than qualitative, equating an observed decrease in variety of timbre, loudness, and pitch/harmony with worse music (less is more worse). Lyrical (or poetical) complexity has also retreated. It’s worth noting, too, that the musical subject is narrowed to recorded pop music from 1955 to 2010. There’s obviously a lot to know about pop music, but it’s not generally the subject of serious study among academic musicians. AFAIK, no accredited music school offers degrees in pop music. Berklee College of Music probably comes the closest. (How exactly does songwriting as a major differ from composition?) That standard may be relaxing.

Do quantitative arguments demonstrate degradation of pop music? Do reduced variety, range, and experimentation make pop music the equivalent of a paint-by-the-numbers image with a self-imposed limitation to unmixed primary colors? Hard to say, especially if one (like me) has a traditional education in art music and already regards pop music as a rather severe degradation of better music traditions. Reduction of the artistic palette from the richness and variety of, say, 19th-century art music proceeded through the 20th century (i.e., musical composition is now understood by the lay public to mean songs, which is just one musical genre among many) to a highly refined hit-making formula that has been proven to work remarkably well. Musical refinements also make use of new technological tools (e.g., rhythm machines, autotune, digital soundfield processing), which is another whole discussion.

Musical quality isn’t mere quantity (where more is clearly better), however, and some manage pretty well with limited resources. Still, a sameness or blandness is evident and growing within a genre that is already rather narrowly restricted to using drums, guitars, keyboards, and vocals. The antidote Henschen suggests (incentivizing musical literacy and participation, especially in schools) might prove salutary, but such recommendations are ubiquitous throughout modern history. The magical combination of factors that actually catalyzes creativity, as opposed to degradation, remains elusive. Despite impassioned pleas not to allow quality to disappear, nothing could be more obvious than that culture drifts according to its own whims (to anthropomorphize) rather than being steered by well-meaning designs.

More to say in part 2 to follow.

I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases borne out of methodology, and to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Throughout history, plenty of prohibitions against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without saying it manifestly, Harari is citing the precautionary principle. Harari also observed that some of those effects can be displaced hundreds and thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans from our reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.

rant on/

As the world turns and history piles up against us, nature (as distinguished from human civilization) takes hit after hit. One reads periodically about species extinction proceeding at an estimated rate of dozens per day (or even faster), 1,000 to 10,000 times faster than the background rate of extinction, without anthropogenic climate change thrown in. Headlines usually read that large populations of plants or animals show up dead where they once used to thrive. When it’s insects such as crickets or bees, we often lack concern. They’re insects after all, which we happily exterminate from places of human habitation. Although we know they’re significant parts of the terrestrial food web just as plankton function as the base of the marine food web, they’re too small and/or icky for us to identify with closely. Species die-offs occurring with large mammals such as whales or dolphins make it easier to feel empathy. So, too, with aspen trees suffering from beetle infestations and deer populations with chronic wasting disease. When at-risk species finally go extinct, no fanfare, report, or memorial is heard. Here’s an exception: a new tree species discovered and declared extinct at the same time.

Something similar can be said of cities and communities established in hurricane alleys, atop earthquake fault lines, in flood plains, and near active volcanoes. They’re the equivalent of playing Russian roulette. We know the gun will fire eventually because the trigger is pulled repeatedly (by us or by nature itself). Catastrophists believe the planet across long time spans (tens of thousands of years) has always been a killing field or abattoir, though long respites between episodes can be surprisingly nurturing. Still, the rate of natural disasters has been creeping up now for decades. According to the statistics, we can certainly tolerate disaster better (in terms of death rates) than in the early 20th century. Yet the necessity of building out civilization in perilous locations is Pyrrhic. The human species must ineluctably expand its territory wherever it can, other species be damned. We don’t need no stinkin’ whales, dolphins, aspens, deer, bees, crickets, etc. We also don’t need no stinkin’ oceanfront property (Carolina outer banks, New Jersey shore, New Orleans, Houston) that keeps getting hit, requiring regular, predictable rebuilding. Let it all go to ruin. The insurance companies will bail us out, just like the federal government bailed out all those banks playing around with the casino economy a decade ago (which, BTW, hasn’t abated).

The typical metaphor for slow death between major planetary catastrophes is “death by a thousand cuts,” as though what’s happening this time is occurring to us rather than by and because of us. I propose a different metaphor: Jenga tower civilization. The tower is civilization, obviously, which we keep building taller by removing pieces (of nature) from the bottom to stack on top. Jenga (say it everyone: Jenga! Yahtzee!) ends when the entire edifice crashes down into pieces. Until then, it’s all fun and games with no small bit of excitement and intrigue — not so much a game of skill as a game of rank stupidity. Just how far can we build until the eventual crash? It’s built right into the game, right? We know the dynamics and the outcome; we just don’t know when the critical piece will be pulled out from under us. Isn’t the excitement just about killing us?

[image: Jenga tower falling]

rant off/

Heard a curious phrase used with some regularity lately, namely, that “we’ve Nerfed the world.” Nerf refers to the soft, foam toys popular in the 70s and beyond that made balls and projectiles essentially harmless. The implication of the phrase is that we’ve become soft and vulnerable as a result of removing the routine hazards (physical and psychological) of existence. For instance, in the early days of cell phones, I recall padded street poles (like end-zone goalposts) meant to prevent folks with their attention fixated too intently on their phones from harming themselves by stumbling blindly down the sidewalk.

Similarly, anti-bullying sentiment has reached such a fever pitch that no level of discomfort (e.g., simple name calling) can be tolerated lest the victim be scarred for life. The balancing point between preparing children for the competitive realities of the world and protecting their innocence and fragility has accordingly moved heavily in favor of the latter. Folks who never develop the resilience to suffer even modest hardships are snowflakes, and they agitate these days on college campuses (and increasingly in workplaces) to withdraw into safe spaces where their beliefs are never challenged and experiences are never challenging. The other extreme is a hostile, cruel, or at least indifferent world where no one is offered support or opportunity unless he or she falls within some special category, typically connected through family to wealth and influence. Those are the entitled.

A thermostatic response (see Neil Postman for more on this metaphor) is called for here. When things veer too far toward one extreme or the other, a correction is inevitable. Neither extreme is healthy for a functioning society, though the motivations are understandable. Either we toughen people up by providing challenge, which risks brutalizing them unnecessarily, or we protect them from the rigors of life and the consequences of their own choices to such a degree that they become dependent or dysfunctional. Where the proper balance lies is a question for the ages, but I daresay most would agree it’s somewhere squarely in the middle.

Jonathan Haidt and Greg Lukianoff have a new book out called The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure (2018), which is an expansion of an earlier article in The Atlantic of the same title. (Both are callbacks to Allan Bloom’s notorious The Closing of the American Mind (1987), which I’ve read twice. A similar reuse of a famous title is Robert Bork’s Slouching Toward Gomorrah (1996).) I haven’t yet read Haidt’s book and doubt I will bother, but I read the source article when it came out. I also don’t work on a college campus and can’t judge contemporary mood compared to when I was an undergraduate, but I’m familiar with the buzzwords and intellectual fashions reported by academics and journalists. My alma mater is embroiled in these battles, largely in connection with identity politics. I’m also aware of detractors who believe the claims of Haidt and Lukianoff (and others) amount to hysteria limited to a narrow group of progressive colleges and universities.

As with other cultural developments that lie outside my expertise, I punt when it comes to offering (too) strong opinions. However, with this particular issue, I can’t help but think that the two extremes coexist. A noisy group of students attending highly competitive institutions of higher education lead relatively privileged lives compared to those outside the academy, whereas high school grads and dropouts not on that track (and indeed grads of less elite schools) frequently struggle getting their lives going in early adulthood. Most of us face that struggle early on, but success, despite nonsensical crowing about the “best economy ever” from the Oval Office, is difficult to achieve now as the broad socioeconomic middle is pushed toward the upper and lower margins (mostly lower). Stock market notwithstanding, economic reality is frankly indifferent to ideology.