Archive for the ‘Nomenclature’ Category

For more than a decade, I’ve had in the back of my mind a blog post called “The Power of Naming” to remark that bestowing a name gives something power, substance, and in a sense, reality. That post never really came together, but its inverse did. Anyway, here’s a renewed attempt.

The period of language acquisition in early childhood is suffused with learning the names of things, most of which is passive. Names of animals (associated closely with sounds they make) are often a special focus using picture books. The kitty, doggie, and horsie eventually become the cat, dog, and horse. Similarly, the moo-cow and the tweety-bird shorten to cow and bird (though songbird may be an acceptable holdover). Words in the abstract are signifiers of the actual things, aided by the text symbols learned in literate cultures to reinforce mere categories instead of examples grounded in reality. Multiply the names of things several hundred thousand times into adulthood and indeed throughout life and one can develop a formidable vocabulary supporting expressive and nuanced thought and speech. Do you know the differences between acute, right, obtuse, straight, and reflex angles? Does it matter? Does your knowledge of barware inform when to use a flute, coupe, snifter, shot (or shooter or caballito), nosing glass (or Glencairn), tumbler, tankard, goblet, sling, and Stein? I’d say you’ve missed something by never having drunk dark beer (Ger.: Schwarzbier) from a frosted schooner. All these varieties developed for reasons that remain invisible to someone content to drink everything from the venerable red Solo cup. Funnily enough, the red Solo cup now comes in different versions, fooling precisely no one.

Returning to book blogging, Walter Ong (in Orality and Literacy) has curious comparisons between primarily oral cultures and literate cultures. For example:

Oral people commonly think of names (one kind of words) as conveying power over things. Explanations of Adam’s naming of the animals in Genesis 2:20 usually call condescending attention to this presumably quaint archaic belief. Such a belief is in fact far less quaint than it seems to unreflective chirographic and typographic folk. First of all, names do give human beings power over what they name: without learning a vast store of names, one is simply powerless to understand, for example, chemistry and to practice chemical engineering. And so with all other intellectual knowledge. Secondly, chirographic and typographic folk tend to think of names as labels, written or printed tags imaginatively affixed to an object named. Oral folk have no sense of a name as a tag, for they have no idea of a name as something that can be seen. Written or printed representations of words can be labels; real, spoken words cannot be. [p. 33]

This gets at something that has been developing over the past few decades, namely, that as otherwise literate (or functionally literate) people gather more and more information through electronic media (screens that serve broadcast and cable TV, YouTube videos, prerecorded news for streaming, podcasts, and most importantly, audiobooks — all of which speak content to listeners), the spoken word (re)gains primacy and the printed word fades into disuse. Electronic media may produce a hybrid of orality/literacy, but words are no longer silent, internal, and abstract. Indeed, words — all by themselves — are understood as being capable of violence. Gone are the days when “sticks and stones ….” Now, fighting words incite and insults sting again.

Not so long ago, it was possible to provoke a duel with an insult or gesture, such as a glove across the face. Among some people, defense of honor never really disappeared (though dueling did). History has taken a strange turn, however. Proposed legislation to criminalize deadnaming (presumably to protect a small but growing number of transgender and nonbinary people who have redefined their gender identity and accordingly adopted different names) recognizes the violence of words but then tries to transmute the offense into an abstract criminal law. It’s deeply mixed up, and I don’t have the patience to sort it out.

More to say in later blog posts, but I’ll raise the Counter-Enlightenment once more to say that the nature of modern consciousness is shifting somewhat radically in response to stimuli and pressures that grew out of an information environment, roughly 70 years old now but transformed even more fundamentally in the last 25 years, that is substantially discontinuous from centuries-old traditions. Those traditions displaced even older traditions inherited from antiquity. Such is the way of the world, I suppose, and with the benefit of Walter Ong’s insights, my appreciation of the outlines is taking better shape.

Wanted to provide an update to the previous post in my book-blogging project on Walter Ong’s Orality and Literacy to correct something that wasn’t clear to me at first. The term chirographic refers to writing, but I conflated writing more generally with literacy. Ong actually distinguishes chirographic (writing) from typographic (type or print) and includes another category: electronic media.

Jack Goody … has convincingly shown how shifts hitherto labeled as shifts from magic to science, or from the so-called ‘prelogical’ to the more and more ‘rational’ state of consciousness, or from Lévi-Strauss’s ‘savage’ mind to domesticated thought, can be more economically and cogently explained as shifts from orality to various stages of literacy … Marshall McLuhan’s … cardinal gnomic saying, ‘The medium is the message’, registered his acute awareness of the importance of the shift from orality through literacy and print to electronic media. [pp. 28–29]

So the book’s primary contrast is between orality and literacy, but literacy has a sequence of historical developments: chirographic, typographic, and electronic media. These stages are not used interchangeably by Ong. Indeed, they exist simultaneously in the modern world and all contribute to overall literacy while each possesses unique characteristics. For instance, reading from handwriting (printing or cursive, the latter far less widely used now except for signatures) is different from reading from print on paper or on the screen. Further, writing by hand, typing on a typewriter, typing into a word-processor, and composing text on a smartphone each has its effects on mental processes and outputs. Ong also mentions remnants of orality that have not yet been fully extinguished. So the exact mindset or style of consciousness derived from orality vs. literacy is neither fixed nor established universally but contains aspects from each category and subcategory.

Ong also takes a swing at Julian Jaynes. Considering that Jaynes’ book The Origin of Consciousness in the Breakdown of the Bicameral Mind (1977) (see this overview) was published only seven years prior to Orality and Literacy (1982), the impact of Jaynes’ thesis must have still been felt quite strongly (as it is now among some thinkers). Yet Ong disposes of Jaynes rather parsimoniously, stating

… if attention to sophisticated orality-literacy contrasts is growing in some circles, it is still relatively rare in many fields where it could be helpful. For example, the early and late stages of consciousness which Julian Jaynes (1977) describes and relates to neuro-physiological changes in the bicameral mind would also appear to lend themselves largely to much simpler and more verifiable descriptions in terms of a shift from orality to literacy. [p. 29]

In light of the details above, it’s probably not accurate to say (as I did before) that we are returning to orality from literacy. Rather, the synthesis of characteristics is shifting, as it always has, in relation to new stimuli and media. Since the advent of cinema and TV — the first screens, now supplemented by the computer and smartphone — the way humans consume information is undergoing yet another shift. Or perhaps it’s better to conclude that it’s always been shifting, not unlike how we have always been and are still evolving, though the timescales are usually too slow to observe without specialized training and analysis. Shifts in consciousness arguably occur far more quickly than biological evolution, and the rate at which new superstimuli are introduced into the information environment suggests radical discontinuity with even the recent past — something that used to be called the generation gap.

I’ve always wondered what media theorists such as McLuhan (d. 1980), Neil Postman (d. 2003), and now Ong (d. 2003) would make of the 21st century had they lived long enough to witness what has been happening, with 2014–2015 being the significant inflection point according to Jonathan Haidt. (No doubt there are other media theorists working on this issue who have not risen to my attention.) Numerous other analyses point instead to the early 20th century as the era when industrial civilization harnessed fossil fuels and turned the mechanisms and technologies of innovators decidedly against humanity. Pick your branching point.

Caveat: this post is uncharacteristically long and perhaps a bit disjointed. Or perhaps an emerging blogging style is being forged. Be forewarned.

Sam Harris has been the subject of or mentioned in numerous previous blog posts. His podcast Making Sense (formerly, Waking Up), partially behind a paywall but generously offered for free (no questions asked) to those claiming financial hardship, used to be among those I tuned into regularly. Like the Joe Rogan Experience (soon moving to Spotify — does that mean its disappearance from YouTube?), the diversity of guests and reliable intellectual stimulation have been attractive. Calling his podcast Making Sense aligns with my earnest concern over actually making sense of things as the world spins out of control and our epistemological crisis deepens. Yet Harris has been a controversial figure since coming to prominence as a militant atheist. I really want to like what Harris offers, but regrettably, he has lost (most of) my attention. Others reaching the same conclusion have written or vlogged their reasons, e.g., “Why I’m no longer a fan of ….” Do a search.

Having already ranted over specific issues Harris has raised, let me instead register three general complaints. First, once a subject is open for discussion, it’s flogged to death, often without reaching any sort of conclusion, or frankly, helping to make sense. For instance, Harris’ solo discussion (no link) regarding facets of the killing of George Floyd in May 2020, which event sparked still unabated civil unrest, did more to confuse than clarify. It was as though Harris were trying the court case by himself, without a judge, jury, or opposing counsel. My second complaint is that Harris’ verbosity, while impressive in many respects, leads to interviews marred by long-winded, one-sided speeches where the thread is hopelessly lost, blocking an interlocutor from tracking and responding effectively. Whether Harris intends to bury others under an avalanche of argument or does so uncontrollably doesn’t matter. It’s still a Gish gallop. Third is his over-emphasis on hypotheticals and thought experiments. Extrapolation is a useful but limited rhetorical technique, as is distillation. However, treating prospective events as certainties is tantamount to building arguments on poor foundations, namely, abstractions. Much as I admire Harris’ ambition to carve out a space within the public sphere to get paid for thinking and discussing topics of significant political and philosophical currency, he frustrates me enough that I rarely tune in anymore.

(more…)

Caveat: rather overlong for me, but I got rolling …

One of the better articles I’ve read about the pandemic is this one by Robert Skidelsky at Project Syndicate (a publication I’d never heard of before). It reads as only slightly conspiratorial, purporting to reveal the true motivation for lockdowns and social distancing, namely, so-called herd immunity. If that’s the case, it’s basically a silent admission that no cure, vaccine, or inoculation is forthcoming and the spread of the virus can only be managed modestly until it has essentially raced through the population. Of course, the virus cannot be allowed to simply run its course unimpeded, but available impediments are limited. “Flattening the curve,” or distributing the infection and death rates over time, is the only attainable strategy and objective.

Wedding mathematical and biological insights, as well as the law of mass action in chemistry, into an epidemic model may seem obvious now, but it was novel roughly a century ago. We’re also now inclined, if scientifically oriented and informed, to understand the problem and its potential management in terms of engineering rather than medicine (or maybe in terms of triage and palliation). Global response has also made the pandemic into a political issue as governments obfuscate and conceal true motivations behind their handling (bumbling in the U.S.) of the pandemic. Curiously, the article also mentions financial contagion, which is shaping up to be worse in both severity and duration than the viral pandemic itself.
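For the curious, that century-old synthesis is the Kermack–McKendrick SIR model, in which new infections follow a mass-action term (β·S·I) borrowed straight from chemistry. Here’s a minimal sketch — the parameter values are purely illustrative inventions, not fitted to any real outbreak, and this is the generic textbook model, not anything from Skidelsky’s article:

```python
# Minimal SIR epidemic model (Kermack-McKendrick, 1927): the wedding of
# mass-action chemistry with epidemiology. All parameters illustrative.

def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I with Euler steps.
    S, I, R are fractions of the population, so they always sum to 1."""
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # mass-action term
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        history.append((s, i, r))
    return history

# "Flattening the curve": halving the contact rate beta lowers the peak
# of I(t) but stretches the epidemic over a longer period.
fast = simulate_sir(beta=0.5, gamma=0.1, s0=0.999, i0=0.001, days=200)
slow = simulate_sir(beta=0.25, gamma=0.1, s0=0.999, i0=0.001, days=200)
peak_fast = max(i for _, i, _ in fast)
peak_slow = max(i for _, i, _ in slow)
```

The point the toy model makes is exactly the one above: with no cure or vaccine in the equations, lowering β doesn’t prevent infections so much as redistribute them over time.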

(more…)

/rant on

Had a rather dark thought, which recurs but then fades out of awareness and memory until conditions reassert it. Simply put, it’s that the mover-shaker-decision-maker sociopath types in government, corporations, and elsewhere (I refuse to use the term influencer) are typically well protected (primarily by virtue of immense wealth) from threats regular folks face and are accordingly only too willing to sit idly by, scarcely lifting a finger in aid or assistance, and watch dispassionately as others scramble and scrape in response to the buffeting torrents of history. The famous example (even if not wholly accurate) of patrician, disdainful lack of empathy toward others’ plight is Marie Antoinette’s remark: “Let them eat cake.” Citing an 18th-century monarch indicates that such tone-deaf sentiment has been around for a long time.

Let me put it another way, since many of our problems are of our own creation. Our styles of social organization and their concomitant institutions are so overloaded with internal conflict and corruption, which we refuse to eradicate, that it’s as though we continuously tempt fate like fools playing Russian roulette. If we were truly a unified nation, maybe we’d wise up and adopt a different organizational model. But we don’t shoulder risk or enjoy reward evenly. Rather, the disenfranchised and most vulnerable among us, determined in a variety of ways but forming a substantial majority, have revolvers to their heads with a single bullet in one of five or six chambers while the least vulnerable (the notorious 1%) have, in effect, thousands or millions of chambers and an exceedingly remote chance of firing the one with the bullet. Thus, vulnerability roulette.
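The asymmetry can be made concrete with a little arithmetic, using the metaphor’s own chamber counts (nothing here is empirical; the figures only dramatize the gap):

```python
# Vulnerability roulette: survival odds over repeated independent spins.
# Chamber counts are the metaphor's illustrative figures, not data.

def survival_probability(chambers, rounds):
    """P(never firing the loaded chamber) across `rounds` independent spins."""
    return (1 - 1 / chambers) ** rounds

# Say one crisis ("spin") per year for a decade:
vulnerable = survival_probability(chambers=6, rounds=10)         # ~16% survive
insulated = survival_probability(chambers=1_000_000, rounds=10)  # ~99.999%
```

Ten spins of a six-chamber revolver leave roughly one in six players standing; ten spins of a million-chamber one leave essentially everyone. That is the whole rigged game in two lines.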

In the midst of an epochal pandemic and financial crisis, who gets sacrificed like so much cannon fodder while others retreat onto their ocean-going yachts or into their boltholes to isolate from the rabble? Everyone knows it’s always the bottom rungs of the socioeconomic ladder who unjustly suffer the worst, a distinctly raw deal unlikely ever to change. The middle rungs are also suffering now as contraction affects more and more formerly enfranchised groups. Meanwhile, those at the top use crises as opportunities for further plunder. In an article in Rolling Stone, independent journalist Matt Taibbi, who covered the 2008 financial collapse, observes that our fearless leaders (fearless because they secure themselves before and above all else) again made whole the wealthiest few at the considerable expense of the rest:

The $2.3 trillion CARES Act, the Donald Trump-led rescue package signed into law on March 27th, is a radical rethink of American capitalism. It retains all the cruelties of the free market for those who live and work in the real world, but turns the paper economy into a state protectorate, surrounded by a kind of Trumpian Money Wall that is designed to keep the investor class safe from fear of loss.

This financial economy is a fantasy casino, where the winnings are real but free chips cover the losses. For a rarefied segment of society, failure is being written out of the capitalist bargain.

Why is this a “radical rethink”? We’ve seen identical behaviors before: privatization of profit, indemnification of loss, looting of the treasury, and refusal to prosecute exploitation, torture, and crimes against humanity. Referring specifically to financialization, this is what the phrase “too big to fail” means in a nutshell, and we’ve been down this stretch of road repeatedly.

Naturally, the investor class isn’t ordered back to work at slaughterhouses and groceries to brave the epidemic. Low-wage laborers are. Interestingly, well-compensated healthcare workers are also on the vulnerability roulette firing line — part of their professional oaths and duties — but that industry is straining under pressure from its inability to maintain profitability during the pandemic. Many healthcare workers are being sacrificed, too. Then there are tens of millions newly unemployed and uninsured being told that the roulette must continue into further months of quarantine, the equivalent of adding bullets to the chambers until their destruction is assured. The pittance of support for those folks (relief checks delayed or missing w/o explanation or recourse and unemployment insurance if one qualifies, meaning not having already been forced into the gig economy) does little to stave off catastrophe.

Others around the Web have examined the details of several rounds of bailout legislation and found them unjust in the extreme. Many of the provisions actually heap insult and further injury upon injury. Steps that could have been taken, and in some instances were undertaken in past crises (such as during the Great Depression), don’t even rate consideration. Those safeguards might include debt cancellation, universal basic income (perhaps temporary), government-supported healthcare for all, and reemployment through New Deal-style programs. Instead, the masses are largely left to fend for themselves, much like the failed Federal response to Hurricane Katrina.

Some of this is no doubt ideological. A professional class of ruling elites are the only ones to be entrusted with guiding the ship of state, or so goes the political philosophy. But in our capitalist system, government has been purposefully hamstrung and hollowed out to the point of dysfunction precisely so that private enterprise can step in. And when magical market forces fail to stem the slide into oblivion, “Welp, sorry, th-th-that’s all folks,” say the supposed elite. “Nothing we can do to ease your suffering! Our attentions turn instead to ourselves, the courtiers and sycophants surrounding us, and the institutions that enable our perfidy. Now go fuck off somewhere and die, troubling us no more.”

/rant off

A complex of interrelated findings about how consciousness handles the focus of perception has been making the rounds. Folks are recognizing the limited time each of us has to deal with everything pressing upon us for attention and are adopting the notion of the bandwidth of consciousness: the limited amount of perception / memory / thought one can access or hold at the forefront of attention compared to the much larger amount occurring continuously outside of awareness (or figuratively, under the hood). Similarly, the myriad ways attention is diverted by advertisers and social media (to name just two examples) to channel consumer behaviors or increase time-on-device metrics have become commonplace topics of discussion. I’ve used the terms information environment, media ecology, and attention economy in past posts on this broad topic.

Among the most important observations is how the modern infosphere has become saturated with content, much of it entirely pointless (when not actively disorienting or destructive), and how many of us willingly tune into it without interruption via handheld screens and earbuds. It’s a steady flow of stimulation (overstimulation, frankly) that is the new normal for those born and/or bred to the screen (media addicts). Its absence or interruption is discomfiting (like a toddler’s separation anxiety). However, mental processing of information overflow is tantamount to drinking from a fire hose: only a modest fraction of the volume rushing nonstop can be swallowed. Promoters of meditation and presencing, whether implied or manifest, also recognize that human cognition requires time and repose to process and consolidate experience, transforming it into useful knowledge and long-term memory. More and more stimulation added on top is simply overflow, like a faucet filling the bathtub faster than the drain can let water out, spilling overflow onto the floor like digital exhaust. Too bad that the sales point of these promoters is typically getting more done, because dontcha know, more is better even when recommending less.

Quanta Magazine has a pair of articles (first and second) by the same author (Jordana Cepelewicz) describing how the spotlight metaphor for attention is only partly how cognition works. Many presume that the mind normally directs awareness or attention to whatever the self prioritizes — a top-down executive function. However, as any loud noise, erratic movement, or sharp pain demonstrates, some stimuli are promoted to awareness by virtue of their individual character — a bottom-up reflex. The fuller explanation is that neuroscientists are busy researching brain circuits and structures that prune, filter, or gate the bulk of incoming stimuli so that attention can be focused on the most important bits. For instance, the article mentions how visual perception circuits process categories of small and large differently, partly to separate figure from ground. Indeed, for cognition to work at all, a plethora of inhibitory functions enable focus on a relatively narrow subset of stimuli selected from the larger set of available stimuli.

These discussions about cognition (including philosophical arguments about (1) human agency vs. no free will or (2) whether humans exist within reality or are merely simulations running inside some computer or inscrutable artificial intelligence) so often get lost in the weeds. They read like distinctions without differences. No doubt these are interesting subjects to contemplate, but at the same time, they’re sorta banal — fodder for scientists and eggheads that most average folks dismiss out of hand. In fact, selective and inhibitory mechanisms are found elsewhere in human physiology, such as pairs of muscles to move to and fro or appetite stimulants / depressants (alternatively, activators and deactivators) operating in tandem. Moreover, interactions are often not binary (on or off) but continuously variable. For my earlier post on this subject, see this.

/rant on

Yet another journalist has unburdened herself (unbidden story of personal discovery masquerading as news) of her addiction to digital media and her steps to free herself from the compulsion to be always logged onto the onslaught of useless information hurled at everyone nonstop. Other breaking news offered by our intrepid late-to-the-story reporter: water is wet, sunburn stings, and the Earth is dying (actually, we humans are actively killing it for profit). Freeing oneself from the screen is variously called digital detoxification (detox for short), digital minimalism, digital disengagement, digital decoupling, and digital decluttering (really ought to be called digital denunciation) and means limiting the duration of exposure to digital media and/or deleting one’s social media accounts entirely. Naturally, there are apps (counters, timers, locks) for that. Although the article offers advice for how to disentangle from screen addictions of the duh! variety (um, just hit the power switch), the hidden-in-plain-sight objective is really how to reengage after breaking one’s compulsions but this time asserting control over the infernal devices that have taken over life. It’s a love-hate style of technophilia and chock full of illusions embarrassing even to children. Because the article is nominally journalism, the author surveys books, articles, software, media platforms, refuseniks, gurus, and opinions galore. So she’s partially informed but still hasn’t demonstrated a basic grasp of media theory, the attention economy, or surveillance capitalism, all of which relate directly. Perhaps she should bring those investigative journalism skills to bear on Jaron Lanier, one of the more trenchant critics of living online.

I rant because the embedded assumption is that anything, everything occurring online is what truly matters — even though online media didn’t yet exist as recently as thirty years ago — and that one must (must I say! c’mon, keep up!) always be paying attention to matter in turn or suffer from FOMO. Arguments in favor of needing to be online for information and news gathering are weak and ahistorical. No doubt the twisted and manipulated results of Google searches, sometimes contentious Wikipedia entries, and various dehumanizing, self-as-brand social media platforms are crutches we all now use — some waaaay, way more than others — but they’re nowhere close to the only or best way to absorb knowledge or stay in touch with family and friends. Career networking in the gig economy might require some basic level of connection but shouldn’t need to be the all-encompassing, soul-destroying work maintaining an active public persona has become.

Thus, everyone is chasing likes and follows and retweets and reblogs and other analytics as evidence of somehow being relevant on the sea of ephemera floating around us like so much disused, discarded plastic in those infamous garbage gyres. (I don’t bother to chase and wouldn’t know how to drive traffic anyway. Screw all those solicitations for search-engine optimization. Paying for clicks is for chumps, though lots apparently do it to enhance (read: lie about) their analytics.) One’s online profile is accordingly a mirror of or even a substitute for the self — a facsimile self. Lost somewhere in my backblog (searched, couldn’t find it) is a post referencing several technophiles positively celebrating the bogus extension of the self accomplished by developing and burnishing an online profile. It’s the domain of celebrities, fame whores, narcissists, and sociopaths, not to mention a few criminals. Oh, and speaking of criminals, recent news is that OJ Simpson just opened a Twitter account to reform his disastrous public image but is fundamentally outta touch with how deeply icky, distasteful, and disgusting it feels to others for him to be participating once again in the public sphere. Disgraced celebrities (criminals, really) negatively associated with the Me-Too Movement (is there really such a movement or was it merely a passing hashtag?) have mostly crawled under their respective multimillion-dollar rocks and not been heard from again. Those few who have tried to reemerge are typically met with revulsion and hostility (plus some inevitable star-fuckers with short memories). Hard to say when, if at all, forgiveness and rejoining society become appropriate.

/rant off

As I reread what I wrote 2.5 years ago in my first blog post on this topic, I surmise that the only update needed to my initial assessment is a growing pile of events that demonstrate my thesis: our corrupted information environment is too taxing on human cognition, with the result that a small but growing segment of society gets radicalized (wound up like a spring) and relatively random individuals inevitably pop, typically in a self-annihilating gush of violence. News reports bear this out periodically, as one lone-wolf kook after another takes it upon himself (are there any examples of females doing this?) to shoot or blow up some target, typically chosen irrationally or randomly though for symbolic effect. More journalists and bloggers are taking note of this activity and evolving or resurrecting nomenclature to describe it.

The earliest example I’ve found offering nomenclature for this phenomenon is a blog with a single post from 2011 (oddly, no follow-up) describing so-called stochastic terrorism. Other terms include syntactic violence, semantic violence, and epistemic violence, but they all revolve around the same point. Whether on the sending or receiving end of communications, some individuals are particularly adept at or sensitive to dog whistles that over time activate and exacerbate tendencies toward radical ideology and violence. Wired has a brief article from a few days ago discussing stochastic terrorism as jargon, which is basically what I’m doing here. Admittedly, the last of these terms, epistemic violence (alternative: epistemological violence), ranges farther afield from the end effect I’m calling wind-up toys. For instance, this article discussing structural violence is much more academic in character than when I blogged on the same term (one of a handful of “greatest hits” for this blog that return search-engine hits with some regularity). Indeed, just about any of my themes and topics can be given a dry, academic treatment. That’s not my approach (I gather opinions differ on this account, but I insist that real academic work is fundamentally different from my armchair cultural criticism), but it’s entirely valid despite being a bit remote for most readers. One can easily get lost down the rabbit hole of analysis.
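The “stochastic” in stochastic terrorism has a precise statistical reading: when a very large audience is exposed to incitement and each member carries a vanishingly small, independent probability of acting out, the count of those who do is approximately Poisson-distributed — predictable in aggregate, unpredictable individually. A toy illustration (every number below is invented for the sake of the example):

```python
# Why "stochastic": N people exposed to a dog whistle, each with a tiny
# independent probability p of acting violently. The number of actors is
# approximately Poisson(N*p): statistically near-certain in aggregate,
# individually impossible to predict. All figures purely illustrative.
import math

def poisson_pmf(k, lam):
    """P(exactly k events) under a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

N, p = 10_000_000, 3e-7   # huge audience, vanishingly rare response
lam = N * p               # expected number of actors: 3.0

p_none = poisson_pmf(0, lam)   # chance nobody at all acts: ~5%
p_at_least_one = 1 - p_none    # ~95%: some attack is near-certain
```

Which is exactly the lone-wolf pattern described above: no particular individual can be identified in advance, yet that somebody will pop is close to a statistical certainty.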

If indeed it’s mere words and rhetoric that transform otherwise normal people into criminals and mass murderers, then I suppose I can understand the distorted logic of the far Left that equates words and rhetoric themselves with violence, followed by the demand that they be provided with warnings and safe spaces lest they be triggered by what they hear, read, or learn. As I understand it, the fear is not so much that vulnerable, credulous folks will be magically turned into automatons wound up and set loose in public to enact violent agendas but instead that virulent ideas and knowledge (including many awful truths of history) might cause discomfort and psychological collapse akin to what happens when targets of hate speech and death threats are reduced, say, to quivering agoraphobia. Desire for protection from harm is thus understandable. The problem with such logic, though, is that protections immediately run afoul of free speech, a hallowed but misunderstood American institution that preempts quite a few restrictions many would have placed on the public sphere. Protections also stall learning and truth-seeking straight out of the gate. And besides, preemption of preemption doesn’t work.

In information theory, the notion of a caustic idea taking hold of an unwilling person and having its wicked way with him or her is what’s called a mind virus or meme. The viral metaphor accounts for the infectious nature of ideas as they propagate through the culture. For instance, every once in a while, a charismatic cult emerges and inducts new members, a suicide cluster appears, or suburban housewives develop wildly disproportionate phobias about Muslims or immigrants (or worse, Muslim immigrants!) poised at their doorsteps with intentions of rape and murder. Inflaming these phobias, often done by pundits and politicians, is precisely the point of semantic violence. Everyone is targeted but only a few are affected to the extreme of acting out violently. Milder but still invalid responses include the usual bigotries: nationalism, racism, sexism, and all forms of tribalism, “othering,” or xenophobia that seek to insulate oneself safely among like folks.

Extending the viral metaphor, to protect oneself from infectious ideas requires exposure, not insulation. Think of it as a healthy immune system built up gradually, typically early in life, through slow, steady exposure to harm. The alternative is hiding oneself away from germs and disease, which has the ironic result of weakening the immune system. For instance, I learned recently that peanut allergies can be overcome by gradual exposure — a desensitization process — but are exacerbated by removal of peanuts from one’s environment and/or diet. This is what folks mean when they say the answer to hate speech is yet more (free) speech. The nasty stuff can’t be dealt with properly when it’s quarantined, hidden away, suppressed, or criminalized. Maybe there are exceptions. Science fiction entertains those dangers with some regularity, where minds are shunted aside to become hosts for invaders of some sort. That might be overstating the danger somewhat, but violent eruptions may lend it some credence.

I’m on the sidelines with the issue of free speech, an observer with some skin in the game but not really much at risk. I’m not the sort to beat my breast and seek attention over what seems to me a fairly straightforward value, though one with lots of competing interpretations. It helps that I have no particularly radical or extreme views to express (e.g., you won’t find me burning the flag), though I am an iconoclast in many respects. The basic value is that folks get to say (and by extension think) whatever they want short of inciting violence. The gambit of the radicalized left has been to equate speech with violence. With hate speech, that may actually be the case. What is recognized as hate speech may be changing, but liberal inclusion strays too far into mere hurt feelings or discomfort, thus the risible demand for safe spaces and trigger warnings suitable for children. If that standard were applied rigorously, free speech as we know it in the U.S. would come to an abrupt end. Whatever SJWs may say they want, I doubt they really want that and suggest they haven’t thought it through well enough yet.

An obvious functional limitation is that one doesn’t get to say whatever one wishes whenever and wherever one wants. I can’t simply breach security and go onto The Tonight Show, a political rally, or a corporate boardroom to tell my jokes, voice my dissent, or vent my dissatisfaction. In that sense, deplatforming may not be an infringement of free speech but a pragmatic decision regarding whom it may be worthwhile to host and promote. Protest speech is a complicated area, as free speech areas designated blocks away from an event are clearly set up to nullify dissent. No attempt is made here to sort out all the dynamics and establish rules of conduct for dissent or the handling of dissent by civil authorities. Someone else can attempt that.

My point with this blog post is to observe that for almost all of us in the U.S., free speech is widely available and practiced openly. That speech has conceptual and functional limitations, such as the ability to attract attention (“move the needle”) or convince (“win hearts and minds”), but short of gag orders, we get to say/think what we want and then deal with the consequences (often irrelevance), if any. Adding terms to the taboo list is a waste of time and does no more to guide people away from thinking or expressing awful things than does the adoption of euphemism or generics. (The terms moron, idiot, and imbecile used to be acceptable psychological classifications, but usage shifted. So many euphemisms and alternatives to calling someone stupid exist that avoiding the now-taboo word retard accomplishes nothing. This relates to my earlier post about epithets.)

Those who complain their free speech has been infringed and those who support free speech vociferously as the primary means of resolving conflict seem not to realize that their objection is less that free speech is imperiled than that its results are unpredictable. For instance, the Black Lives Matter movement successfully drew attention to a real problem with police using unnecessary lethal force against black people with alarming regularity. Good so far. The response was Blue Lives Matter, then All Lives Matter, then accusations of separatism and hate speech. That’s the discussion happening — free speech in action. Similarly, when Colin Kaepernick famously took a knee rather than stand and sing the national anthem (hand over heart, uncovered head), a rather modest protest as protests go, he drew attention to racial injustice that then morphed into further, ongoing discussion of who, when, how, and why anyone gets to protest — a metaprotest. Nike’s commercial featuring Kaepernick and the decline of attendance at NFL games are part of that discussion, with the public participating or refusing to participate as the case may be. Discomforts and sacrifices are experienced all around. This is not Pollyannaish assurance that all is well and good in free speech land. Whistleblowers and Me Too accusers know only too well that reprisals ruin lives. Rather, it’s an ongoing battle for control of the narrative(s). Fighting that battle inevitably means casualties. Some engage from positions of considerable power and influence, others as underdogs. The discussion is ongoing.

I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases borne out of methodology, and to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple of decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile at the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Prohibitions throughout history against releasing disruptive technologies have routinely gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without naming it explicitly, Harari is invoking the precautionary principle. Harari also observed that some of those effects can be displaced hundreds and thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans due to reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.