Archive for the ‘Idle Nonsense’ Category

I started reading Yuval Harari’s book Homo Deus: A Brief History of Tomorrow (2017). Had expected to read Sapiens (2014) first but its follow-up came into my possession instead. My familiarity with Harari’s theses and arguments stems from his gadfly presence on YouTube being interviewed or giving speeches promoting his books. He’s a compelling yet confounding thinker, and his distinctive voice in my mind’s ear lent my reading the quality of an audiobook. I’ve only read the introductory chapter (“A New Human Agenda”) so far, the main argument being this:

We have managed to bring famine, plague and war under control thanks largely to our phenomenal economic growth, which provides us with abundant food, medicine, energy and raw materials. Yet this same growth destabilises the ecological equilibrium of the planet in myriad ways, which we have only begun to explore … Despite all the talk of pollution, global warming and climate change, most countries have yet to make any serious economic or political sacrifices to improve the situation … In the twenty-first century, we shall have to do better if we are to avoid catastrophe. [p. 20]

“Do better”? Harari’s bland understatement of the catastrophic implications of our historical moment is risible. Yet as a consequence of having (at least temporarily) brought three major historical pestilences (no direct mention of the fabled Four Horsemen of the Apocalypse) under administrative, managerial, and technical control (I leave that contention unchallenged), Harari states rather over-confidently — forcefully even — that humankind is now turning its attention and ambitions toward different problems, namely, mortality (the fourth of the Four Horsemen and one of the defining features of the human condition), misery, and divinity.

Harari provides statistical support for his thesis (mere measurement offered as indisputable evidence — shades of Steven Pinker in Enlightenment Now), none of which I’m in a position to refute. However, his contextualization, interpretation, and extrapolation of trends purportedly demonstrating how humans will further bend the arc of history strike me as absurd. Harari also misses the two true catalyzing factors underlying growth and trends that have caused history to go vertical: (1) a fossil-fuel energy binge of roughly two and one-half centuries that peaked more than a decade ago and (2) improved information and material flows and processing that enabled managerial and bureaucratic functions to transcend time and space or at least lessen their constraints on human activity dramatically. James Beniger addresses information flow and processing in his book The Control Revolution (1989). Many, many others have provided in-depth analyses of energy uses (or inputs) because, contrary to the familiar song lyric, it’s energy that makes the world go round. No one besides Harari (to my knowledge, but I’m confident some lamebrained economist agrees with him) leaps to the unwarranted conclusion that economic growth is the principal forcing factor of the last 2–3 centuries.

I’ve taken issue with Harari before (here and here) and will not repeat those arguments. My impression of Homo Deus, now that I’ve got 70 pages under my belt, is that Harari wants to have it both ways: vaguely optimistic (even inspirational and/or aspirational) regarding future technological developments (after all, who doesn’t want the marvels and wonders we’ve been ceaselessly teased and promised?) yet precautionary because those very developments will produce disruptive and unforeseeable side effects (black swans) we can’t possibly yet imagine. To his credit, Harari’s caveats regarding unintended consequences are plain and direct. For instance, one of the main warnings is that the way we treat nonhuman species is the best model for how we humans will in turn be treated when superhumans or strong AI appear, which Harari believes is inevitable so long as we keep tinkering. Harari also indicates that he’s not advocating for any of these anticipated developments but is merely mapping them as likely outcomes of human restlessness and continued technological progress.

Harari’s disclaimers do not convince me; his writing is decidedly Transhumanist in character. In the limited portion I’ve read, Harari comes across far more like “golly, gee willikers” at human cleverness and potential than as someone seeking to slam on the brakes before we innovate ourselves out of relevance or existence. In fact, by focusing on mortality, misery, and divinity as future projects, Harari gets to indulge in making highly controversial (and fatuous) predictions regarding one set of transformations that can happen only if the far more dire and immediate threats of runaway global warming and nonlinear climate change don’t first lead to the collapse of industrial civilization and near-term extinction of humans alongside most other species. My expectation is that this second outcome is far more likely than anything contemplated by Harari in his book.

Update: Climate chaos has produced the wettest winter, spring, and summer on record, which shows no indication of abating. A significant percentage of croplands in flooded regions around the globe is unplanted, and those that are planted are stunted and imperiled. Harari’s confidence that we had that famine problem licked is being sorely tested.


This is about to get weird.

I caught a good portion of a recent Joe Rogan podcast (sorry, no link or embedded video) with Alex Jones and Eddie Bravo (nearly 5 hours long instead of the usual 2 to 3) where the trio indulged themselves in speculation about a purported grand conspiracy to destroy civilization and establish a new post-human one. The more Jones rants (er, speaks), which is quite a lot, the more he sounds like a madman. But he insists he does so to serve the public. He sincerely wants people to know things he’s figured out about an evil cabal of New World Order types. So let me say at least this: “Alex Jones, I hear you.” But I’m unconvinced. Apologies to Alex Jones et al. if I got any details wrong. For instance, it’s not clear to me whether Jones believes this stuff himself or is merely reporting what others may believe.

The grand conspiracy is supposedly interdimensional beings operating at a subliminal range below or beyond normal human perception. Perhaps they revealed themselves to a few individuals (to the cognoscenti, ya know, or is that shared revelation how one is inducted into the cognoscenti?). Rogan believes that ecstatic states induced by drugs provide access to revelation, like tuning a radio to the correct (but secret) frequency. Whatever exists in that altered cognitive state appears like a dream and is difficult to understand or remember. The overwhelming impression Rogan reports as lasting is of a distinct nonhuman presence.

Maybe I’m not quite as barking mad as Jones or as credulous as Rogan and Bravo, but I have to point out that humans are interdimensional beings. We move through three dimensions of space and one unidirectional dimension of time. If that doesn’t quite make sense, then I refer readers to Edwin Abbott’s well-known book Flatland. Abbott describes what it might be like for conscious beings in only two dimensions of space (or one). Similarly, for most of nature outside of vertebrates, it’s understood that consciousness, if it exists at all (e.g., not in plants), is so rudimentary that there is no durable sense of time. Beings exist in an eternal now (could be several seconds long/wide/tall — enough to function) without memory or anticipation. With that in mind, the possibility of multidimensional beings in 5+ dimensions completely imperceptible to us doesn’t bother me in the least. The same is true of the multiverse or many-worlds interpretation. What bothers me is that such beings would bother with us, especially with a conspiracy to crash civilization.

The other possibility at which I roll my eyes is a post-human future: specifically, a future when one’s consciousness escapes its biological boundaries. The common trope is that one’s mind is uploaded to a computer to exist in the ether. Another is that one transcends death somehow with intention and purpose instead of simply ceasing to be (as atheists believe) or some variation of the far more common religious heaven/hell/purgatory myth. This relates as well to the supposition of strong AI about to spark (the Singularity): self-awareness and intelligent thought that can exist on some substrate other than human biology (the nervous system, really, including the brain). Sure, cognition can be simulated for some specific tasks like playing chess or go, and we humans can be fooled easily into believing we are communicating with a thought machine à la the Turing Test. But the rather shocking sophistication, range, utility, and adaptability of even routine human consciousness are so far beyond any current simulation that the usual solution to get engineers from where they are now to real, true, strong AI is always “and then a miracle happened.” The easy, obvious route/accident is typically a power surge (e.g., a lightning strike).

Why bother with mere humans is a good question if one is post-human or an interdimensional being. It could well be that existence in such a realm would make watching human interactions either impenetrable (news flash, they are already) or akin to watching through a dim screen. A familiar trope is the lost soul imprisoned in the spirit world, a parallel dimension that permits viewing from one side only but prohibits contact except perhaps through psychic mediums (if you believe in such folks — Rogan for one doesn’t).

The one idea worth repeating from the podcast is the warning not to discount all conspiracy theories out of hand as bunk. At least a few have been demonstrated to be true. Whether any of the sites behind that link are to be believed I leave you readers to judge.

Addendum: Although a couple of comments came in, no one puzzled over the primary piece I had to add, namely, that we humans are interdimensional beings. The YouTube video below depicts a portion of the math/science behind my statement, showing how at least two topological surfaces behave paradoxically when limited to 2 or 3 dimensions but theoretically cohere in 4+ dimensions imperceptible to us.
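For readers who prefer equations to video, here is a minimal sketch of the kind of result I have in mind, assuming a standard example such as the Klein bottle (the video may well use a different surface or construction). The Klein bottle is a closed surface that cannot sit in three-dimensional space without passing through itself, yet a single set of coordinates in four dimensions realizes it with no self-intersection:

% One well-known embedding of the Klein bottle in R^4, with radii R > r > 0
% and parameters u, v running over [0, 2*pi):
\begin{align*}
  x(u,v) &= (R + r\cos v)\cos u \\
  y(u,v) &= (R + r\cos v)\sin u \\
  z(u,v) &= r\sin v\,\cos\tfrac{u}{2} \\
  w(u,v) &= r\sin v\,\sin\tfrac{u}{2}
\end{align*}
% Shifting u by 2*pi lands on the same point as flipping v to -v, which is
% exactly the Klein bottle's twisted gluing, so the surface closes up cleanly
% in R^4. Discarding the fourth coordinate w (projecting into ordinary 3D
% space) necessarily introduces self-intersections, since the Klein bottle
% cannot be embedded in R^3 at all.

In other words, the “paradox” lives entirely in the projection down to our three dimensions, not in the surface itself.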

Everyone is familiar with the convention in entertainment media where characters speak without the use of recognizable language. (Not related really to the convention of talking animals.) The first instance I can recall (someone correct me if earlier examples are to be found) is the happy-go-lucky bird Woodstock from the old Peanuts cartoons (do kids still recognize that cast of characters?), whose dialogue was shown graphically as a series of vertical lines.

When the cartoon made its way onto TV for holiday specials, its creator Charles Schulz used the same convention to depict adults, never shown onscreen but with dialogue voiced by a Harmon-muted trombone. Roughly a decade later, two characters from the Star Wars franchise “spoke” in languages only other Star Wars characters could understand, namely, Chewbacca (Chewie) and R2-D2. More recently, the character Groot from Guardians of the Galaxy (known to me only through the Marvel movie franchise, not through comic books) speaks only one line of dialogue, “I am Groot,” which is understood as full speech by other Guardians characters. When behemoths larger than a school bus (King Kong, Godzilla, Jurassic dinosaurs, Cloverfield, Kaiju, etc.) appear, the characters are typically denied the power of speech beyond the equivalent of a lion’s roar. (True villains talk little or not at all as they go about their machinations — no monologuing! unless it’s a James Bond film. An exception notable for its failure to charm audiences is Ultron, who wouldn’t STFU. You can decide for yourself which is the worse kind of villainy.)

This convention works well enough for storytelling and has the advantage of allowing the reader/viewer to project onto otherwise blank speech. However, when imported into the real world, especially in politics, the convention founders. There is no Babelfish universal translator inserted in the ear to transform nonsense into coherence. The obvious example of babblespeech is 45, whose speech when off the teleprompter is a series of rambling non sequiturs, free associations, slogans, and sales pitches. Transcripts of anyone’s extemporaneous speech reveal lots of restarts and blind alleys; we all interrupt ourselves to redirect. However, word salad that substitutes for meaningful content in 45’s case is tragicomic: alternately entirely frustrating or comically entertaining depending on one’s objective. Satirical news shows fall into the second category.

45 is certainly not the first. Sarah Palin in her time as a media darling (driver of ratings and butt of jokes — sound familiar?) had a knack for crazy speech combinations that were utter horseshit yet oddly effective for some credulous voters. She was even a hero to some (nearly a heartbeat away from being the very first PILF). We’ve also now been treated to a series of public interrogations where a candidate for a cabinet post or an accused criminal offers testimony before a congressional panel. Secretary of Education Betsy DeVos famously evaded simple yes/no questions during her confirmation hearing, and Supreme Court Justice Brett Kavanaugh similarly refused to provide direct answers to direct questions. Unexpectedly, sacrificial lamb Michael Cohen gave direct answers to many questions, but his interlocutors then didn’t quite know how to respond, considering their experience and expectation that no one answers appropriately.

What all this demonstrates is that there is often a wide gulf between what is said and what is heard. In the absence of what might be understood as effective communication (honest, truthful, and forthright), audiences and voters fill in the blanks. Ironically, we also can’t handle (or hear) too much truth when confronted by its awfulness. None of this is a problem in storytelling, but when found in political narratives, it’s emblematic of how dysfunctional our communications have become, and with them, the clear thought and principled activity of governance.

Some while back, Scott Adams (my general disdain for him noted but unexpanded, since I’m not in the habit of shitting on people), using his knowledge of hypnosis, began pushing the string (er, selling the narrative) that our Commander-in-Chief is cannily adept at the art of persuasion. I, for one, am persuaded by neither Adams nor 45 but must admit that many others are. Constant shilling for control of narratives by agents of all sorts could not be more transparent (for me at least), rendering the whole enterprise null. Similarly, when I see an advertisement (infrequently, I might add, since I use ad blockers and don’t watch broadcast TV or news programs), I’m rarely inclined to seek more information or make a purchase. Once in a long while, an ad creeps through my defenses and hits one of my interests, and even then, I rarely respond because, duh, it’s an ad.

In the embedded video below, Stuart Ewen describes how some learned to exploit a feature (not a bug) in human cognition, namely, appeals to emotion that overwhelm rational response. The most obvious, well-worn example is striking fear into people’s hearts and minds to convince them of an illusion of safety necessitating relinquishing civil liberties and/or fighting foreign wars.

The way Ewen uses the term consciousness differs from the way I use it. He refers specifically to opinion- and decision-making (the very things vulnerable to manipulation) rather than the more generalized and puzzling property of having an individual identity or mind and with it self-awareness. In fact, Ewen uses the terms consciousness industry and persuasion industry instead of public relations and marketing to name those who spin information and thus public discourse. At some level, absolutely everyone is guilty of seeking to persuade others, which again is a basic feature of communication. (Anyone negotiating the purchase of, say, a new or used car faces the persuasion of the sales agent with some skepticism.) What turns it into something maniacal is using lies and fabrication to advance agendas against the public interest, especially where public opinion is already clear.

Ewen also points to early 20th-century American history, where political leaders and marketers were successful in manipulating mass psychology in at least three ways: (1) drawing the pacifist U.S. public into two world wars of European origin, (2) transforming citizens into consumers, thereby saving capitalism from its inherently self-destructive endgame (creeping up on us yet again), and (3) suppressing emergent collectivism, namely, socialism. Of course, unionism as a collectivist institution still gained considerable strength but only within the larger context of capitalism, e.g., achieving the American Dream in purely financial terms.

So getting back to Scott Adams’ argument, the notion that the American public is under some form of mass hypnosis (persuasion) and that 45 is the master puppeteer is perhaps half true. Societies do sometimes go mad and fall under the spell of a mania or cult leader. But 45 is not the driver of the current episode, merely the embodiment. I wouldn’t say that 45 figured out anything because that awards too much credit to presumed understanding and planning. Rather, he worked out (accidentally and intuitively — really by default considering his job in 2016) that his peculiar self-as-brand could be applied to politics by treating it all as reality TV, which by now everyone knows is its own weird unreality the same way professional wrestling is fundamentally unreal. (The term political theater applies here.) He demonstrated a knack (at best) for keeping the focus firmly on himself and driving ratings (abetted by the mainstream media that had long regarded him as a clown or joke), but those objectives were never really in service of a larger political vision. In effect, the circus brought to town offers its own bizarre constructed narrative, but its principal characteristic is gawking, slack-jawed, made-you-look narcissism, not any sort of proper guidance or governance.

For a time after the 2008 financial collapse, skyscraper projects in Chicago came to a dead halt, mostly due to dried-up financing. My guess (since I don’t know with any reliability) is that much the same obtained worldwide. However, the game appears to be back on, especially in New York City, one of few cities around the globe where so-called “real money” tends to pool and collect. Visual Capitalist has an interesting infographic depicting changes to the NYC skyline every 20 years. The number of supertalls topping 1,000 feet expected by 2020 is quite striking.

Courtesy of Visual Capitalist

The accompanying text admits that NYC is left in the dust by China, specifically, the Pearl River Delta Megacity, which includes Hong Kong, Shenzhen, Macau, and others. As I’ve written before, the mad rush to build (earning ridiculous, absurd, imaginary prestige points awarded by and to exactly no one) takes no apparent notice of a slo-mo crack-up in the way modern societies organize and fund themselves. The new bear market might give one … um, pause.

Also left in the dust is Chicago, home of the original skyscraper. Since the 2008 collapse, Chicago’s most ambitious project, the ill-fated Chicago Spire (a/k/a the Fordham Spire), was abandoned despite a big hole dug in the ground and some foundation work completed. An absence of completed prestige projects since 2008 means Chicago has been lapped several times over by NYC, not that anyone is counting. The proposed site of the Chicago Spire is too enticing, however — just inside Lake Shore Drive at the mouth of the Chicago River — for it to lie dormant for long. Indeed, a press release last year (which escaped my attention at the time) announced redevelopment of the site, and a slick website is operating for now (I’ve linked in the past to similar sites that were later abandoned along with their subject projects). As also reported late last year, Chicago appears to have rejoined the game in earnest, with multiple projects already under construction and others in the planning/approval phases.

So if hiatus was called the last time we crashed financially (a regular occurrence, I note), it seems we’ve called hiatus on the hiatus and are back in a mad, futile race to remake modernity into gleaming vertical cities dotting the globe. Such hubris and exuberance might be intoxicating to technophiles, but I’m reminded of an observation (can’t locate a quote, sorry) to the effect that civilizations’ most extravagant projects are undertaken just before their collapses. Our global civilization is no different.

I’ve written a different form of this blog post at least once before, maybe more. Here’s the basic thesis: the bizarro unreality of the world in which we now live is egregious enough to make me wonder if we haven’t veered wildly off the path at some point and now exist within reality prime. I suppose one can choose any number of historical inflections to represent the branching point. For me, it was the reelection of George W. Bush in 2004. (The 9/11 attacks and “wars” in Afghanistan and Iraq had already occurred or commenced by then, and it had already become clear that the lies — Saddam had WMDs — that sold the American public on the Iraq “war” were effective and remain so today.) Lots of other events changed the course of history, but none other felt as much to me like a gut punch precisely because, in the case of the 2004 presidential election, we chose our path. I fantasized waking up from my reality-prime nightmare but eventually had to grudgingly accept that if multiverses exist, ours (or at least mine) had become one where we chose (collectively, and just barely) to keep in office an executive who behaved like a farce of stupidity. Well, joke’s on us. Twelve years later, we chose someone even more stupid, though with a “certain serpentine cunning,” and with arguably the worst character of any U.S. executive in living history.

So what to do in the face of this dysfunctional state of affairs? Bret Weinstein below has ideas. (As usual, I’m quite late, embedding a video that by Internet standards is already ancient. I also admit this is equivalent to a smash cut because I don’t have a particularly good transition or justification for turning so suddenly to Weinstein.) Weinstein is an evolutionary biologist, so it’s no surprise that the approach he recommends is born of evolutionary thinking. In fairness, a politician would logically recommend political solutions, a financier would recommend economic solutions, and other professionals would seek solutions from within their areas of expertise.

The title of the interview is “Harnessing Evolution,” meaning Weinstein suggests we use evolutionary models to better understand our own needs and distortions to guide or plot a proper path (or paths) forward and get back on track. Never mind that a healthy minority of the U.S. public rejects evolution outright while an additional percentage takes a hybrid stance. While I’m impressed that Weinstein has an answer for everything (pedagogue or demagogue or both?) and has clearly thought through sociopolitical issues, I daresay he’s living in reality double-prime if he thinks science education can be a panacea for what ails us. My pessimism is showing.

For ambulatory creatures, vision is arguably the primary of the five (main) senses. Humans are among those species that stand upright, facilitating a portrait orientation when interacting among ourselves. The terrestrial environment on which we live, however, is in landscape (as distinguished from the more nearly 3D environments of birds and insects in flight or marine life in rivers, lakes, seas, and oceans). My suspicion is that modest visual conflict between portrait and landscape is among the dynamics that give rise to the orienting response, a step down from the startle reflex, which demands full attention when visual environments change.

I recall reading somewhere that wholesale changes in surroundings, such as when crossing a threshold, passing through a doorway, entering or exiting a tunnel, and notably, entering and exiting an elevator, trigger the orienting response. Indeed, the flush of disorientation before one gets his or her bearings is tantamount to a mind wipe, at least momentarily. This response may also help to explain why small, bounded spaces such as interiors of vehicles (large and small) in motion feel like safe, contained, hermetically sealed personal spaces. We orient visually and kinesthetically at the level of the interior, often seated and immobile, rather than at the level of the outer landscape being traversed by the vehicle. This is true, too, of elevators, a modern contraption that confounds the nervous system almost as much as revolving doors — particularly noticeable with small children and pets until they become habituated to managing such doorways with foreknowledge of what lies beyond.

The built environment has historically included transitional spaces between inner and outer environments. Churches and cathedrals include a vestibule or narthex between the exterior door and inner door leading to the church interior or nave. Additional boundaries in church architecture mark increasing levels of hierarchy and intimacy, just as entryways of domiciles give way to increasingly personal spaces: parlor or sitting room, living room, dining room, kitchen, and bedroom. (The sheer utility of the “necessary” room defies these conventions.) Commercial and entertainment spaces use lobbies, atria, and prosceniums in similar fashion.

What most interests me, however, is the transitional space outside of buildings. This came up in a recent conversation, where I observed that local school buildings from the early to middle part of the 20th century have a distinguished architecture set well back from the street where lawns, plazas, sidewalks, and porches leading to entrances function as transitional spaces and encourage social interaction. Ample window space, columnar entryways, and roof embellishments such as dormers, finials, cupolas, and cornices add style and character befitting dignified public buildings. In contrast, 21st-century school buildings in particular and public buildings in general, at least in the city where I live, tend toward porchless big-box warehouses built right up to the sidewalk, essentially robbing denizens of their social space. Blank, institutional walls forbid rather than invite. Consider, for example, how students gathered in a transitional space are unproblematic, whereas those congregated outside a school entrance abutting a narrow sidewalk suggest either a gauntlet to be run or an eruption of violence in the offing. (Or maybe they’re just smoking.) Anyone forced to climb past loiterers outside a commercial establishment experiences similar suspicions and discomforts.

Beautifully designed and constructed public spaces of yore — demonstrations of a sophisticated appreciation of both function and intent — have fallen out of fashion. Maybe designers back then understood how transitional spaces ease the orienting response, or maybe they only intuited it. Hard to say. Architectural designs of the past acknowledged and accommodated social functions and sophisticated aesthetics that are today actively discouraged except for pointless stunt architecture that usually turns into boondoggles for taxpayers. This has been the experience of many municipalities when replacing or upgrading schools, transit centers, sports arenas, and public parks. Efficient land use today drives toward omission of transitional space. One of my regular reads is James Howard Kunstler’s Eyesore of the Month, which profiles one architectural misfire after the next. He often mocks the lack of transitional space or, when it is present, its open hostility to pedestrian use: unnecessary obstacles and proximity to vehicular traffic (noise, noxious exhaust, and questionable safety) that discourage use. Chalk this up as another collapsed art (e.g., painting, music, literature, and poetry) so desperate to deny the past and establish new aesthetics that it has ruined itself.

In an earlier blog post, I mentioned how killing from a distance is one way among many that humans differentiate themselves from other animals. The practical advantage of weaponry that distances one combatant from another should be obvious. Spears and swords extend one’s reach yet keep the fighting hand-to-hand. Projectiles (bullets, arrows, catapults, artillery, etc.) allow killing from increasingly long distances, with weapons launched into low orbit before raining down ruin being the far extreme. The latest technology is drones (and drone swarms), which remove those who wield them from danger except perhaps the psychological torment that accrues gradually on remote operators. Humans are unique among animals for having devised such clever ways of destroying each other, and in the process, themselves.

I finally got around to seeing the film Black Panther. Beyond the parade of clichés and mostly forgettable punchfest action (interchangeable with any other Marvel film), one particular remark stuck with me. When the warrior general of fictional Wakanda (a woman, as it happens) went into battle, she dismissed the use of guns as “primitive.” Much is made of Wakanda’s advanced technology, some of it frankly indistinguishable from magic (e.g., the panther elixir). Wakanda’s possession of weaponry not shared with the rest of the world (e.g., invisible planes) is the MacGuffin the villain seeks to control so as to exact revenge on the world and rule over it. Yet the film resorts predictably to punching and acrobatics as the principal mode of combat. Some of that strategic nonsense is attributable to visual storytelling found in both comic books and cinema. Bullets fly too fast to be seen, and tracking airborne bombs never really works, either. Plus, a punch thrown by a villain or superhero arguably has some individual character to it, at least until one recognizes that punching leaves no lasting effect on anyone.

As it happens, a similar remark about “primitive” weapons (a blaster) was spat out by Obi-Wan Kenobi in one of the Star Wars prequels (dunno which one). For all the amazing technology at the disposal of those characters long ago in a galaxy far, far away, it’s curious that the weapon of choice for a Jedi knight is a light saber. Again, up close and personal (color coded, even), including actual peril, as opposed to, say, an infinity gauntlet capable of dispatching half a universe with a finger snap. Infinite power clearly drains the stakes out of conflict. Credit goes to George Lucas for recognizing the awesome visual storytelling the light saber offers. He also made blaster shots — the equivalent of flying bullets — visible to the viewer. Laser beams and other lighted projectiles had been done in cinema before Star Wars but never so well.

Among the myriad ways we have of mistreating each other, epithets may well be the most ubiquitous. Whether using race, sex, age, nationality, or nominal physical characteristic (especially genital names), we have so many different words with which to insult and slur that it boggles the mind. Although I can’t account for foreign cultures, I doubt there is a person alive or dead who hasn’t suffered being made fun of for some stupid thing. I won’t bother to compile a list since there are so many (by way of example, Wikipedia has a list of ethnic slurs), but I do remember consulting a dictionary of historical slang, mostly disused, and being surprised at how many terms were devoted specifically to insults.

I’m now old and contented enough for the “sticks and stones …” dismissal to nullify any epithets hurled my way. When one comes up, it’s usually an obvious visual characteristic, such as my baldness or ruddiness. Those characteristics are of course true, so why allow them to draw ire when used with malicious intent? However, that doesn’t stop simple words from giving grave offense to those with thin skins or from serving as so-called fighting words for those habituated to answering provocation with physical force. And in an era when political correctness has equated verbal offense with violence, the self-appointed thought police call for blood whenever someone steps out of line in public. Alternatively, when such a person is one’s champion, then the blood sport becomes spectacle, such as when 45 gifts another public figure with a sobriquet.

The granddaddy of all epithets — the elephant in the room, at least in the U.S. — will not be uttered by me, sorta like the he-who-shall-not-be-named villain of the Harry Potter universe or the forbidden language of Mordor from the Tolkien universe. I lack standing to use the term in any context and won’t even venture a euphemism or placeholder using asterisks or capital letters. Reclaiming the term in question by adopting it as a self-description — a purported power move — has decidedly failed to neutralize the term. Instead, the term has become even more egregiously insulting than ever, a modern taboo. Clarity over who gets to use the term with impunity and when is elusive, but for my own part, there is no confusion: I can never, ever speak or write it in any context. I also can’t judge whether this development is a mark of cultural progress or regression.

Continuing from part 1, which is altogether too much screed and frustration with Sam Harris, I now point to several analyses that support my contentions. First is an article in The Nation about the return of so-called scientific racism, which speaks directly about Charles Murray, Sam Harris, and Andrew Sullivan, all of whom are embroiled in the issue. Second is an article in The Baffler about constructing arguments ex post facto to conform to conclusions motivated in advance of evidence. Most of us are familiar with the constructed explanation, where in the aftermath of an event, pundits, press agents, and political insiders propose various explanatory narratives to gain control over what will eventually become the conventional understanding. The Warren Commission’s report on the assassination of JFK is one such example, and I daresay few now believe the report and the consensus that it presents weren’t politically motivated and highly flawed. Both linked articles above are written by Edward Burmila, who blogs at Gin and Tacos (see blogroll). Together, they paint a dismal picture of how reason and rhetoric can be corrupted despite the sheen of scientific respectability.

Third is an even more damaging article (actually a review of the new anthology Trump and the Media) in the Los Angeles Review of Books by Nicholas Carr asking the pointed question “Can Journalism Be Saved?” Admittedly, journalism is not equivalent to reason or rationalism, but it is among several professions that employ claims of objectivity, accuracy, and authority. Thus, journalism demands both attention and respect far in excess of the typical blogger (such as me) or watering-hole denizen perched atop a barstool. Consider this pullquote:

… the flaws in computational journalism can be remedied through a more open and honest accounting of its assumptions and limitations. C. W. Anderson, of the University of Leeds, takes a darker view. To much of the public, he argues, the pursuit of “data-driven objectivity” will always be suspect, not because of its methodological limits but because of its egghead aesthetics. Numbers and charts, he notes, have been elements of journalism for a long time, and they have always been “pitched to a more policy-focused audience.” With its ties to social science, computational journalism inevitably carries an air of ivory-tower elitism, making it anathema to those of a populist bent.

Computational journalism is contrasted with other varieties of journalism based on, say, personality, emotionalism, advocacy, or simply a mad rush to print (or pixels) to scoop the competition. This hyperrational approach has already revealed its failings, as Carr reports in his review.

What I’m driving at is that, despite frequent appeals to reason, authority, and accuracy (especially the quantitative sort), certain categories of argumentation fail to register on the average consumer of news and information. It’s not a question of whether arguments are right or wrong, precisely; it’s about what appeals most to those paying even a modest bit of attention. And the primary appeal for most (I judge) isn’t reason. Indeed, reason is swept aside handily when a better, um, reason for believing something appears. If one has done the difficult work of acquiring critical thinking and reasoning skills, it can be quite the wake-up call when others fail to behave according to reason, such as acting against enlightened self-interest. The last presidential election was a case in point.

Circling back to something from an earlier blog post, much of human cognition is based on mere sufficiency: whatever is good enough in the moment gets nominated then promoted to belief and/or action. Fight, flight, or freeze is one example. Considered evaluation and reason are not even factors. Snap judgments, gut feelings, emotional resonances, vibes, heuristics, and Gestalts dominate momentary decision-making, and in the absence of convincing countervailing information (if indeed one is even vulnerable to reason, which would be an unreasonable assumption), action is reinforced and suffices as belief.

Yet more in part 3 to come.