Posts Tagged ‘Philosophy’

For more than a decade, I’ve had in the back of my mind a blog post called “The Power of Naming,” arguing that bestowing a name gives something power, substance, and, in a sense, reality. That post never really came together, but its inverse did. Anyway, here’s a renewed attempt.

The period of language acquisition in early childhood is suffused with learning the names of things, most of which is passive. Names of animals (closely associated with the sounds they make) are often a special focus of picture books. The kitty, doggie, and horsie eventually become the cat, dog, and horse. Similarly, the moo-cow and the tweety-bird shorten to cow and bird (though songbird may be an acceptable holdover). Words in the abstract are signifiers of actual things, aided by the text symbols learned in literate cultures, which reinforce mere categories instead of examples grounded in reality. Multiply the names of things several hundred thousand times over, into adulthood and indeed throughout life, and one can develop a formidable vocabulary supporting expressive and nuanced thought and speech. Do you know the differences between acute, right, obtuse, straight, and reflex angles? Does it matter? Does your knowledge of barware inform when to use a flute, coupe, snifter, shot (or shooter or caballito), nosing glass (or Glencairn), tumbler, tankard, goblet, sling, and stein? I’d say you’ve missed something by never having drunk dark beer (Ger.: Schwarzbier) from a frosted schooner. All these varieties developed for reasons that remain invisible to someone content to drink everything from the venerable red Solo cup. Funnily enough, the red Solo cup now comes in different versions, fooling precisely no one.

Returning to book blogging, Walter Ong (in Orality and Literacy) has curious comparisons between primarily oral cultures and literate cultures. For example:

Oral people commonly think of names (one kind of words) as conveying power over things. Explanations of Adam’s naming of the animals in Genesis 2:20 usually call condescending attention to this presumably quaint archaic belief. Such a belief is in fact far less quaint than it seems to unreflective chirographic and typographic folk. First of all, names do give human beings power over what they name: without learning a vast store of names, one is simply powerless to understand, for example, chemistry and to practice chemical engineering. And so with all other intellectual knowledge. Secondly, chirographic and typographic folk tend to think of names as labels, written or printed tags imaginatively affixed to an object named. Oral folk have no sense of a name as a tag, for they have no idea of a name as something that can be seen. Written or printed representations of words can be labels; real, spoken words cannot be. [p. 33]

This gets at something that has been developing over the past few decades, namely, that as otherwise literate (or functionally literate) people gather more and more information through electronic media (screens that serve broadcast and cable TV, YouTube videos, prerecorded news for streaming, podcasts, and, most importantly, audiobooks — all of which speak content to listeners), the spoken word (re)gains primacy and the printed word fades into disuse. Electronic media may produce a hybrid of orality/literacy, but words are no longer silent, internal, and abstract. Indeed, words — all by themselves — are understood as being capable of violence. Gone are the days of “sticks and stones ….” Now, fighting words incite and insults sting again.

Not so long ago, it was possible to provoke a duel with an insult or gesture, such as a glove across the face. Among some people, defense of honor never really disappeared (though dueling did). History has taken a strange turn, however. Proposed legislation to criminalize deadnaming (presumably to protect a small but growing number of transgender and nonbinary people who have redefined their gender identity and accordingly adopted different names) recognizes the violence of words but then tries to transmute the offense into an abstract criminal law. It’s deeply mixed up, and I don’t have the patience to sort it out.

More to say in later blog posts, but I’ll raise the Counter-Enlightenment once more to say that the nature of modern consciousness is shifting somewhat radically in response to stimuli and pressures that grew out of an information environment, roughly 70 years old now but transformed even more fundamentally in the last 25 years, that is substantially discontinuous from centuries-old traditions. Those traditions displaced even older traditions inherited from antiquity. Such is the way of the world, I suppose, and with the benefit of Walter Ong’s insights, my appreciation of the outlines is taking better shape.

I have observed various instances of magical thinking in mainstream culture, especially here, which I find problematical. Although it’s not my ambition to disabuse anyone of magical thinking, which extends far beyond, say, religious thought, I was somewhat taken aback at the suggestion found in the comic at this link (not embedded). For those not familiar with Questionable Content (one of two online comics I read regularly), the comic presents an extended cast of characters, mostly in their early 20s, living in a contemporary New England college town. Those characters are supplemented by a few older parents and lots of AIs (in robot bodies). The AIs are not particularly futuristic but are simply accepted as a normal (if curious) part of the world of the comic. Major story arcs involve characters and AIs (the AIs are characters, I suppose) in the process of discovering and establishing themselves as they (the humans, anyway) transition into early adulthood. There are no great political themes or intrusions into life in a college town. Rather, the comic is largely about acceptance of difference. Often, that means washing away meaningful difference in the name of banal tolerance. Real existential struggle is almost entirely absent.

In the linked comic, a new character comes along and offers advice to an established character struggling with sexual attractions and orientation. The dialogue includes this exchange:

Character A: If tarot or astrology or religion halps you make sense of the world and your place in it, then why not use them?
Character B: But they’re not real. [emphasis in original]
Character A: It doesn’t matter, if you use them constructively!

There it is in a nutshell: believe whatever you want if it, um, halps. I’ve always felt that being wrong (i.e., relying on unreal or make-believe things) was a sufficient injunction against anchoring oneself to notions widely known to be false. Besides, isn’t it often remarked that the biggest fool is the one who fools himself? (Fiction as a combination of entertainment and worldview-building is quite normal, but it’s understood as fiction, or to a lesser degree, as life imitating art and its inverse. Exceptions abound, and they are regarded as psychopathy.) The instruction in that dialogue (part object lesson, part lesson in cognition) is not that it’s OK to make mistakes but that knowingly believing something false has worthwhile advantages.

Surveying examples where promulgating false beliefs has constructive or destructive effects is too large a project. Well short of that, nasty categories include fraud, gaslighting, and propaganda, which are criminal in many cases and ought to be in most others (looking at you, MSM! — or not, since I neither trust nor watch). One familiar benevolent category is expressed in the phrase “fake it till you make it,” often recommended to overcome a lack of confidence. Of course, a swindle is also known as a confidence game (or by its diminutive, a con), so beware overconfidence when asked by another to pay for something (e.g., tarot or astrology readings), take risks, or accept an ideology without question.

As philosophy, willful adoption of falsity for its supposed benefits is half-baked. Though impossible to quantify, my suspicion is that instances of positive outcomes are outweighed by negative ones. Maybe living in a constructed reality or self-reinforcing fantasy is what people want. The comic discussed is certainly in line with that approach. However, while we dither and delude ourselves with happy, aspirational stories based on silliness, the actual world around us, including all the human institutions that used to serve us but no longer do, falls to tatters. Is it better to go through life, and eventually to one’s grave, refusing to see that reality? Should childlike wonder and innocence be retained in spite of what is easily observable just by poking one’s head up and dismissing comforting lies? Decide for yourself.

Once in a while, when discussing current events and their interpretations and implications, a regular interlocutor of mine will impeach me, saying “What do you know, really?” I’m always forced to reply that I know only what I’ve learned through various media sources, faulty though they may be, not through first-hand observation. (Reports of anything I have observed personally tend to differ considerably from my own experience once the news media completes its work.) How, then, can I know, to take a very contemporary instance this final week of July 2020, what’s going on in Portland from my home in Chicago other than what’s reported? Makes no sense to travel there (or much of anywhere) in the middle of a public health crisis just to see a different slice of protesting, lawbreaking, and peacekeeping [sic] activities with my own eyes. Extending the challenge to its logical extremity, everything I think I know collapses into solipsism. The endpoint of that trajectory is rather, well, pointless.

My previous post described an argument, one that can’t be falsified too handily, that what we understand about ourselves and the world we inhabit is actually a constructed reality. To which I reply: is there any other kind? That construction achieves a fair lot of consensus about basics, more than one might guess, but that still leaves quite a lot of space for idiosyncratic and/or personal interpretations that conflict wildly. In the absence of stabilizing authority and expertise, it has become impossible to tease a coherent story out of the many voices pressing on us with their interpretations of how we ought to think and feel. Twin conspiracies foisted on us by the Deep State and MSM, known as RussiaGate and BountyGate, attest to this. I’ll have more to say about the inability to figure things out when I complete my post called Making Sense and Sensemaking.

In the meantime, the modern world has in effect constructed its own metaphorical Tower of Babel (borrowing from Jonathan Haidt — see below). It’s not different languages we speak so much (though it’s that, too) as the conflicting stories we tell. Democratization of media has given each of us — authorities, cranks, and everyone between — new platforms and vehicles for promulgating pet stories, interpretations, and conspiracies. Most of it is noise, and divining the worthwhile signal portion is a daunting task even for disciplined, earnest folks trying their best to penetrate the cacophony. No wonder so many simply turn away in disgust.

I admit (again) to being bugged by things found on YouTube — a miserable proxy for the marketplace of ideas — many of which are dumb, wrongheaded, or poorly framed. It’s not my goal to correct every mistake, but sometimes, inane utterances by intellectuals and specialists I might otherwise admire just stick in my craw. It’s hubris on my part to insist on my understandings, considering my utter lack of standing as an acknowledged authority, but I’m not without my own multiple areas of expertise (I assert immodestly).

The initial purpose for this blog was to explore the nature of consciousness. I’ve gotten badly sidetracked writing about collapse, media theory, epistemology, narrative, and cinema, so let me circle back around. This is gonna be long.

German philosopher Oswald Spengler takes a crack at defining consciousness:

Human consciousness is identical with the opposition between the soul and the world. There are gradations in consciousness, varying from a dim perception, sometimes suffused by an inner light, to an extreme sharpness of pure reason that we find in the thought of Kant, for whom soul and world have become subject and object. This elementary structure of consciousness is not capable of further analysis; both factors are always present together and appear as a unity.

In my preparations for a speech to be given in roughly two months, I stumbled across a prescient passage in an essay entitled “Jesuitism” from Latter-Day Pamphlets (1850) by Thomas Carlyle. Connect your own dots as this is offered without comment.

… this, then, is the horrible conclusion we have arrived at, in England as in all countries; and with less protest against it hitherto, and not with more, in England than in other countries? That the great body of orderly considerate men; men affecting the name of good and pious, and who, in fact, excluding certain silent exceptionary individuals one to the million, such as the Almighty Beneficence never quite withholds, are accounted our best men,–have unconsciously abnegated the sacred privilege and duty of acting or speaking the truth; and fancy that it is not truth that is to be acted, but that an amalgam of truth and falsity is the safe thing. In parliament and pulpit, in book and speech, in whatever spiritual thing men have to commune of, or to do together, this is the rule they have lapsed into, this is the pass they have arrived at. We have to report that Human Speech is not true! That it is false to a degree never witnessed in this world till lately. Such a subtle virus of falsity in the very essence of it, as far excels all open lying, or prior kinds of falsity; false with consciousness of being sincere! The heart of the world is corrupted to the core; a detestable devil’s-poison circulates in the life-blood of mankind; taints with abominable deadly malady all that mankind do. Such a curse never fell on men before.

For the falsity of speech rests on a far deeper falsity. False speech, as is inevitable when men long practise it, falsifies all things; the very thoughts, or fountains of speech and action become false. Ere long, by the appointed curse of Heaven, a man’s intellect ceases to be capable of distinguishing truth, when he permits himself to deal in speaking or acting what is false. Watch well the tongue, for out of it are the issues of life! O, the foul leprosy that heaps itself in monstrous accumulation over Human Life, and obliterates all the divine features of it into one hideous mountain of purulent disease, when Human Life parts company with truth; and fancies, taught by Ignatius or another, that lies will be the salvation of it! We of these late centuries have suffered as the sons of Adam never did before; hebetated, sunk under mountains of torpid leprosy; and studying to persuade ourselves that this is health.

And if we have awakened from the sleep of death into the Sorcerer’s Sabbath of Anarchy, is it not the chief of blessings that we are awake at all? Thanks to Transcendent Sansculottism and the long-memorable French Revolution, the one veritable and tremendous Gospel of these bad ages, divine Gospel such as we deserved, and merciful too, though preached in thunder and terror! Napoleon Campaignings, September Massacres, Reigns of Terror, Anacharsis Clootz and Pontiff Robespierre, and still more beggarly tragicalities that we have since seen, and are still to see: what frightful thing were not a little less frightful than the thing we had? Peremptory was our necessity of putting Jesuitism away, of awakening to the consciousness of Jesuitism. ‘Horrible,’ yes: how could it be other than horrible? Like the valley of Jehoshaphat, it lies round us, one nightmare wilderness, and wreck of dead-men’s bones, this false modern world; and no rapt Ezekiel in prophetic vision imaged to himself things sadder, more horrible and terrible, than the eyes of men, if they are awake, may now deliberately see. Many yet sleep; but the sleep of all, as we judge by their maundering and jargoning, their Gorham Controversies, street-barricadings, and uneasy tossings and somnambulisms, is not far from ending. Novalis says, ‘We are near awakening when we dream that we are dreaming.’ [italics in original]

A complex of interrelated findings about how consciousness handles the focus of perception has been making the rounds. Folks are recognizing the limited time each of us has to deal with everything pressing upon us for attention and are adopting the notion of the bandwidth of consciousness: the limited amount of perception / memory / thought one can access or hold at the forefront of attention compared to the much larger amount occurring continuously outside of awareness (or figuratively, under the hood). Similarly, the myriad ways attention is diverted by advertisers and social media (to name just two examples) to channel consumer behaviors or increase time-on-device metrics have become commonplace topics of discussion. I’ve used the terms information environment, media ecology, and attention economy in past posts on this broad topic.

Among the most important observations is how the modern infosphere has become saturated with content, much of it entirely pointless (when not actively disorienting or destructive), and how many of us willingly tune into it without interruption via handheld screens and earbuds. It’s a steady flow of stimulation (overstimulation, frankly) that is the new normal for those born and/or bred to the screen (media addicts). Its absence or interruption is discomfiting (like a toddler’s separation anxiety). However, mental processing of information overflow is tantamount to drinking from a fire hose: only a modest fraction of the volume rushing nonstop can be swallowed. Promoters of meditation and presencing, whether implied or manifest, also recognize that human cognition requires time and repose to process and consolidate experience, transforming it into useful knowledge and long-term memory. More and more stimulation added on top simply spills over, like a faucet filling the bathtub faster than the drain can let water out, spilling onto the floor like digital exhaust. Too bad that the sales point of these promoters is typically getting more done, because dontcha know, more is better even when recommending less.

Quanta Magazine has a pair of articles (first and second) by the same author (Jordana Cepelewicz) describing how the spotlight metaphor for attention captures only part of how cognition works. Many presume that the mind normally directs awareness or attention to whatever the self prioritizes — a top-down executive function. However, as any loud noise, erratic movement, or sharp pain demonstrates, some stimuli are promoted to awareness by virtue of their individual character — a bottom-up reflex. The fuller explanation is that neuroscientists are busy researching brain circuits and structures that prune, filter, or gate the bulk of incoming stimuli so that attention can be focused on the most important bits. For instance, one article mentions how visual perception circuits process categories of small and large differently, partly to separate figure from ground. Indeed, for cognition to work at all, a plethora of inhibitory functions enable focus on a relatively narrow subset of stimuli selected from the larger set of available stimuli.

These discussions about cognition (including philosophical arguments about (1) human agency vs. no free will or (2) whether humans exist within reality or are merely simulations running inside some computer or inscrutable artificial intelligence) so often get lost in the weeds. They read like distinctions without differences. No doubt these are interesting subjects to contemplate, but at the same time, they’re sorta banal — fodder for scientists and eggheads that most average folks dismiss out of hand. In fact, selective and inhibitory mechanisms are found elsewhere in human physiology, such as paired muscles that move limbs to and fro or appetite stimulants / depressants (alternatively, activators and deactivators) operating in tandem. Moreover, interactions are often not binary (on or off) but continuously variable. For my earlier post on this subject, see this.

Continuing (after some delay) from part 1, Pankaj Mishra concludes chapter 4 of The Age of Anger with an overview of Iranian governments that shifted from U.S./British client state (headed by the Shah of Iran, reigned 1941–1979) to its populist replacement (headed by Ayatollah Khomeini, ruled 1979–1989), both leaders having been authoritarians. During the period discussed, Iran underwent the same modernization and infiltration by liberal, Western values and economics, which produced a backlash familiar from Mishra’s descriptions of other nations and regions that had experienced the same severed roots of place since the onset of the Enlightenment. Vacillation among two or more styles of government might be understood as a thermostatic response: swinging too far in one direction leads to correction in the other. It’s not a binary relationship, however, between monarchy and democracy (to use just one example). Nor are the options limited to a security state headed by an installed military leader or a leader elected by popular vote. Rather, it’s a question of national identity being alternately fractured and unified (though difficult to analyze and articulate) in the wake of multiple intellectual influences.

According to Lewis and Huntington, modernity has failed to take root in intransigently traditional and backward Muslim countries despite various attempts to impose it by secular leaders such as Turkey’s Atatürk, the Shah of Iran, Algeria’s Ben Bella, Egypt’s Nasser and Sadat, and Pakistan’s Ayub Khan.

Since 9/11 there have been many versions, crassly populist as well as solemnly intellectual, of the claims by Lewis and Huntington that the crisis in Muslim countries is purely self-induced, and [that] the West is resented for the magnitude of its extraordinary success as a beacon of freedom, and embodiment of the Enlightenment’s achievements … They have mutated into the apparently more sophisticated claim that the clash of civilizations occurs [primarily] within Islam, and that Western interventions are required on behalf of the ‘good Muslim’, who is rational, moderate and liberal. [p. 127]

This is history told by the putative winners. Mishra goes on:

Much of the postcolonial world … became a laboratory for Western-style social engineering, a fresh testing site for the Enlightenment ideas of secular progress. The philosophes had aimed at rationalization, or ‘uniformization’, of a range of institutions inherited from an intensely religious era. Likewise, postcolonial leaders planned to turn illiterate peasants into educated citizens, to industrialize the economy, move the rural population to cities, alchemize local communities into a singular national identity, replace the social hierarchies of the past with an egalitarian order, and promote the cults of science and technology among a pious and often superstitious population. [p. 133]

Readers may recognize this project and/or process by its more contemporary name: globalization. It’s not merely a war of competing ideas, however, because those ideas manifest in various styles of social and political organization. Moreover, the significance of migration from rural agrarian settings to primarily urban and suburban ones can scarcely be overstated. This transformation (referring to the U.S. in the course of the 20th century) is something James Howard Kunstler repeatedly characterizes rather emphatically as the greatest misallocation of resources in the history of the world. Mishra summarizes the effects of Westernization handily:

In every human case, identity turns out to be porous and inconsistent rather than fixed and discrete; and prone to get confused and lost in the play of mirrors. The cross-currents of ideas and inspirations — the Nazi reverence for Atatürk, a gay French philosopher’s denunciation of the modern West and sympathy for the Iranian Revolution, or the various ideological inspirations for Iran’s Islamic Revolution (Zionism, Existentialism, Bolshevism and revolutionary Shiism) — reveal that the picture of a planet defined by civilizations closed off from one another and defined by religion (or lack thereof) is a puerile cartoon. They break the simple axis — religious-secular, modern-medieval, spiritual-materialist — on which the contemporary world is still measured, revealing that its populations, however different their pasts, have been on converging and overlapping paths. [p. 158]

These descriptions and analyses put me in mind of a fascinating book I read some years ago and reviewed on Amazon (one of only a handful of Amazon reviews I have written): John Reader’s Man on Earth (1988). Reader describes and indeed celebrates incredibly diverse ways of inhabiting the Earth, each specially adapted to the landscape and based on evolving local practices. Thus, the notion of “place” is paramount. Comparison occurs only by virtue of juxtaposition. Mishra does something quite different, drawing out the connective ideas that account for “converging and overlapping paths.” Perhaps inevitably, disturbances to collective and individual identities that flow from unique styles of social organization, especially those now operating at industrial scale (i.e., industrial civilization), appear to be picking up. For instance, in the U.S., even as mass shootings (a preferred form of attack but not the only one) appear to be on the rise while violent crime sits at an all-time low, perpetrators of violence are not limited to a few lone wolves, as the common trope goes. According to journalist Matt Agorist,

mass shootings — in which murdering psychopaths go on rampages in public spaces — have claimed the lives of 339 people since 2015 [up to mid-July 2019]. While this number is certainly shocking and far too high, during this same time frame, police in America have claimed the lives of 4,355 citizens.

And according to this article in Vox, this crazy disproportion (police violence to mass shootings) is predominantly an American thing, at least partly because of our high rate of fetishized civilian gun ownership. Thus, the self-described “land of the free, home of the brave” has transformed itself into a paranoid garrison state whose civil authorities behave even more egregiously than the disenfranchised (mostly young men) they fear. Something similar occurred during the Cold War, when leaders became hypervigilant for attacks and invasions that never came. Whether a few close calls during the height of the Cold War were the result of escalating paranoia, brinkmanship, or true, maniacal, existential threats from a mustache-twirling, hand-rolling despot hellbent on the destruction of the West is a good question, probably impossible to answer convincingly. However, the result today of this mindset couldn’t be more disastrous:

It is now clear that the post-9/11 policies of pre-emptive war, massive retaliation, regime change, nation-building and reforming Islam have failed — catastrophically failed — while the dirty war against the West’s own Enlightenment [the West secretly at war with itself] — inadvertently pursued through extrajudicial murder, torture, rendition, indefinite detention and massive surveillance — has been a wild success. The uncodified and unbridled violence of the ‘war on terror’ ushered in the present era of absolute enmity in which the adversaries, scornful of all compromise, seek to annihilate each other. Malignant zealots have emerged at the very heart of the democratic West after a decade of political and economic tumult; the simple explanatory paradigm set in stone soon after the attacks of 9/11 — Islam-inspired terrorism versus modernity — lies in ruins. [pp. 124–125]

Decades ago, I read Douglas Adams’ Hitchhiker’s Guide to the Galaxy trilogy. Lots of inventive things in those books have stayed with me despite not having revisited them. For instance, I found the SEP (Somebody-Else’s-Problem) Field and the infinite improbability drive tantalizing concepts even though they’re jokes. Another that resonates more as I age is disorientation felt (according to Adams) because of dislocation more than 500 light-years away from home, namely, the planet of one’s origin. When I was younger, my wanderlust led me to venture out into the world (as opposed to the galaxy), though I never gave much thought to the stabilizing effect of the modest town in which I grew up before moving to a more typical American suburb and then to various cities, growing more anonymous with each step. Although I haven’t lived in that town for 25+ years, I pass through periodically and admit it still feels like home. Since moving away, it’s been swallowed up in suburban sprawl and isn’t really the same place anymore.

Reading chapter 4 of Pankaj Mishra’s The Age of Anger brought back to me the idea of being rooted in a particular place and its culture, and more significantly, how those roots can be severed even without leaving. The main cause appears to be cultural and economic infiltration by foreign elements, which has occurred in many places through mere demographic drift and in others by design or force (i.e., colonialism and globalization). How to characterize the current waves of political, economic, and climate refugees inundating Europe and the smaller migration of Central Americans (and others) into the U.S. is a good question. I admit to being a little blasé about it: like water, people gonna go where they gonna go. Sovereign states can attempt to manage immigration somewhat, but stopgap administration ultimately fails, at least in open societies. In the meantime, the intractable issue has made many Americans paranoid and irrational while our civil institutions have become decidedly inhumane in their mistreatment of refugees. The not-so-hidden migration is that of Chinese people into Africa. Only the last of these migrations gives off the stink of neocolonialism, but they all suggest decades of inflamed racial tension to come, if not open race wars.

In chapter 4, Mishra cites numerous authors and political leaders/revolutionaries who understand and observe that modernizing and Westernizing countries, especially those attempting to catch up, produce psychic turmoil in their populations. The cause is abandonment and transformation of their unique, local identities as people move, for instance, from predominantly agrarian social organization to urbanization in search of opportunity, imitating and adopting inappropriate Western models in the process. Mishra quotes a 1951 United Nations document discussing the costs of supposed progress:

There is a sense in which rapid economic progress is impossible without painful adjustments. Ancient philosophies have to be scrapped; old social institutions have to disintegrate; bonds of caste, creed and race have to burst; and large numbers of persons who cannot keep up with progress have to have their expectations of a comfortable life frustrated. [p. 118]

Thus, men were “uprooted from rural habitats and condemned to live in the big city,” a reenactment of the same transformation the West underwent previously. Another insightful passage comes from the final page of Westoxification (1962), also rendered Weststruckness (English translations of the title vary), by the Iranian novelist Jalal Al-e-Ahmad:

And now I, not as an Easterner, but as one like the first Muslims, who expected to see the Resurrection on the Plain of Judgment in their lifetimes, see that Albert Camus, Eugene Ionesco, Ingmar Bergman, and many other artists, all of them from the West, are proclaiming this same resurrection. All regard the end of human affairs with despair. Sartre’s Erostratus fires a revolver at the people in the street blindfolded; Nabokov’s protagonist drives his car into the crowd; and the stranger, Meursault, kills someone in reaction to a bad case of sunburn. These fictional endings all represent where humanity is ending up in reality, a humanity that, if it does not care to be crushed under the machine, must go about in a rhinoceros’s skin. [pp. 122–123]

It’s unclear that the resurrection referenced above is the Christian one. Nonetheless, how sobering is it to recognize that random, anonymous victims of nihilistic violence depicted in storytelling have their analogues in today’s victims of mass killings? A direct line of causality from the severed roots of place to violent incidents cannot be drawn clearly, but the loss of a clear, stabilizing sense of self, formerly situated within a community now suffering substantial losses of historical continuity and tradition, is certainly an ingredient.

More to come in pt. 2.

First, a bit of history. The U.S. Constitution was ratified in 1788, superseding the Articles of Confederation. The first ten amendments, ratified in 1791 (rather quickly after the drafting and adoption of the main document), are known as the Bill of Rights. The final amendment to date, the 27th Amendment, though proposed in 1789 along with others, was not ratified until 1992. A half dozen additional amendments approved by Congress have not yet been ratified, and a large number of other unapproved amendments have been proposed.

The received wisdom is that, by virtue of its lengthy service as the supreme law of the land, the U.S. Constitution has become sacrosanct and invulnerable to significant criticism and further amendment. That wisdom has begun to be questioned actively as a result of (at least) two factors: (1) recognition that the Federal government serves the common good and citizenry rather poorly, having become corrupt and dysfunctional, and (2) the Electoral College, an anachronism from the Revolutionary Era that skews voting power away from cities, handed two recent presidential elections to candidates who lost the popular vote. For a numerical analysis of how electoral politics is gamed to subvert public opinion, resulting in more government seats held by Republicans than voting (expressing the will of the people) would indicate, see this article by the Brookings Institution.

These are issues of political philosophy and ongoing public debate, spurred by dissatisfaction over periodic Federal shutdowns, power struggles between the executive and legislative branches that are tantamount to holding each other hostage, and income inequality that pools wealth and power in the hands of ever fewer people. The judicial branch (especially the U.S. Supreme Court) is also a significant point of contention; its newly appointed members are increasingly right wing but have not (yet) taken openly activist roles (e.g., reversing Roe v. Wade). As philosophy, questioning the wisdom of the U.S. Constitution requires considerable knowledge of history and comparative government to undertake with equanimity (as opposed to emotionalism). I don’t possess such expert knowledge but will observe that the U.S. is an outlier among nations in relying on a centuries-old constitution, which may not have been the expectation or intent of the drafters.

It might be too strong to suggest just yet that the public feels betrayed by its institutions. Better to say that, for instance, the U.S. Constitution is now regarded as a flawed document — not for its day (with limited Federal powers) but for the needs of today (where the Federal apparatus, including the giant military, has grown into a leviathan). This would explain renewed interest in direct democracy (as opposed to representative government), flirtations with socialism (expanded over the blended system we already have), and open calls for revolution to remove a de facto corporatocracy. Whether the U.S. Constitution can or should survive these challenges is the question.

Update

Seems I was roughly half a year early. Harper’s Magazine runs as its feature for the October 2019 issue a serendipitous article: “Constitution in Crisis” (not behind a paywall, I believe). The cover of the issue, however, poses a more provocative question: “Do We Need the Constitution?” Decide for yourself, I suppose, if you’re aligned with the revolutionary spirit.

I caught the presentation embedded below with Thomas L. Friedman and Yuval Noah Harari, nominally hosted by the New York Times. It’s a very interesting discussion but not a debate. For this now standard format (two or more people sitting across from each other with a moderator and an audience), I’m pleased to observe that Friedman and Harari truly engaged each other’s ideas and behaved with admirable restraint when the other was speaking. Most of these talks are rude and combative, marred by constant interruptions and gotchas. Such bad behavior might succeed in debate club but makes for a frustratingly poor presentation. My further comments follow below.

With a topic as open-ended as The Future of Humanity, arguments and support are extremely conjectural and wildly divergent depending on the speaker’s perspective. Both speakers here admit their unique perspectives are informed by their professions, which boils down to biases born of methodology and, to a lesser degree perhaps, personality. Fair enough. In my estimation, Harari does a much better job adopting a pose of objectivity. Friedman comes across as both a salesman and a cheerleader for human potential.

Both speakers cite a trio of threats to human civilization and wellbeing going forward. For Harari, they’re nuclear war, climate change, and technological disruption. For Friedman, they’re the market (globalization), Mother Nature (climate change alongside population growth and loss of diversity), and Moore’s Law. Friedman argues that all three are accelerating beyond control but speaks of each metaphorically, such as when he refers to changes in market conditions (e.g., from independent to interdependent) as “climate change.” The biggest issue from my perspective — climate change — was largely passed over in favor of more tractable problems.

Climate change has been in the public sphere as the subject of considerable debate and confusion for at least a couple decades now. I daresay it’s virtually impossible not to be aware of the horrific scenarios surrounding what is shaping up to be the end of the world as we know it (TEOTWAWKI). Yet as a global civilization, we’ve barely reacted except with rhetoric flowing in all directions and some greenwashing. Difficult to assess, but perhaps the appearance of more articles about surviving climate change (such as this one in Bloomberg Businessweek) demonstrates that more folks recognize we can no longer stem or stop climate change from rocking the world. This blog has had lots to say about the collapse of industrial civilization being part of a mass extinction event (not aimed at but triggered by and including humans), so for these two speakers to cite but then minimize the peril we face is, well, facile to say the least.

Toward the end, the moderator finally spoke up and directed the conversation towards uplift (a/k/a the happy chapter), which almost immediately resulted in posturing on the optimism/pessimism continuum with Friedman staking his position on the positive side. Curiously, Harari invalidated the question and refused to be pigeonholed on the negative side. Attempts to shoehorn discussions into familiar if inapplicable narratives or false dichotomies are commonplace. I was glad to see Harari calling bullshit on it, though others (e.g., YouTube commenters) were easily led astray.

The entire discussion is dense with ideas, most of them already quite familiar to me. I agree wholeheartedly with one of Friedman’s remarks: if something can be done, it will be done. Here, he refers to technological innovation and development. Throughout history, plenty of prohibitions against making disruptive technologies available have gone unheeded. The atomic era is the handy example (among many others), as both weaponry and power plants stemming from cracking the atom come with huge existential risks and collateral psychological effects. Yet we prance forward headlong and hurriedly, hoping to exploit profitable opportunities without concern for collateral costs. Harari’s response was to recommend caution until true cause-effect relationships can be teased out. Without naming it explicitly, Harari is invoking the precautionary principle. He also observed that some of those effects can be displaced hundreds and thousands of years.

Displacements resulting from the Agrarian Revolution, the Scientific Revolution, and the Industrial Revolution in particular (all significant historical “turnings” in human development) are converging on the early 21st century (the part we can see at least somewhat clearly so far). Neither speaker would come straight out and condemn humanity to the dustbin of history, but at least Harari noted that Mother Nature is quite keen on extinction (which elicited a nervous? uncomfortable? ironic? laugh from the audience) and wouldn’t care if humans were left behind. For his part, Friedman admits our destructive capacity but holds fast to our cleverness and adaptability winning out in the end. And although Harari notes that the future could bring highly divergent experiences for subsets of humanity, including the creation of enhanced humans due to reckless dabbling with genetic engineering, I believe cumulative and aggregate consequences of our behavior will deposit all of us into a grim future no sane person should wish to survive.